[ { "url": "https://skyvia.com/blog/airtable-to-bigquery-data-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "How to connect Airtable to BigQuery By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) February 7, 2024 Collaborative services such as Airtable and data warehouses such as BigQuery are part of the basic toolkit of practically any company. 
But why connect Airtable to BigQuery? And what benefits can businesses obtain from such integration? Connecting Airtable to BigQuery extends Airtable's functionality beyond its native limits: once the data is in BigQuery, advanced analytics and large-scale data processing become available. In this article, you'll discover the two main methods for integrating Airtable data into BigQuery. The first uses the manual data loading approach, while the second uses the professional data integration tool Skyvia. Table of Contents Understanding Airtable and BigQuery Why Connect Airtable to BigQuery Method 1: Manual Airtable Integration with BigQuery Method 2: Integrating Airtable with BigQuery using Skyvia Comparative Analysis Conclusion Understanding Airtable and BigQuery Before plunging into the technical details of the Airtable to BigQuery integration, let's briefly review each tool's strengths. Airtable Overview Airtable is a cloud-based collaborative platform that combines spreadsheet and database functionality. Like a spreadsheet, the tool has a grid structure, but rows are called records, as in a database. Another database-like trait is support for rich field types (text, attachments, dates, and more), whereas a classic spreadsheet treats cells as generic values. Key features: Simultaneous collaboration on the same base (Airtable's principal unit for organizing and structuring data) by several users. Preconfigured templates for specific purposes (project management, content planning, inventory tracking, etc.). Integration with other services via in-app connections or third-party tools, such as [Skyvia](https://skyvia.com/). Airtable is frequently used by sales and marketing teams, project managers, and e-commerce specialists. BigQuery Overview BigQuery is Google's data warehousing product, designed to store large amounts of data. 
It also allows users to perform complex analytics over that data and to use built-in machine learning algorithms. Key features: Serverless architecture frees users from managing storage and processing infrastructure, as Google handles everything automatically. Supports real-time analytics by ingesting data as it arrives. Uses a standard SQL dialect for querying data. Natively integrates with other Google services, greatly assisting non-technical specialists in data transfer, analysis, and other tasks. Overall, BigQuery is a powerful tool for organizations looking to perform analytics and gain insights from large and complex datasets without managing the underlying infrastructure. Why Connect Airtable to BigQuery Airtable to BigQuery integration allows users to extend the functionality of each tool. By connecting BigQuery and Airtable, businesses obtain multiple advantages: Data warehousing. Export Airtable data to BigQuery to consolidate all your data in a data warehouse. This enhances overall data management within the organization and improves data consistency. Advanced analytics. Connect Airtable to BigQuery to run complex SQL queries, machine learning, and data visualization, which help businesses gain deeper insights from their data. Security. BigQuery adds warehouse-grade access controls and encryption for sensitive business data. Cost. BigQuery has a free tier with up to 10 GB of storage and 1 TB of query processing per month. Beyond that, a pay-as-you-go model applies, depending on querying, analysis, and machine learning intensity. Airtable also offers a free plan with unlimited bases but with restrictions on the number of records and the attachment size per base. Collaboration. Migrating data from BigQuery back to Airtable means that users can work directly with the enriched, high-quality data. Looks fantastic, right? Data migration between BigQuery and Airtable is bi-directional, so let's explore the methods for setting it up. 
Method 1: Manual Airtable Integration with BigQuery Note that there is no native connection between the two tools, but manual integration is possible in one of the following ways: Using APIs Working with CSV files The first option is rather complex, requiring advanced knowledge of programming languages, middleware, and other IT-related concepts. If you have such expertise, check [Airtable API](https://support.airtable.com/docs/api) and [BigQuery API](https://cloud.google.com/bigquery/docs/reference/rest) for details. Given the complexity of the first approach, we'll focus on manual integration using CSV files. It's rather tedious but simple, as the setup instructions below show. In Airtable Log into Airtable and select the needed base. Click on Grid View and select Download CSV from the menu. Choose the location on your computer where the CSV file will be saved. In BigQuery Log into BigQuery with your Google account and go to BigQuery Studio. Click Load file under the Add your own data section. Select the CSV file with the Airtable data from your computer and fill in the required fields in the Create table window. Click Create table. Navigate to the dataset where the table with Airtable data was created. Click PREVIEW to see the data. Despite its simplicity, this method is rather slow and doesn't align with the pace at which modern data is generated. Moreover, it's prone to various kinds of errors and data loss. Method 2: Integrating Airtable with BigQuery using Skyvia [Skyvia](https://skyvia.com/) is a universal cloud SaaS platform for data integration, workflow automation, backup, and query. Its interface is [second to none](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), letting users set up data-related procedures across 170+ data sources in minutes. 
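As an aside on the API route from Method 1: on the Airtable side, extraction boils down to an authenticated GET against Airtable's REST List Records endpoint. Below is a minimal Python sketch; the base ID, table name, and token are placeholders, and the actual network call is left commented out:

```python
from urllib.request import Request, urlopen

AIRTABLE_API = "https://api.airtable.com/v0"

def list_records_request(base_id: str, table: str, token: str) -> Request:
    """Build an authenticated GET request for Airtable's List Records endpoint."""
    return Request(
        f"{AIRTABLE_API}/{base_id}/{table}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Placeholder IDs -- substitute your own base ID, table name, and token.
req = list_records_request("appXXXXXXXXXXXXXX", "Tasks", "patXXXX")
# records = json.loads(urlopen(req).read())["records"]  # requires real credentials
```

The returned records would still need to be flattened into rows and loaded into BigQuery (for example, via BigQuery's load jobs), which is exactly the glue code that makes this route labor-intensive.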
This service has the following products: Data Integration contains various data import, export, synchronization, and replication tools. It provides powerful data mapping and transformation features for building compound data integration pipelines. Automation has triggers that apply the needed actions on the destination side once a particular event happens on the source side. Query is an online SQL client that allows users to execute SQL queries against cloud or relational data. Backup allows users to back up data from supported cloud sources automatically on schedule or manually at any time. Connect is an API-server-as-a-service tool for quickly creating web API endpoints to ensure access to data from anywhere. Of all Skyvia's products, Data Integration best suits the Airtable BigQuery integration. In particular, we focus on the Replication and Import scenarios to transfer data between Airtable and BigQuery. Prerequisites Airtable account BigQuery account Ready-to-go connectors in Skyvia: [See how to set up the Airtable connector](https://docs.skyvia.com/connectors/cloud-sources/airtable_connections.html) [See how to set up the BigQuery connector](https://docs.skyvia.com/connectors/databases/google_bigquery_connections.html) Replication Scenario Skyvia's [Replication tool](https://skyvia.com/data-integration/replication) lets users bulk load data into a DWH and keep it updated, following the ELT approach. Skyvia supports all popular data warehouses, including [BigQuery](https://skyvia.com/connectors/google-bigquery), [Amazon Redshift](https://skyvia.com/connectors/redshift), [Snowflake](https://skyvia.com/connectors/snowflake), etc. Let's have a look at [Airtable data replication to Google BigQuery](https://skyvia.com/data-integration/replicate-airtable-to-google-bigquery) using Skyvia. [Log into your Skyvia account](https://app.skyvia.com/). Click +New->Replication in the upper menu. 
Select Airtable as the source and BigQuery as the target. Select the objects for replication. Select Incremental Updates if you want to load new or modified records to the DWH over time. Click Schedule to set up timing for regular Airtable data updates into BigQuery. Click Save. Note that the free plan allows one update per day; if near-real-time replication is critical for you, other [pricing plans](https://skyvia.com/pricing/) allow more frequent updates. Click Create. Click Run to start replication and observe the progress on the Monitor tab. After the replication is complete, you can see the results in BigQuery. Benefits of replication: Flexible scheduling for automated incremental data updates. Automated schema creation on the first replication. Complete or partial replication. Detailed logging. Import Scenario Skyvia's [Import](https://skyvia.com/data-integration/import) can perform ETL and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) scenarios, depending on the direction of data transfer and the sources involved. In our case, we update Airtable with enriched data from BigQuery (a Task Score column was added there). [Log into your Skyvia account](https://app.skyvia.com/). Click +New->Import in the upper menu. Select BigQuery under Source and Airtable under Target. NOTE: In this example, the Task Score column was added to the table in BigQuery, so we need to add the corresponding column to the Airtable table before importing the enriched data. Click Add task to select the object for import and select the needed DML operation: INSERT copies records from BigQuery to Airtable without checking for duplicates. UPDATE updates the existing records in Airtable. UPSERT isn't supported for BigQuery. DELETE removes records by the specified primary key. Map source and target columns to match the data structures. Click Schedule to set up regular imports. Click Save. 
Click Run to start the integration and observe the progress on the Monitor tab. Once the process is complete, see the results in Airtable. Comparative Analysis We crafted a comparison table highlighting the differences between manual and automatic integration, with the strengths and weaknesses of each approach.

| | Skyvia | Manual Integration |
| --- | --- | --- |
| Advantages | Ease of use; suitable plans for everyone; setup in minutes; 170+ connectors; powerful data transformations and mapping; automatic incremental data updates | Easy to perform; free of charge |
| Disadvantages | Limited scheduling frequency under the free plan | Supports only the CSV file format; no incremental updates; very slow; possibility of data loss |

While the manual method is relatively simple and doesn't require deep technical knowledge, it has serious pitfalls, including possible data loss and low speed. Meanwhile, Skyvia effectively addresses these drawbacks by offering near-real-time data integration, precision, and powerful data mapping capabilities with an intuitive interface. Conclusion Airtable helps companies effectively track inventory, manage customer relationships, monitor project execution, etc. It lacks visualization and analytical capabilities of its own, but loading its data into the BigQuery data warehouse provides access to exactly that. Here, we have reviewed two methods for data exchange between Airtable and BigQuery: manual integration and Skyvia. Seasoned developers can also explore the API documentation of both tools and set up custom integrations, though this takes time and effort. As time is money, consider using Skyvia to bring Airtable and BigQuery together and transfer data in both directions to boost your business's overall productivity. 
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences." }, { "url": "https://skyvia.com/blog/api-vs-sftp-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "SFTP vs API Integrations: Which Is the Best to Choose? 
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) August 8, 2024 Integrating services with APIs is undoubtedly the most popular method for connecting different online services. SFTP integration is another popular way of exchanging data that's often used by organizations. Given the popularity of these approaches, how do you know which one is better for you? SFTP integration could be compared to an email service, where you send a message and receive a response after some time. API integration is more like a phone call, where you dial a number to contact a person or organization and get an immediate response. With these metaphors in mind, you now have a rough idea of how both integration approaches work. This article investigates SFTP vs. API integration in detail and suggests when each is more appropriate. Table of Contents What is SFTP Integration? Setting Up SFTP Integration What is API Integration? Implementing API Integration Main Similarities of SFTP and API SFTP vs API: Major Differences Final Thoughts What is SFTP Integration? Let's start with the File Transfer Protocol (FTP) to better understand [SFTP integration](https://skyvia.com/blog/understanding-sftp-automation/). FTP's primary purpose is to transfer files between a client and a server. SFTP does the same, but in a much more protected way: the S stands for secure, owing to the SSH (Secure Shell) protocol. 
SFTP also encrypts data before sending it over the channel. How SFTP Integration Works The most common SFTP integration scenario is the connection between an SFTP server and an application or system. You can send files from other systems to an SFTP server and manage access permissions to those files. The same works in the reverse direction when sending files from the server to another system. Overall, SFTP integration is an excellent option for transferring multiple files in bulk. Common Uses of SFTP Integration There's a variety of integration scenarios involving SFTP servers, and each business chooses the most suitable one for its particular objective. Here are some typical use cases and popular workflows involving SFTP integration. Data backup and recovery. An SFTP server is an excellent place for storing data copies from business-critical applications, such as Salesforce, HubSpot, QuickBooks, etc. Businesses can be sure that the data stored on the SFTP server is safe and secure. Data sharing. An SFTP server can be a storage pool for sharing reports or occasional files that can be accessed only by authorized stakeholders. Copying data from legacy systems. Many previous-generation applications and systems aren't compatible with modern ones, so one proven method to migrate data from a legacy system is to copy it to an SFTP server first and then integrate it into the destination app. Benefits of SFTP Integration Apart from the security it offers for data transfer and storage, SFTP grants other significant benefits, from handling large data volumes to preventing data modification in transit. Let's have a look at each of the advantages SFTP integration offers. Data integrity. SFTP guarantees that files on the input will have the same content as on the output, using hash codes to compare the file versions before and after transit. Authorized control. 
Only authorized users can access the contents of the SFTP server. File access control. It's possible to set permissions for each file on the SFTP server, guaranteeing role-based access. Batch transfer. SFTP integration allows users to send large files in bulk. SFTP Integration Limitations As any coin has heads and tails, SFTP integration has benefits and limitations. These aren't drawbacks, just restrictions that keep SFTP integration from being a suitable and effective solution in certain cases. No real-time data access. Since SFTP operates only on [batch data](https://skyvia.com/learn/what-is-batch-processing), it doesn't support real-time streams. Data can be sent to an SFTP server, or in the opposite direction, only at preset time intervals. Limited scalability. Since SFTP supports only [point-to-point integration](https://skyvia.com/blog/point-to-point-integration-pros-cons/), managing data flows becomes challenging as you add more integration scenarios. File format restrictions. SFTP integration supports CSV and other flat files, so transferring complex data structures or unstructured data isn't possible. Setting Up SFTP Integration The steps are similar regardless of the application or database connecting to an SFTP server: Generating SSH keys. Copying the public SSH key to the remote server. Establishing and testing the connection with the SFTP server. Transferring files to and from the SFTP server. You'll need to establish an individual SFTP integration point for each application or service, which takes considerable time and manual effort. As a solution, try the [Skyvia cloud platform](https://skyvia.com/), a convenient way of connecting an SFTP server with other apps in a unified environment. Skyvia also offers a range of data-related solutions, including data integration, SaaS backup, OData and SQL endpoint creation, workflow automation, and data query. 
Skyvia's Data Integration product offers several zero-code tools for data exchange between SFTP and [180+ other data sources](https://skyvia.com/connectors). [Import](https://skyvia.com/data-integration/import) is a visual wizard that allows users to create ETL and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) pipelines. You can load files from and to an SFTP server, apply transformations to data, and configure mapping settings. [Export](https://skyvia.com/data-integration/export) is a visual wizard allowing users to send their cloud app data to an SFTP server as CSV files. [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) is a visual pipeline designer with a drag-and-drop interface and a variety of components, allowing users to build more complex data flows involving several data sources and multistage transformations. Only a few steps are needed to set up an SFTP integration with Skyvia. Log into your [Skyvia account](https://app.skyvia.com/) or create a new one. [Set up the SFTP connector](https://docs.skyvia.com/connectors/file-storages/sftp_connections.html). Set up the integration scenario. In your Skyvia account, go to the +Create New section and select the integration scenario of your choice. For instance, you can send files from an SFTP server to Salesforce and vice versa by selecting the integration scenario (Import, Export, or Data Flow) and setting it up. What is API Integration? An [Application Programming Interface](https://www.ibm.com/topics/api), or simply API, is the technology that allows two software programs to communicate and interact with each other. Modern APIs adhere to specific standards (usually HTTP and REST), which makes them self-descriptive and developer-friendly. APIs have a standardized schema for interaction through a series of requests and responses. 
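A tiny self-contained sketch makes the request/response exchange concrete: below, a toy HTTP "API" is served locally, and a client calls it and parses the JSON response. This is purely illustrative, not how a production API is built:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ToyApiHandler(BaseHTTPRequestHandler):
    """The 'server' side: answer every GET with a small JSON document."""
    def do_GET(self):
        body = json.dumps({"path": self.path, "ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ToyApiHandler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The 'client' side: send a request and parse the structured response.
url = f"http://127.0.0.1:{server.server_port}/status"
response = json.loads(urlopen(url).read())
server.shutdown()
server.server_close()
```

Real-world APIs add authentication, versioning, and error handling on top, but the request/response contract stays the same.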
For instance, one program sends a request to the API with specific instructions, in the format defined by the API. The API receives the request, interprets it, communicates with the relevant system or database, retrieves the needed information, and sends it back as a response. How API Integration Works The way an API works is usually explained in terms of client-server architecture, where the application sending a request is the client and the one sending a response is the server. APIs can be defined in different ways, depending on when and why they were created. For instance, SOAP APIs use the Simple Object Access Protocol, where client and server exchange messages in XML. REST APIs define a set of operations such as GET, PUT, POST, DELETE, etc., and rely on HTTP for data exchange. API integration allows businesses to create an ecosystem where various software components work together to achieve business goals. It's possible to connect multiple APIs to enable data exchange between different applications and systems. For example, when a suite is booked via a hotel's app, the operation needs to be reflected on the hotel's profile on the Booking.com website. This is where an API communicates data from the app to the website, so the actual room availability is present on both platforms. Such communication ensures data synchronization and prevents double booking of the same room for the same date. In a way, an API is like Esperanto, the language artificially constructed more than a century ago to enable international communication. Common Uses of API Integration API integration is used almost everywhere today because it improves user experience, content management, and more. Let's examine some of the most popular business use cases for API integration. Inserting payment gateways into e-commerce websites. 
Such integration is very convenient for customers, as they can pay for services or goods on the platform where they ordered them. There's a variety of payment gateways, so each e-commerce website can accept credit cards, bitcoins, etc. Real-time monitoring of IoT devices. API integration makes it possible to connect devices to monitoring applications, ensuring alerts on system malfunctions and enabling timely intervention. Data integration between CRM and other marketing systems. APIs enable data exchange between different marketing applications, providing a solid base for targeted marketing campaigns. For instance, you may want to [send Salesforce data to Marketo](https://skyvia.com/blog/marketo-salesforce-integration/#What-is-Adobe-Marketo-Engage) to refine marketing automation approaches. Syncing inventory stocks. API integration allows an e-commerce website to exchange data with an inventory management system in real time. That way, you get the latest information on stock levels in both systems. Benefits of API Integration Along with the tremendous possibilities for connecting services across the web, API integrations offer a range of other benefits for businesses, from automation to customization. Real-time data handling. The architecture of APIs enables real-time data exchange, so the response arrives almost immediately after the request is submitted. Automated data transfers. Automate manual tasks, maximizing productivity and minimizing human error. Functionality extension. Benefit from possibilities that go beyond the standard functionality of your application through the addition of new features. Data granularity. Request specific details, such as a user's contact or address, instead of fetching the entire customer profile. API Integration Limitations Even though API integration seems to be an ideal solution for bringing different software services together, it also has some limitations. 
These are not drawbacks but rather constraints that become tangible in certain use cases. No big data support. If you need to transfer large data volumes, API integration requires more computational resources. This can slow data transfer and exchange, straining real-time operation. Dependence on internet connectivity. As APIs primarily rely on HTTP, a stable internet connection is crucial; the integration won't work properly in case of network interruptions. Complexity of integration. Setting up API integration is complex, as it requires deep knowledge of the API structure and software functionality. Access limitation. To build an API integration, you need access to the APIs and documentation of the service of interest. Some companies offer open API access, while others provide API access only on request. Implementing API Integration The steps for API integration are more or less the same for each business case. Here are the fundamental steps that form the skeleton of the process: Define your integration needs. First, understand which services you want to bring into your application, be it a payment gateway or a YouTube video player. Look into the API documentation. The most popular online services have extensive API documentation and guidelines that can be accessed publicly; this might be different for niche services. So, carefully explore the API documentation and see whether you can get support from the application provider. Start the integration. Obtain API keys and ensure they are managed securely. Create HTTP requests against the API endpoints, following the official API documentation. Execute testing. Check whether your application successfully communicates with the API and retrieves the needed data. Ensure your app can handle errors and provide clear messages to users. Main Similarities of SFTP and API Having explored SFTP and API integrations, it's time to compare them. 
This analysis will help you understand which method is better for a specific business use case. Here are some things both SFTP and API integrations have in common: Secure data transfer. Both approaches employ security protocols and features to protect data in transit. Automation options. SFTP and API integrations noticeably reduce manual effort and the probability of human error. Robust error handling. Both methods take an advanced approach to error handling, providing detailed messages to users. SFTP vs API: Major Differences Let's see the key differences between SFTP and API. Here are the most notable and distinctive ones: Data access frequency. API integration allows a service to request the needed data and receive it as soon as needed, guaranteeing real-time data exchange. With SFTP integration, users can send or receive data only at specific intervals. Data volumes processed. SFTP works on batches and can handle large data volumes, while APIs usually operate on granular data of small sizes. Scalability. As an organization grows, it becomes harder to maintain multiple SFTP integrations, one per data source; API integration offers a greater degree of scalability. Maintenance cost. SFTP integration is usually associated with lower implementation and maintenance costs. APIs often require higher investment, especially when high-frequency API calls, support, and advanced features on higher pricing tiers are needed. Flexibility. APIs allow customized data interactions, whereas SFTP follows a straightforward transfer process. Configuration complexity. Configuring an API integration is time-consuming and requires strong technical expertise. SFTP integration setup requires SSH key pair generation and exchange, communication with a remote server, and little coding. 
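For reference, the SSH key generation and exchange involved in an SFTP setup typically looks like the following at the command line. The hostname, username, and remote paths are hypothetical, and the steps that require a real server are shown as comments:

```shell
# 1. Generate a dedicated SSH key pair for the integration (no passphrase here,
#    purely for illustration; protect real keys appropriately).
ssh-keygen -t ed25519 -f ./sftp_integration_key -N "" -C "sftp-integration"

# 2. Hand the PUBLIC half (.pub) to the server administrator, or install it yourself:
#    ssh-copy-id -i ./sftp_integration_key.pub user@sftp.example.com

# 3. Test the connection and move files in batch mode:
#    sftp -i ./sftp_integration_key user@sftp.example.com <<'EOF'
#    put ./export.csv /inbound/export.csv
#    get /outbound/report.csv ./report.csv
#    EOF
```

The private key never leaves your machine; only the `.pub` file is shared, which is what makes the exchange step safe.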
To simplify things, use universal data platforms like Skyvia to set up a no-code SFTP integration at a low cost. Final Thoughts SFTP and API integrations have different target audiences. SFTP integration is more suitable for businesses that aim to send files in bulk over a secure channel at certain intervals. API integration is more about near-real-time data exchange between software services. SFTP integration is suitable for: data archiving, regular backups, and sharing flat files with stakeholders or partners. API integration is suitable for: extending the native functionality of proprietary applications and exchanging data in real time. Companies with limited IT resources and expertise can also benefit from SFTP integration, since the setup process is simpler than that of APIs. Things become even more accessible with [data integration platforms](https://skyvia.com/blog/data-integration-tools/) like Skyvia, which lets you connect an SFTP server to 190+ apps, databases, and data warehouses for data integration. 
" }, { "url": "https://skyvia.com/blog/asana-salesforce-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Asana and Salesforce Integration: 2 Easiest Methods By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/) October 7, 2022

When it comes to sales automation, there's no room for error, and clearly defined processes are a must. However, CRM systems, marketing automation, task management, reporting tools, etc., are tricky to make work smoothly together. Sometimes, even native integrations aren't the answer. [Sales teams could spend more than 60% of their time on different management tasks instead of selling itself](https://www.salesforce.com/blog/15-sales-statistics/). The right way to orchestrate the various services and improve sales performance is to eliminate the unnecessary hassle and build the right integration. This guide shows how to integrate two services, Asana and Salesforce, successfully.
Let's see how Salesforce and Asana integration can boost team collaboration through either the native integration or a third-party service that provides a more advanced solution for high-quality work.

Table of contents

- What is Salesforce?
- What is Asana?
- Key Benefits of Integrating Asana and Salesforce
- Method 1: Native Asana for Salesforce Connector to Integrate Data
- Method 2: Cloud Integration of Asana and Salesforce Using Skyvia
- Summary

What is Salesforce?

[Salesforce](https://skyvia.com/connectors/salesforce) is one of the leaders among CRM system providers for businesses of any size. It's cloud-based software, very popular for managing customer bases. It also offers an extensive feature set (data analytics, IoT devices, etc.) with an in-depth view of all customer interactions.

Key features:

- An open platform makes it easy to integrate multiple products; there are thousands of custom-built business solutions on the [Salesforce AppExchange marketplace](https://appexchange.salesforce.com/).
- Account and Contact Management. The service allows tracking of all customer activities (customer communications, internal account discussions, etc.).
- Visual Workflow provides an easy-to-use interface with a drag-and-drop feature to automate business processes.
- Opportunity Management. Get an overview of all essential details needed to work on a sale, like the stage, products, etc.

What is Asana?

[Asana](https://skyvia.com/connectors/asana) is a popular service for project management. It allows different departments working on the same projects and tasks to communicate and manage processes. The service lets users create projects, set deadlines, and more. It also helps teams work at full strength, as it supports integration with many well-known programs, including Tableau, Power BI, Slack, Salesforce, and many more.

Key features:

- Task management. The service gives a complete overview of projects so achievements are easy to see.
- Workflow builder. It helps bring clarity by systemizing requests, automating tasks, and overall improving processes through workflow templates.
- App integration. For cross-functional teams, staying in sync is a must-have, so Asana offers more than 280 connections with popular services.
- Customization. Any user can schedule and structure work conveniently using lists, boards, calendars, etc.

Key Benefits of Integrating Asana and Salesforce

Useful link: [Overview of Asana for Salesforce Integration](https://www.youtube.com/watch?v=J4nvkvhM-Qk) on YouTube.

Salesforce and Asana integration results in a much more organized CRM and enables sales professionals to collaborate with colleagues from other departments throughout the sales cycle. Associate a Salesforce record with an Asana task or project, and you can quickly run Asana actions inside Salesforce or, vice versa, execute Salesforce actions within the Asana interface:

- Pre-sale flows. Request pre-sales processes from various teams (legal, support, etc.).
- Keep track of the progress of the tasks connected to specific Salesforce records.
- Act on connected Asana tasks (view task details, comment, attach files, complete tasks, download files).
- Monitor details of Asana tasks in Salesforce without having to open Asana.
- Post-sales processes: reduce manual work, avoid missing critical context, and monitor the progress of post-sale implementation.
- Get rich previews of the data from Salesforce. These previews contain data on the Salesforce objects where tasks/projects were created. By default, the information follows the field order of the Salesforce Record details page; to customize the layouts, change the settings inside the Salesforce page layouts.

There are two possible ways of integrating Salesforce and Asana: either integrate natively with the application or use third-party software.
However, no tool is perfect, and though there's a native integration, sometimes it cannot meet your team's exact needs. Whenever you need a more flexible, robust solution, look for another service to improve performance.

Method 1: Native Asana for Salesforce Connector to Integrate Data

Check before integration: make sure you've reviewed your accounts and user permissions, because the native integration works only with premium accounts.

Asana requirements. Asana integration for Salesforce is available to all Business and Enterprise Asana users. It's optimized for the [Salesforce Lightning](https://skyvia.com/blog/salesforce-connect-guide/) version and compatible with Salesforce Classic. Note that the installation and configuration steps for the Classic and Lightning versions may differ.

Salesforce requirements. Only a Salesforce administrator can install Asana integration for Salesforce.

Step 1. Open the link in the [Salesforce AppExchange](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000FR4NTUA1) and click the Get it now button. Note! You cannot install the application from Asana, only from Salesforce AppExchange.

Step 2. Set up security levels and user permissions. Choose where to install Asana for Salesforce, within Organization or Sandbox, approve the third-party access, and confirm the installation.

Step 3. Link your Asana account with your Salesforce account and grant permission to access Asana data.

Step 4. Configure Asana settings: link your accounts, add an Asana component, or set up automatic project creation. If you want to create an Asana project based on Salesforce objects, you can use the process builder under the App Settings category.
In the editor for automatic project creation, you can choose the Opportunity stage (value proposition, closed won, etc.) that will trigger a new Asana project from a [custom template](https://asana.com/guide/team/advanced/create-use-asana-templates). Here, you can also select where the project should be created and which Asana custom template must be used. Be aware that new projects in Asana are named automatically (Opportunity name + Asana template name). For more information, check the [Asana for Salesforce integration user manual](https://docs.google.com/document/d/1O71zc8qjJw2vdqJf-B61EtlZ4BKijScMWspg9sfVTH4/edit#).

Important! If you have problems installing the Asana package in Salesforce, try changing your browser. The installation usually takes at least several minutes.

Due to the Asana privacy model, if two users look at the Asana component for the same Salesforce record, each sees only the tasks/projects their own permissions allow, so the two views may differ. There are no notifications in Salesforce about changes to Asana tasks/projects; all notifications come from Asana itself (email, web, or mobile app).

Pros and cons

This method has limitations and often produces errors, especially when two users work with the platforms simultaneously. You must have admin access and profiles to set up, configure, and monitor the connection.

Method 2: Cloud Integration of Asana and Salesforce Using Skyvia

Depending on your needs, Skyvia offers different scenarios of Asana to Salesforce integration. For example, if you need to transfer data partially, it's better to choose [data import by Skyvia](https://skyvia.com/data-integration/import). In short, you create two import packages: from Asana to Salesforce and vice versa. For more information about this method, please [read our documentation](https://docs.skyvia.com/data-integration/import/).
However, in this guide, we describe the scenario most in demand among our clients: cloud integration via data synchronization, which automatically synchronizes data between two connectors. As an example, we connect tasks between Asana and Salesforce. The first synchronization run transfers all data from source to target and from target to source. After that, each subsequent run processes only new or updated records, since every scheduled run starts by checking both connectors for data changes.

Important! Skyvia has limits in synchronization for Asana: only the Tasks and Projects objects are supported. The data import feature has no such limitations.

It takes three easy steps to set up cloud integration with Skyvia: establish connections between each of the two services and Skyvia, then synchronize the data.

1. Set up integration with Asana.
2. Set up integration with Salesforce.
3. Set up data synchronization between Asana and Salesforce.

Note! You must [create a Skyvia account](https://app.skyvia.com/login) if you don't have one yet. You can also try Skyvia with a free trial.

Step 1. Asana connection

[Sign in](https://app.skyvia.com/login) to Skyvia and click +NEW in the top menu. Then, click the Connection button in the menu on the left. On the opened Select Connector page, select the Asana service. The default name of a new connection is Untitled; click it to rename the connection, for example, to Asana test. Click Sign In with Asana and enter the credentials for your Asana account. After that, click Create Connection. After the Asana connection is created and saved, go back to the connection page.

Note! For more information, check the [connecting to Asana](https://docs.skyvia.com/connectors/cloud-sources/asana_connections.html) Skyvia documentation.

Step 2. Salesforce connection

The same actions as for the Asana connection are required for Salesforce.
Click the Connection button, open the Select Connector page, and choose the Salesforce service. Change the default name from Untitled by clicking on it. Then fill in the required information:

- Environment: choose the environment type of Salesforce to export data from.
- Authentication: choose the authentication method for connecting to Salesforce.

Depending on the chosen authentication type, either click the Sign in with Salesforce button or specify your Salesforce account e-mail, password, and security token. Create the connection by clicking the green button.

Note! For more information, check the [connecting to Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) Skyvia documentation.

Step 3. Asana and Salesforce synchronization

For Asana and Salesforce integration, click +NEW in the top menu. In the Integration column, click Synchronization. Rename the synchronization package by clicking and editing the package name, as the default one is Untitled. As the Source, choose Asana in the Connection drop-down list. As the Target, choose Salesforce from the Connection drop-down list. Now click the Add new button on the right to open the Task Editor for data synchronization. In the Task Editor, select the objects to be synchronized and click Next step. Now map the source fields to the target fields. In synchronization tasks, you need to specify the mapping for both directions.

Note! Some columns with the same names are mapped automatically by the service.

Then, map the target fields back to the source fields. After configuring the mapping for both directions, click Save. Once the synchronization package is created, let's keep the data in sync automatically. For this, click Schedule on the top left and configure the frequency of the package runs. Click Now to put the schedule into action immediately, or select At a specific time and specify the date and time you want the schedule to be enabled.
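Conceptually, each scheduled synchronization run does two things described above: it picks up only the records changed since the previous run, then applies the field mapping in the appropriate direction. A toy sketch of those two ideas; the field names and the `modified_at` timestamp are illustrative assumptions, not Skyvia's actual implementation.

```python
ASANA_TO_SF = {'name': 'Subject', 'notes': 'Description'}  # hypothetical field mapping

def invert(mapping):
    # Synchronization tasks need the mapping in both directions.
    return {v: k for k, v in mapping.items()}

def map_record(record, mapping):
    # Rename each source field to its target counterpart.
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def changed_since(records, last_sync):
    # First run (last_sync is None) transfers everything; later runs
    # process only records created or updated after the previous run.
    if last_sync is None:
        return list(records)
    return [r for r in records if r['modified_at'] > last_sync]

tasks = [
    {'name': 'Call client', 'notes': 'Friday', 'modified_at': '2022-10-01'},
    {'name': 'Send quote', 'notes': 'ASAP', 'modified_at': '2022-10-06'},
]
delta = changed_since(tasks, '2022-10-05')            # only the second task
sf_records = [map_record(r, ASANA_TO_SF) for r in delta]
```

Running `invert(ASANA_TO_SF)` gives the reverse mapping for the Salesforce-to-Asana direction, which is why the Task Editor asks you to configure both.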
Click Run on the top panel to start the integration. You can check whether the results are successful on the Monitor tab.

Pros and cons

The Skyvia platform can be used from anywhere in the world. As it's a cloud service, you save storage space and don't jeopardize the company's security. You can decide which objects to integrate and which operations to perform.

Summary

With the integration between Salesforce and Asana, sales teams gain a comprehensive view of how they're doing. They can onboard new customers faster, improve sales productivity, streamline project execution, and win more deals. In this article, we explained how to integrate Asana and Salesforce using two methods: the native integration from Asana or a custom solution from Skyvia. The integration from Asana is challenging and problematic, with a limited set of features, while the custom integration with Skyvia provides many more advanced features. Still, the choice depends on your business requirements and needs.

More articles about Salesforce integrations:

- [Salesforce and Amazon S3 Integration: Implementation Guide](https://skyvia.com/blog/salesforce-to-amazon-s3-integration/)
- [Snowflake to Salesforce Integration: Best Code-Free Connectors](https://skyvia.com/blog/snowflake-to-salesforce-integration/)
- [How to Connect Salesforce to AWS Redshift Manually or with Connector](https://skyvia.com/blog/salesforce-to-amazon-redshift-integration/)

[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/) With years of
experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers." }, { "url": "https://skyvia.com/blog/automate-bulk-csv-imports-to-shopify/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) How to Automate CSV Bulk Imports of Products or Orders to Shopify By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) March 21, 2024

Filling in the product details for each and every item in Shopify might take ages. Updating product characteristics and prices at once might also take days of human labor. A great alternative to manual work is to bulk import a CSV to Shopify with a list of all products and their parameters, including photos.
With CSV import to Shopify, order updates and customer information actualization can be automated. This article discusses possible ways to [automate Shopify CSV import](https://skyvia.com/data-integration/shopify-csv-file-import-and-export), contrasting them with the manual method. It also introduces Shopify's built-in functionality and the Skyvia service for CSV bulk loading.

Table of Contents

- Introduction to Shopify CSV Import
- Different Kinds of Data Importable from CSV to Shopify
- Manually Import CSV into Shopify
- Disadvantages of the Manual Method
- Automated CSV Import to Shopify Using Skyvia
- How to Import Products to Shopify
- How to Import Orders to Shopify
- Services Suitable for Automated Import
- Key Features of Skyvia
- Summary

Introduction to Shopify CSV Import

Thanks to its simple structure, CSV has become very common on the Internet. Two programs can exchange data by importing or exporting CSV files if they both support this format. Shopify has a built-in option for manually uploading CSV files. This method is suitable for importing products, customers, contracts, and other objects, and it works best for minor updates of already existing items. However, the manual method is ineffective when major updates or bulk additions are necessary. For instance, when [switching to Shopify from Magento](https://skyvia.com/blog/magento-to-shopify-migration/) or another e-commerce platform, CSV data upload becomes too tedious, requiring too much effort and attention. Hence the need to streamline the process of loading tons of data.

Different Kinds of Data Importable from CSV to Shopify

A range of object types can be imported to Shopify via CSV, either manually or using tools.

- Products. A detailed overview of how to import/export products is provided below.
- Inventory. The file structure for inventory is similar to that of products but contains a slightly different set of columns.
[See details on inventory CSV import](https://help.shopify.com/en/manual/sell-in-person/shopify-pos/inventory-management/export-import-inventory-csv-pos#import-inventory).

- Customers. Similarly to products, there's a CSV template file with the required structure. It helps to create customer lists and then upload them into Shopify. [See details here](https://help.shopify.com/en/manual/customers/import-export-customers).
- Contracts. It's possible to import contracts into the Subscription section of the Shopify online store from other apps. [See details here](https://help.shopify.com/en/manual/products/purchase-options/shopify-subscriptions/import-export/importing-and-exporting).

The above-mentioned objects can be loaded into a Shopify store via CSV file. However, many other objects can't be integrated in the same way. Here comes Skyvia to import orders, customers, inventory levels, products, etc. Moreover, with mapping and transformation applied, almost any data from Shopify can be exported to other systems.

Manually Import CSV into Shopify

Shopify offers built-in functionality for manually importing CSV files within the Products section. There are two common scenarios for uploading product information into Shopify:

- Relocating to Shopify from another e-commerce platform.
- Making updates to your current Shopify store.

Now, let's look at the specific instructions for manual CSV file upload in each case.

NOTE: The CSV file mustn't exceed 15 MB in size. Otherwise, consider Skyvia for importing large files.

Importing Products from Another Store

NOTE: Before uploading a CSV file with product data into Shopify, check whether it corresponds to the required format. For that, [download a sample CSV file](https://beta-help.shopify.com/en/manual/products/import-export/using-csv#product-csv-file-format), check whether the structure of your file matches the sample, and make adjustments if needed.

1. Log into your Shopify account.
2. Go to the Products section.
3. Click Import in the upper-right corner of the page.
4. Click Add file and select the needed CSV file with the product data from your computer.
5. Select Overwrite product with matching handles to substitute details of already existing products with those from the CSV file.
6. Click Upload and review to check the import details.
7. Click Import products.

Once the import process is complete, you'll receive a confirmation email at the account associated with the Shopify online store.

Making Bulk Product Updates

Before making mass updates to the products on Shopify, export them into a CSV file first.

1. Log into your Shopify account.
2. Go to the Products section.
3. Click Export in the upper-right corner of the page.
4. Select the export options for the products to be exported.
5. Select the CSV file for Excel option.
6. Click Export products.

The file arrives at your email address. Download it from there, make the appropriate changes to product values in a spreadsheet program, and import the file back to Shopify.

1. Go to the Products section.
2. Click Import in the upper-right corner of the page.
3. Click Add file and select the needed CSV file with the product data from the computer.
4. Select Overwrite product with matching handles.
5. Click Upload and review to check the import details.
6. Click Import products.

Disadvantages of the Manual Method

Even though the manual method is rather transparent and simple, it might not be the best option for uploading multiple CSV files into Shopify. The major disadvantage is that it can't be automated in any way: every time a product is updated or new items are added, everything has to be done by hand. Another drawback of the manual method is the absence of error detection when a wrongly structured CSV file gets uploaded.
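Since the manual importer gives little feedback on malformed files, a quick pre-upload check can catch problems early. A minimal sketch, assuming the 15 MB limit mentioned above; the expected header columns here ('Handle' and 'Title') are just an illustrative subset of Shopify's product CSV template, not the full required set.

```python
import csv, io

MAX_BYTES = 15 * 1024 * 1024          # Shopify's manual-upload size limit
EXPECTED = {'Handle', 'Title'}        # illustrative subset of template columns

def check_product_csv(text):
    # Returns a list of problems found; an empty list means the file looks OK.
    problems = []
    if len(text.encode('utf-8')) > MAX_BYTES:
        problems.append('file exceeds 15 MB')
    header = next(csv.reader(io.StringIO(text)), [])
    missing = EXPECTED - set(header)
    if missing:
        problems.append('missing columns: %s' % ', '.join(sorted(missing)))
    return problems

sample = 'Handle,Title,Vendor\nred-mug,Red Mug,Acme\n'
issues = check_product_csv(sample)    # empty list: header and size are fine
```

Running such a check before clicking Upload and review saves a round trip through a failed import.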
Automated CSV Import to Shopify Using Skyvia

Specific data automation and integration tools are needed to optimize the transfer of bulk data into Shopify. Let's look at how to streamline CSV import with [Skyvia](https://skyvia.com/), a universal SaaS platform designed for multiple data-related tasks. It allows users to build both conventional and complex data integration pipelines, perform bi-directional sync, load data into a data warehouse, automate CSV import, etc.

How to Import Products to Shopify

Here is a real use case for importing a CSV with product data into Shopify. One of Skyvia's clients runs an online shop cooperating with multiple vendors supplying goods for it. Vendors regularly send product data as CSV files to an FTP server, and the website admin then needs to load them into Shopify. As the total number of product items exceeds 200,000, it would take ages to update everything manually every time a supplier adds new items or changes existing product characteristics. Luckily, Skyvia's CSV import into Shopify automates everything and ensures a hands-free experience. It helps connect Shopify and FTP, set the mapping, and schedule the data transfer.

NOTE: Some Shopify objects, such as Products and Orders, have a compound structure, since several fields include nested objects. When importing CSV files into Shopify using Skyvia, it's necessary to map them in JSON format, which is complicated and time-consuming. So, it's better to use a template product CSV file or download existing products from Shopify into a CSV file to use as a base for updates and new items. If your vendors regularly update their stock and send it to you, you may also ask them to follow that CSV file structure.

To import data from CSV to Shopify, proceed as follows:

1. [Log into Skyvia](https://app.skyvia.com/registers/?).
2. Configure the [Shopify connector](https://skyvia.com/connectors/shopify).
3. Go to +NEW -> Import in the upper menu.
4. Select CSV upload manually or CSV from storage service as a source.
5. Select the Shopify connector as a target.
6. Click Add task.
7. Select the source CSV file and set the mapping between CSV and Shopify.
8. Save the task.
9. Repeat the task creation for other CSV files, if any.
10. To ensure products get updated regularly in Shopify, click Schedule and define the timing parameters for the integration.
11. Click Run to start the import.

All new products and updates now appear under the Products section. If your vendors regularly update products and upload refreshed CSV files to Dropbox, Amazon S3, FTP, SFTP, or Azure File storage, Skyvia lets you configure automatic file uploading using the [file mask option](https://docs.skyvia.com/data-integration/import/how-to-guides/how-to-import-CSV-files-via-file-masks.html). In the Task Editor, under the Source Definition tab, click Use File Mask under CSV Mode. Specify the folder to load CSV files from, the file mask, and the timezone. File masking isn't included in the free plan, but it comes with many other useful options in the paid version of Skyvia. The cost starts at $15 and depends on the number of data records operated on.

How to Import Orders to Shopify

If there are multiple channels for exposing and selling goods online, it's worth synchronizing order information across the various points of sale. The solution is to first export order data from Etsy, eBay, or other marketplaces into a CSV file and then import it into Shopify using Skyvia.

1. [Log into Skyvia](https://app.skyvia.com/login?).
2. Go to +NEW -> Import in the upper menu.
3. Select CSV upload manually as a source.
4. Select the Shopify connector as a target.
5. Click Add task.
6. Select the CSV file with orders and set the mapping between CSV and Shopify.
7. Save the task.
8. Click Run to start the import.

Now, the orders are added to Shopify.
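The earlier note about compound objects can be made concrete: a flat CSV typically carries one row per variant, while Shopify's Products object nests variants inside the product. A minimal sketch of that reshaping, grouping rows by the Handle column; the column names and record shape are hypothetical, not Skyvia's or Shopify's exact schema.

```python
import csv, io

def rows_to_products(csv_text):
    # Group flat CSV rows by handle and nest the per-row variant data,
    # mirroring the JSON-style mapping a compound object requires.
    products = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        p = products.setdefault(row['Handle'], {'title': row['Title'], 'variants': []})
        p['variants'].append({'sku': row['SKU'], 'price': row['Price']})
    return products

csv_text = (
    'Handle,Title,SKU,Price\n'
    'mug,Mug,MUG-S,9.00\n'
    'mug,Mug,MUG-L,12.00\n'
)
products = rows_to_products(csv_text)   # one product with two nested variants
```

Two rows sharing the handle 'mug' collapse into a single product with two variants, which is exactly the flat-to-nested mapping the NOTE above warns about.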
Services Suitable for Automated Import

As mentioned above, Skyvia allows data to be transferred to Shopify from other platforms, online services, and applications. Skyvia supports 180+ tools in various categories:

- CRM software (Salesforce, HubSpot, Dynamics 365, Zoho CRM, etc.)
- Accounting software (FreshBooks, QuickBooks, Xero, etc.)
- E-commerce platforms (Shopify, Magento, WooCommerce, etc.)
- Marketing automation tools (Mailchimp, Constant Contact, etc.)

Find the list of all supported connectors [here](https://skyvia.com/connectors/). Just select the integration scenario from the menu, make several clicks, and schedule the data transfer.

Key Features of Skyvia

The flagship product of Skyvia is called [Data Integration](https://skyvia.com/data-integration/). It comprises several tools, each designed for a particular integration scenario:

- [Export](https://skyvia.com/data-integration/export). The platform can extract data from cloud apps into CSV files, either directly to a computer or to supported file storage systems, including FTP, SFTP, Dropbox, Box, Google Drive, OneDrive, Amazon S3, and Azure File Storage.
- [Import](https://skyvia.com/data-integration/import). Skyvia uses CSV files as a source for importing data into apps, data warehouses, and databases.
- [Advanced Data Pipeline / Data Pipeline Designer](https://skyvia.com/data-integration/). Skyvia offers Data Flow and Control Flow tools for more complex data integration scenarios. Data Flow can operate on CSV files for both sophisticated import and export operations. Note that advanced pipelines are available under paid Skyvia plans.

Skyvia's Data Integration functionality also includes tools for [data warehousing](https://skyvia.com/data-integration/replication) and [data synchronization](https://skyvia.com/data-integration/synchronization).
Advantages of Skyvia

- No-code user interface
- Only a web browser is required
- Advanced transformation and mapping
- Filtering conditions
- DML operations supported
- Compliance with security standards and regulatory requirements
- Informative documentation and tutorials

Case Studies

Skyvia imports data to Shopify from CSV files or other services, though its capabilities go far beyond the boundaries of this use case. This SaaS platform can also transfer data in the other direction, from Shopify to other online and offline tools.

- Integrate Shopify and Salesforce
- [Connect Shopify and QuickBooks](https://skyvia.com/blog/shopify-quickbooks-online-integration/)
- [Integrate Shopify with HubSpot](https://skyvia.com/blog/shopify-and-hubspot-integration-ultimate-guide/)

Summary

[Loading data into Shopify](https://skyvia.com/data-integration/shopify-csv-file-import-and-export) becomes necessary when products are added or updated or when the entire online store gets migrated from another e-commerce platform. When minor changes and updates to existing products are needed, the manual CSV upload method within Shopify is fine. However, store migration or bulk updates certainly need a touch of automation. The Skyvia platform, designed for transferring data between different sources, could be your best helper. It automatically loads data from CSV files and matches it to Shopify fields. Moreover, it can establish connections between Shopify and other apps or databases for the import/export of orders, contacts, products, and other objects.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)" }, { "url": "https://skyvia.com/blog/aws-etl-tools/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Top 10 AWS ETL Tools for 2025: Enhance Your Data Integration By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) May 15, 2025
The explosion of data has changed the rules. Every team, from analytics to engineering, is under pressure to unify, clean, and move massive volumes of data faster than ever. And when your infrastructure runs on AWS, the stakes are even higher. Whether you're storing data in S3, running analytics in Redshift, or managing applications in RDS, your ETL pipeline is the backbone of performance, scalability, and insight. But here's the catch: not all [ETL tools](https://skyvia.com/blog/etl-tools/) are created equal, especially in the AWS ecosystem. Choosing the right AWS ETL tool means balancing performance and cost, ensuring compatibility with your architecture, avoiding vendor lock-in, and picking a platform your team can actually work with. Get it wrong, and you risk bloated costs, pipeline failures, or worse: inaccurate data. That's why this guide cuts through the noise with a side-by-side comparison of the best ETL tools for AWS, including both native services like AWS Glue and third-party platforms like Skyvia, Hevo, and Stitch.

You'll learn:

- What AWS-native and external ETL tools can do (and where they differ)
- Key selection criteria: speed, ease of use, pricing models, extensibility
- Which tools suit your use case: batch vs. real-time, SQL vs. no-code, etc.

By the end, you'll have a clear framework for choosing the best ETL tool for your AWS environment, whether you're running a lean startup or a multi-petabyte enterprise stack.
Table of contents

- What is AWS ETL
- Top AWS ETL Tools: A Comparative Review (2025)
  - Skyvia
  - AWS Glue
  - AWS Data Pipeline
  - Fivetran
  - Stitch
  - Hevo Data
  - Talend Data Fabric
  - Informatica
  - Integrate.io
  - Airbyte
- How to Choose the Best AWS ETL Tool
- Native AWS ETL Tools Included in the ETL Process
- Conclusion
- FAQ

What is AWS ETL

In data processing, Extract, Transform, and Load (ETL) means extracting data from various sources, transforming it to make it usable and insightful, and loading it into a destination such as a database, data warehouse, or data lake. A key advantage of running ETL in the cloud is scalability and lower cost compared to on-premises counterparts. Amazon Web Services (AWS) provides many native services for extracting, transforming, and loading data within the AWS ecosystem. Each tool is designed for a different purpose and comes with its own set of supported data sources, use cases, and pricing models. Let's discuss the most popular native AWS ETL tools in detail and look at their advantages and limitations.

Top AWS ETL Tools: A Comparative Review (2025)

With a growing mix of native AWS services and third-party platforms, choosing the right ETL tool can be overwhelming. This section breaks down the top options, comparing their strengths, limitations, and ideal use cases.

1. Skyvia

[G2 Crowd: 4.8/5](https://www.g2.com/products/skyvia/reviews#survey-response-9097470)

Skyvia is a no-code, cloud-based data integration platform that supports ETL, ELT, and reverse ETL, all through a browser. While Fivetran focuses primarily on ELT and AWS Glue targets enterprise Spark-based workloads, Skyvia stands out with its intuitive visual interface, wide connector coverage (200+), and all-in-one integration toolkit for AWS and SaaS ecosystems. It's especially suited for teams without deep data engineering resources who still need robust ETL capabilities.
Pros:

- No-code visual interface with drag-and-drop pipeline builder
- Supports [200+ connectors](https://skyvia.com/connectors), including AWS Redshift, RDS, and S3
- Includes reverse ETL and cloud backup features, unlike Fivetran, which focuses only on ELT
- Free tier available with full-featured functionality for small-scale needs

Cons:

- Doesn't support real-time streaming; best suited for scheduled syncs
- Advanced features like custom scripting and logging require a Professional plan
- Fewer community tutorials and templates than older, enterprise-focused platforms

Pricing:

- Starts at $79/month for Standard Data Integration
- Professional plans start at $199/month, and a tailor-made offer is available for enterprise needs

Best for: SMBs and mid-sized teams looking for a visual, all-in-one ETL/ELT platform with deep AWS + SaaS integration, especially when dedicated data engineering resources are limited.

2. AWS Glue

G2 Crowd: 4.2/5

AWS Glue is a serverless, Spark-based ETL service native to the AWS ecosystem. Unlike Skyvia and Fivetran, which offer low-code/no-code interfaces and broader third-party connectivity, Glue is designed for high-performance processing of large datasets within AWS infrastructure. It's best suited for enterprises already operating at scale and managing complex batch workflows.
Pros:

- Deep native integration with 70+ AWS services, including S3, Redshift, Athena, and Lake Formation
- Serverless architecture automatically scales with workloads
- New Spark 3.5 engine in Glue 5.0 improves job performance by ~32%
- Built-in metadata management through the Glue Data Catalog

Cons:

- Steep learning curve and AWS-specific complexity
- Costs can grow quickly: $0.44 per DPU-hour can add up with large workloads
- No built-in support for reverse ETL or external SaaS destinations

Pricing:

- Pay-as-you-go model starting at $0.44 per DPU-hour
- Glue 5.0 introduces optimizations that can reduce costs by up to 22% compared to earlier versions

Best for: Large-scale enterprises already embedded in AWS needing powerful, serverless ETL with Spark: ideal for batch processing at petabyte scale and tight AWS-native integration.

3. AWS Data Pipeline

G2 Crowd: 4.1/5

AWS Data Pipeline is one of the oldest AWS-native orchestration tools, built for scheduled data movement and transformation. Compared to newer options like AWS Glue or third-party tools like Skyvia, Data Pipeline offers more granular control over scheduling and retries but lacks modern interfaces, real-time streaming, and SaaS integrations. It's best for legacy batch workflows with a consistent cadence.

Pros:

- Built-in support for AWS-native services: S3, RDS, DynamoDB, Redshift
- Offers advanced scheduling, retry logic, and dependency-based orchestration
- Supports running custom scripts across AWS or on-prem infrastructure
- Cost-effective for low-frequency, batch-style ETL workflows

Cons:

- Outdated UI and limited documentation compared to newer AWS tools like Glue
- Doesn't support modern streaming use cases or real-time data sync
- Monitoring and logging require manual configuration
- Slower development pace; not ideal for long-term futureproofing

Pricing:

- $1.00/month for low-frequency pipelines; $3.00/month for high-frequency ones
- Additional AWS resource usage (EC2, S3, etc.)
billed separately

Best for: Teams already committed to AWS, looking for reliable, low-cost [batch ETL](https://skyvia.com/blog/batch-etl-processing/) orchestration that doesn't require real-time delivery or frequent scaling.

4. Fivetran

G2 Crowd: 4.2/5

Fivetran is a fully managed ELT-first platform that emphasizes speed, simplicity, and automation. Unlike AWS Glue, which requires Spark knowledge, or Skyvia, which allows transformation before load, Fivetran pushes raw data into warehouses like Redshift first, leaving downstream tools to transform it. It shines for its maintenance-free connectors and seamless schema evolution but lacks the transformation flexibility of visual ETL tools.

Pros:

- Fully managed SaaS platform with zero-maintenance pipeline setup
- Extensive library of prebuilt connectors for SaaS tools, databases, and data warehouses
- Supports ELT with automatic schema mapping and transformation staging
- Robust incremental syncing and automated error resolution

Cons:

- Pricing based on Monthly Active Rows (MAR) can scale quickly with frequent updates/deletes
- Less flexibility for custom transformations compared to visual ETL tools like Skyvia
- Some connectors may lag behind in supporting newer APIs or custom fields
- Reverse ETL and data export workflows are limited

Pricing:

- Starts at ~$120/month for low-volume usage
- Enterprise plans vary based on usage and support needs; can reach several thousand per month at scale

Best for: Organizations with a mature data stack prioritizing automation, reliability, and low-lift integration, especially for syncing cloud apps into Redshift or Snowflake.

5. Stitch

G2 Crowd: 4.4/5

Stitch is a lightweight, cloud-native ELT tool built for simplicity and speed. Compared to more comprehensive platforms like Talend or Skyvia, Stitch focuses on data ingestion, which is especially useful for small to mid-sized teams wanting fast, no-fuss replication into AWS services like Redshift and S3.
While it lacks deep transformation or data governance features, it stands out with transparent pricing, speed of deployment, and strong compliance credentials.

Pros:

- Easy-to-use, no-code setup with over 140 supported connectors
- Predictable row-based pricing model, unlike usage-based models with unpredictable costs
- SOC 2 Type II, HIPAA, GDPR, and ISO 27001 compliant, suited for regulated industries
- Based on the open-source Singer standard, allowing extensibility with custom connectors
- 14-day full-featured free trial and detailed documentation

Cons:

- Minimal built-in transformation; better suited for ELT pipelines, as complex logic needs external tools
- No support for row-level filtering or advanced schema validation
- Some connectors may experience sync lags with large data volumes or changing APIs
- No free tier beyond trial; entry pricing may deter smaller teams

Pricing:

- Standard: $83.33–$100/month (up to 5M rows)
- Advanced: $1,250/month
- Premium: $2,500/month (annual billing)
- Enterprise: custom pricing
- Free trial: 14 days (no credit card)

Best for: Data teams prioritizing simplicity, compliance, and broad SaaS/app connector coverage over complex transformation needs. Ideal for mid-sized companies replicating data to AWS Redshift or S3 on a regular cadence.

6. Hevo Data

G2 Crowd: 4.3/5

Hevo Data is a fully managed, no-code ETL/ELT tool focused on real-time data replication. Unlike Stitch or Skyvia, which primarily support scheduled loads, Hevo's change data capture (CDC) support makes it ideal for low-latency AWS pipelines. It combines flexibility and real-time sync with solid monitoring, schema handling, and automated transformations, though complex logic might still require scripting.
Pros:

- Real-time sync with automated schema mapping and CDC
- Supports 150+ data sources, including AWS, GCP, SaaS apps, and databases
- Built-in transformation layer with basic enrichment capabilities
- Robust monitoring, error handling, and alerting dashboard
- No-code interface, yet allows advanced data flow configuration

Cons:

- UI can get cluttered in advanced multi-source/multi-destination pipelines
- Advanced transformations may require external logic or scripting
- Pricing can scale rapidly with many connectors or large event volumes

Pricing:

- Free tier: 1M events/month
- Starter: ~$239/month (20M events/month)
- Business & Enterprise: custom quotes based on volume, SLA, and support
- Scales based on events, not rows, which suits sync-heavy use cases

Best for: Teams needing real-time or near-real-time data pipelines into destinations like Redshift, S3, or Snowflake, especially for analytics dashboards and live ops monitoring. A strong fit for data-driven orgs without in-house ETL engineering.

7. Talend Data Fabric

G2 Crowd: 4.4/5

Talend Data Fabric is a comprehensive data integration and governance suite designed for enterprises with complex, regulated, or hybrid cloud environments. Unlike Skyvia or Stitch, Talend provides built-in data quality, MDM, and lineage tools, making it suitable for teams needing end-to-end data oversight. While powerful, it comes with a steep learning curve and enterprise-grade pricing.
Pros:

- Complete data management suite: ETL + data quality + governance + MDM
- Supports multi-cloud and hybrid deployments across AWS, Azure, GCP, and on-prem
- Strong data transformation capabilities, including cleansing, masking, and enrichment
- Includes data lineage and audit trails, critical for compliance-heavy industries
- Backed by a large open-source community (Talend Open Studio) and enterprise support

Cons:

- Higher complexity and longer onboarding time compared to no-code tools like Hevo or Skyvia
- High total cost of ownership, including licensing, infrastructure, and support costs
- Some features gated behind premium tiers or available only in the enterprise edition

Pricing:

- No public pricing; contact sales for custom quotes
- Talend Open Studio (free, open source) available with limited support/features

Best for: Enterprises needing deep transformation, strict data governance, and hybrid cloud control, especially in regulated industries (finance, healthcare, manufacturing) with compliance and audit requirements.

8. Informatica

G2 Crowd: 4.4/5

Informatica is an enterprise-grade platform with decades of data integration expertise. Unlike open-source options like Airbyte or lightweight solutions like Stitch, Informatica excels in complex, governed ETL/ELT pipelines for large-scale enterprises. Its suite includes AI-powered automation, granular security controls, and deep metadata management, ideal for hybrid and multi-cloud environments requiring bulletproof compliance and end-to-end governance.
Pros:

- Proven enterprise leader with robust ETL and data governance capabilities
- Extensive connector ecosystem covering AWS, SaaS, legacy, and on-prem systems
- Advanced features for data quality, lineage, masking, and cataloging
- Highly scalable and secure, built for enterprise and regulated industries

Cons:

- Steep learning curve and complex architecture, not suited for smaller teams
- High total cost of ownership; requires dedicated IT/data ops resources
- Some modules/features are gated behind premium tiers

Pricing:

- Contact sales for a quote
- Cloud Integration entry-level plans start around $2,000/month
- Free trials available for select products

Best for: Large enterprises that require robust ETL plus governance and security, particularly across hybrid or multi-cloud deployments. Ideal for regulated sectors like healthcare, finance, or government.

9. Integrate.io

G2 Crowd: 4.3/5

Integrate.io is a no-code/low-code ETL platform that emphasizes ease of use for cloud data movement. While Talend offers more enterprise-grade governance, Integrate.io shines with its drag-and-drop simplicity and support for real-time streaming, which makes it appealing to data teams without heavy engineering overhead. It's a flexible middle ground between lightweight tools like Stitch and heavyweight platforms like Informatica.
Pros:

- Intuitive visual interface with no-code pipeline builder
- Supports 140+ data sources, including AWS, cloud apps, and SQL/NoSQL databases
- Handles both batch ETL and real-time streaming workflows
- Transparent pricing model with usage-based scaling
- Built-in transformation and enrichment capabilities

Cons:

- Advanced custom workflows may require scripting or API workarounds
- UI can feel constrained for very complex, branching data logic
- Pricing can escalate with high-volume or high-frequency jobs

Pricing:

- Starts at $1,999/month
- Free trial available
- Usage-based scale-up available for high-volume needs

Best for: SMBs and mid-market orgs seeking a visual, low-maintenance ETL tool to move data into AWS services (Redshift, S3, etc.) without building pipelines from scratch.

10. Airbyte

G2 Crowd: 4.5/5

Airbyte is an open-source ELT platform known for its flexibility and rapidly expanding connector library. Unlike Skyvia or Hevo, which are fully managed SaaS platforms, Airbyte provides self-hosted and cloud options, giving teams full control over connectors, transformations, and deployment. It's a go-to solution for organizations with dev resources looking to avoid vendor lock-in and build customized integrations at scale.
Pros:

- Open-source and self-hosted by default, free to use and fully customizable
- 350+ connectors and counting, with a strong developer community
- Supports CDC, incremental sync, and ELT into AWS targets
- Cloud-managed option available for teams without DevOps overhead

Cons:

- Self-hosted version requires infrastructure management and monitoring
- Cloud version is newer and still maturing in terms of enterprise features
- Some connectors may lack full parity with commercial vendors

Pricing:

- Open source: free (self-hosted)
- Cloud: starts at $2.50/credit; includes a free tier for low-volume usage
- Enterprise: custom pricing available

Best for: Teams seeking open-source extensibility, or companies with unique connector needs not met by proprietary tools. Ideal for orgs that want to own their ETL pipeline architecture and scale affordably with internal DevOps support.

How to Choose the Best AWS ETL Tool

Selecting the right AWS ETL tool is more than a technical decision; it's a strategic choice that can shape your data infrastructure's scalability, agility, and total cost of ownership. In 2025, with more tools and hybrid architectures than ever, it's crucial to align your choice with your data volumes, team capabilities, and long-term goals. Here are the key factors to consider when evaluating your ETL options:

1. Assess Your Data Volume and Load Frequency

Start with the basics: how much data do you process, and how often? If you're working with real-time data streams (e.g., IoT sensors, in-app user events), prioritize ETL tools with streaming or near real-time ingestion support, like Hevo or Airbyte. For scheduled batch jobs, such as daily sales reports or weekly data syncs, tools like AWS Glue or Skyvia may offer a better price-performance balance. The right tool should scale seamlessly as your data grows, without hitting performance bottlenecks.

2. Match Tools to Transformation Complexity

Not all ETL pipelines are created equal.
If your workflows involve simple data cleaning, type conversions, or formatting, lightweight tools with visual designers (e.g., Skyvia, Stitch) will suffice. For complex joins, aggregations, or advanced logic, choose platforms like Informatica or Talend, which offer powerful transformation engines and support for custom scripting. Some tools are ELT-first (like Fivetran), while others offer true in-pipeline ETL, which may better suit data governance and compliance workflows.

3. Map Your Data Sources and Destinations

Where your data lives, and where it's going, matters. If you're fully embedded in AWS (Redshift, S3, RDS), native services like Glue or Data Pipeline offer tight integration and minimal friction. For hybrid or multi-cloud environments, look for tools with a broad range of connectors (e.g., Hevo, Airbyte, Skyvia) and support for cross-platform transfers. Choose based on your current architecture and future platform flexibility.

4. Prioritize Ease of Use vs. Customization

Evaluate your team's skill set and bandwidth. No-code or low-code tools like Integrate.io, Skyvia, or Hevo are great for teams with limited data engineering resources. Developer-focused platforms like Airbyte or AWS Glue offer deeper control but require more technical overhead. The right balance depends on whether your team wants rapid delivery or fine-tuned customization.

5. Understand the True Cost of Ownership

[ETL pricing](https://skyvia.com/blog/etl-cost/) models vary widely: by row, by event, by compute usage. Tools like Stitch and Fivetran price by Monthly Active Rows or events, which can escalate rapidly with high-volume updates. Others, like AWS Glue, charge per DPU-hour, while Skyvia uses predictable flat-tier pricing. Always consider not just the base cost but also scaling behavior, support costs, and overage risks.

6. Check Monitoring, Alerting, and Error Handling

When something breaks, and it will, visibility is key.
Look for tools with built-in dashboards, logging, and retry mechanisms. Platforms like Hevo and Skyvia offer intuitive error tracking and alerting, while Airbyte's open-source model gives full control over log pipelines. This is essential for production-grade pipelines and SLA compliance.

7. Decide If You Need Real-Time Processing

If your use case involves live dashboards, customer behavior modeling, or fraud detection, batch ETL won't cut it. Choose tools with real-time CDC (Change Data Capture) capabilities, such as Hevo, Airbyte, or Fivetran (for certain sources). For less time-sensitive workloads, scheduled batch ETL tools may be more cost-efficient.

8. Evaluate Infrastructure Control Needs

Do you want to control the compute layer or let the tool handle it? Serverless options like AWS Glue minimize infrastructure management. If you prefer custom compute tuning or need to optimize cost/performance trade-offs, consider tools like Talend or self-hosted Airbyte. This choice often depends on your organization's DevOps maturity and compliance requirements.

9. Verify Integration with Storage and BI Tools

Your ETL tool should fit cleanly into your broader data stack. Ensure compatibility with AWS-native storage (S3, Redshift, RDS) and analytics platforms (Athena, QuickSight, etc.). Some tools, like Skyvia and Hevo, also offer reverse ETL, enabling you to sync transformed data back into operational systems.

Native AWS ETL Tools Included in the ETL Process

For organizations working primarily within the AWS ecosystem, native ETL tools offer tight service integration, scalability, and cost-efficiency, especially when building pipelines around Redshift, S3, or RDS. Here are five core AWS-native tools commonly used in ETL workflows, along with their ideal usage scenarios:

1.
AWS Glue

Best for: Serverless, large-scale ETL, and data lake preparation

AWS Glue is AWS's flagship ETL tool, purpose-built for transforming, cleaning, and moving data across the AWS stack. It automatically catalogs source data using the Glue Data Catalog and leverages Apache Spark under the hood for distributed processing. Glue is serverless, so it handles job orchestration, scaling, and infrastructure, reducing DevOps overhead. It's especially useful for preparing semi-structured data (JSON, Parquet, etc.) from S3 into analytics-ready formats for Redshift, Athena, or SageMaker.

When to use it:

- Batch ETL at scale
- Building data lakes or pipelines for ML/BI
- Transforming large semi-structured datasets

2. AWS Glue DataBrew

Best for: No-code data prep for analysts and citizen data users

AWS Glue DataBrew brings data transformation to non-technical users through a visual, no-code interface. With over 250 built-in operations (deduplication, normalization, joins, etc.), users can prep data without writing a line of code. It integrates seamlessly with the Glue Data Catalog, S3, Redshift, and SageMaker, making it a great option for cleaning datasets before feeding them into ML workflows or BI dashboards.

When to use it:

- Analyst-friendly data prep with no-code UX
- Cleaning and standardizing structured or semi-structured data
- Preparing input datasets for machine learning pipelines

3. AWS Data Pipeline

Best for: Scheduled batch workflows with AWS-native orchestration

AWS Data Pipeline is a workflow automation tool for orchestrating ETL jobs across AWS and on-prem systems. You can schedule tasks, define dependencies, and add retry logic, similar to tools like Apache Airflow, but tightly coupled with AWS. While more manual and less modern than Glue, it remains effective for regular batch processes like nightly backups, table syncs, or job chaining across EC2, EMR, RDS, and S3.
When to use it:

- Time-based ETL workflows
- Moving data between AWS services or from on-prem systems
- Lightweight orchestration of SQL scripts or EMR jobs

4. Amazon EMR (Elastic MapReduce)

Best for: Custom, code-heavy big data ETL at scale

Amazon EMR is a managed cluster platform for running open-source big data frameworks like Apache Spark, Hadoop, Hive, and Presto. It offers full control over cluster configuration and supports highly customizable ETL workflows. Unlike Glue's abstracted serverless model, EMR is ideal when you need to fine-tune performance, install custom libraries, or migrate existing Spark/Hadoop jobs to AWS.

When to use it:

- Custom big data pipelines with complex transformations
- Migrating legacy Spark/Hadoop ETL jobs to the cloud
- Scenarios requiring full infrastructure control and optimization

5. Amazon Kinesis Data Firehose

Best for: Real-time ingestion and delivery to AWS targets

Kinesis Data Firehose is AWS's real-time streaming ingestion tool, used to capture and load high-velocity data into S3, Redshift, or OpenSearch. It handles buffering, batching, and retry logic automatically and supports lightweight data transformations using AWS Lambda. While not a full ETL tool, Firehose is a key component in streaming data architectures, particularly for log collection, clickstream analysis, or IoT event delivery.

When to use it:

- Real-time ingestion of streaming data
- Delivering event data to S3, Redshift, or analytics platforms
- Lightweight transformations at ingestion

Conclusion

There's no one-size-fits-all solution when it comes to AWS ETL tools. The "best" tool isn't the one with the most features; it's the one that fits your data architecture, team skills, use cases, and growth plans. Native AWS services like Glue, Data Pipeline, and Firehose offer deep integration and serverless scaling for teams already invested in the AWS ecosystem.
Meanwhile, third-party platforms like Skyvia, Fivetran, Hevo, and Airbyte provide broader connectivity, intuitive interfaces, and specialized capabilities that can dramatically reduce development time and overhead. The key is to evaluate tools against the factors we covered: data volume, transformation complexity, ease of use, cost, monitoring, infrastructure control, and integration needs. By doing so, you'll be better positioned to choose an ETL platform that delivers both immediate results and long-term scalability. Whether you're building real-time pipelines, migrating legacy systems, or designing modern data lakes, choosing the right ETL solution is foundational to your AWS data strategy.

FAQ for AWS ETL Tools

How do I choose the right ETL tool for my needs on AWS?

Choosing the right ETL tool depends on your specific requirements:

1. For end-to-end ETL processes, AWS Glue is ideal due to its automation capabilities and serverless architecture.
2. For big data processing and advanced transformations, Amazon EMR offers more power with frameworks like Hadoop and Spark.
3. If you need to automate data workflows, AWS Data Pipeline allows you to schedule and manage data tasks.
4. For data preparation tasks like cleaning and normalizing, AWS Glue DataBrew provides a user-friendly interface for non-technical users.
5. If you need to handle real-time data transformations, AWS Lambda can process data as events occur.

What are the limitations of using AWS Glue for ETL tasks?

While AWS Glue is powerful, it has some limitations:

1. Connector limitations. It primarily supports AWS-hosted data sources and has fewer connectors for non-AWS data sources.
2. Coding requirements. Advanced transformations may require coding in Python or Scala.
3. Learning curve. Though Glue Studio simplifies things, fully leveraging the tool's capabilities can take time.
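To give a flavor of the "coding requirements" point: Glue transformation scripts are normally written in PySpark, but the record-level logic they apply is ordinary Python. Below is a self-contained sketch of that kind of cleaning logic; the function name, field names, and cleaning rules are illustrative and are not part of Glue's actual API:

```python
def clean_record(rec):
    """Normalize one raw record: trim strings, drop empty fields,
    and lowercase the keys (illustrative cleaning rules only)."""
    out = {}
    for key, value in rec.items():
        if isinstance(value, str):
            value = value.strip()
        if value not in ("", None):  # drop blank or missing fields
            out[key.lower()] = value
    return out

raw = {"Name": "  Ada ", "City": "", "Age": 36}
print(clean_record(raw))  # → {'name': 'Ada', 'age': 36}
```

In a real Glue job, logic like this would run inside a Spark transformation over a distributed dataset rather than on a single dict.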
How is AWS Lambda used for ETL, and what are its benefits and drawbacks?

AWS Lambda is used for ETL tasks that require event-driven processing, such as transforming data in response to file uploads in Amazon S3 or updates in DynamoDB. It's a serverless tool that automatically scales with the number of requests.

Benefits:

- It's serverless, so there is no need to manage infrastructure.
- Suitable for real-time data transformations.

Drawbacks:

- Limited capabilities for full-scale ETL tasks.
- Users need to understand how to trigger Lambda functions based on event-driven design.

How do pricing models differ between native AWS ETL tools?

Each AWS ETL tool has a unique pricing model:

1. AWS Glue. Charges based on data processing units (DPUs) used during ETL jobs.
2. AWS Data Pipeline. Pricing depends on the number of tasks and data processing activities.
3. Amazon EMR. Costs vary based on cluster usage, data transfer, and storage.
4. AWS Glue DataBrew. Charges are based on the number of data rows processed.
5. AWS Lambda. Costs are determined by the number of requests and the execution time of each function.

How do third-party ETL solutions compare to native AWS tools?

1. Ease of use. Third-party solutions like Skyvia and Fivetran offer no-code or low-code interfaces, making them easier for non-technical users compared to some native AWS tools like AWS Glue, which may require scripting for complex transformations.
2. Integration capabilities. Third-party tools often support a broader range of data sources, including non-AWS platforms. For example, Skyvia and Talend connect to various cloud services, databases, and even on-premises systems, while AWS Glue focuses mainly on AWS-hosted data sources.
3. Customization and flexibility. Informatica and Talend offer extensive customization for data transformations, while native AWS tools may have limitations on advanced data workflows.
4. Cost considerations.
Native AWS tools typically have usage-based pricing, while third-party solutions may have subscription-based models that can be more predictable for budgeting.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Batch ETL vs Streaming ETL: The Only Guide You Need

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), March 20, 2025
Data has become a staple for many modern businesses. We collect more insights and have become more data-driven in critical decisions. And there's an ever-growing trend to process data faster. You can do it with Extract, Load, and Transform (ELT) or with Extract, Transform, and Load (ETL). This article focuses on batch ETL and its close cousin, streaming ETL. ETL batch processing is the more traditional but solid approach, while stream processing is the newer kid on the block. Find out what you really need in this in-depth comparison.

Table of contents

- What is ETL (Extract, Transform, Load)?
- Batch ETL Processing: How it Works
- Streaming ETL Processing: How it Works
- Batch Processing vs. Streaming Processing: Key Differences
- Use Cases of Batch ETL
- Use Cases of Streaming ETL
- When to Choose Batch ETL vs. Streaming ETL?
- Examples of ETL Batch Processing Using Skyvia
  - Batch ETL Example 1: Replicating Salesforce to Azure SQL Database
  - Batch ETL Example 2: Using Skyvia Data Flow to Load Salesforce Data to Azure SQL Database
- Conclusion

Let's dive in.

What is ETL (Extract, Transform, Load)?

Global data creation has grown to [more than 180 zettabytes in 2025](https://www.statista.com/statistics/871513/worldwide-data-created/). And that's a lot. Since [the 1970s](https://en.wikipedia.org/wiki/Extract,_transform,_load), ETL has been a [data pipeline](https://skyvia.com/blog/what-is-data-pipeline/) that processes all this information in three simple steps:

- Extract the information from various sources. These can be flat files like CSV or Excel, or relational databases like Oracle or SQL Server.
Third-party sources like Salesforce or HubSpot also work. The second step is to transform the data by cleaning, filtering, summarizing, and more. The third is to load it to the appropriate target.

Here are some specific examples of typical ETL processes today:

- Processing monthly sales data into a data warehouse.
- Gathering daily fingerprint scans, with date and time, from a biometrics machine for employee time and attendance.
- Combining client profiles for two merging companies.

ETL can be either batch or streaming. Streaming suits information that needs to be processed immediately; batch handles everything that can wait. The following section describes how batch ETL works.

## Batch ETL Processing: How it Works

The batch ETL definition is simple: data is processed in batches that each cover a time period. The following steps describe batch ETL processing:

- Volumes of data are extracted and processed from the source. Extraction is possible via a query language like [SQL](https://en.wikipedia.org/wiki/SQL) or a [bulk or batch API](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_intro.htm) for [SaaS](https://en.wikipedia.org/wiki/Software_as_a_service) apps like Salesforce.
- Extraction covers a batch window, i.e., a time range.
- Processing is triggered manually or on a schedule.

Let's review two ETL batch processing examples. The first one goes like this:

- The administrator ran the batch process manually, extracting server logs such as CPU utilization, disk space, and RAM usage.
- Coverage is within a batch window: last night's logs, say from 7:00 PM to 11:59 PM.

Meanwhile, the ETL scheduler triggered another batch process at the start of the next day. Here's how it worked:

- The process was automated to run on the first of every month.
- The extracted data covers posted accounting entries from July 1 to July 31 of the current year.
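The manual server-log example above can be sketched in a few lines of Python. Everything here is illustrative: the `server_logs` table, its columns, and the 90% CPU threshold are invented stand-ins, and an in-memory SQLite database stands in for the real log store.

```python
import sqlite3

def run_batch(conn, start, end):
    """Extract one batch window, transform, and return rows ready to load."""
    rows = conn.execute(
        "SELECT host, cpu_pct, logged_at FROM server_logs "
        "WHERE logged_at >= ? AND logged_at < ?",   # the batch window
        (start, end),
    ).fetchall()
    # Transform step: keep only hosts above a utilization threshold.
    return [(host, cpu, ts) for host, cpu, ts in rows if cpu > 90]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE server_logs (host TEXT, cpu_pct REAL, logged_at TEXT)")
conn.executemany(
    "INSERT INTO server_logs VALUES (?, ?, ?)",
    [
        ("web-1", 97.0, "2025-07-31 19:05"),  # inside the window, high CPU
        ("web-2", 42.0, "2025-07-31 20:10"),  # inside the window, low CPU
        ("web-1", 99.0, "2025-08-01 02:00"),  # outside the window
    ],
)
batch = run_batch(conn, "2025-07-31 19:00", "2025-07-31 23:59")
print(batch)  # only web-1's 19:05 reading survives extract + transform
```

The key point is that nothing happens between runs: rows accumulate in the source, and one invocation (manual or scheduled) sweeps up everything inside the window at once.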
Note: You can perform these sorts of ETL batch processes using modern [ETL tools](https://skyvia.com/blog/etl-tools/). But batch ETL can be challenging, as you will see later. First, let's describe what streaming ETL is.

## Streaming ETL Processing: How it Works

Streaming ETL is a [data](https://skyvia.com/blog/connect-salesforce-to-sql-server/) pipeline that starts the moment a record is available in the source. It's sometimes called real-time ETL or stream processing, though it's not literally real-time: it still takes a fraction of a second to extract, transform, and load each record. The following steps describe streaming ETL processing:

- Data is continuously extracted from the source as soon as it is generated. Sources include IoT devices, transactional databases, event logs, and APIs. Data is ingested in real time instead of being collected in batches.
- Streaming ingestion is performed using event-driven architectures. Technologies like Apache Kafka, AWS Kinesis, or Google Pub/Sub handle high-velocity data streams in this step. Data flows into a processing pipeline without waiting for accumulation.
- Data transformation occurs in real time. Operations like filtering, aggregation, and enrichment happen as the data flows through the pipeline, so insights stay up to date.
- Processed data is continuously loaded into a target system. Storage options include data warehouses (Snowflake, BigQuery), databases, or cloud storage (AWS S3, Google Cloud Storage).

Note: Some streaming ETL workflows trigger real-time actions instead of storing data (e.g., fraud detection alerts).

Real-world examples of streaming ETL include credit card fraud detection: when you swipe your credit card, the transaction is sent to a fraud detection system for analysis, and a deny or approve status is returned depending on several factors. Another example is location tracking.
Imagine you've ordered something on Amazon and want to track it to its destination: GPS devices send real-time location information to monitor truck movements on a map. A third example is real-time patient monitoring. Medical equipment monitors patients' vital signs so staff can respond to emergencies quickly. This technology is widely used in intensive care units (ICUs), emergency rooms, remote patient monitoring systems, and wearable health devices to ensure continuous health tracking and timely medical intervention.

A streaming ETL architecture may also involve a stream processing platform. For example: a data source, like an IoT device, transactional [database](https://skyvia.com/blog/best-data-pipeline-tools/), or application, continuously generates real-time data. This data is sent to a stream processing platform like [Apache Kafka](https://kafka.apache.org/), a central hub for collecting and managing the incoming data streams. From there, users can ingest the data, apply transformations, and perform real-time analytics. Finally, the processed data is stored in a target system, such as a data warehouse, a database, or a cloud storage solution, making it available for further analysis, reporting, or machine learning applications.

## Batch Processing vs. Streaming Processing: Key Differences

Batch and streaming processing serve different purposes in data workflows, each catering to distinct [data management](https://skyvia.com/blog/export-data-from-salesforce-to-excel/) and analytics needs. In this section, we'll explore the key differences between the two methods to help you determine the best fit for your data strategy.

### 1. The Approach to Processing the Information

Batch processing collects and processes data in large, scheduled batches.
It suits scenarios where data accumulates over time and does not require immediate action, such as financial reconciliations or business intelligence reporting.

Streaming processing, in contrast, handles [data](https://skyvia.com/blog/salesforce-quickbooks-integration/) as it arrives, enabling real-time analysis and action. It is widely used in applications where instant information and responses are critical, such as fraud detection, live monitoring systems, and recommendation engines, making it ideal for financial services, healthcare, logistics, and media platforms that require real-time decision-making.

### 2. Latency

Batch processing has inherently high latency due to its scheduled execution. The lag between data generation and processing can range from minutes to hours, depending on the batch frequency. This makes it well suited to industries like finance, retail, and manufacturing, where periodic data updates are sufficient.

Streaming processing minimizes latency by continuously processing incoming data, often within milliseconds or seconds. It is ideal for time-sensitive applications where immediate decision-making is required, and is widely used in cybersecurity, telecommunications, and real-time analytics, where rapid response is critical.

### 3. Data Volume and Velocity

Batch processing handles large volumes of accumulated data at once, so it's efficient for historical data analysis, reporting, and periodic updates.

Streaming processing excels at managing high-velocity streams, ensuring data is processed as it is generated. It is particularly beneficial for use cases like IoT sensor data analysis, social media feeds, and stock market transactions.

### 4. Scalability

Batch systems can scale, but often require careful resource allocation to handle large datasets effectively. Scaling is typically achieved by adding computational power or optimizing batch scheduling.

Streaming processing offers superior scalability by dynamically adjusting to fluctuating data loads. It is often implemented on distributed systems that support horizontal scaling, ensuring continuous and efficient data handling.

### 5. Complexity

Implementing batch processing is simpler: it follows a predefined workflow with predictable data sets and structured processing steps, which makes it easier to debug and manage.

Streaming processing is more complex due to its real-time nature. It requires advanced data-handling techniques, including event-driven architectures, windowing strategies, and stateful processing, to ensure accurate and timely insights.

### 6. Infrastructure Requirements

Batch processing typically runs on data warehouses and requires less computational power, as processing occurs periodically.

Streaming processing leverages cloud solutions such as data lakes, real-time ingestion frameworks, and distributed computing. Maintaining continuous data streams at scale generally incurs higher costs due to the need for low-latency processing and high-speed storage. Such infrastructure is typically necessary for enterprises dealing with massive real-time data volumes, such as financial services, large-scale IoT deployments, and high-frequency trading platforms.

### 7. Error Handling

In batch processing, errors are detected and corrected after the entire batch is processed. While this allows for thorough validation, it also means errors can delay subsequent processes.

Streaming processing necessitates immediate error detection and correction to maintain data integrity. It incorporates real-time monitoring and automated recovery mechanisms to prevent inaccurate data from affecting decision-making.

### 8. Flexibility

Batch processing is relatively rigid, requiring significant adjustments to accommodate changes in [data](https://skyvia.com/blog/salesforce-to-salesforce-integration/) formats, sources, or processing logic.
Streaming is highly adaptable, allowing for dynamic data transformations and integrations. It can quickly adjust to evolving business needs and real-time analytical requirements.

The table below briefly compares both methods across the most common criteria.

| Criterion | Batch ETL | Streaming ETL |
| --- | --- | --- |
| Processing Approach | Processes data in scheduled batches. | Processes data continuously in real time. |
| Latency | High. | Low. |
| Data Handling | Large volumes of accumulated data. | High-velocity, continuous data streams. |
| Scalability | Limited. | Highly scalable, adjusts dynamically. |
| Complexity | Simpler, predefined workflows. | More complex, real-time processing challenges. |
| Infrastructure Needs | Traditional DWHs, less demanding. | Advanced, real-time computing infrastructure. |
| Error Handling | Detects and corrects errors post-processing. | Immediate error detection and correction. |
| Flexibility | Less adaptable to changes. | Highly adaptable to dynamic data needs. |

## Use Cases of Batch ETL

Let's walk through the most common scenarios where batch ETL is optimal.

### Data Warehousing and Business Intelligence

Organizations use batch ETL to aggregate, transform, and load large datasets into data warehouses for reporting and analytics. Common services that support batch ETL include [AWS Glue](https://aws.amazon.com/glue/), [Azure Data Factory](https://azure.microsoft.com/en-us/services/data-factory/), and [Skyvia](https://skyvia.com/), which offer automated scheduling, transformation capabilities, and seamless integration with cloud storage and databases. This method supports business intelligence by providing cleaned and structured data for dashboards, historical data analysis, and trend forecasting.

### Financial and Accounting Reporting

Companies process financial transactions and records in batches to generate periodic reports, such as monthly balance sheets, quarterly earnings reports, and tax compliance filings.
Batch ETL ensures data accuracy and consistency by aggregating large volumes of transactional data before analysis.

### Customer Data Integration

Batch ETL helps merge customer data from multiple sources, ensuring accurate and consistent customer profiles for marketing analysis and customer segmentation strategies. This enables businesses to refine targeting efforts and improve customer experience based on historical insights.

### ETL for Compliance and Audit

Healthcare, finance, and other regulated industries rely on batch ETL to process and store large volumes of data for regulatory compliance, audits, and historical record-keeping. By automating compliance data processing, organizations can meet legal obligations and pass audits efficiently.

### Payroll Processing and HR Analytics

Payroll systems use batch ETL to calculate salaries, deductions, and employee benefits based on collected data, ensuring accurate and timely payroll processing. HR teams also leverage batch ETL to analyze workforce trends, monitor employee performance, and optimize talent management strategies.

### Supply Chain Management

Businesses process orders, inventory data, and sales records in batches to optimize supply chain operations, demand forecasting, and pricing strategies. Batch ETL helps businesses identify stock shortages, predict future demand, and streamline logistics planning to minimize disruptions.

### Data Migration and System Upgrades

Organizations use batch ETL for bulk data transfers when migrating databases, upgrading systems, or consolidating legacy data into new platforms without disrupting daily operations. This ensures that historical data is accurately preserved while enabling a smooth transition to modern cloud or hybrid infrastructures.

## Use Cases of Streaming ETL

The use cases below illustrate the transformative power of streaming ETL in enabling immediate decision-making and enhancing operational efficiency.
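All of the streaming use cases that follow share the per-record pattern described earlier: extract, transform, and load happen the moment an event arrives, with no batch window. Here is a minimal sketch of that pattern, with a plain Python generator standing in for a streaming platform such as Kafka; the transaction fields and the fraud rules are invented for illustration.

```python
from typing import Iterable, Iterator

def score(txn: dict) -> dict:
    """Transform step: enrich each transaction with a decision as it arrives."""
    suspicious = txn["amount"] > 5000 or txn["country"] != txn["home_country"]
    return {**txn, "decision": "deny" if suspicious else "approve"}

def stream_etl(events: Iterable[dict]) -> Iterator[dict]:
    # No batch window: each record flows through extract -> transform -> load
    # as soon as it is produced.
    for txn in events:       # extract (continuous)
        yield score(txn)     # transform (per event); a sink would load it

# A plain list stands in for a live stream from Kafka/Kinesis/Pub-Sub.
events = [
    {"id": 1, "amount": 120,  "country": "US", "home_country": "US"},
    {"id": 2, "amount": 9000, "country": "US", "home_country": "US"},
]
decisions = [e["decision"] for e in stream_etl(events)]
print(decisions)  # ['approve', 'deny']
```

Contrast this with the batch sketch earlier: there is no window and no scheduler, just a pipeline that reacts to each record individually.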
### Real-Time Fraud Detection

Banks and financial institutions use streaming ETL to detect fraudulent transactions as they occur, minimizing financial losses and security risks. By analyzing transaction patterns in real time, suspicious activity can be flagged instantly, stopping fraud before it completes.

### IoT and Sensor Data Processing

Manufacturing, healthcare, transportation, and other industries rely on real-time ingestion of IoT device data to monitor equipment performance, track health metrics, and optimize logistics. This approach enables predictive maintenance, reduces downtime, and enhances overall efficiency.

### Cybersecurity and Threat Detection

Security systems use streaming ETL to analyze network traffic and detect potential cyber threats, responding to anomalies in real time to prevent breaches. Automated threat detection allows businesses to react instantly to unauthorized access or unusual activity, improving security.

### Personalized Recommendations

Streaming ETL enables e-commerce and entertainment platforms to deliver personalized content recommendations by analyzing user interactions in real time. This helps businesses increase engagement and conversions by showing the right products, movies, or music at the right moment.

### Stock Market and Financial Trading

High-frequency trading systems depend on real-time data processing to execute trades based on live market conditions and trends. Even a fraction of a second can make the difference between financial gain and loss, making streaming ETL essential for fast decision-making.

### Log and Event Monitoring

IT teams use streaming ETL to monitor system logs, detect failures, and automate alerts to ensure operational stability. This helps businesses proactively address technical issues before they impact users or cause downtime.

### Smart City and Traffic Management

Governments and urban planners use real-time insights from traffic sensors to optimize road management and reduce congestion.
By adjusting traffic signals and rerouting vehicles based on live data, cities can improve traffic flow and reduce delays.

## When to Choose Batch ETL vs. Streaming ETL?

So, how do you choose between batch ETL and streaming ETL? The answer depends on:

- The company's data needs.
- Business priorities.
- Infrastructure capabilities.

If the organization works with large datasets on a set schedule and doesn't require immediate insights, batch ETL is a practical and cost-effective solution. It simplifies data management and is convenient for structured reporting, compliance, and periodic analysis.

Streaming ETL is the better choice if your business demands:

- Real-time analytics.
- Fast decision-making.
- Continuous data updates.

This approach is crucial in the finance, cybersecurity, and IoT industries, where real-time insights can have a significant impact.

The table below outlines key selection criteria to help determine the most suitable ETL approach.

| Selection Criteria | Choose Batch ETL when | Choose Streaming ETL when |
| --- | --- | --- |
| Data Processing Schedule | Data can be processed at scheduled intervals. | Immediate processing is required. |
| Data Type Consistency | Data formats and sources remain consistent. | Data is diverse and evolves frequently. |
| Latency Requirements | Some delay in data processing is acceptable. | Real-time data processing is critical. |
| Budget Constraints | The company has a limited budget for infrastructure and computing. | The organization can invest in real-time infrastructure and expertise. |
| Complexity Tolerance | Users prefer a simpler, predefined workflow. | Users can manage advanced real-time processing challenges. |
| Business Goals | The focus is on structured, periodic reporting and compliance. | The strategy requires quick insights and adaptive decision-making. |

## Examples of ETL Batch Processing Using Skyvia

Performing batch ETL in Skyvia is simple and efficient, offering multiple options depending on each user's needs. Skyvia provides several solutions for the ETL process.
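The selection criteria above boil down to a few questions you can ask in order. Purely as an illustration (the threshold, parameter names, and rule ordering are invented, not part of any standard), they can be encoded as a rule-of-thumb function:

```python
def suggest_etl_mode(latency_seconds: float, schema_stable: bool, budget_tight: bool) -> str:
    """Rule of thumb distilled from the selection criteria (illustrative only)."""
    if latency_seconds < 60:
        return "streaming"       # real-time processing is critical
    if budget_tight or schema_stable:
        return "batch"           # cheaper, simpler, predictable workloads
    return "streaming"           # diverse, fast-evolving data justifies the cost

# Monthly reporting on stable schemas with a tight budget -> batch.
print(suggest_etl_mode(latency_seconds=3600, schema_stable=True, budget_tight=False))
# Sub-second fraud decisions -> streaming.
print(suggest_etl_mode(latency_seconds=1, schema_stable=False, budget_tight=False))
```

In practice, the latency requirement usually dominates: if the business genuinely needs answers in seconds, the other criteria mostly determine how expensive streaming will be, not whether you need it.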
Let's look at how each one solves typical ETL batch processing cases.

- [Data Replication](https://docs.skyvia.com/data-integration/replication/index.html). An ELT solution designed to copy data between cloud applications and relational databases, perfect for data analytics use cases. It's useful when copies of large data sets need to be kept up to date in a database or DWH.
- [Data Import](https://docs.skyvia.com/data-integration/import/tutorials/index.html). Facilitates ETL between two data sources, allowing seamless data movement in either direction with extensive mapping and transformation capabilities. It also supports [reverse ETL](https://skyvia.com/learn/reverse-etl-tools), where a data warehouse acts as the source and a cloud app is the target.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/index.html). Defines how data moves from source to destination, undergoing extraction, transformation, and loading (ETL). It includes:
  - Source Connection. Extracting data from databases, cloud storage, or APIs.
  - Data Transformation. Cleaning, filtering, aggregating, or enriching data before loading.
  - Destination Mapping. Storing transformed data in a data warehouse, database, or another system.

  Data Flow enables complex ETL workflows, allowing the integration of multiple data sources and conditional transformations.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/how-to-design-control-flow.html). While Data Flow handles the movement of data, Control Flow manages the execution sequence, dependencies, and conditions within the ETL pipeline. This allows for:
  - Task Scheduling and Automation. Ensuring ETL jobs run at the right time.
  - Error Handling and Retry Logic. Managing failures and ensuring data consistency.
  - Conditional Workflows. Running specific tasks based on predefined rules.
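Control Flow handles error handling and retry logic for you, but it's worth seeing what that means mechanically. Below is a generic, tool-agnostic sketch (this is not Skyvia's API; the flaky task and its failure behavior are simulated for the demo):

```python
import time

def run_with_retries(task, max_attempts=3, delay_seconds=0.0):
    """Re-run a failing ETL task a fixed number of times before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise                      # exhausted: surface the failure
            time.sleep(delay_seconds)      # back off before the next attempt

# A simulated load step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "loaded"

result = run_with_retries(flaky_load)
print(result, calls["n"])  # 'loaded' on the third attempt
```

Real orchestrators add refinements on top of this skeleton: exponential backoff, retrying only specific error classes, and alerting when retries are exhausted.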
[ETL Data Pipeline Designer](https://www.youtube.com/watch?v=U8Zbk03E58Q). A visual approach to data workflows that helps users build, manage, and orchestrate their ETL processes without coding. It offers a drag-and-drop interface to define Data Flow pipelines (mapping and transforming data from source to destination) and Control Flow orchestration (automating dependencies and job execution).

## How to Build an ETL Pipeline with Batch Processing: Step-by-Step Guide from Skyvia

When integrating data between systems, there are two primary approaches: data replication and data flow. Each serves a different purpose, depending on how frequently data updates are needed and whether transformations are required before loading.

- [Data Replication](https://docs.skyvia.com/data-integration/replication/) copies data from one system to another without modifications. It ensures a consistent backup, supports reporting needs, and allows easy access to historical data. It's best for users who need a straightforward, scheduled sync to maintain data consistency across systems.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) moves and transforms data in transit, enabling enrichment, filtering, and automation of workflows. This approach is ideal for users who require complex transformations and automated business processes.

Let's take a closer look at how each works. A great example is extracting data from [Salesforce](https://skyvia.com/connectors/salesforce) and loading it into an [Azure SQL Database](https://azure.microsoft.com/en-us/products/azure-sql/database/#overview). We'll review two ways to do it.

### Data Replication

As discussed earlier, batch ETL is widely used for data warehousing and business intelligence, enabling businesses to aggregate and transform large datasets efficiently.
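To make "copies data from one system to another without modifications" concrete, here is a toy full-load replication between two in-memory SQLite databases. The table and column names are invented, and the sample names are just placeholders; a real run would involve Salesforce and Azure SQL rather than SQLite.

```python
import sqlite3

def replicate(source: sqlite3.Connection, target: sqlite3.Connection, table: str):
    """Full-load replication: copy every row as-is, with no transformation."""
    rows = source.execute(f"SELECT id, name FROM {table}").fetchall()
    target.execute(f"DELETE FROM {table}")  # full load: clear, then reinsert
    target.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    target.commit()

src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE contacts (id INTEGER, name TEXT)")
src.executemany("INSERT INTO contacts VALUES (?, ?)",
                [(1, "Rose Gonzalez"), (2, "Sean Forbes")])

replicate(src, dst, "contacts")
copied = dst.execute("SELECT name FROM contacts ORDER BY id").fetchall()
print(copied)  # [('Rose Gonzalez',), ('Sean Forbes',)]
```

Run on a schedule, this is exactly the "straightforward, scheduled sync" described above; a production tool would add incremental loading so unchanged rows aren't recopied every time.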
One of the most common cases is data replication, where data is regularly extracted from a source system and loaded into a target database for analysis.

The first step is to go to the [Skyvia sign-up](https://app.skyvia.com/register?) page and create the two connections needed for our batch ETL examples.

#### Creating the Salesforce Connection

1. In Skyvia, click + Create NEW > Connection.
2. Type Salesforce in the Select Connector box.
3. Click the Salesforce icon.

Note: You need a valid Salesforce account, or this step won't work. Then, configure the connection:

1. Name your connection.
2. Select the production environment.
3. Select OAuth 2.0. You'll get a security key (OAuth token) after signing in.
4. Click Test Connection.
5. Click Create Connection. When it's done, click Save Connection to finalize the settings.

Note: Don't uncheck the Use Bulk API checkbox in Advanced Settings. It makes Skyvia use batch API calls, which helps you avoid reaching your [API request limits and allocations](https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_api.htm).

You're now ready to use this connection in a Skyvia package.

#### Creating the Azure SQL Database Connection

After registering in [Skyvia](https://app.skyvia.com/register) in the previous steps, set up the Azure side of this connection.

Note: For this step, a fully managed [PaaS database](https://skyvia.com/blog/what-is-ipaas/) and a table where the Salesforce records will be loaded must already be created. You also need to whitelist Skyvia's IP addresses in Azure; one way to do it is by enabling public network access with firewall rules.

Create a new Azure SQL Server connection:

1. Click + Create NEW > Connection.
2. Type SQL Server in the Select Connector box.
3. Click the SQL Server icon.

Configure the Azure SQL Server connection:

1. In the Azure SQL database, click Connection Strings. You'll need the server name, user ID, and database name.
2. Go to Skyvia and set the corresponding parameters. Test Connection will succeed if you use the correct credentials.
3. Click Create Connection to save the parameters.

### Batch ETL Example 1: Replicating Salesforce to Azure SQL Database

It's time to build our first example: a Skyvia Replication integration from Salesforce to an Azure SQL Server database.

1. Name your replication package.
2. Specify "Salesforce" as the Source.
3. Specify "Azure-MSSQL" as the Target.
4. Select the Salesforce objects you want to replicate.
5. Validate and save your package. Then, test it by clicking Run in the top-right corner of the page.
6. Use external tools like SQL Server Management Studio to check the replicated data.

You can then [schedule this package](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) to run at a set time, either once or on a recurring schedule. It's all up to you.

### Batch ETL Example 2: Using Skyvia Data Flow to Load Salesforce Data to Azure SQL Database

Our next example is a more flexible ETL solution using a [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and a [Data Flow](https://docs.skyvia.com/data-integration/data-flow/). We'll keep the example simple so the solution is easy to understand; you can then apply the same knowledge to build complex Control Flows.

We will extract the Contacts from Salesforce, add a new column as a transformation, and store the results in Azure SQL. We'll use the dummy Contacts data created when you register with Salesforce, so we expect seven names with initials present in Azure after the package runs.
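Before walking through the clicks, here is the transformation this example performs, expressed in plain Python: the Extend step derives an initials column with a LEFT-style expression over the first and last name. The field names follow Salesforce's Contact object; the helper functions themselves are illustrative, not Skyvia code.

```python
def left(text: str, n: int) -> str:
    """Equivalent of the SQL-style LEFT(text, n) used in the Extend expression."""
    return text[:n]

def add_initials(contact: dict) -> dict:
    """Derive an Initials column from FirstName/LastName, as the Extend step does."""
    initials = left(contact["FirstName"], 1) + left(contact["LastName"], 1)
    return {**contact, "Initials": initials}

row = add_initials({"FirstName": "Rose", "LastName": "Gonzalez"})
print(row["Initials"])  # RG
```

Everything else in the walkthrough (the Source, Target, and Control Flow) is plumbing around this one-line derivation.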
Anyway, here are the steps.

#### Creating a Control Flow

1. Click + Create NEW > Control Flow.
2. Name your Control Flow (for example, Salesforce-Contacts-to-Azure).
3. Drag and drop an Action component between Start and Stop.
4. Drag and drop a Data Flow between the Action and Stop.

#### Configuring the Action Component

For simplicity, we will use a full-load ETL to avoid duplicates. You can also do an [incremental load](https://blog.devart.com/incremental-load-in-ssis.html#what-are-full-load-and-incremental-load-in-etl) in Skyvia, but we won't explain it here. So, the first action will remove all the records in the target table.

1. Click the Action icon.
2. Name the Action.
3. Select the Azure-MSSQL connection.
4. Select Execute Command for the Action.
5. Specify the Command Text.

#### Configuring the Data Flow

1. Click the Data Flow component, then click Open Data Flow. A new page will appear with a blank Data Flow.
2. Drag and drop a Source.
3. Drag and drop an Extend transformation.
4. Drag and drop a Target.
5. Connect the arrows between the components.

#### Configuring the Source

1. Click the Source icon and name it.
2. Select the Salesforce connection.
3. Select Execute Query for the Action.
4. Click Open in Editor.
5. Find the Contacts table and click it.
6. Drag Contacts * to the Result Fields box, then click Apply.

#### Configuring the Extend Transformation

1. Click the [Extend transformation](https://docs.skyvia.com/data-integration/data-flow/components.html#extend) icon.
2. Name the Extend transformation.
3. Click the pencil icon in Output Schema.
4. Click + to add a new column and name it.
5. Enter the expression to extract name initials using the left function.
6. Click Apply.

#### Configuring the Azure Target

1. Click the Target icon.
2. Name the Target.
3. Select the Azure-MSSQL connection.
4. Select Insert for the Action.
5. Click dbo.[salesforce-contacts] or the name of your table in Azure.
6. Click the pencil icon in Parameters. In the pop-up window that appears, click the Auto Mapping icon, then click Apply.
7. Click Create in the upper-right corner to save the work. Then, click Run.

The result of the batch ETL should match the Contacts in Salesforce, now with the extra initials column. This simple Control Flow can include more data flows that handle other Salesforce tables; you can run them in parallel or in succession. You can also schedule the run as needed; the configuration is the same as for the replication package earlier.

## Conclusion

If you work with large data volumes that don't require real-time updates, batch ETL processing is a good choice. It provides simplicity and efficiency, and selecting the right tool makes all the difference. If you're planning a batch ETL project, [Skyvia](https://www.trustradius.com/products/skyvia/reviews) offers a user-friendly, flexible solution that covers a variety of ETL needs, such as data migration, integration, and synchronization. The platform simplifies these processes so you can focus on insights rather than infrastructure. Why not try it and see how it can streamline your data workflows?

## FAQ on ETL Batch Processing

**What is the key difference between Batch ETL and Streaming ETL?**

Batch ETL processes data at scheduled intervals, handling large data sets at once. In contrast, Streaming ETL processes data in real time or near real time, handling individual records or small batches continuously.

**When should I use Batch ETL instead of Streaming ETL?**

Batch ETL is ideal when:

- Data freshness is not critical (e.g., daily reports, historical analysis).
- Processing large data volumes efficiently is a priority.
- The data sources don't generate updates frequently.
- Cost optimization is a key concern, since batch jobs are often cheaper.

**What are the best use cases for Streaming ETL?**

Streaming ETL is best for:

- Real-time analytics (e.g., fraud detection, stock market analysis).
\u2013 Event-driven applications (e.g., IoT, monitoring logs). \u2013 Continuous data ingestion from sources like Kafka or IoT devices. \u2013 Low-latency use cases where instant processing is required. What are the main challenges of Streaming ETL compared to Batch ETL? Streaming ETL comes with challenges such as: \u2013 Higher complexity in implementation and maintenance. \u2013 Increased infrastructure costs due to always-on processing. \u2013 Handling late-arriving or out-of-order data. \u2013 Ensuring exact-once processing and consistency. How do costs compare between Batch ETL and Streaming ETL? Batch ETL is typically\u00a0more cost-efficient\u00a0since it runs periodically, optimizing resource usage. On the other hand, Streaming ETL requires\u00a0constant resource allocation, which can lead to\u00a0higher operational costs\u00a0depending on volume and latency requirements. Share [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Fbatch-etl-processing%2F) [Twitter](https://twitter.com/intent/tweet?text=Batch+ETL+vs+Streaming+ETL%3A+The+Only+Guide+You+Need&url=https%3A%2F%2Fblog.skyvia.com%2Fbatch-etl-processing%2F&via=Skyvia+Blog) [Linkedin](https://www.linkedin.com/shareArticle?mini=true&url=https://skyvia.com/blog/batch-etl-processing/&title=Batch+ETL+vs+Streaming+ETL%3A+The+Only+Guide+You+Need) [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) Software developer and project manager with a total of 20+ years of software development. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and Sharepoint. Edwin combines his technical knowledge with his most recent content writing skills to help new breed of technology enthusiasts. 
[Data Integration](https://skyvia.com/blog/category/data-integration/)

# The Ultimate List of Celigo Competitors in 2025

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | June 2, 2023

Have you ever had the feeling of dealing with an avalanche of data that only speeds up and never slows down? Probably, yes. And that's no mirage: digital data totaled 33 zettabytes in 2018 and reached [97 zettabytes](https://www.statista.com/statistics/871513/worldwide-data-created/) in 2022. Believe it or not, the data volume roughly tripled over four years. So what does that mean? Digital data comes from everywhere: social media, surveillance systems, business applications, and so on. Companies have to deal with all of it, which can seem scary. The good news is that data integration and processing tools provide the instruments to turn that data stream into business value.
By now, around [67% of enterprises](https://www.forbes.com/sites/louiscolumbus/2020/03/29/the-state-of-enterprise-data-integration-2020/) employ [data integration](https://skyvia.com/blog/data-integration-tools/) tools in their workflows to aggregate and assess data. Celigo is one of them: it combines data from various applications, opening new horizons for automating business processes. So how would you benefit from a tool like Celigo or its alternatives? It's simple: imagine a library where books from multiple publishing houses are brought together and perfectly arranged by genre. Much like a library, Celigo provides orchestration mechanisms to put data in order.

In this article, you'll find out which data integration scenarios and businesses Celigo suits best. You'll also discover Celigo alternatives applicable to other data integration scenarios. Based on the strengths and weaknesses of Celigo and its alternatives described in this article, you'll be able to decide which tool is right for your particular case.

Table of Contents: About Celigo, Best Celigo Alternatives (Skyvia, Hevo, Fivetran, Stitch, Matillion, Talend, Jitterbit), Conclusion

About Celigo

As companies mature, new systems and applications are added to their business processes, and in most cases these applications need to share data with each other. Building API integration processes can be slow, time-consuming, and costly, so here comes Celigo to put everything in order. [Celigo](https://www.celigo.com/) is a complete integration [Platform-as-a-Service (IPaaS)](https://skyvia.com/blog/what-is-ipaas/) for IT and business users. It automates business processes across applications by offering advanced orchestration, transformation, and customization settings.
Thus, you can set up a data flow across applications to synchronize, replicate, analyze, or perform any other operations on data.

The Power of Celigo

This tool helps businesses arrange and coordinate their operations. It gathers data from various apps, providing a general overview of all the company's processes. Celigo also brings a range of other benefits:

- User-friendly. The platform requires only low-code experience, so it suits even users with basic technical expertise.
- Scalable. With this IPaaS platform, you can integrate data from as many sources as your business needs.
- Resource-saving. Celigo saves plenty of human work and monetary resources.
- Reliable. The tool is highly secure, providing advanced encryption and authentication features for data protection.
- Accurate. The platform guarantees a considerable reduction in data errors.

Empowering Businesses with the Celigo Advantage

Services provided by Celigo are dedicated to resolving business tasks across various sectors and company departments. Below are several examples of how the tool can be applied in real work scenarios.

Business Analysis

Since the platform has plenty of pre-built automations, non-technical and business users won't need much effort to set everything up. They only have to design the business logic outlining the rules of [data extraction](https://skyvia.com/blog/top-data-extraction-tools/) from different applications. The data ingested with Celigo is later used in BI applications for building dashboards and drawing conclusions. The system uses AI for automatic error detection in data to prevent erroneous analytic findings.

Finance

Taking all financial operations under control is challenging for businesses.
This usually involves manual [data transfer](https://skyvia.com/blog/data-migration-tools/), which can cause faults, data duplicates, and delays in financial reports. Celigo resolves this by integrating [CRM with ERP](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) applications to ensure financial integrity. As a result, businesses get a holistic overview of their financial situation and can detect any misalignment between sales and finance.

Human resources

Recruiters and HR managers deal with lots of people on a daily basis, which leads to enormous amounts of data about candidates for open positions, current employees, and HR processes. Celigo makes it possible to put everything in order and automate HR processes: employee data management, hiring, and onboarding.

Best Celigo Alternatives

Even though Celigo is a great IPaaS platform with multiple functions, it's not the best fit for every data integration scenario. That's why we'd like to present some Celigo alternatives for working with data. These are, in fact, Celigo competitors that also focus on data integration but offer some extra features.

Skyvia: Top-notch Celigo Alternative for Your Business

One of the best Celigo alternatives for data ingestion and [data transformation](https://skyvia.com/blog/best-data-transformation-tools/) is the [Skyvia](https://skyvia.com/) cloud platform. It lets you integrate data from cloud apps, databases, and data warehouses. Skyvia fits almost any business, as it embeds [ELT, ETL,](https://skyvia.com/blog/elt-vs-etl/) and [Reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) solutions. It also covers data synchronization, workflow automation, and building advanced [data pipelines](https://skyvia.com/blog/best-data-pipeline-tools/). This set of functions keeps business processes running as smoothly as a cogwheel.
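The difference between ETL and Reverse ETL mentioned above is mostly a matter of direction: where the data comes from and where the transformed result lands. A minimal Python sketch of the two flows (all function names here are illustrative stand-ins, not a real Skyvia or Celigo API):

```python
# Schematic contrast of ETL vs. reverse ETL directions.
# Every callable below is a hypothetical placeholder for illustration.

def etl(extract_from_app, transform, load_to_warehouse):
    """Classic ETL: operational app/source -> transform -> data warehouse."""
    load_to_warehouse([transform(r) for r in extract_from_app()])

def reverse_etl(query_warehouse, transform, load_to_app):
    """Reverse ETL: data warehouse -> transform -> operational app (e.g., a CRM)."""
    load_to_app([transform(r) for r in query_warehouse()])

# Tiny demo: raw app records land in a warehouse, then flow back into a CRM.
warehouse, crm = [], []
etl(lambda: [{"amt": "10"}], lambda r: {"amount": int(r["amt"])}, warehouse.extend)
reverse_etl(lambda: warehouse, lambda r: {"LifetimeValue": r["amount"]}, crm.extend)
# crm now holds [{'LifetimeValue': 10}]
```

The point of the sketch is only the direction of each arrow; in a platform like Skyvia both flows are configured visually rather than coded.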
Being a universal data platform, Skyvia supports many data integration scenarios. Here, we'll focus on two ETL scenarios: the conventional Data Import involving two platforms and the advanced Data Flow for building complex data pipelines.

Data Import

This is one of the most popular and widely used data integration scenarios. It migrates data between platforms: loading CSV files as well as data from cloud apps and databases. To set up a [Data Import](https://docs.skyvia.com/data-integration/import/configuring-import.html) integration, you indicate several parameters: the data transfer path, batch size, scheduling, etc. Since Skyvia performs these processes using ETL technology, data transformation and mapping are also needed in most cases; they are defined in the task settings alongside the import parameters mentioned above.

Data Flow

This scenario is a perfect fit for integrating multiple data sources and performing advanced data transformations during loading. Data Flow is similar to Data Import in that data is migrated between sources. Unlike Data Import, however, Data Flow is a many-to-many solution, making it particularly suitable for working with multiple data warehouses and cloud apps at once. A [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) is defined graphically: you use blocks to build the data flow path and the data transformation logic, for instance, specifying which exact columns should be transferred from one data source to another. Skyvia also offers a data orchestration tool named [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) that complements Data Flow for managing complex data integration scenarios. Control Flow offers more flexibility to visually configure a data flow diagram and to specify complex pre- and post-integration procedures.
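Conceptually, a Data Import run boils down to the classic extract-map-load loop that the wizard configures for you: pull rows from the source, apply the column mapping, and push the results in batches of the configured size. A minimal Python sketch of that loop (Skyvia itself is configured through its web UI, not code; every name below is hypothetical):

```python
# Illustrative only: the extract -> map -> batched load loop that an import
# task's parameters (mapping, batch size) describe. All names are made up.

def run_import(extract_rows, load_batch, column_mapping, batch_size=200):
    """Extract rows from a source, remap columns, and load in batches."""
    batch = []
    for row in extract_rows():
        # Apply the column mapping defined in the task settings.
        batch.append({target: row[source] for target, source in column_mapping.items()})
        if len(batch) >= batch_size:
            load_batch(batch)
            batch = []
    if batch:  # flush the final partial batch
        load_batch(batch)

# Example: move CSV-like records into a CRM-shaped target, one row per batch.
source = lambda: iter([{"name": "Acme", "mail": "hi@acme.io"},
                       {"name": "Initech", "mail": "it@initech.com"}])
loaded = []
run_import(source, loaded.append, {"AccountName": "name", "Email": "mail"}, batch_size=1)
```

Scheduling, in this picture, is just how often the platform re-runs the loop.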
The Skyvia Advantage: Key Rewards for Your Enterprise

Using Skyvia as your data [integration solution](https://skyvia.com/data-integration/) gives your business the following benefits:

- Configurability. The system lets you build any process for data transfer, transformation, and loading.
- Up-to-date data. Skyvia loads the most recent data into your data warehouse on a predefined schedule (as often as once per minute), so you always have the latest data at hand.
- High availability. Being cloud-based, Skyvia can be accessed from anywhere at any time; it needs only a web browser and requires no further maintenance.
- Flexible pricing. The tool has a free plan for an unlimited period, which particularly suits small businesses. Those who need a wider feature set and larger storage capacity can use a [flexible pricing model](https://skyvia.com/pricing/) applicable to medium-sized businesses and enterprises.

Hevo

[Hevo](https://hevodata.com/) is a data integration platform implementing [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) scenarios for the financial, retail, and e-commerce sectors. Hevo offers two major products for data-related tasks: Hevo Pipeline and Hevo Activate. The first is the product for ETL and ELT data integration, while Hevo Activate is designed specifically for [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) scenarios. Hevo's solutions offer a user-friendly interface, so data pipelines and transformations can be designed easily via drag-and-drop. Alternatively, you can define the needed data operations in Python code. Integrations are possible between cloud applications only.
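Record-level Python transformations in tools of this kind typically take the form of a function that receives each incoming record and returns the modified version. A hedged sketch of that pattern (the exact hook name and record shape vary by platform; this is illustrative, not Hevo's documented API):

```python
# Hedged sketch of a record-level Python transformation in the style that
# pipeline tools expose; the function name and record fields are illustrative.

def transform(record: dict) -> dict:
    """Normalize a raw event before it lands in the warehouse."""
    out = dict(record)
    # Standardize casing and strip stray whitespace.
    if out.get("email"):
        out["email"] = out["email"].strip().lower()
    # Derive a field once here instead of computing it downstream.
    out["full_name"] = f'{out.get("first_name", "")} {out.get("last_name", "")}'.strip()
    return out

transform({"first_name": "Ada", "last_name": "Lovelace", "email": " Ada@Example.COM "})
# -> {'first_name': 'Ada', 'last_name': 'Lovelace',
#     'email': 'ada@example.com', 'full_name': 'Ada Lovelace'}
```

Keeping the function pure (input record in, new record out) is what makes such transformations easy to test and safe to re-run.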
Comparison to Skyvia

The products [Skyvia and Hevo](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia) are compared below on five parameters.

| Parameter | Skyvia | Hevo |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Data extraction, ETL, reverse ETL, ELT |
| Connectors | 200+ | 150+ |
| Work mode | No-code approach, visual ETL data pipeline designer | Python code, visual drag-and-drop diagrams |
| Developer tools | REST connector for data sources with REST API | Hevo API |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | Event-based pricing |

Fivetran

[Fivetran](https://www.fivetran.com/) acts as an ELT tool, so companies prioritizing rapid data ingestion from various platforms will value it. The initial setup and establishing connections between sources and target data storages take some time and coding. Fivetran products are a particularly good fit for marketing, financial, and sales departments, as well as for business analytics and data engineering teams. For those who want to transform the ingested data further, Fivetran offers a transformation solution built on dbt Core.

Comparison to Skyvia

The products [Skyvia and Fivetran](https://skyvia.com/etl-tools-comparison/fivetran-vs-celigo) are compared below on five parameters.

| Parameter | Skyvia | Fivetran |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Data extraction, ELT |
| Connectors | 200+ | 360+ |
| Work mode | No-code approach, visual ETL data pipeline designer | Low-code approach |
| Developer tools | REST connector for data sources with REST API | REST API available on certain pricing plans |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | Volume-based pricing |

Stitch

When a business needs to connect all the data generated inside and outside the company, [Stitch](https://www.stitchdata.com/) is there to help. This platform collects data from cloud apps and SaaS platforms into the large data warehouses and data lakes the company uses. Data unified with Stitch and processed by analytics software provides an overview of business prospects, which marketing and product development departments can benefit from greatly.

Comparison to Skyvia

The products [Skyvia and Stitch](https://skyvia.com/etl-tools-comparison/stitchdata-vs-celigo) are compared below on five parameters.

| Parameter | Skyvia | Stitch |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Data extraction, ELT |
| Connectors | 200+ | 140+ |
| Work mode | No-code approach, visual ETL data pipeline designer | No-code approach |
| Developer tools | REST connector for data sources with REST API | Stitch Connect API |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | Rows-based pricing |

Matillion

[Matillion](https://www.matillion.com/) offers two principal solutions for working with data: a batch loader and an ETL tool. The first works on an ELT basis and ingests data from databases and popular applications, while the ETL tool handles data ingestion, transformation, and loading into the cloud. Data science teams particularly benefit from Matillion because it supports big data: Matillion ETL integrates with major cloud providers and their analytical solutions for processing high volumes of data.
Comparison to Skyvia

The products [Skyvia and Matillion](https://skyvia.com/etl-tools-comparison/celigo-vs-matillion) are compared below on five parameters.

| Parameter | Skyvia | Matillion |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Data extraction, ETL, data synchronization, ELT |
| Connectors | 200+ | 140+ |
| Work mode | No-code approach, visual ETL data pipeline designer | Low-code or no-code approach |
| Developer tools | REST connector for data sources with REST API | Matillion ETL REST-based API |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | No free trial; consumption-based |

Talend

[Talend](https://www.talend.com/) is designed to ensure end-to-end data management, from data ingestion to data governance. This low-code platform lets you analyze data for quality and cleanse it before analysis. Talend Data Fabric supports both on-premises and cloud-based deployment. Talend fits sales, marketing, operations, and product development departments; moreover, it combines data from all those units, providing an overall business strategy review to decide whether any adjustments are needed.

Comparison to Skyvia

The products [Skyvia and Talend](https://skyvia.com/etl-tools-comparison/celigo-vs-talend) are compared below on five parameters.

| Parameter | Skyvia | Talend |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, data export, data querying, data backup | Data integration, data quality check, data governance |
| Connectors | 200+ | 1000+ |
| Work mode | No-code approach, visual ETL data pipeline designer | Low-code approach |
| Developer tools | REST connector for data sources with REST API | Talend Cloud API |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | Subscription-based |

Jitterbit

[Jitterbit](https://www.jitterbit.com/) is a sophisticated platform for integrating multiple applications within the enterprise. This IPaaS system combines all the necessary data for business analysts, who participate in the lion's share of decision-making processes. Integrate data into the Jitterbit IPaaS using pre-built connectors or by connecting applications via the Jitterbit API. Jitterbit's principal functions include IT service management, EDI integration, HR management, and employee expense management, so the tool is highly applicable to IT professionals, marketing specialists, HR managers, and operations departments.

Comparison to Skyvia

The products [Skyvia and Jitterbit](https://skyvia.com/etl-tools-comparison/jitterbit-vs-celigo) are compared below on five parameters.

| Parameter | Skyvia | Jitterbit |
| --- | --- | --- |
| Operations with data | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Application integration |
| Connectors | 200+ | 120+ |
| Work mode | No-code approach, visual ETL data pipeline designer | Low-code approach |
| Developer tools | REST connector for data sources with REST API | Jitterbit's Harmony API |
| Pricing | Free plan for any Skyvia product; then pricing is based on data volume and features used | Subscription-based |

Conclusion

We've presented Celigo and alternatives such as Skyvia, Talend, and similar solutions in this article. Each of them makes data easier to understand by aggregating everything in one place. We also recommend having a look at the [ETL tools comparison](https://skyvia.com/etl-tools-comparison/) with a detailed description of product features. It will help you evaluate the options properly and decide which solution can resolve your business tasks. If you need a holistic solution for working with data, consider Skyvia. The platform is fully featured: it provides [ETL tools](https://skyvia.com/blog/etl-tools/) for those who value consistency and ELT tools for those who need instant data loads into a central data warehouse. Moreover, Skyvia offers other data-related functions businesses can benefit from. Try Skyvia now to see how it helps streamline business processes!

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
" }, { "url": "https://skyvia.com/blog/best-crm-data-enrichment-tools/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Best CRM Data Enrichment Tools [2025] By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) February 9, 2024

As the online world accelerates, marketers and salespeople have to act quickly. They need to gather and record contact details along with the demographic and behavioral data of their prospects. A CRM with ample consumer data becomes the backbone of efficiently targeted campaigns, increased conversions, and improved brand loyalty. Is there enough data in your CRM system? That isn't a trivial question, but CRM lead enrichment can help answer it and advance the customer database. CRM enrichment tools fill gaps by adding all the necessary details and verifying existing datasets. That way, marketing and sales departments obtain holistic user profiles.
This article unveils the top CRM data enrichment tools and their importance for businesses. You'll also learn which data types can be enriched and how.

Table of contents: What Is CRM Data Enrichment? CRM Enrichment: Underlying Reasons, Key Data Types for CRM Enrichment, TOP Tools for Lead Data Enrichment (Skyvia, Clearbit, ZoomInfo, FullContact, LeadGenius, Dun & Bradstreet, Demandbase, Pipl, Segment), Final Thoughts

What Is CRM Data Enrichment?

Generally, [data enrichment](https://skyvia.com/blog/data-enrichment-services/) stands for appending and enhancing existing data records. CRM data enrichment focuses on the customer base in particular: it complements individual customer and B2B company profiles with new data and verifies existing details with the help of dedicated CRM data enrichment tools. Companies often aim to obtain a 360-degree view of their prospects during the first stages of the [customer journey](https://skyvia.com/blog/how-to-successfully-implement-a-data-driven-customer-journey/). CRM lead enrichment tools help them get to know their prospects better and provide personalized offers, an approach that tends to increase engagement, conversion rates, and purchases.

Lead enrichment for CRM considers the following data types:

- Contact data. Drawn from third-party verified databases that contain industry, valid emails, and other details for CRM contact enrichment.
- Demographic data. Includes information such as a person's age, gender, location, etc.
- Transactional data. Contains information about expenses, transactions, and purchases.
- Behavioral data. Comes from a company's other online channels (websites, social media) and shows users' habits on the internet.

Depending on the actual contents of the customer database, the CRM lead enrichment process includes either all of these data types or only certain categories. Note that this classification is generic and applies mostly to individual customers.
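Whatever the data type, the mechanics of enrichment are the same: a third-party record supplements the CRM record, filling only the fields the CRM is missing. A minimal Python illustration of that gap-filling merge (field names are made up for the example):

```python
# Illustrative gap-filling merge: third-party data supplements a CRM record
# but never overwrites values the CRM already holds. Field names are made up.

def enrich(crm_record: dict, third_party: dict) -> dict:
    enriched = dict(crm_record)
    for field, value in third_party.items():
        if not enriched.get(field):  # fill only missing or empty fields
            enriched[field] = value
    return enriched

lead = {"name": "Jane Roe", "email": "jane@corp.example", "industry": "", "phone": None}
vendor = {"industry": "Logistics", "phone": "+1-555-0100", "email": "old@corp.example"}
enrich(lead, vendor)
# -> {'name': 'Jane Roe', 'email': 'jane@corp.example',
#     'industry': 'Logistics', 'phone': '+1-555-0100'}
```

Note that the existing email survives while the empty industry and phone fields are filled; real enrichment tools add verification and conflict-resolution rules on top of this basic merge.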
You'll find more detailed info on data types for B2B lead enrichment in the next section.

CRM Enrichment: Underlying Reasons

CRM data enrichment brings value to multiple departments, but it's critical for sales and marketing specifically. Salespeople report that accurate data is key to sales personalization, lead conversion, and a better first experience with the brand. Marketers acknowledge that first-party customer data is never enough to achieve the expected results, so they rely on CRM data enrichment to add clarity and detail to existing datasets and improve customer acquisition and retention rates.

Enhancement of Total Customer Experience through Personalization

The amount of information keeps growing, so it's worth addressing customers with concise, personalized messages to get noticed. Recent statistics show that [42% of customers](https://startupbonsai.com/personalization-statistics/) may even get frustrated when they receive generic emails or information of no interest to them. Luckily, CRM lead enrichment allows marketers to tailor personalized offers and improve the customer's experience with the brand.

Advancements in Lead Account Scoring

Salespeople don't like to waste time on processes unlikely to lead to deals, so they rely heavily on lead account scores for prioritization. CRM lead enrichment improves lead score accuracy, which lets sales professionals focus on the leads with a high likelihood of conversion.

Key Data Types for CRM Enrichment

We've already introduced some examples of customer data enrichment for CRM. Here, we'd like to cover B2B lead enrichment in detail.

Contact Data Specific to B2B Enterprises

Unlike companies working with individual customers, B2B enterprises have a much more complex structure.
Consequently, they have more departments and more points of contact. For instance, Alstom, a global leader in the transportation sector, has different communication channels for customers, investors, suppliers, media, and talent. In such cases, verified datasets simplify B2B data enrichment, as they already contain all the needed information about B2B companies.

Corporate Profile Data

This data type reveals the number of employees, years in the market, industry, financial results, awards, and ratings. All this helps estimate the lead score within the B2B lead enrichment process.

Technographic Data

It contains information about the technology stack used by the enterprise, including infrastructure, network, software applications, etc. Such data in a CRM helps technology companies offer corresponding solutions to prospective and current clients.

Predictive Purchase Intent Data

Such data advances marketing strategies by providing insights into when clients are ready to make a purchase. It helps marketers anticipate customer needs and perform the right communications at the right time, which tends to increase conversion rates. There are no ready-made datasets with predictive purchase intent data: companies use data warehouses with embedded machine learning algorithms to generate it. To send such predictive data to a CRM, you'll need [data integration tools](https://skyvia.com/blog/data-integration-tools/).

TOP Tools for Lead Data Enrichment

In this section, we overview the top tools that can help you integrate enriched data into a CRM. Some of them can also load data into a DWH for predicting purchase intent.

Skyvia

[Skyvia](https://skyvia.com/) is a cloud SaaS platform designed for a set of data-related tasks: data integration, workflow automation, query, and backup.
It has 170+ pre-made connectors ([20+ CRM connectors](https://skyvia.com/connectors/#crm)) to use in [data pipelines](https://skyvia.com/blog/10-best-data-pipeline-tools/). Along with that, [G2 ranks Skyvia](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) as a top user-friendly solution thanks to its intuitive drag-and-drop interface. Skyvia offers a range of tools for CRM data enrichment in its [Data Integration](https://skyvia.com/data-integration/) product:

- Import. Performs CRM lead enrichment by loading data from cloud apps, databases, and CSV files into a customer database. It applies data mapping settings that ensure consistency between the data structures of different sources.
- Replication. Copies data into a DWH and refreshes it regularly with incremental updates. It's then possible to run analysis or apply machine learning algorithms (for purchase intent predictions) inside the data warehouse.
- Data Flow + Control Flow. Data Flow enables complex [ETL pipelines](https://skyvia.com/blog/etl-architecture-best-practices/) with compound [data transformations](https://skyvia.com/learn/what-is-data-transformation). Control Flow orchestrates the data integration task order, performs pre- or post-integration tasks, and can even handle error processing logic. This scenario suits cases where several sources supply data for the CRM.

Top features

- Being completely cloud-based, Skyvia doesn't require any on-site software deployment.
- A wizard-based, no-code solution is ideal for non-technical users.
- The Monitor tab, included in every scenario, keeps track of the integration process.
- Scheduling parameters ensure regular data updates with no hassle.

Pricing

The Data Integration product starts from $0 on the Free plan, which includes all the functions needed for CRM data enrichment.
The cost then scales with the number of records, scheduled integrations, and mapping features involved in enriching a CRM. If you're interested in other Skyvia products, [check the pricing details here](https://skyvia.com/pricing/).

Clearbit

[Clearbit](https://clearbit.com/) is a customer intelligence platform designed with marketing and sales teams in mind. The tool gathers data about B2B companies from 250+ generic and proprietary sources and converts it into clearly structured datasets. Businesses can enrich their CRMs with those datasets and reach out to 'hot leads'.

Top features

- Real-time data enrichment and refreshes.
- Ample records with 100+ B2B attributes.
- Data quality control with ML and AI algorithms.
- Integration with CRM and sales systems.

Pricing

There's no fixed pricing model in Clearbit; contact sales to get a personalized plan.

ZoomInfo

[ZoomInfo](https://www.zoominfo.com/) is a modern platform that provides business profile data and contacts to companies on a subscription basis. It has four products, each designed for a certain team: SalesOS, MarketingOS, OperationsOS, and TalentOS.

Top features of MarketingOS

- Data collection about prospects from abandoned web forms.
- Email address validation.
- Company profile records containing 4.9K attributes.
- Automated outreach campaigns.

Pricing

The pricing for each unit starts at around $10,000 per year; contact sales to discover the exact cost for your particular case.

FullContact

[FullContact](https://www.fullcontact.com/) is an identity resolution company that helps businesses build complete profiles of their customers. It offers several solutions for customer recognition, CRM data enrichment, profile data verification, and other identity-related procedures. All this not only helps address the right customers in the appropriate way but also prevents digital fraud.

Top features

- Unification of prospect and customer data.
- Identification of website visitors.
- Appending customer and prospect profile data.
- Verifying personal profiles to identify fraud-prone instances.
- Managing first-party data.

Pricing

The price of the solution depends on each individual case.

LeadGenius

[LeadGenius](https://www.leadgenius.com/) is a SaaS platform that provides account data sources for a specific region, including lead information about individual prospects and B2B companies. As a result, businesses can focus on particular markets and get insights faster.

Top features

- Powerful verification capabilities that help businesses identify their ICP (ideal customer profile).
- Marketing strategy shaping for higher engagement and conversion rates.
- Lead data enrichment to determine lead potential.

Pricing

The cost of LeadGenius is crafted individually, so you need to book a demo for details.

Dun & Bradstreet

[Dun & Bradstreet](https://www.dnb.com/) is a global company providing commercial data services and analytics. The service is known for its extensive commercial database with information about companies from many countries worldwide.

Top features

- Prospect identification and engagement.
- Data cleansing.
- Improving compliance with regulatory requirements.
- Risk management.

Pricing

Custom pricing; contact sales to get a quote.

Demandbase

[Demandbase](https://www.demandbase.com/) is a B2B marketing platform providing an account-based marketing solution. It focuses on targeting and engaging specific accounts, which helps businesses identify their target audience and personalize marketing efforts. It also aligns the efforts of sales and marketing teams to promote revenue growth.

Top features

- Automatic identification and prioritization of the most suitable accounts.
- Integration with CRM systems and marketing automation platforms.
- Targeted advertising campaigns.
- Reporting to determine the effectiveness of account-based marketing.
Pricing There's a pricing calculator on the website, which shows the approximate cost of the tool depending on the company's objectives and the accounts to target. Pipl [Pipl](https://pipl.com/) is a SaaS company providing contact, work, and social information about people to businesses to enrich their CRMs. It also verifies online identities and matches them to real-world ones, which helps prevent digital crime. The Pipl solution best serves marketing teams and agencies, e-commerce companies, and fraud investigation teams. Top features More than 5 billion trusted identities available. Trust score calculation for each user. Detection of fake email and social media accounts. Detection of fraudulent transactions. Pricing Contact Sales for detailed information about pricing. Segment [Segment](https://segment.com) is a customer data platform (CDP) that gathers customer contact information along with customers' interactions on the website. This tool includes a dedicated module for data enrichment to help companies append their customer profiles and personalize targeting. Top features Data collection from various sources, including websites and mobile applications. Unified customer profile creation. Analytics and reporting. Pricing Segment offers a Free plan, a Team plan starting at $120 per month, and a Business plan with custom pricing. Final Thoughts CRM data enrichment is crucial for business prosperity and trusting customer relationships. The quality of the enhanced data, though, highly depends on the selected CRM data enrichment tools. If you want to enrich customer profiles properly, consider Skyvia. This service integrates data from first-party sources and third-party enriched databases into your CRM. All this can be done for free or at an affordable cost that corresponds to your data flow and fits your budget!
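The merge step behind CRM enrichment is simple to picture: match third-party records to existing contacts by a shared key (usually email) and fill in only the fields the CRM is missing. Here is a minimal sketch of that logic; the field names are illustrative stand-ins, not any specific vendor's API:

```python
def enrich_contacts(crm_contacts, enrichment_data):
    """Fill missing CRM fields from a third-party dataset, keyed by email.

    Both inputs are lists of dicts with illustrative field names.
    Existing CRM values are never overwritten; only gaps are filled.
    """
    # Index the third-party dataset by normalized email for O(1) lookups.
    by_email = {row["email"].lower(): row for row in enrichment_data}
    enriched = []
    for contact in crm_contacts:
        match = by_email.get(contact.get("email", "").lower())
        merged = dict(contact)
        if match:
            for field, value in match.items():
                if not merged.get(field):  # fill only missing/empty fields
                    merged[field] = value
        enriched.append(merged)
    return enriched


crm = [{"email": "ann@example.com", "name": "Ann", "company": ""}]
external = [{"email": "ANN@example.com", "company": "Acme Corp", "title": "CTO"}]
result = enrich_contacts(crm, external)
# The empty "company" is filled and "title" is appended; "name" stays intact.
```

Real enrichment tools add fuzzy matching, confidence scores, and conflict rules on top of this, but the fill-the-gaps merge above is the core idea.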
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences." }, { "url": "https://skyvia.com/blog/best-crm-integration-tools/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Best CRM Integration Tools 2025 By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) January 29, 2025
We use multiple applications to manage operations daily. Customer Relationship Management (CRM) integration tools are pivotal in ensuring these applications work seamlessly together, boosting efficiency and productivity. They act as digital bridges connecting a company's CRM system with the various applications used in daily operations. They ensure data flows smoothly between systems such as: Email. Marketing platforms. E-commerce sites. The goal is to collect all customer information, simplify relationship management, and streamline operations. Primary Functions of CRM Integration Tools Data Synchronization. These systems ensure information remains consistent across platforms. For instance, if a customer updates their email address on your website, the change is reflected automatically across all connected systems. Process Automation. By connecting different systems, these tools automate tasks like sending welcome emails to new customers or updating inventory after a purchase. Enhanced Reporting. With data from various sources consolidated in one spot, businesses get a comprehensive view of customer interactions, supporting better decision-making. Improved Customer Experience. Communication between systems enables personalized service, such as offering tailored promotions based on a customer's purchase history. Table of Contents Key Types of CRM Integration Tools Top CRM Integration Tools Zapier MuleSoft Jitterbit Boomi Coupler.io Skyvia How to Choose The Best CRM Integration Tool CRM Integration Tools Comparison Summary Conclusion Key Types of CRM Integration Tools Let's explore the main types of CRM integration platforms that turn standalone systems into a powerful, interconnected business hub.
Sales and Prospecting Integrations These systems link your CRM with platforms like Salesforce Sales Cloud, Mailchimp, or ZoomInfo, helping you find and manage leads effortlessly. Imagine a magic wand that imports prospect data, tracks interactions, and reminds you to follow up because forgetting a lead is like leaving pizza in the oven for too long. Use Cases Lead Enrichment. Automatically pull detailed prospect data from LinkedIn or [ZoomInfo](https://salesintel.io/best-zoominfo-alternatives/) to populate CRM records. Pipeline Management. Sync opportunities from Salesforce Sales Cloud to monitor sales progress. Cold Outreach. Integrate tools like Outreach.io or SalesLoft with Mailchimp to send personalized prospecting emails and track responses directly in your CRM. Activity Tracking. Log calls, emails, and meetings from tools like Zoom or Microsoft Teams into your CRM. Marketing Automation Integrations Modern businesses thrive on omnichannel strategies, and these integrations make it seamless. They connect CRMs with marketing platforms like Reply, Marketo, or Mailchimp to unify customer interactions across channels, automate campaigns, nurture leads, and track complete customer journeys for a cohesive experience. They're behind-the-scenes marketing wizards, ensuring your messaging hits the right person at the right time. Use Cases Lead Nurturing. Automate email campaigns using Mailchimp or Brevo and sync results with your CRM. Targeted Campaigns. Data from CRM (such as purchase history) can be used to create highly targeted ads on platforms like Facebook or Google Ads. Omnichannel Campaigns. Deliver consistent campaigns across email, SMS, social media, and web notifications by syncing your CRM data for a seamless customer experience on every channel. Behavioral Retargeting. Combine website analytics and CRM data to retarget users with personalized offers or ads, boosting conversions and building loyalty. Performance Analytics.
Sync campaign performance metrics from marketing tools to your CRM, providing a centralized view of ROI and other metrics to enable data-driven decision-making for future strategies. Customer Service and Support Integrations These integrations are all about making customers feel like VIPs. Tools like Zendesk, Freshdesk, or Intercom connect to your CRM, giving the support team instant access to customer history and preferences. It's like having a cheat sheet to wow your customers. No more awkward "Can I put you on hold while I sort this out?" moments. Use Cases 360-Degree Customer View. Provide your support team with a full view of customer interactions, ensuring personalized service. Customer Feedback. Sync post-interaction surveys (e.g., NPS, CSAT) with your CRM to inform the product roadmap, analyze satisfaction trends, and identify areas for improvement in the service strategy. Escalation and Routing Automation. Set up rules to route tickets to the right teams or escalate high-priority issues based on criteria like customer value, urgency, or topic. Self-Service Portals. Integrate self-service portals with CRM, enabling customers to view order statuses, track ticket progress, or update their details without direct support. Accounting and Finance Integrations Who loves crunching numbers? Probably not you. That's where these integrations step in. Users can track invoices, payments, and customer financial histories in one place by linking tools like QuickBooks or Xero to the CRM. No more toggling between tabs or getting lost in spreadsheets. Use Cases Invoice Tracking. Automatically generate and sync invoices from platforms like QuickBooks or Xero with CRM contacts. Payment Reminders. Send automated payment reminders via email triggered by CRM data. Financial Health Monitoring. Combine customer payment history with sales data to assess revenue trends. Subscription Management.
[Integrate solutions like Stripe](https://skyvia.com/blog/stripe-salesforce-integration/) to track customer billing cycles, automate renewal processes, and ensure accurate payment histories for better financial insights and customer retention. E-commerce Integrations Running an online store? Tools like Shopify, WooCommerce, or BigCommerce play nice with your CRM, syncing sales data, customer purchases, and abandoned carts. It's like having a sales assistant who whispers, "Hey, this customer might want that matching pair of shoes." Use Cases Order Tracking. Sync orders from systems such as Shopify or WooCommerce with CRM records for customer-specific purchase histories to streamline post-purchase support, enable personalized upselling opportunities, and provide customers with real-time updates on their orders. Abandoned Cart Recovery. Trigger follow-up emails for abandoned carts and track conversions directly in the CRM to increase revenue and reduce lost sales by re-engaging customers with personalized offers, reminders, or discounts to complete their purchases. Personalized Recommendations. Use CRM data to create tailored upsell or cross-sell offers, addressing the challenge of generic marketing by delivering highly relevant product suggestions that enhance customer satisfaction and drive additional revenue. Customer Loyalty Programs. Integrate loyalty platforms like Smile.io, Yotpo, or LoyaltyLion to reward repeat buyers and address the challenge of retaining customers by offering points, discounts, or exclusive perks. Communication Tools Integrations Imagine automatically logging every conversation or meeting into your CRM so everyone is in the loop. This approach eliminates manual updates and ensures that all interactions are accurately tracked. For instance, you may integrate the CRM with services like Slack, Zoom, and Microsoft Teams.
With a centralized communication history, teams can collaborate more effectively, avoid duplicate efforts, and provide a seamless customer experience. Use Cases Team Collaboration. Automatically share CRM updates, like new leads, in Slack channels to keep teams aligned. Meeting Scheduling. Sync calls from tools such as Zoom or Microsoft Teams with your CRM to log meeting notes, create meeting scripts, automate follow-ups, and track attendee participation directly within the CRM, ensuring seamless collaboration and comprehensive meeting documentation. Follow-Up Automation. Use CRM-triggered alerts to remind sales reps to send follow-ups after calls or chats. Activity Logging. Auto-log all email communications and attachments from services like Gmail or Outlook to relevant CRM records. Extend this functionality with tools like Microsoft Teams, Slack, HubSpot Email, Zoho Mail, and Apple Mail. These integrations enable seamless logging of communications from various platforms, ensuring a comprehensive activity history within your CRM. Event Management Integrations Tools like Eventbrite or Cvent connect with your CRM to manage attendees, registrations, and post-event follow-ups. It's perfect for tracking who showed up and who's interested. These integrations also help streamline event promotion by syncing invitation lists, automating email campaigns, and personalizing communication based on attendee engagement. Additionally, they provide valuable insights by linking attendee feedback, session participation, and post-event surveys directly to CRM records, enabling businesses to identify leads and improve future events. Use Cases Attendee Tracking. Sync event registrations with CRM leads to track attendance and post-event engagement. Automate sending thank-you emails, sharing event resources, scheduling sales calls, etc., for high-priority attendees.
Gain insights into attendee behavior by analyzing session participation, feedback forms, and interactions, which can help tailor future outreach efforts and enhance event effectiveness. Event Campaigns. Send personalized invites and reminders using CRM-based customer lists. Lead Generation. Gather leads from event registrations and seamlessly route them into your sales pipeline. Post-Event Feedback. Distribute surveys to attendees and analyze their responses to evaluate the event's success, gather insights, and identify areas for improvement. Use the feedback to enhance future events and tailor follow-up communications to attendee needs and interests, recording the results in your CRM. Live Chat Integrations When customers need help ASAP, live chat tools like LiveChat, Drift, or Tidio come to the rescue. Acting as a virtual concierge, these tools welcome customers and help direct them to the right solutions. They also gather important customer details during conversations, such as contact information and the purpose of the inquiry, which can be seamlessly [integrated](https://www.chatmetrics.com/blog/crm-integration-guide-maximizing-live-chat-efficiency/) into the CRM for easy follow-up. Beyond convenience, these tools enhance real-time interactions, boosting customer satisfaction and equipping companies with valuable data to resolve issues and tailor future communications. Use Cases Real-Time Lead Capture. Automatically create CRM leads from live chat conversations on your website. For instance, when a new user asks the chatbot a question, an event is immediately triggered, capturing their details and creating a new lead in the CRM. Customer Support Logs. [Store](https://www.livechat.com/success/crm-integration/) chat transcripts directly in CRM for future reference and service improvement.
For example, LiveChat's integration with CRM platforms enables chat transcripts to become part of the customer's profile, allowing agents to access previous interactions and provide more precise support. Proactive Engagement. Use live chat data to identify high-value leads and trigger immediate follow-ups. Top CRM Integration Tools In this section, we'll explore the top platforms and their capabilities, helping you find the perfect fit to integrate data with the CRM. Zapier [Zapier](https://zapier.com/) is an integration platform that connects over 7,000 apps, including CRM systems. It enables users to create "Zaps", custom workflows that trigger actions across different applications without the need for coding, streamlining repetitive tasks. The extensive app integrations make it a go-to solution for handling complex workflows. By connecting disparate systems, businesses can ensure cohesive operations, minimize manual data entry, and reduce errors. Zapier simplifies lead management, data synchronization, and cross-platform communication. It is especially valuable for small to medium-sized businesses looking to integrate multiple applications without relying heavily on technical expertise. Rating [G2 Crowd](https://www.g2.com/products/zapier/reviews#reviews) 4.5/5 (based on 1324 reviews) Pros User-friendly interface accessible to users without programming skills. It supports a vast array of applications and facilitates diverse integrations. Provides real-time data synchronization, ensuring up-to-date information across systems. Cons Advanced features may require higher-tier subscriptions. Complex workflows can become challenging to manage without proper organization. Some integrations may experience occasional delays or require troubleshooting. Pricing The [pricing](https://zapier.com/pricing) plan is flexible and includes: The free plan offers 5 Zaps and 100 tasks per month, suitable for basic automation needs.
A 14-day free trial for premium features allows users to explore its capabilities before committing to a subscription. MuleSoft [MuleSoft](https://www.mulesoft.com/) is a comprehensive integration platform that connects [400+](https://skyvia.com/etl-tools-comparison/mulesoft-alternative-skyvia) applications, data, and devices across on-premises and cloud environments. Its Anypoint Platform offers API management and integration capabilities, allowing businesses to build application networks. MuleSoft shines in large enterprises that juggle multiple complex systems and need robust API management. If your business deals with diverse legacy and modern applications, this tool can be the backbone of your integration strategy, handling both cloud and on-premises scenarios effortlessly. Rating [G2 Crowd](https://www.g2.com/products/mulesoft-anypoint-platform/reviews) 4.5/5 (based on 690 reviews) Pros Robust API management and integration capabilities. Support of complex enterprise-level integrations. Extensive library of connectors for various applications and systems. Cons A steep learning curve may require specialized knowledge. Higher cost compared to some other integration tools. Pricing MuleSoft offers custom pricing based on specific business needs and usage. A free trial is available for evaluation. Jitterbit [Jitterbit](https://www.jitterbit.com/) is an integration platform that connects applications, data, and APIs. It offers pre-built templates and [170+](https://skyvia.com/etl-tools-comparison/jitterbit-alternative-skyvia) connectors, facilitating quick integrations without extensive coding. The tool is perfect for small to medium-sized businesses needing quick and straightforward integrations. It's great for companies looking to automate tasks like syncing CRMs with marketing tools or pulling e-commerce data into reporting dashboards without diving into complex setups.
Rating [G2 Crowd](https://www.g2.com/products/jitterbit/reviews) 4.8/5 (based on 553 reviews) Pros Intuitive, user-friendly interface. Pre-built templates and connectors for rapid integration. Support of both cloud and on-premises integrations. Cons Limited advanced customization options. It may not be suitable for highly complex integration scenarios. Pricing Jitterbit offers various [pricing](https://www.jitterbit.com/harmony/pricing/) tiers based on features and usage. The standard plan includes 2-3 standard connections, essential Harmony features, and a 48-hour support response time. The professional plan provides five standard connections, additional features like optional add-ons, and a 24-hour support response time. The enterprise plan offers 8 or more connections, comprehensive Harmony features, and a 6-hour support response time, including a 24-hour emergency hotline. A free 30-day trial is available for new users. Boomi [Boomi](https://boomi.com/) , formerly known as Dell Boomi, is a cloud-based integration platform that connects applications and data across various environments. The solution is low-code and offers [240+](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia) connectors, allowing businesses to build integrations quickly. Boomi is a perfect choice for businesses of all sizes that want a balance of user-friendliness and advanced capabilities. It\u2019s handy for quickly scaling companies that need a reliable tool to automate processes, whether for CRM synchronization, financial reporting, or customer service workflows. Rating [G2 Crowd](https://www.g2.com/products/boomi/reviews#reviews) 4.5/5 (based on 411 reviews) Pros Low-code platform with drag-and-drop interface. Extensive library of connectors. Scalable to accommodate growing business needs. Cons Pricing can be high for smaller businesses. Some users report a steep learning curve for complex integrations. 
Pricing Boomi offers [tiered pricing](https://boomi.com/pricing/) based on the following features: The pay-as-you-go plan starts at $99 per month plus usage fees. It provides full access to the Boomi Enterprise Platform without requiring a long-term contract. The professional plan is priced at $2,000 per month. The pro plus plan offers additional functionality to support real-time integration requirements and is available at $4,000 per month. The enterprise plan is tailored for large-scale organizations with complex integration needs and is priced at $8,000 monthly. The 30-day free trial provides limited access to the Boomi Enterprise Platform. Coupler.io [Coupler.io](https://www.coupler.io/) is a data integration tool that automates data flows between various applications and services. It allows users to import data into Google Sheets, Excel, or BigQuery from multiple sources, facilitating real-time data analysis and reporting. It\u2019s a good choice for individuals and small to medium businesses that need to centralize data from various sources for reporting or analysis. The tool is especially handy for marketing teams, e-commerce sellers, or analysts who need fresh data to make quick decisions. Rating [G2 Crowd](https://www.g2.com/products/coupler-io/reviews) 4.8/5 (based on 65 reviews) Pros User-friendly interface with easy setup. Support of a variety of data sources. Automated data refresh schedules. Cons Limited advanced data transformation features. Primarily focused on data import rather than full-scale integration. Pricing The [pricing](https://www.coupler.io/pricing) is flexible enough and includes: The free plan offers limited functionality with basic connectors and up to 100 runs per month, which is ideal for testing and small-scale automation. The 14-day free trial provides access to premium features, enabling users to explore advanced automation and integrations before committing. 
Skyvia [Skyvia](https://skyvia.com/) is a versatile cloud-based data integration platform that connects CRM systems with [200+](https://skyvia.com/connectors) cloud applications, databases, and more. It's no-code and suitable for non-technical users, allowing them to build simple and complex workflows and ensuring seamless connectivity between CRM systems and other business applications. The platform's robust tools for ETL, ELT, reverse ETL, data replication, and transformation make it ideal for maintaining accurate and up-to-date data across systems, eliminating manual data entry errors. Skyvia stands out for its robust data integration capabilities. It enables seamless CRM data synchronization by connecting customer records, unifying sales and marketing data, and ensuring secure data backups to meet compliance requirements. Rating [G2 Crowd](https://www.g2.com/products/skyvia/reviews#survey-response-9097470) 4.8/5 (based on 242 reviews) Pros Advanced data transformation capabilities, including filtering, mapping, and aggregation, to tailor data integrations to specific business needs. No-code platform with an intuitive interface. Support for over 200 data sources. It offers data backup features. Cons Advanced features may require higher-tier subscriptions. Compared to larger platforms like Zapier or MuleSoft, Skyvia's smaller user community means fewer third-party tutorials and integration guides are available. Pricing Skyvia offers flexible [pricing](https://skyvia.com/pricing) with a focus on affordability and scalability: The Free Plan includes up to 10k records per month and access to core data integration features, making it great for testing and small-scale automation. A 14-day Free Trial lets users explore premium functionalities like advanced data transformations and higher record limits, providing a risk-free opportunity to evaluate the platform.
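Under the hood, most of the tools in this roundup automate variants of the same loop: pull changed records from a source, map source fields to CRM fields, and upsert the result. Here is a minimal Python sketch of that pattern; the callables and field names are hypothetical stand-ins for real connector APIs, not any vendor's SDK:

```python
def sync_to_crm(fetch_changed, upsert, field_map):
    """Generic one-way sync pass: extract -> map -> load (upsert).

    fetch_changed: callable returning source records changed since the last run.
    upsert: callable that inserts or updates one CRM record by external id.
    field_map: {source_field: crm_field} mapping, e.g. {"mail": "Email"}.
    Returns the number of records pushed to the CRM.
    """
    synced = 0
    for record in fetch_changed():
        # Map: rename source fields to the CRM's schema.
        crm_record = {crm_field: record.get(src_field)
                      for src_field, crm_field in field_map.items()}
        # Load: the upsert keeps the operation idempotent across repeated runs.
        upsert(record["id"], crm_record)
        synced += 1
    return synced


# Illustrative run against in-memory stand-ins for a source app and a CRM.
crm_store = {}
source = [{"id": 1, "mail": "bob@example.com", "fullname": "Bob"}]
count = sync_to_crm(
    lambda: source,
    lambda ext_id, rec: crm_store.update({ext_id: rec}),
    {"mail": "Email", "fullname": "Name"},
)
```

Real platforms layer scheduling, retries, and conflict resolution on top, but the extract-map-upsert loop is the common core of one-way CRM sync.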
How to Choose The Best CRM Integration Tool Given the many options available, selecting the right solution can be overwhelming. The key is to align the tool with your business needs and goals. Let's explore the key factors to consider. Identify Your Business Needs What problems are you trying to solve with integration (e.g., syncing data, automating workflows, real-time updates)? Which tools must be integrated with your ecosystem (e.g., marketing platforms, accounting software, e-commerce stores)? Are you handling simple data sync tasks or need advanced API management? For basic integration, tools like Skyvia, Zapier, or Coupler.io may suffice. For complex workflows or enterprise-grade needs, MuleSoft or Boomi might be better. Consider Your Team's Technical Skills If your team has little to no coding experience, look for no-code or low-code tools like Skyvia, Jitterbit, or Zapier. On the other hand, platforms like MuleSoft or Boomi are better suited for businesses with IT resources or developers, offering powerful customization and scalability. Look at Supported Integrations Verify that the tool supports all the apps, ecosystems, and service architectures you need to connect. Tools like Zapier and Skyvia offer extensive integration libraries, while others like MuleSoft and Boomi are better for custom or legacy systems. Assess Real-Time vs. Scheduled Syncs Real-time integration is essential for businesses needing instant data updates (e.g., customer support or live inventory tracking). If periodic updates suffice, a tool like Skyvia or Coupler.io can meet your needs at a lower cost. Check for Customization and Advanced Features Do you need to create custom workflows or logic for specific use cases? Tools like Jitterbit, Boomi, and MuleSoft excel in offering advanced features like API orchestration and data transformation.
Consider Data Security and Compliance Ensure the tool meets your industry's security standards, such as GDPR, HIPAA, or SOC 2 compliance. Enterprise-focused tools like MuleSoft and Boomi often provide robust security features. Evaluate Cost and Budget Compare pricing plans to find a tool that fits your budget. To test before committing, look for free trials or free plans (e.g., Zapier, Skyvia, Coupler.io). Assess Customer Support and Resources Check for support options like live chat, email, or phone. A solid knowledge base or community forum can be invaluable. CRM Integration Tools Comparison Summary Now that we've covered the checklist, let's review the comparison table.

| Tool | Benefits | Limitations | Key Takeaways |
| --- | --- | --- | --- |
| Zapier | User-friendly interface; real-time data synchronization; minimal setup. | Advanced features behind premium plans; limited for complex workflows; occasional delays in integration. | Best for small to medium businesses needing straightforward automations without technical expertise. |
| MuleSoft | Robust scalability; extensive connector library; powerful API management tools. | Steep learning curve; high cost; better suited for enterprises. | Tailored for enterprises with complex integration and API management needs. |
| Jitterbit | Easy to use; fast deployment. | Limited customization for complex workflows; lacks advanced features for large-scale needs. | A fast and easy solution for small to medium businesses requiring quick integrations with pre-built templates. |
| Boomi | Easy scalability; strong reliability; extensive system compatibility. | Pricing can be high for smaller businesses; requires training for complex integrations. | Scalable for growing businesses; suitable for medium to large enterprises. |
| Coupler.io | Simple setup; automatic refresh schedules; affordable plans for small teams. | Focused on data import rather than full-scale integration; limited transformation options. | Perfect for simple data import tasks and reporting needs. |
| Skyvia | Comprehensive solution; intuitive interface; advanced transformation capabilities. | Advanced features require premium plans; fewer third-party tutorials and support options. | Suitable for businesses of all sizes with a focus on data sync, transformation, and backup. |

Conclusion Choosing the right CRM integration tool depends on your business size, goals, and technical expertise. Each tool has its strengths, from automation and synchronization to real-time updates and backups. By carefully assessing your requirements, like integration scope, data security, and budget, you can select the perfect solution to streamline operations and enhance customer relationship management. Regardless of the choice, the right CRM integration tool is a game-changer for improving efficiency and driving growth in today's interconnected business world. F.A.Q. What are CRM integration tools, and why do I need them? CRM integration tools are platforms that connect your CRM system with other business applications like marketing, sales, e-commerce, and finance tools. They help automate workflows, sync data in real time, and improve efficiency by ensuring all your systems work seamlessly. Which CRM integration tool is best for small businesses? For small businesses, user-friendly and cost-effective tools like Skyvia or Coupler.io are great options. They require no coding, are easy to set up, and are perfect for automating repetitive tasks or syncing data between a few apps. Are there any free options for CRM integration tools? Tools like Skyvia, Zapier, and Coupler.io offer free plans with limited features, making them suitable for testing or small-scale integrations. Most platforms also provide free trials for premium plans so you can explore advanced features. What are the key factors when choosing a CRM integration tool? Look for: Compatibility with your existing CRM and business apps. Ease of use or technical expertise required. Pricing and scalability for your business size.
Features like real-time data syncing, automation, and security. [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends." }, { "url": "https://skyvia.com/blog/best-data-engineering-tools/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Best Data Engineering Tools for Professionals for 2025 By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) November 24, 2023
Businesses need data engineering solutions to cope with enormous data flows, so that the organizational ecosystem can function as a unified mechanism. Companies also gain better control over all the digital data within the organization and extract real value from it.

Let's look at a real-life example. A properly designed sales funnel connected to analytical tools reveals essential details about customers and their preferences. That insight is real business value, leading to stronger brand loyalty and retention rates. There are many similar examples, but they all have one thing in common: you need to select the proper data engineering solutions to build an elaborate data system.

So, we have prepared an overview of popular data engineering tools in 2025.

Table of Contents
- Best Data Engineering Tools
- Skyvia
- Amazon Redshift
- Looker
- Apache Spark
- Segment
- Fivetran
- Apache Kafka
- Power BI
- Google BigQuery
- Tableau
- Key takeaways

## Best Data Engineering Tools

Data engineers are responsible for designing architectures and coordinating data flows. To do this, they write complex scripts that manage data at different stages of its life cycle, from collection to serving. Data engineering tools exist to streamline and optimize this process.

The principal stages of data engineering are the following:
- Collection
- Storage
- Transformation
- Aggregation
- Optimization

The categories of data engineering tools largely mirror this list, though some services cover more functions than others.

### Skyvia

One of the most multifunctional data engineering tools is [Skyvia](https://skyvia.com/data-integration/import), a cloud-based platform for simple and complex data integration scenarios.
It can extract data, transform it, and load it into the preferred destination (an app, database, or data warehouse). Skyvia has recently been recognized as [the most user-friendly and easiest-to-use ETL software](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) by G2. Its no-code approach makes designing data pipelines easy, saving data engineers a lot of time. What's more, Skyvia connects to other popular tools at both ends of a data engineering pipeline, so it can become the center of the pipeline.

**Key Features**
- No-code data import (ETL and Reverse ETL) between cloud apps, and data loading from a database to the cloud or vice versa.
- Data export from applications to CSV files.
- Data replication (ELT) into a data warehouse, kept up to date automatically.
- Bi-directional synchronization of data between apps, databases, and data warehouses.
- Complex data integration scenarios involving several sources.
- Meets the data integration requirements of startups as well as enterprises.

**Pros**
- Visual data management with drag-and-drop functionality.
- Scheduled data transfers for automated updates at intervals as short as one minute, which is practically real-time data integration.
- Pre-made connectors to 180+ sources.
- A free plan: [start using Skyvia](https://app.skyvia.com/) to see how it suits your business.

**Cons**
- No notifications about the progress or completion of data transfers (though there are emails with error descriptions).
- No release notes for new versions.

**Pricing**
- Free plan
- Basic plan at $15/month
- Standard plan at $79/month
- Professional plan at $399/month
- Enterprise plan matching particular regulatory requirements at a custom price

To select the plan that suits you best, contact the [Skyvia Sales Team](https://skyvia.com/schedule-demo) to schedule a demo and get further details.
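Tools like Skyvia automate the extract-transform-load pattern without code. For readers curious about what the pattern itself looks like, here is a minimal, generic ETL sketch in Python using only the standard library; the sample CSV, field names, and the `sales` table are illustrative assumptions, not Skyvia's API:

```python
import csv
import io
import sqlite3

# Extract: read rows from a CSV source (an in-memory sample standing in
# for a file exported from a cloud app).
raw = io.StringIO("name,amount\nalice,10.5\nbob,20\n")
rows = list(csv.DictReader(raw))

# Transform: normalize names and cast amounts to numbers.
records = [(r["name"].strip().title(), float(r["amount"])) for r in rows]

# Load: write the transformed rows into a destination table (SQLite as a
# stand-in for a database or data warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", records)

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 30.5
```

A no-code platform performs the same three stages, but the mapping and scheduling are configured visually instead of scripted.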
### Amazon Redshift

[Amazon Redshift](https://aws.amazon.com/redshift/) is a popular data warehousing solution for companies dealing with big data. It takes data management to the next level and speeds up the process of getting insights out of data, largely thanks to its ability to integrate with data science and analytics solutions.

**Key Features**
- Columnar storage architecture, which compresses data for faster querying and analysis.
- Implements the MPP (massively parallel processing) paradigm to speed up data operations.
- Offers data protection and recovery through backups.

**Pros**
- Quick querying
- Easy to set up
- Seamlessly integrates with data in S3
- Auto or on-demand scaling

**Cons**
- The interface could be more intuitive
- JSON support in SQL is minimal
- No option to restrict duplicate records

**Pricing**
Amazon Redshift cost is based on a pay-as-you-go model, depending on the storage and computing capacities requested.

### Looker

Google's product [Looker](https://lookerstudio.google.com/) is a breakthrough solution that clarifies complex data for everyone. It can sit on top of any modern database or data warehouse to fully leverage its power. Its role in data engineering is to extract the needed information and derive insights from it through analysis. To supply Looker with data from different sources for enhanced analytics, consider [Skyvia](https://skyvia.com/data-analysis/). It also easily connects to [BigQuery](https://skyvia.com/connectors/google-bigquery) for data transfer, which can be extremely useful for those who use a data warehouse as a basis for analysis.

**Key Features**
- Accessible via a web-based interface for data exploration and analysis.
- Doesn't require extensive knowledge of SQL, so business users can easily work with it.
- Embeds into other applications and services thanks to a robust API.
- Creates customizable dashboards and reports.
**Pros**
- Data filtering and navigation
- New updates with each monthly release
- Simple and easy-to-understand UI

**Cons**
- It can be slow sometimes
- Visualizations can be difficult to customize

**Pricing**
See all the pricing details [here](https://cloud.google.com/looker/pricing).

### Apache Spark

One of the leading data engineering tools for working with big data is [Apache Spark](https://spark.apache.org/). This service is powerful at processing large amounts of real-time data across clusters. Spark works perfectly in the finance and banking sectors to detect fraudulent transactions immediately. It's also popular in the entertainment, gaming, and healthcare industries.

**Key Features**
- In-memory cluster computing to increase data processing speed.
- Fault tolerance due to distributed datasets.
- Algorithms for machine learning tasks.
- Integration with other Apache tools.

**Pros**
- Multiple language support
- Great performance for both streaming and batch data
- Outstanding processing speed

**Cons**
- No Dataset API support in the Python version of Spark
- Lack of documentation
- Debugging is difficult

**Pricing**
Apache Spark is a free, open-source solution.

### Segment

[Segment](https://segment.com/) is a Customer Data Platform (CDP) for collecting, managing, and utilizing customer data from digital resources. It also helps businesses engage with their audiences in real time. Segment is mostly suitable for organizations in retail, media, financial services, and healthcare.
**Key Features**
- Design of cross-channel engagement
- Customer data pipeline optimization
- User privacy protection
- Customer profile sync with a data warehouse

**Pros**
- Ability to disaggregate data to analyze customer engagement
- Handles PII and GDPR requirements
- Easy to add a new endpoint or application to receive the data

**Cons**
- Rules for user identification and deduplication can be limiting
- Not enough consistency across connectors
- Most features are only available on the Business tier

**Pricing**
Segment offers three plans: Free, Team, and Business. The Team package starts at $120/month, while the Business plan's cost is customized depending on the company's traffic.

### Fivetran

[Fivetran](https://www.fivetran.com/) is a low-code data integration platform that helps businesses consolidate their data. Its main function is to extract data from various sources and load it into a data warehouse, following the ELT concept.

**Key Features**
- More than 300 connectors
- Data scrubbing functions
- Scheduled data loads
- Data pipeline monitoring
- Support for data lakes

**Pros**
- Near real-time data replication
- Granular control over loaded data
- Plenty of connectors available

**Cons**
- No detailed logging
- No options for masking sensitive data

**Pricing**
Fivetran offers a free plan for low data volumes. The price increases progressively with growing data volumes.

### Apache Kafka

[Apache Kafka](https://kafka.apache.org/) is part of the data engineering toolkit for working with big data. This platform is fault-tolerant and highly scalable, and it aims to process real-time data streams. Apache Kafka is particularly useful in website activity tracking, messaging systems, logging, and so on.

**Key Features**
- Relies on a publish-subscribe model with producers of data and subscribers (consumers).
- Allows parallel processing and distribution of data.
- Ensures the needed data transformations.

**Pros**
- Excellent administration
- Event-driven architecture
- Asynchronous processing
- Scales horizontally very well

**Cons**
- Webhooks can be difficult to monitor
- The UI hasn't changed over the years
- The learning curve could be gentler

**Pricing**
Apache Kafka is free.

### Power BI

The core objective of data engineering is to design pipelines where data serves businesses, and a business intelligence and data visualization tool like Microsoft [Power BI](https://powerbi.microsoft.com/) is the right service for that purpose. It turns data into valuable insights for businesses.

**Key Features**
- Uses the DAX formula language for calculations.
- Creates interactive reports and dashboards.
- Publishes and shares data with collaborators on demand.
- Integrates with Azure Machine Learning.

**Pros**
- Excellent report generation
- Available on mobile devices, desktop apps, and the web
- Inexpensive licensing model

**Cons**
- Frequent unexpected bugs
- Sharing dashboards can sometimes be difficult
- No Mac version

**Pricing**
Power BI comes in three editions: Free, Pro for $15/month, and Premium starting from $29.90/month.

### Google BigQuery

Based on Google Cloud, [BigQuery](https://cloud.google.com/bigquery) offers exceptional data warehousing and analytics at scale. It is a perfect solution for businesses that manage significant amounts of data in the cloud. To set up data transfer from business apps into BigQuery, consider [Skyvia](https://skyvia.com/connectors/google-bigquery).

**Key Features**
- Limitless scalability
- Serverless architecture
- Integrated AI collaborator
- Cost-effective storage
- Powerful big data analytics
- Built-in ML models

**Pros**
- Automatically optimizes queries to fetch data quickly
- Inexpensive data storage
- Python library support

**Cons**
- No way to search specific column fields using the search functionality
- Limited user support
- No multi-transaction ACID compliance

**Pricing**
The service uses an on-demand pricing model.
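BigQuery's on-demand model bills queries by the volume of data scanned, so you can budget a query from its expected scan size. A minimal back-of-the-envelope sketch (the per-TiB rate below is an illustrative assumption; check Google's current pricing page before relying on any figure):

```python
def estimate_query_cost(bytes_scanned: int, usd_per_tib: float = 6.25) -> float:
    """Estimate an on-demand query cost from bytes scanned.

    On-demand billing is proportional to data scanned; the default rate
    here is an illustrative assumption, not an official price.
    """
    tib = bytes_scanned / 2**40  # bytes -> tebibytes
    return tib * usd_per_tib

# A query scanning 500 GiB at the assumed rate:
cost = estimate_query_cost(500 * 2**30)
print(round(cost, 4))  # 3.0518
```

Partitioning and clustering tables reduces the bytes a query has to scan, which is the main lever for lowering on-demand costs.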
### Tableau

[Tableau](https://www.tableau.com/) is another popular business intelligence and data visualization tool. It connects to different digital sources and creates interactive dashboards based on the ingested data. Tableau is popular across multiple industries due to its ability to explore complex datasets and its flexibility, which makes it convenient for both technical and non-technical users.

**Key Features**
- Drag-and-drop interface
- Supports a wide variety of chart and graph types for visualization
- Combines data from multiple sources
- Provides apps for Android and iOS

**Pros**
- Handles large data sets very well
- A very good set of charts
- Easy to discover and explore information

**Cons**
- Limited data preprocessing
- Limited product support

**Pricing**
There are three pricing tiers:
- Viewer at $15/month for viewing reports
- Explorer at $42/month for viewing and editing reports
- Creator at $75/month for data scientists and analysts who create and analyze reports

## Key takeaways

Most tools mentioned in this article originate from major IT organizations such as Apache, Microsoft, Amazon, and Google. Some of these solutions are unique in their genre and have no competitors on the market. The solutions listed here also appear on other lists of the best data engineering tools. The selection rests on a mix of criteria:
- affordable pricing
- powerful functionality
- investments
- customer pain resolution

Overall, using some of the tools described in this article will help you design a robust data engineering pipeline. As an ETL tool is usually the core of a data engineering pipeline, we suggest starting with Skyvia on your way to outstanding achievements!
*[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.*

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Top Data Management Tools: Free and Paid Options for 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) · November 29, 2023
The modern business world revolves around data and data management, regardless of company size and industry. We cannot close our eyes to all the data processing, review, management, and analysis, so choosing the right tool to simplify our lives is a good idea. Let's look at 15 top data management solutions, explain how they work, and discuss which business areas they cover. This review will help you select the best tool for making informed decisions, allowing your company to grow and develop.

Table of Contents
- What Are Data Management Tools?
- Key Benefits of Data Management Tools
- Types of Data Management Tools
- Top 15 Data Management Tools
- Skyvia
- Stitch
- Fivetran
- Talend
- Informatica
- Snowflake
- Tableau
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Dell Boomi
- Looker
- Microsoft Power BI
- Airflow
- Chartio
- Comparison Table of Data Management Tools
- Challenges in Data Management
- Conclusion

## What Are Data Management Tools?

Just imagine a ship: to reach its destination safely and successfully, the captain must know the weather forecast, the currents, the potential pitfalls and reefs, and plot a correct course. Data management is exactly that: a set of practices, procedures, and tools that helps companies collect, store, process, synchronize, and analyze data for efficient decision-making.

Data management tools cover a wide range of functionality, but the most common areas are:
- Data Integration
- Data Storage
- Data Quality
- Data Governance
- Data Security
- Data Analytics
- Data Migration
- Data Backup and Recovery

Let's consider the key advantages of using data management tools for different business goals.
## Key Benefits of Data Management Tools

While working with business data, we have to know how to make decisions safely, "bringing our ship to the destination port." Here are the top five critical features of DM solutions, which make data clear and ready for the analysis needed to decide how to achieve success.

- **Automation for reducing redundancies**: [ETL solutions](https://skyvia.com/blog/etl-tools/) extract data from various sources, transform it into the appropriate format, and finally load it into a database or DWH, reducing manual effort and data processing redundancies. Workflow automation helps streamline repetitive tasks while creating complex automated data pipelines. NOTE: In both cases, automation can bring data accuracy close to 100%.
- **Flexibility in managing large data volumes**: The scalability of DM tools handles growing data volumes without compromising performance. Such tools support a range of data types (structured, semi-structured, unstructured) and formats (CSV, JSON, XML) to accommodate different data sources and requirements.
- **Global work facilitation**: Cloud services give users freedom of work: data can be accessed anytime, from anywhere, by any user with the required permissions, which is helpful for remote work and allows data synchronization among team members in different countries.
- **Time and cost savings**: You save time and money by locating specific, always-correct data, incorporating it into a central database, and exporting it in any format for sharing with whomever you want.
- **Security and backup**: DM tools provide top security, efficiency, and privacy, allowing data generation and backup.

## Types of Data Management Tools

We may divide DM tools into several categories depending on the use case. Let's navigate our data management ship through each one.

- **Cloud data management tools** connect and integrate various data sources via APIs, webhooks, and direct database connections.
- **Data integration (ETL & ELT) tools** set up complex, continuous data loading and transformation from multiple sources into a target database or DWH.
- **Data visualization and analytics tools** help review, explore, and analyze massive datasets to generate reports and dashboards and obtain the critical information needed for correct business decisions.

## Top 15 Data Management Tools

As you can see, various DM solutions cover different goals and business areas. Here's a list of the 15 best data management tools you might choose to help your company keep data accurate, correct, and always up to date.

### Skyvia

If your organization is looking for a simple yet functional solution that provides [integration scenarios](https://skyvia.com/data-integration/) of any complexity, including ETL, ELT, Reverse ETL, data migration, one-way and bi-directional data sync, workflow automation, data sharing via REST API, backups for cloud apps, and more, and that supports [170+](https://skyvia.com/connectors/) CRMs, databases, and cloud apps, then [Skyvia](https://skyvia.com/) is that solution.

**Key Benefits**
- The platform is cloud-based, so you just need to register to start working.
- The UI is intuitive and requires no extra time or training costs for staff.
- Skyvia provides a strong combination of robust functionality and pay-as-you-go [pricing](https://skyvia.com/pricing/), ranging from a freemium model to an enterprise one.

**Users review**
According to G2 Crowd, [Skyvia is at the top of the 20 easiest-to-use ETL tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1). [Gartner's](https://www.gartner.com/reviews/market/data-integration-tools/vendor/devart/product/skyvia) rating for Skyvia for 2023 is 4.9.

### Stitch

[Stitch](https://stitchdata.com/) is a cloud-first data management tool that provides an ETL data pipeline service.
[Compared to Skyvia](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia), its interface is also user-friendly, allowing fast and intuitive pipeline setup. The tool extracts data on a schedule using its Singer-based engine, saves it into an internal data pipeline via the Import API, transforms it to be compatible with the destination, and finally loads the transformed data there. Stitch supports somewhat fewer connectors than Skyvia (140+) and doesn't offer a free [pricing](https://stitchdata.com/pricing/) plan. Prices range from the Standard plan ($100 per month) to the Premium plan ($2,500 per month), and a 14-day free trial is available.

**Key Benefits**
- The platform's simplicity, even for non-technical users.
- The ability to transform data.
- Monitoring and alerting features, which show the current status of data pipelines and notify users of any issues.

**Users review**
According to [G2 Crowd](https://www.g2.com/products/talend-stitch/reviews) and [Gartner](https://www.gartner.com/reviews/market/integration-platform-as-a-service-worldwide/vendor/qlik-talend/product/stitch), Stitch's rating for 2023 is 4.5.

### Fivetran

[Fivetran](https://www.fivetran.com/) is a popular data management tool that supports ETL and data ingestion. It loads data from your source to the destination you need. For data transformation, you may also use pre-built and QuickStart data models. This low-code solution offers 300+ connectors, including databases, DWHs, and more. The web interface is user-friendly enough, but more complex than [Skyvia](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia); for instance, you must be familiar with SQL to use dbt for complex transformations. The [data-volume-based pricing](https://www.fivetran.com/pricing/features) offers a 14-day free trial and a completely free plan.

**Key Benefits**
- Rapid data integration setup.
- A wide range of source and destination integrations.
- The ability to accommodate large data volumes.
**Users review**
According to [Gartner](https://www.gartner.com/reviews/market/data-integration-tools/vendor/fivetran), Fivetran's rating for 2023 is 4.4, while [G2 Crowd](https://www.g2.com/products/fivetran/reviews) rates Fivetran at 4.2.

### Talend

[Talend](https://www.talend.com/) is a cloud-based data management tool that supports both cloud and on-premise integration. The platform consists of two tools: the open-source Talend Open Studio and Talend Data Fabric, a low-code service that combines data integration, quality, and governance. Talend supports 1,000 connectors and components and provides pre-built integration templates. However, some users have commented that the UI isn't as friendly as [Skyvia](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)'s. Pricing depends on the solution: Open Studio is free, while Data Fabric is subscription-based and offers a 14-day free trial for the cloud.

**Key Benefits**
- The ability to work with a large number of source systems using standard connectors.
- Integration with popular cloud platforms like AWS, Azure, and Google Cloud.

**Users review**
According to [Gartner](https://www.gartner.com/reviews/market/data-integration-tools/vendor/qlik-talend/product/talend-data-fabric), Talend Data Fabric's rating for 2023 is 4.1, while [G2 Crowd](https://www.g2.com/products/talend-cloud-data-integration/reviews) rates Talend at 4.3.

### Informatica

If you're looking for an iPaaS tool that provides solutions for the enterprise sector, you might choose [Informatica](https://www.informatica.com/). Its main specialties are data ingestion and ETL. Depending on the product, the system varies from no-code and low-code to code-heavy. [Unlike Skyvia](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia), the UI is sometimes too complicated, according to user feedback. The number of connectors could be more robust: just 90+.
But you may use Informatica products for both cloud and on-premise scenarios, or even work with the CLAIRE AI engine. The [pricing](https://www.informatica.com/products/cloud-integration/pricing.html) is consumption-based and includes free options.

**Key Benefits**
- A comprehensive set of [data integration tools](https://skyvia.com/blog/data-integration-tools/), including PowerCenter, Cloud Data Integration, etc.
- Data transformation capabilities.
- Data quality and real-time data integration capabilities.

**Users review**
According to [G2 Crowd](https://www.g2.com/products/informatica-powercenter/reviews), Informatica's rating for 2023 is 4.4, while [Gartner](https://www.gartner.com/reviews/market/data-preparation-tools/vendor/informatica) rates Informatica at 4.3.

### Snowflake

[Snowflake](https://www.snowflake.com/en/) is a cloud data platform that runs on major cloud providers, including Amazon Web Services. This no-code, cloud-based data warehousing [platform](https://skyvia.com/connectors/snowflake) allows users to perform ETL/ELT and Reverse ETL, build data pipelines, share data via REST API, and more. If you're looking to adopt a cloud-first approach to data management, the SaaS product Snowflake offers is your tool. It's well suited for analytics and can serve as a destination for data backups and archives, or simply as off-platform file storage for all your business data. Compared to Skyvia's intuitive UI, Snowflake's ease of use can vary depending on a user's familiarity with data concepts and SQL. The [subscription-based pricing model](https://www.snowflake.com/en/data-cloud/pricing-options/) doesn't offer a free plan, but provides a 30-day free trial for new users.

**Key Benefits**
- A cloud-native platform that allows users to scale data operations up or down as needed without the complexity of managing physical infrastructure.
- Zero maintenance of servers, storage, or performance tuning.
- Data sharing and integration abilities.
**Users review**
According to [Gartner](https://www.gartner.com/reviews/market/cloud-database-management-systems/vendor/snowflake/product/snowflake-data-cloud), Snowflake's rating for 2023 is 4.6, and [G2 Crowd](https://www.g2.com/products/snowflake/reviews) rates it at 4.5.

### Tableau

[Tableau](https://www.tableau.com/) is a powerful data analytics and visualization tool that [can be used](https://skyvia.com/data-integration/tableau) on the desktop, in the cloud, or on customer servers. It allows you to copy data from any source to a database or data warehouse and analyze it quickly. The key word when discussing this service is "easy": it easily connects to the desired data sources, gives partners, teams, and clients access to visualizations, and creates interactive maps automatically. Besides that, Tableau supports unlimited data exploration with interactive and intuitive dashboards; it takes only a few minutes to set a dashboard up with data from your selected web applications. The [pricing](https://www.tableau.com/pricing/teams-orgs) is flexible, depending on your organization's needs, and even includes a free plan.

**Key Benefits**
- The platform's simplicity.
- Strong data visualization.
- Real-time analytics and extensibility.

**Users review**
According to [G2 Crowd](https://www.g2.com/products/tableau/reviews) and [Gartner](https://www.gartner.com/reviews/market/analytics-and-decision-intelligence-platforms-in-supply-chain/vendor/salesforce-tableau/product/tableau), Tableau's rating for 2023 is 4.4.
### Amazon Web Services

[Amazon Web Services](https://aws.amazon.com/free/?trk=15faae9b-ab87-4e8f-8946-c46e8264e383&sc_channel=ps&ef_id=CjwKCAiA3aeqBhBzEiwAxFiOBl8JRI6ZiIxPt09qeL5NYa9_YXRMmgnOIka29Pkg36J53wx7uuAQPhoCkQsQAvD_BwE:G:s&s_kwcid=AL!4422!3!645208863523!e!!g!!amazon%20web%20services!19572078132!145087520613&all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc&awsf.Free%20Tier%20Types=*all&awsf.Free%20Tier%20Categories=*all) (AWS) is primarily the choice of large enterprises processing large data volumes daily. It's actually a set of different solutions you can use simultaneously as a cloud data management stack. You pay as you go, and the company provides cloud computing platforms and APIs according to your needs. The list below shows a set of AWS data management services:

- Amazon Redshift for data warehousing.
- Amazon S3 for temporary and intermediate storage.
- Amazon Athena for SQL-based data analytics.
- Amazon QuickSight for dashboard construction and data visualization.
- AWS Glue for building data catalogs to categorize, search, and query data.
- Amazon Glacier for long-term backup and storage.

The [pricing](https://aws.amazon.com/pricing/?nc2=h_ql_pr_ln&aws-products-pricing.sort-by=item.additionalFields.productNameLowercase&aws-products-pricing.sort-order=asc&awsf.Free%20Tier%20Type=*all&awsf.tech-category=*all) for each service is separate, so it depends on what you use. It's a good idea to plan spending carefully, because costs can grow quickly with each new service added.

**Key Benefits**
- Scalability, flexibility, and global reach are AWS's main advantages.

**Users review**
According to [G2 Crowd](https://www.g2.com/products/amazon-aws-platform/reviews) and [Gartner](https://www.gartner.com/reviews/market/managed-hybrid-cloud-hosting-asia-pacific/vendor/amazon-web-services/product/aws-cloud-services), AWS's rating for 2023 is 4.7.
### Microsoft Azure

Like AWS, [Microsoft Azure](https://azure.microsoft.com/en-gb/free/search/?ef_id=_k_CjwKCAiA3aeqBhBzEiwAxFiOBvrT037l7OWm0TqLKoedSKY44z44RNKzfIQ9M2It82Y_9cQXwWCE9BoCnK4QAvD_BwE_k_&OCID=AIDcmmyckrgvvo_SEM__k_CjwKCAiA3aeqBhBzEiwAxFiOBvrT037l7OWm0TqLKoedSKY44z44RNKzfIQ9M2It82Y_9cQXwWCE9BoCnK4QAvD_BwE_k_&gad_source=1&gclid=CjwKCAiA3aeqBhBzEiwAxFiOBvrT037l7OWm0TqLKoedSKY44z44RNKzfIQ9M2It82Y_9cQXwWCE9BoCnK4QAvD_BwE) allows setting up a cloud-based data management system with various options: multiple databases or data warehouses, plus analytics solutions for Azure-stored data. The tool is cloud-based, so no on-premise implementation is needed, but if you're unfamiliar with the Azure environment, getting started may be a bit of trouble. The primary services are:

- Blob Storage.
- Private cloud deployments.
- Standard SQL databases and VM-based SQL servers.
- NoSQL-style table storage options.
- Easy integration with Panoply for ELT/ETL services.
- Azure Data Explorer for real-time analysis of huge streams of unprocessed data.

The [pricing](https://azure.microsoft.com/en-us/pricing/) model is quite similar to AWS and varies depending on your selection and needs. There are also a few [free](https://azure.microsoft.com/en-us/pricing/free-services/) options.

**Key Benefits**
- The platform's scalability.
- Hybrid cloud capabilities.
- Diverse service offerings: computing, storage, databases, analytics, machine learning, IoT, etc.

**Users review**
According to [G2 Crowd](https://www.g2.com/products/microsoft-microsoft-azure/reviews) and [Gartner](https://www.gartner.com/reviews/market/cloud-infrastructure-and-platform-services/vendor/microsoft/product/azure), Microsoft Azure's rating for 2023 is 4.4.
### Google Cloud Platform

[The Google Cloud Platform](https://cloud.google.com/gcp?utm_source=google&utm_medium=cpc&utm_campaign=emea-tr-all-en-bkws-all-all-trial-e-gcp-1011340&utm_content=text-ad-none-any-DEV_c-CRE_500236818957-ADGP_Hybrid+%7C+BKWS+-+EXA+%7C+Txt+~+GCP+~+General%23v1-KWID_43700060393216022-aud-606988877974:kwd-26415313501-userloc_9056779&utm_term=KW_google+cloud+platform-NET_g-PLAC_&&gad_source=1&gclid=CjwKCAiA3aeqBhBzEiwAxFiOBhV1LG5V7lBxjRPGflqet8PtGiXbB9AocQmNela-86WkIxXtL4CoEhoCo0sQAvD_BwE&gclsrc=aw.ds&hl=en) works on principles similar to AWS and Azure: a wide range of cloud-based data management tools, a valuable workflow manager, and a way to bring the whole zoo of components into one system. It fits companies processing large data volumes, though even technical users unfamiliar with Google Cloud may get stuck at first. Google Cloud offers the following services:

- BigQuery for tabular data storage and SQL-style analytical queries.
- Data Studio for dashboard construction and GUI-based analysis.
- Cloud Pub/Sub and Cloud Data Transfer for data intake.
- ML Engine for more advanced analyses using ML and AI.
- Cloud Datalab for code-based data science.
- Cloud Bigtable for NoSQL database-style storage.
- Connections to standard BI tools like Chartio, Domo, Looker, Tableau, etc.

The [pricing](https://cloud.google.com/pricing/) model is flexible, depends on your choices, and includes a [free](https://cloud.google.com/free) monthly tier.

**Key Benefits**
- The global network and infrastructure.
- Scalability.
- Serverless computing options like Google Cloud Functions and Cloud Run.

**Users review**
According to [Gartner](https://www.gartner.com/reviews/market/cloud-infrastructure-and-platform-services/vendor/google/product/google-cloud-platform), Google Cloud Platform's rating for 2023 is 4.5, while [G2 Crowd](https://www.g2.com/products/google-cloud/reviews) rates it at 4.4.
Dell Boomi

[Boomi AtomSphere](https://boomi.com/) is an iPaaS tool that integrates both cloud and on-premises applications and requires no appliances or coding. The solution handles real-time data integration and supports high-volume mobile, batch, and EDI requirements. Its UI is graphical and reasonably user-friendly. Boomi supports 240+ connectors of different types: application, event-driven, technology, and custom ones. You can create custom connectors in Java using Boomi's Connector SDK, up to the number of licenses in your subscription. Connectors are categorized by classes: Small Business, Standard, Enterprise, and Trading Partner. [Pricing](https://boomi.com/pricing/) is flexible and depends on the number of connectors, workflows, environments, etc., and a 30-day free trial is available. However, compared to [Skyvia](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia), it's a less cost-effective option and may not suit companies with limited budgets.

Key Benefits

- Real-time bidirectional process flows across silos.
- Data alerting to resolve duplications and data entry issues.
- Automatic merging of similar records.

User reviews

According to [Gartner](https://www.gartner.com/reviews/market/enterprise-low-code-application-platform/vendor/boomi/product/boomi-unified-platform), Dell Boomi's rating for 2023 is 4.1; [G2 Crowd](https://www.g2.com/products/boomi/reviews) rates it as 4.3.

Looker

Like Tableau, [Looker](https://cloud.google.com/looker?hl=en) is a cloud-based analytics and visualization system. It is popular among businesses that value self-service analytics and the ability to customize data experiences for different user groups. In other words, Looker is a set of data analytics and BI tools for exploring, analyzing, and visualizing data.
The pricing depends on the number of users and the scale of the deployment, but you can start a free trial [here](https://cloud.google.com/resources/looker-free-trial).

Key Benefits

- Looker allows defining metrics with the LookML data modeling language, which generates SQL queries to answer questions about those metrics.
- Easy-to-understand dashboards with the ability to drill in and explore.
- Looker connects directly to the database, so there is no need to first download data into other software.

User reviews

According to [Gartner](https://www.gartner.com/reviews/market/marketing-dashboards/vendor/google/product/looker) and [G2 Crowd,](https://www.g2.com/products/looker/reviews) Looker's rating for 2023 is 4.4.

Microsoft Power BI

[Microsoft Power BI](https://powerbi.microsoft.com/en-us/) is a data visualization and analysis platform popular among business analysts and data scientists. It's no-code, provides both desktop and web versions, and offers a library of pre-built connectors. The drag-and-drop, Excel-like interface is simple enough for even non-analysts to get started. However, Power BI is not the best choice for processing very large data volumes. The [pricing](https://powerbi.microsoft.com/en-us/pricing/) ranges from the free Microsoft Fabric account to Power BI Premium per capacity ($4,995 per capacity/month).

Key Benefits

- Usability.
- Rich visualizations.
- Seamless integration with various data sources.

User reviews

According to [Gartner](https://www.gartner.com/reviews/market/analytics-business-intelligence-platforms/vendor/microsoft/product/microsoft-power-bi), Microsoft Power BI's rating for 2023 is 4.4; [G2 Crowd](https://www.g2.com/products/microsoft-power-bi-embedded/reviews) rates it as 4.3.

Airflow

It may sound strange, but [Airflow](https://airflow.apache.org/) wasn't originally created for data processing.
Actually, it's an open-source, free-to-use workflow management system that automates the organizing, scheduling, and monitoring of [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) pipelines, reverse ETL, and other data processing workflows. However, you must be familiar with Python, the required libraries, and Docker containers to work with it, so it's not the best choice if you prefer no coding. Airflow supports only 80+ connectors, which is few compared with [Skyvia](https://skyvia.com/etl-tools-comparison/apache-airflow-alternative-skyvia). The platform supports encryption, role-based access control, logging, and auditing but doesn't have official security certifications.

Key Benefits

- Extensibility.
- Dependency management.
- Monitoring capabilities.

User reviews

According to [G2 Crowd](https://www.g2.com/products/apache-airflow/reviews), Airflow's rating for 2023 is 4.3; [TrustRadius](https://www.trustradius.com/products/apache-airflow/reviews) rates the solution as 8.2 out of 10.

Chartio

Like Power BI, [Chartio](https://chartio.com/about/legal/) is a cloud-based BI and data visualization platform that connects various data sources, creates interactive dashboards, and generates insightful reports. Chartio suits users with different technical backgrounds. Its intuitive drag-and-drop UI lets you easily build custom interactive dashboards and create charts, graphs, and reports without coding. With Chartio, you can model and shape data according to your business needs. The pricing is subscription-based, and a free trial is available.

Key Benefits

- An easy-to-use UI.
- Data visualization capabilities.
- Flexible data connectivity options.

User reviews

According to [Gartner](https://www.gartner.com/reviews/market/analytics-business-intelligence-platforms/vendor/chartio/product/chartio), Chartio's rating for 2023 is 4.3; [G2 Crowd](https://www.g2.com/products/chartio/reviews) rates it as 4.5.
Comparison Table of Data Management Tools

The table below compares the described data management tools and their primary capabilities.

| Tool | Primary Functionality | Cloud-Based | Data Integration | Data Visualization | ETL Capabilities | Real-Time Analytics |
|---|---|---|---|---|---|---|
| Skyvia | Data Integration/ETL/ELT | Yes | Yes | No | Yes | No |
| Stitch | ELT | Yes | Yes | No | Yes | No |
| Fivetran | ELT | Yes | Yes | No | Yes | Yes |
| Talend | Data Integration/ETL | Yes | Yes | No | Yes | Yes |
| Informatica | Data Integration/ETL | Yes | Yes | No | Yes | Yes |
| Snowflake | Cloud Computing | Yes | Yes | No | No | Yes |
| Tableau | Data Visualization | Yes | No | Yes | No | Yes |
| Amazon Web Services | Cloud Computing | Yes | Yes | No | Yes | Yes |
| Microsoft Azure | Cloud Computing | Yes | Yes | No | Yes | Yes |
| Google Cloud Platform | Cloud Computing | Yes | Yes | No | Yes | Yes |
| Dell Boomi | Data Integration | Yes | Yes | No | Yes | No |
| Looker | Data Visualization | Yes | No | Yes | No | Yes |
| Microsoft Power BI | Data Visualization | Yes | Yes | Yes | Yes | Yes |
| Airflow | Workflow Management | Yes | Yes | No | Yes | No |
| Chartio | Data Visualization | Yes | Yes | Yes | No | Yes |

Challenges in Data Management

You may select a set of perfect data management tools, yet finding a suitable approach is often a challenge in itself. Since data management hardships almost always stem from growing data volumes, let's review the most common ones and what can be done.

- Storage issues: data stored on different platforms isn't available in a single format or repository, which complicates analysis. Using a solution that transforms the data into a unified format is an excellent way to solve this issue.
- Complicated management: as data volumes grow, it becomes hard to track where data is stored, its exact amount, where to keep it, and how to use it. This issue can be solved with data inventory, cataloging, classification, and prioritization. Storing data in a centralized repository or data lake significantly simplifies data access and management.
- Data location and sharing difficulties: if your company works with large data volumes from diverse sources, you may face this challenge.
This significantly impacts data security, compliance, and accessibility, so you should prioritize data management strategies that enable secure, compliant, and efficient data sharing.

Conclusion

Each of the 15 data management solutions mentioned in this article has powerful capabilities for processing and transforming data the way you need. However, choosing the perfect tool depends on the features and options a business seeks. Another critical point is the solution's usability and complexity: it can be codeless or require scripting; it can have an intuitive UI or need time for staff training. The solution can also be cloud-based or on-premises, expensive, cheap, or free. All these factors influence your "best choice," because even businesses operating in the same industry may have different data processing needs. A perfect selection is usually a mix of usability, simplicity, and robust functionality at a reasonable price. [Skyvia](https://skyvia.com/solutions/) largely fits this description: it's a no-code, cloud-based, user-friendly platform that provides rich data processing functionality with pay-as-you-go pricing, and you can start using it for free right now.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support.
With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Best Data Mapping Tools in 2025

By [Amanda Claymore](https://skyvia.com/blog/author/amandac/) May 19, 2023

According to the [Data Pipelines Market Study](https://gumroad.com/l/MhQQJ), more than 65% of business organizations evolve their data integration by applying cloud or hybrid ecosystems. However, it's impossible to advance without an appropriate data mapping tool. In this article, we cover the main solutions on the market and provide key requirements for choosing the service that answers specific business needs.
Table of Contents

- Data Mapping
- Data Mapping Process
- Data Mapping Tools
- How to Choose a Data Mapping Tool
- Top 10 Data Mapping Tools
- Conclusion

Data Mapping

During data integration, data is transferred from one source to another. To ensure it is transferred consistently, data mapping is applied. Data mapping defines the data models of both sources, the relationships between the data in them, and how the data should be transformed during the process. For example, if you want to move your clients from Salesforce to Zoho, you may need to transfer customer names, addresses, and phone numbers. In this case, you need to define the data fields in each system, identify the differences between them, and create rules to transform the data before moving it.

Data Mapping Process

The data mapping process usually consists of the following steps:

1. Identifying the source and target data models: check the data structure of both data sources and the way they relate to each other.
2. Defining the relationships: look for common fields.
3. Setting the mapping rules: decide how the data should be transformed during the transfer.
4. Creating a mapping document: describe all the mapping rules and the steps needed to map every field successfully.
5. Testing: apply the steps and rules from the mapping document to chunks of data to check for errors.
6. Refining: if errors were found during testing, refine your mapping document and test again.
7. Implementing: once testing shows no errors, you can start the complete mapping implementation.

All these steps can be performed manually or with the help of a data mapping tool that automates the process and provides a friendly user interface designed to help you tackle common mapping challenges.
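The Salesforce-to-Zoho example above can be sketched in a few lines of Python. Note that the field names and transformation rules below are hypothetical, chosen only to show what a mapping document looks like when expressed as code; real CRM schemas differ.

```python
# Hypothetical field mapping for a Salesforce -> Zoho contact transfer.
# Each entry: source field -> (target field, transformation rule).
SALESFORCE_TO_ZOHO = {
    "Name":        ("Contact_Name", str.strip),
    "Phone":       ("Phone_Number", lambda v: v.replace(" ", "")),
    "MailingCity": ("City",         str.title),
}

def map_record(sf_record: dict) -> dict:
    """Apply the mapping rules to one source record."""
    zoho_record = {}
    for src_field, (dst_field, transform) in SALESFORCE_TO_ZOHO.items():
        if src_field in sf_record:
            zoho_record[dst_field] = transform(sf_record[src_field])
    return zoho_record

print(map_record({"Name": " Jane Doe ", "Phone": "555 0100", "MailingCity": "austin"}))
# -> {'Contact_Name': 'Jane Doe', 'Phone_Number': '5550100', 'City': 'Austin'}
```

A data mapping tool essentially lets you build and maintain a table like `SALESFORCE_TO_ZOHO` visually, then runs the equivalent of `map_record` over every row for you.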
Data Mapping Tools

Data mapping tools let users create mapping rules and steps quickly and share the results with the team, while ensuring that data is transformed and transferred between source and target consistently. There is a long list of data mapping tools on the market. Since it is hard to say which one is objectively the best, we suggest you look for the tool that fits your needs the most.

How to Choose a Data Mapping Tool

When choosing a data mapping tool, consider the following factors:

- Integration requirements: think upfront about the use cases for the tool. Consider the scenarios it should handle and how long it takes to accomplish them.
- Data sources and formats: check that it supports all the data sources and formats you need, as well as its data transformation capabilities.
- Ease of use: check that the tool's UI is easy to understand. Complex mappings take a while to arrange and perform; don't let a clunky UI make the process even more time-consuming.
- Flexibility and scalability: remember that the data volumes you work with today can change drastically along with business requirements. The tool should be flexible and scalable enough to handle these changes.
- Customization and extensibility: consider whether the basic functionality will be enough for your needs and whether the tool can be tuned to them. Look for features such as custom functions or plug-ins that extend the tool's functionality.
- Support and documentation: check that the documentation is well written and easy to follow, and investigate how to contact the support team, their working hours, and response times, so you can get help when needed.
- Cost: while many tools have a free demo plan, unlocking a tool's full potential usually requires paying for it.
Make sure to include prices for maintenance and support when evaluating the total cost.

Top 10 Data Mapping Tools

Skyvia

Integration requirements: 9/10. Skyvia supports a variety of integration scenarios, such as cloud integrations, ETL, bi-directional data sync, and data warehousing, and provides a long list of pre-built connectors to ease the process.
Data sources and formats: 9/10. Skyvia works with cloud sources, databases, and file storage services and supports flat file formats like CSV.
Ease of use: 9/10. Skyvia provides a user-friendly interface with a modern, non-overwhelming design and convenient mapping and expression editors for a smooth user experience.
Flexibility and scalability: 9/10. Skyvia handles large volumes of data, using parallel processing and data partitioning to improve performance.
Customization and extensibility: 8/10. Skyvia provides custom functions and expressions; however, it has limitations when it comes to customization and extensibility.
Support and documentation: 9/10. Skyvia provides decent email and chat support and has an extensive knowledge base. However, the documentation could be more detailed.
Cost: 10/10. There are several pricing plans: a free plan with limited features and paid plans that start at $15/month for more advanced features. The pricing is reasonable and flexible.
Thoughts: Skyvia is a solid choice for organizations looking for a cloud-based data integration platform with a user-friendly interface and a wide range of features at reasonable pricing. The service also grants high-grade security, being hosted on the Microsoft Azure cloud platform.

Pentaho

Integration requirements: 8/10. Pentaho is used for ETL, data migration, data warehousing, and Hadoop job execution.
Data sources and formats: 9/10. Pentaho supports a variety of databases, files, and web services.
Ease of use: 7/10. Pentaho has a relatively user-friendly interface, but it looks outdated and can be a bit overwhelming for new users.
Flexibility and scalability: 9/10. Pentaho is highly flexible and scalable, capable of handling large data volumes and adapting to changing business requirements.
Customization and extensibility: 9/10. Pentaho can be customized and extended with a variety of modules.
Support and documentation: 7/10. Pentaho offers a comprehensive set of resources, including user guides, forums, and support tickets. However, some users have reported issues with the quality of support and the clarity of error messages.
Cost: 7/10. Pentaho offers a free Community Edition with limited functionality. The Enterprise Edition subscription price depends on the functionality you want included; pricing starts at around $2,500 a year.
Thoughts: Pentaho is a powerful tool that can solve many integration challenges. While it may not be the easiest tool to use, it is flexible and scalable, which makes it a good fit for organizations that handle complex data integration projects.

Talend

Integration requirements: 8/10. Talend covers manual, middleware, application-based, uniform-access, and common-storage integration scenarios. However, it may require additional configuration to handle complex integrations.
Data sources and formats: 9/10. Talend supports a wide range of data sources and provides numerous pre-built connectors. It can also handle complex data formats, such as JSON, XML, Avro, and more.
Ease of use: 7/10. Talend's UI is on the brighter side compared to some other tools in this list; however, it still feels a bit dated and crowded with panes and control elements.
Flexibility and scalability: 9/10. Talend scales well and handles large data volumes.
Customization and extensibility: 9/10. Talend has a wide range of pre-built connectors and components.
Its modular architecture provides plenty of customization possibilities.
Support and documentation: 8/10. Talend provides extensive documentation, a community portal, and online support. However, some support options can be expensive.
Cost: 7/10. Talend offers open-source and commercial editions. While the open-source edition is free, the commercial data integration edition starts at $1,170 per user per month.
Thoughts: Talend is a highly flexible and scalable tool that handles complex integrations with ease. Its modular architecture makes it highly customizable; however, ease of use can be a concern. You can start testing the open-source edition yourself and contact the sales team for exact pricing if you decide to purchase the commercial version.

Informatica

Integration requirements: 9/10. Informatica is a comprehensive data integration platform that supports mapping, replication, synchronization, masking, and other integration scenarios to match its users' needs.
Data sources and formats: 9/10. Informatica works with databases, files, web services, and cloud applications and supports complex data structures and transformations.
Ease of use: 7/10. Informatica provides easy access to the basic functionality, but for more advanced tasks, users may spend ages with documentation and tutorials figuring out the expected sequence of actions.
Flexibility and scalability: 9/10. Informatica is highly scalable, adapts to changing business needs, and handles huge volumes of data with ease.
Customization and extensibility: 8/10. Informatica enables custom function usage and supports scripting languages such as Python.
Support and documentation: 8/10. Informatica provides extensive documentation with a big community around it. You can find many video guides and tutorials online. It also offers training programs covering best practices and troubleshooting.
Cost: 6/10. Informatica is a high-end tool with high-end pricing models, which are complex and vary depending on your organization's needs. We'd recommend contacting their sales team for details, but be prepared for numbers with three or four zeroes at the end.
Thoughts: Informatica is a powerful data mapping and transformation tool with strong connectivity and integration capabilities. While its ease of use and pricing may be barriers for some users, its comprehensive support and documentation resources make it a viable option for larger organizations with more complex data needs.

Adeptia

Integration requirements: 7/10. Adeptia supports most ETL scenarios and is mostly focused on B2B integrations.
Data sources and formats: 8/10. Adeptia works with databases, files, web services, and cloud applications and supports complex data formats, such as JSON, XML, and EDI.
Ease of use: 7/10. Users report that the interface can be cluttered and difficult to navigate, particularly for more complex workflows.
Flexibility and scalability: 9/10. Adeptia provides both cloud and on-premises deployment options and can handle large volumes of data.
Customization and extensibility: 8/10. Adeptia supports custom functions, pre-built connectors, and integration with third-party tools and services.
Support and documentation: 7/10. Adeptia provides documentation and online support options; however, there are user concerns about content quality and response time.
Cost: 6/10. Adeptia's pricing is based on the number of connectors and endpoints required. Be prepared for prices in the range of $3,000/month.
Thoughts: If Adeptia matches your budget and its user interface suits you, it's a solid choice that can handle most common integration tasks.

Oracle

Integration requirements: 9/10. Oracle Data Integrator is a robust data integration platform designed to meet a wide range of integration requirements.
It utilizes an ELT architecture, which eliminates the need for a standalone ETL server.
Data sources and formats: 9/10. Oracle Data Integrator works with relational databases, big data platforms, cloud services, legacy systems, and files.
Ease of use: 6/10. The Oracle Data Integrator UI can be considered moderately complex, especially for users who are new to the tool or have limited experience with [data integration platforms](https://skyvia.com/blog/data-integration-tools/). ODI's UI provides a comprehensive set of features, which can initially appear overwhelming; however, with time and practice, users become proficient in navigating and using it effectively.
Flexibility and scalability: 10/10. Oracle is highly scalable and handles large data volumes with ease. It also offers features such as parallel processing, grid computing, and workload management, making it highly flexible and adaptable to changing business requirements.
Customization and extensibility: 8/10. Oracle provides a range of customization options, such as custom functions, pre-built components, and SDKs that extend the platform's functionality. However, some advanced customization options may require expert-level skills and experience.
Support and documentation: 9/10. Oracle provides extensive documentation, including user guides, forums, and support tickets. Additionally, it offers various support options, including online, phone, and in-person support.
Cost: 6/10. Oracle often offers flexible licensing options and pricing models tailored to the needs of individual organizations, and may provide volume discounts, special promotions, or bundled solutions. However, expect prices to be on the higher end.
Thoughts: Oracle provides top-notch integration functionality at a relatively high price; however, if you can get a good deal from its sales department, it may be worth a shot.
Altova

Integration requirements: 9/10. Altova supports various integration scenarios, including ETL and data warehousing.
Data sources and formats: 9/10. Altova connects to various data sources, including databases, web services, XML, JSON, Excel, and more.
Ease of use: 7/10. Altova has a relatively friendly interface and provides visual tools for mapping, debugging, profiling, and validation.
Flexibility and scalability: 8/10. Altova shows relatively good results regardless of the volume of data you need to integrate.
Customization and extensibility: 9/10. Altova allows customization and extension using XSLT, XQuery, and other scripting languages. Its plug-in architecture allows for even more customization.
Support and documentation: 7/10. Altova offers documentation alongside other online resources and phone and email support. However, users report that the community is far smaller than those of similar tools, and support response times can be quite long.
Cost: 6/10. The starting price for the Enterprise Edition of Altova MapForce is $999 for a single-user license.
Thoughts: Altova is a great choice for managing integrations of any complexity for users who rarely need support and aren't put off by its pricing.

IBM InfoSphere

Integration requirements: 7/10. IBM InfoSphere supports ETL, data quality, data governance, and data warehousing scenarios.
Data sources and formats: 8/10. IBM InfoSphere supports structured, semi-structured, and unstructured data. It can handle complex transformations and supports XML, JSON, and binary data formats.
Ease of use: 6/10. IBM InfoSphere has a complex user interface and can be difficult for new users to learn.
Flexibility and scalability: 9/10. Distributed architecture, parallel processing, grid computing, and load balancing make IBM InfoSphere highly scalable and able to deal easily with data of any volume.
Customization and extensibility: 8/10. IBM InfoSphere offers custom workflow orchestration, connector and adapter development, and monitoring and alerting that you can tune as needed.
Support and documentation: 9/10. The documentation is thorough but demanding. You may need extra time to get things settled or ask for additional help from an IBM representative.
Cost: 6/10. IBM InfoSphere is an enterprise-grade tool with pricing to match. It's hard to give an approximate price tag, as it depends on many factors; however, be prepared to spend a fortune.
Thoughts: If you're a big enterprise with a solid budget, go for it, and don't hesitate to hire some coaches. The struggle of the first days will pay off later.

WebMethods

Integration requirements: 7/10. WebMethods supports most ETL scenarios; however, users report issues with custom integrations.
Data sources and formats: 9/10. WebMethods supports a full range of sources and data formats and provides advanced data mapping features.
Ease of use: 7/10. The UI is clean but sometimes quite counterintuitive; it takes some time to get used to.
Flexibility and scalability: 8/10. Despite some reported issues with memory usage, WebMethods doesn't seem to have any trouble with large volumes of data.
Customization and extensibility: 9/10. WebMethods can be customized using its built-in APIs, custom scripts, and plug-ins. It also supports custom functions and libraries.
Support and documentation: 8/10. WebMethods provides phone and email support alongside a YouTube channel and numerous community forums.
Cost: 6/10. The most basic WebMethods license costs $1,000 a month, and the price can grow drastically depending on the functionality included in the ordered package.
Thoughts: A high price tag may scare small and medium businesses away from using WebMethods for their integration needs; however, if you're running a large enterprise or an integration-centric project, it may be a good fit, as it handles all the integration tasks relatively well.

Jitterbit

Integration requirements: 9/10. Jitterbit supports a variety of integration scenarios, including ETL, ELT, and real-time integration.
Data sources and formats: 9/10. Jitterbit works with databases, files, web services, and cloud apps. It supports various data formats, including XML, JSON, and CSV, and handles complex data structures, such as hierarchical or nested data.
Ease of use: 9/10. Jitterbit has a clean, user-friendly interface that, as with most data integration tools, still takes time to master.
Flexibility and scalability: 9/10. Jitterbit is highly scalable and can handle large data volumes.
Customization and extensibility: 7/10. Jitterbit provides various customization options, including the ability to modify data flows with JavaScript and Groovy.
Support and documentation: 8/10. Jitterbit provides comprehensive documentation, a knowledge base, forums, and ticket support. However, users report that ticket processing takes too long.
Cost: 8/10. Jitterbit has three pricing models that depend on users' needs and can satisfy various budgets. The plans differ in the number of available connections and environments and can even affect support response time.
Thoughts: Jitterbit is a strong contender in the data mapping and integration space, suited to those who like clean, easy-to-use interfaces with reasonable pricing and decent capabilities.

Conclusion

We've evaluated the most commonly used data mapping tools to make your choice easier. Pay attention to the metrics that matter for your organization or personal use.
If you're looking for an easy-to-use tool with reasonable pricing that handles most scenarios, we'd recommend tools like Skyvia; if you're looking for a complex, high-end enterprise tool, consider Oracle or IBM InfoSphere, and so on. However, while some tools scored lower in specific categories, they may still be strong options for organizations with specific needs or preferences. For example, Oracle may be a great choice for organizations with a lot of data stored in Oracle databases, even though it scored lower in some categories.

[Amanda Claymore](https://skyvia.com/blog/author/amandac/)

Content Marketer

[Data Integration](https://skyvia.com/blog/category/data-integration/) Selecting the Best Data Orchestration Tool in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) January 25, 2024
When you hear the term data orchestration, what are the first few words that come to mind to describe it? Something like these?

- Disparate data types and sources.
- Collecting data in the same destination.
- Control over the order of data-related operations.
- Data pipeline scalability and stability.
- Data analysis.
- Company success.

Let's assemble these kaleidoscope pieces into a whole-picture view. Imagine your business as an international airport where data orchestration is the control tower handling all the logistics and allowing the planes to arrive and depart successfully and safely. Passengers are happy to reach their correct destinations, and the business stays solid and competitive.

Data orchestration means tracking data flow and processing between all the tools and applications without losing even small details.

The main characteristics of data orchestration:

- Data Sources Integration. Data orchestration combines data from various sources, such as databases, APIs, files, and streaming data, into a unified and coherent view.
- Workflow Automation. Designing and automating end-to-end data workflows to streamline and optimize data processing. Automation ensures consistency, reduces manual errors, and improves efficiency.
- Data Movement. Managing data transfer between systems, which may include [batch processing](https://skyvia.com/blog/batch-etl-processing/), real-time streaming, or a combination of both to ensure data is moved efficiently and reliably to its destination.
- Data Transformation and Enrichment.
Performing necessary transformations on the data to ensure it is in the desired format for analysis, and enriching it with additional information from other sources to enhance its value.
- Data Governance. Implementing policies and controls to ensure data quality, security, and regulatory compliance; managing access permissions, auditing data changes, and maintaining data lineage.
- Monitoring and Management. Monitoring the performance and health of data pipelines, workflows, and systems involved in data processing. Using logging, alerting, and reporting mechanisms to identify and address issues promptly.
- Scalability and Flexibility. Ensuring that data orchestration processes can scale to handle increasing data volumes and adapt to changing business requirements, possibly by leveraging cloud-based solutions and technologies.
- Optimization for Efficiency. Continuously reviewing and optimizing data orchestration processes for efficiency; identifying and addressing bottlenecks or areas for improvement.
- Metadata Management. Maintaining metadata about the data, including its origin, format, and content, to understand what the data represents and how it can be used.

Businesses also derive benefits from data orchestration services, such as scheduling, where all data tasks (collection, transformation, and loading) run on a schedule automatically, avoiding human error.

In this article, we'll consider the top data orchestration tools in 2025 and their key features, pros, and cons to help your organization select the right one.

Table of Contents

- Skyvia
- Keboola
- Apache Airflow
- Dagster
- Prefect
- Luigi
- Why use Data Orchestration?

Skyvia

If you seek a robust, universal data orchestration solution, look at [Skyvia](https://skyvia.com/data-integration/import). It's a no-code, cloud-based data integration platform allowing users to work with ETL, ELT, and reverse ETL scenarios, workflow automation, bi-directional data sync, etc.
With Skyvia, you can easily integrate any data, such as CSV files, cloud apps, or databases, into any supported destination.

Key Features

- The ability to create new records, update existing ones, and delete unnecessary source records from the target.
- Import without duplicates.
- Preserving relations between the specified imported files, tables, or objects.
- Mapping features for data transformations and import between differently structured sources and targets: you may split data, use formulas, complex expressions, lookups, etc.
- An option to load only new or modified data, to configure import for one-way sync of data source changes.
- Data replication and backup support across multiple systems.
- The ability to create data pipelines with Data Flow and orchestrate them with Control Flow.

Pros

- You can start working from any device with an internet connection because the app is cloud-based.
- The solution is no-code, user-friendly, and doesn't require additional staff training. If you don't have an IT department in your company and would like to save time and costs, it's a good choice.
- The number of supported data sources ([180+](https://skyvia.com/connectors/)) allows users to transfer data to data warehouses, CRMs, and other on-premises and cloud databases.

Cons

- It would be great to have more connectors in some cases, but they're working on it.

Pricing

Skyvia offers flexible pay-as-you-go [pricing](https://skyvia.com/pricing/) plans. You may also start with the Freemium option to see how it goes.

Keboola

[Keboola](https://www.keboola.com/) is a reasonable choice for large businesses seeking data unification and usability. This end-to-end data solution combines ELT, storage, orchestration, and analysis on the same platform.

Key Features

- 100+ integrations between the apps you usually use. Keboola integrates with the tools you already have, so you don't need to change your architecture.
- The ability to monitor every CRUD operation on the platform.
- Simple, unified, and secure user management.

Pros

- The platform supports 250+ connectors for automating data transfer processes.
- Team collaboration allows sharing workflows, configurations, and insights within the system.
- The solution's flexibility matches user needs: you can choose an intuitive UI or use the API for more complex tasks.

Cons

- Starting with Keboola might be difficult for users unfamiliar with [data integration and ETL](https://skyvia.com/blog/data-integration-and-etl/). Advanced use cases also need complex configurations, so it's not the choice for small businesses and startups.
- The pricing is reasonable, but you may end up paying for features you don't need.
- User support sometimes isn't ready to solve more complex and unique issues.

Pricing

Keboola [pricing](https://www.keboola.com/pricing) provides 120 free minutes on the Free Tier. After they expire, it costs $0.14 per minute. You may also use the Enterprise plan.

Apache Airflow

[Apache Airflow](https://airflow.apache.org/) is another popular ETL data orchestration tool. It's open-source, Python-based, and uses DAGs (Directed Acyclic Graphs) for workflow automation and scheduling. Use cases vary from orchestrating ETL data pipelines to configuring and launching ML apps.

Key Features

- With DAGs, task execution in a pipeline is based on dependencies and schedules, so the task order and execution time are always correct.
- Python allows flexible usage, like creating custom operators, sensors, and plugins to fit diverse workflow needs.
- Airflow provides multi-node parallel orchestration with the help of the Kubernetes or Celery executors.

Pros

- The ability to monitor data flow in real time.
- Python enables flexible transformations, like big data processing with Spark.
- Support for complex data orchestration scenarios with DAGs.

Cons

- The solution might be too complicated for non-tech users.
For instance, you need to deploy it via a Docker image to set it up on Windows.
- The system deletes job metadata, which makes debugging difficult.
- Workflows can be defined only in Python, without the option to use other languages or technologies.

Pricing

Airflow is open-source, so you may use it for free right after installation.

Dagster

If you focus on complex data pipelines, [Dagster](https://dagster.io/) is a good selection. It's an open-source data orchestration platform for ML, analytics, and ETL, oriented toward complicated data processing tasks and difficult-to-use data sources.

Key Features

- DAG-based workflows.
- Early error catching during development, thanks to strongly typed inputs and outputs for each solid (Dagster's unit of computation).
- ML framework integration abilities.
- Local development and testing to simplify debugging and shorten the pipeline development cycle.

Pros

- Great testing abilities.
- Robust capabilities for controlling business data pipelines.
- Advanced dependency control capabilities.

Cons

- Users need solid experience to master Dagster's ecosystem; the coding and architecture are complicated and require abstract thinking.
- Data integration is limited and includes mostly data engineering sources, like GitHub, PagerDuty, PostgreSQL, GCP, etc.

Pricing

The [pricing](https://dagster.io/pricing) depends on infrastructure and includes the following options:

- Solo: $10 for 7,500 DC (Dagster credits); additional credits cost $0.04 each.
- Team: $100 for 30,000 DC; additional credits cost $0.03 each.
- Enterprise: custom.

Prefect

[Prefect](https://www.prefect.io/) is another popular open-source, Python-based solution that helps businesses avoid human errors by orchestrating data between warehouses, lakes, ML models, etc. Its strong sides are custom retry and caching abilities, solid infrastructure, and data workflow control.
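Airflow, Dagster, and Prefect all share the same core idea: tasks form a DAG, and the orchestrator runs them in dependency order. A minimal sketch of that ordering, using only Python's stdlib `graphlib` (the task names here are hypothetical, not tied to any of these tools' APIs):

```python
from graphlib import TopologicalSorter

# Hypothetical diamond-shaped pipeline: one extract task feeds two
# transforms, and a single load step waits for both of them.
dag = {
    "transform_orders": {"extract"},
    "transform_users": {"extract"},
    "load": {"transform_orders", "transform_users"},
}

# static_order() yields the tasks in an order that respects every dependency.
order = list(TopologicalSorter(dag).static_order())
print(order)  # "extract" always comes first, "load" always last
```

Real orchestrators layer scheduling, retries, and parallel execution of independent tasks on top of exactly this ordering.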
Key Features

- Support for real-time data flow building (each event is set as a trigger).
- Parallel orchestration abilities with Kubernetes.
- Cloud-based and on-premises execution is available, providing deployment flexibility.

Pros

- Robust monitoring and debugging capabilities via the UI.
- Data pipeline control and error reduction.
- Parameterization that allows dynamic workflow capabilities.

Cons

- Prefect is a Python-only solution; memory-intensive tasks will be an issue.
- Low-code is unavailable here. If you have non-trivial requests, it's not your choice.
- Lack of documentation describing all the use cases.

Pricing

The [pricing](https://www.prefect.io/pricing) offers a Free plan, a Pro plan ($405 per month), and an Enterprise plan.

Luigi

[Luigi](https://luigi.readthedocs.io/en/stable/) is a strong open-source, Python-based player in complex data orchestration. It saves developers time on manual work and mostly fits data warehousing, ML processing, batch processing, long-running jobs, and handling dependencies between tasks.

Key Features

- Task dependency management capabilities: you can define tasks and specify how they depend on each other, ensuring the correct execution order.
- Rich infrastructure that allows complex task management, like A/B test analysis, internal dashboards, recommendations, and external reports.
- Failure-handling mechanisms, including retries and alerting.

Pros

- The ability to integrate multiple tasks into one pipeline.
- Easy integration with other Python-based data analytics solutions.
- Easy handling of complex dependencies between tasks.

Cons

- The platform can be challenging to set up and use, especially for those unfamiliar with data orchestration and Python.
- Luigi fits medium-sized workflows but is less scalable for larger or more complex ones.
- Visualization is limited compared to commercial solutions.

Pricing

Luigi is free to use.

Why use Data Orchestration?
Today, business success is hardly possible without automated workflows that ensure efficient, accurate, and timely data availability for operations and analytics. With a whole-picture view, decisions are more transparent, help companies avoid many risks, and save time and costs. That's what data orchestration is about. Let's go through its main benefits:

- Data quality improvement through automated data workflows.
- Reduced time and resources spent on manual data processing and integration.
- Scalability: such tools can adapt to your business growth, handling increasing volumes and complexities of data.

The tool you choose depends on your company's goals, size, and expectations, but a balance between usability, scalability, and an honest price is a good selection point. In this case, [Skyvia](https://skyvia.com/) is a three-win solution because of its intuitive UI, simplicity, and attractive payment model.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
---

[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Best Data Pipeline Tools Reviewed for 2025 By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), May 27, 2025

Summary. Data pipeline tools are rapidly evolving, reshaping how businesses manage data integration and analytics. Explore the top solutions for 2025, including open-source, licensed, cloud-based, and on-premise platforms. Understand their key features, strengths, and potential limitations to confidently select the best option aligned with your data strategy.

Data in the digital world is like oil for vehicles. Just as transportation needs fuel to run, companies need data to move forward. Obviously, cars can't run on raw oil, since this material needs to be refined into petrol first. The same goes for businesses and their information systems: they need to process data before use.
Data pipeline tools collect all the required data and make it usable. This article explores popular solutions along with their features, limitations, and pricing. It also compares different kinds of data pipeline tools and provides real-world examples of how and when to use them.

Table of contents

- What Are Data Pipelines
- Components of a Data Pipeline
- Data Pipelines vs. ETL Processes
- Benefits of Data Pipelines: Why You Need Them
- Top 10 Data Pipeline Tools: Skyvia, Fivetran, Apache Airflow, Airbyte, Stitch, Talend, Integrate.io, Matillion, StreamSets, Apache Spark
- How to Choose a Data Pipeline Tool
- Types of Pipelining Software
- How to Build Data Pipelines Using Skyvia Data Flow
- Case Studies of Data Pipelines in Action
- Conclusion

What Are Data Pipelines

"A data pipeline is a method in which raw data is ingested from various data sources, transformed and then ported to a data store, such as a data lake or data warehouse, for analysis." – [IBM](https://www.ibm.com/topics/data-pipeline)

Data pipeline tools are software applications that enable users to build [data pipelines](https://skyvia.com/learn/what-is-data-pipeline). The primary objective of these solutions is to automate data collection and flow by connecting source and destination tools. Many such tools also offer simple and sophisticated options to transform data on the go. Data pipeline solutions allow businesses to automate various data-related processes, manage large data sets effectively, and enhance reporting.

Components of a Data Pipeline

The structure of a data flow differs from one company to another. Still, each data pipeline contains more or less the same set of components:

- Data sources. Each business has its toolset (SaaS apps, databases, IoT systems, etc.), which constitutes the starting point of a data pipeline.
- [Data ingestion tools](https://skyvia.com/learn/what-is-data-ingestion). They extract raw data from the source systems in either batch or streaming mode.
The first option is better for periodic data retrieval, while the second suits real-time processing.
- Transformation features. Cleansing and normalizing data after ingestion is crucial to ensure quality in further pipeline stages.
- Storage destinations. After transformations, data goes to the destination database, data warehouse, or another storage system.
- Delivery and consumption systems. These are usually BI and analytics tools, machine learning platforms, AI apps, and other systems that consume and utilize data.

Data Pipelines vs. ETL Processes

The terms data pipeline and [ETL pipeline](https://skyvia.com/learn/etl-pipeline-meaning) are often used interchangeably to describe automated data integration processes. However, these notions aren't identical, even though they share some processes in common. In a nutshell, an ETL pipeline is just a part of an entire data pipeline, which takes care of batch collection, transformation, and loading of structured data. Meanwhile, typical data pipelines can handle real-time workflows, unstructured data, and complex integration scenarios. Feel free to check the [key differences between ETL and data pipelines](https://skyvia.com/blog/data-pipeline-vs-etl-pipeline/).

Benefits of Data Pipelines: Why You Need Them

Apart from automating and streamlining workflows, data pipelines bring a number of other benefits to businesses.

- Centralized data management. Since these solutions can consolidate data in a single central location, they enable the creation of an [SSOT (single source of truth)](https://skyvia.com/learn/single-source-of-true): a centralized data repository that anyone on the team can access.
- Flexibility. Modern data pipelines are easily scalable and elastic, which means they can adapt to changing data loads and volumes.
- Data quality. Thanks to the transformation options, data is cleansed and standardized, which improves its quality.
At this stage, raw data is converted into a uniform format and becomes suitable for analysis and other operations.
- Enhanced decision-making. Data pipelines make data flow through various stages and transformation processes, making it usable for reporting, analytics, predictions, etc. This promotes a data-driven approach to deriving insights and evaluating business performance.

Top 10 Data Pipeline Tools for 2025

Selecting the right data pipeline tool can significantly impact how smoothly and effectively you manage your data workflows. To help you navigate the options, we've compared the top 10 tools based on key criteria like ease of use, pricing, and best use cases. Below you'll find a quick comparison followed by detailed insights into each platform.

| Tool | Best For | Skill Level | Pricing Overview | Notable Features/Limitations |
| --- | --- | --- | --- | --- |
| Skyvia | SMBs & teams needing easy, no-code cloud ETL | Beginner to Medium | Free tier; paid plans start at $79/month | 200+ connectors, no coding, limited free plan features |
| Fivetran | Enterprises wanting fully managed ELT automation | Beginner | Usage-based, starting free, scales by MAR | 150+ connectors, ELT only, minimal transformation control |
| Apache Airflow | Data engineers wanting customizable workflows | Advanced (Python) | Open-source, free | Highly customizable, requires coding, community support |
| Airbyte | Teams wanting open-source & custom connectors | Medium to Advanced | Free self-hosted; paid cloud at custom price | No-code connectors, CDK for coding custom connectors |
| Stitch | SMBs needing simple cloud ETL | Beginner to Medium | Starting at $100/month | 130+ connectors, API, limited support on free tiers |
| Talend | Enterprises needing hybrid cloud/on-prem solutions | Medium to Advanced | Custom pricing via sales | Handles structured/unstructured data, complex setup |
| Integrate.io | Mid-market e-commerce focus | Medium | Custom pricing via sales | Supports CDC, API generation, limited connectors |
| Matillion | Growing businesses wanting cloud-native ETL | Medium | Free developer tier; paid plans up to $2000+/month | GUI & SQL transformations, no restart from failure |
| StreamSets | Enterprises needing batch & streaming pipelines | Advanced (Kubernetes) | Custom pricing via sales | Supports batch & streaming, Kubernetes knowledge required |
| Apache Spark | Data scientists & engineers needing powerful processing | Advanced (coding) | Open-source, free | Multi-language support, no dedicated support |

1. Skyvia

Best for: Businesses and teams looking for a versatile, no-code data integration platform that supports both ELT and ETL workflows. Ideal for organizations that need easy, browser-based access to over 200 connectors, want to automate data backups, and build scalable pipelines without heavy technical overhead. Particularly suited for SMBs and departments without dedicated data engineering resources.

Reviews: G2 Rating: [4.8 out of 5](https://www.g2.com/products/skyvia/reviews#survey-response-9097470) (based on 200+ reviews).

[Skyvia](https://skyvia.com/) is a cloud-based platform for data integration with a no-coding approach. It supports ELT and ETL scenarios, allowing users to connect to a wide range of [data sources](https://skyvia.com/connectors) and build integration pipelines visually. Skyvia provides several products for building integrations of various complexity, creating data endpoints, automating backups, and more. Since the tool runs in the cloud, it can be accessed via any web browser without additional software installations.

Pros

- Requires no coding.
- More than 200 connectors to cloud sources, databases, and data warehouses.
- Provides SSH and SSL connection support.
- Has a free trial in addition to flexible pricing plans.
- Responsive support team.

Cons

- Limited transformation options in the Free plan.
- Basic error handling.

Pricing

Skyvia provides [five pricing plans](https://skyvia.com/pricing).
| | Free | Basic | Standard | Professional | Enterprise |
| --- | --- | --- | --- | --- | --- |
| Starting price | $0 | $79/month | $159/month | $199/month | Custom |
| Records per month | 10k | 5M+ | 500k+ | 5M+ | Custom |
| Scheduled integrations | 2 | 5 | 50 | Unlimited | Unlimited |
| Integration scenarios | Simple | Simple and advanced | | | |

2. Fivetran

Best for: Organizations seeking a fully managed, automated ELT solution to [replicate data](https://skyvia.com/blog/top-data-replication-tools/) seamlessly from a wide range of SaaS applications and databases. Ideal for teams prioritizing ease of use, minimal manual intervention, and robust support, especially when handling frequent schema changes. Best suited for businesses focused on quick data ingestion into cloud data warehouses with limited need for pre-load transformations.

Reviews: G2 Rating: 4.2 out of 5 (based on 380+ reviews).

Fivetran is a web-based platform that allows users to create data pipelines in the cloud. It enables data replication from various SaaS (Software as a Service) sources and databases with ELT pipelines. Fivetran offers no-code connectors to databases and data warehouses that can be used to build integrations. The tool relies on automation to handle schema changes effectively, significantly minimizing manual input.

Pros

- Support for 150+ connectors for better connectivity.
- 24/7 technical support for quick resolutions.
- A fully managed approach minimizes coding and customization in building data pipelines.

Cons

- Fivetran supports only ELT pipelines, not ETL ones, so data can't be transformed before it reaches the destination.
- Minimal scope for code customization.

Pricing

Fivetran provides four pricing plans.

| | Free | Starter | Standard | Enterprise |
| --- | --- | --- | --- | --- |
| Users | 10 | 10 | Unlimited | Unlimited |
| Sync frequency | Hourly | Hourly | 15 minutes | 1 minute |
| Monthly active rows (MAR) | up to 500,000 | Flexible | | |
| Starting price | $0 | Depends on MAR | | |
3. Apache Airflow

Best for: Technical teams and data engineers who need a flexible, open-source orchestration platform for building, scheduling, and monitoring complex data workflows. Ideal for organizations with strong Python skills seeking full control over pipeline customization and integration with version control systems. Best suited for those willing to manage their own infrastructure and rely on community support rather than dedicated vendor assistance.

Reviews: G2 rating: 4.5 out of 5 (based on nearly 100 reviews).

Airflow is an open-source data pipeline orchestration tool. It uses Python to create and schedule workflows, also known as DAGs (Directed Acyclic Graphs). You can also monitor and orchestrate these workflows with the Python-based interface. Along with creating data pipelines, Airflow can help with other tasks: it allows you to manage infrastructure, build machine learning models, or run Python code. What's more, it provides logs of completed and running workflows through its UI.

Pros

- Free to use, as it's an open-source solution.
- Airflow can be easily integrated with version control systems like Git.
- Users can customize the existing operators or define their own, depending on the use case.

Cons

- Building data pipelines requires Python knowledge.
- Airflow doesn't provide any dedicated technical support, so users need to rely on community support in case of issues.

Pricing

Since Apache Airflow is an open-source solution, it's available for free.

4. Airbyte

Best for: Data teams looking for an open-source, flexible ELT platform with no-code connectors and the ability to customize or build connectors via code. Suitable for organizations that want control over their infrastructure and integration with modern [data orchestration tools](https://skyvia.com/blog/best-data-orchestration-tools/) like Kubernetes and Airflow.
Ideal for users with coding skills who need both free self-hosted options and enterprise-grade support in paid plans.

Reviews: G2 rating: 4.5 out of 5 (based on nearly 50 reviews).

Airbyte is an open-source data integration platform. It easily builds ELT data pipelines with the help of no-code pre-built connectors for both data sources and destinations. The tool also provides a CDK (Connector Development Kit) for creating custom connectors. You can also use this kit to edit the existing connectors to match your particular workflows.

Pros

- Airbyte provides an open-source version as well as licensed ones that help users manage all the operational processes.
- Supports integrations with other stacks such as Kubernetes, Airflow, Prefect, etc.
- The licensed edition provides 24/7 technical support for debugging.

Cons

- Airbyte supports only ELT pipelines.
- Creating connectors with the CDK requires coding knowledge.

Pricing

The open-source version of Airbyte is free and can be installed on your servers. A paid enterprise version can also be hosted on your infrastructure. Otherwise, you may choose a cloud-based paid version at a custom price, which offers support, extra security features, and other benefits.

5. Stitch

Best for: Small to mid-sized teams needing a straightforward, cloud-based ETL solution with a wide range of connectors and easy setup. Ideal for users who want visual pipeline management with scheduling and monitoring capabilities, and who value community-driven integrations via the Singer project. Suitable for organizations that prioritize simplicity and scalability in the cloud but don't require on-premises deployment.

Reviews: G2 Rating: 4.4 out of 5 (based on 60+ reviews).

Stitch is a cloud-based data pipeline tool. It provides connectors to 130+ sources that can be configured in a visual interface, which makes it easy to ingest data into a warehouse.
This tool also offers orchestration, embedding, data transformation, and other features for pipeline management. You can also use the Stitch API to push any data to the destination system programmatically.

Pros

- Includes a dashboard for data pipeline tracking and monitoring.
- Provides community-driven development and integration with different tools through the Singer project.
- Has scheduling options to run jobs at predefined intervals.

Cons

- No on-premise version.
- Some connectors are available only with the Enterprise version.
- Limited customer support.

Pricing

Stitch has three pricing plans.

| | Standard | Advanced | Premium |
| --- | --- | --- | --- |
| Starting price | $100/month | $1250/month | $2500/month |
| Rows per month | 5-300 M | 100 M | 1 B |
| Destinations | 1 | 3 | 5 |
| Sources | 10 (standard) | Unlimited | Unlimited |
| Users | 5 | Unlimited | Unlimited |

6. Talend

Best for: Enterprises and teams needing a comprehensive, hybrid data integration platform capable of handling complex ETL workflows across both cloud and on-premises environments. Well suited for organizations that require robust support for structured, semi-structured, and unstructured data, as well as advanced API design and testing capabilities. Ideal for users who value an all-in-one solution with strong governance and management features, and who can manage a more involved installation and setup process.

Reviews: G2 Rating: 4.0 out of 5 (based on 65 reviews).

Talend offers multiple products and services, both open-source and paid. Talend Open Studio is a free, open-source solution, while Talend Data Fabric is a paid version, which includes Talend Big Data, Management Console, API Services, Data Inventory, Pipeline Designer, etc. In particular, the Talend Pipeline Designer tool is dedicated to constructing data pipelines. It's a web-based tool that builds complex ETL dataflows and processes data in transit.

Pros

- Supports both on-premises and cloud data pipelines.
- Provides the ability to design and test APIs for data sharing.
- Handles unstructured data along with structured and semi-structured data.

Cons

- No transparent pricing: you need to contact the sales team for a quote.
- Complex installation process for on-premise versions.
- Limited features and connectors in Talend Open Studio.

Pricing

The price for Talend solutions is discussed with their sales managers.

7. Integrate.io

Best for: Businesses, especially in e-commerce, looking for a cloud-based ETL/ELT platform with minimal coding requirements. Suitable for teams needing API generation capabilities and support for change data capture (CDC) workflows. Ideal for organizations aiming to orchestrate multiple dependent data pipelines in the cloud, with a focus on integrating REST API-enabled sources. Less suited for those requiring broad connector variety or on-premises deployment.

Reviews: G2 Rating: 4.3 out of 5 (based on 200+ reviews).

Integrate.io is a cloud-based data integration platform. It allows users to build ETL, reverse ETL, and ELT data pipelines. The tool also supports API generation and CDC technology. Integrate.io lets users create and manage data pipelines with minimal coding. Before ingestion, it's also possible to apply filters to extract data based on a specified condition.

Pros

- Ability to pull data from any source that offers a REST API.
- Creates dependencies between multiple data pipelines.
- Offers API generation for multiple databases, security, network data sources, etc.

Cons

- A limited number of connectors, focused mostly on e-commerce use cases.
- Doesn't offer an on-premise version.

Pricing

The price for Integrate.io is discussed with their sales managers.

8. Matillion

Best for: Organizations seeking a cloud-native data integration platform with a user-friendly drag-and-drop interface for building [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) pipelines.
Well suited for teams that need flexible deployment across major cloud providers and want to perform transformations using either SQL or visual components. Ideal for businesses requiring a robust orchestration and management solution with support for both cloud and on-premises data sources. Less optimal for users who prioritize extensive documentation or need fault-tolerant job restart capabilities.

Reviews

G2 rating: 4.4 out of 5 (based on 80 reviews).

Matillion is a cloud-native data integration tool that provides an intuitive user interface for developing data pipelines. Matillion offers two products: Data Loader for moving data from any service to the cloud, and Matillion ETL for defining data transformations and building data pipelines in the cloud. Matillion ETL is a fully featured [data integration solution](https://skyvia.com/blog/data-integration-tools/) for creating ETL and ELT pipelines within a drag-and-drop interface. This tool can be deployed on your preferred cloud provider.

Pros

- Provides connectors to both cloud and on-premises data systems.
- Contains features for data orchestration and management.
- Data transformations can be performed either with SQL queries or via the GUI by creating transformation components.

Cons

- Lack of documentation describing features and instructions for their configuration.
- There is no option to restart tasks from the point of failure; the job needs to be restarted from the beginning.

Pricing

Matillion offers four pricing plans.

| | Developer | Basic | Advanced | Enterprise |
| --- | --- | --- | --- | --- |
| For whom | individuals | growing teams | scaling businesses | large organizations |
| Starting price | $0/month | $1000/month | $2000/month | Custom |
| Users | 2 | 5 | Unlimited | Unlimited |
| Support | Community | Standard | Standard | Premium |

9. StreamSets

Best for

Enterprises needing a fully managed cloud platform capable of handling both batch and streaming data pipelines with robust transformation capabilities.
Ideal for organizations integrating multiple SaaS and on-premises data sources that require scalable, real-time data processing. Best suited for teams comfortable with Kubernetes, or willing to invest in learning it, as the platform runs on Kubernetes infrastructure. Not recommended for organizations seeking on-premises deployment options.

Reviews

G2 rating: 4.0 out of 5 (based on nearly 50 reviews).

StreamSets is a fully managed cloud platform for building and managing data pipelines. It offers two licensed editions – a professional edition with a limited set of features and an enterprise edition with extensive support and full functionality. The tool supports 100+ connectors to databases and SaaS apps for easy creation and management of dataflows. It supports two types of engines to run data pipelines:

- Data Collector, which supports batch, streaming, and CDC modes.
- Transformer, which applies transformations to the entire dataset.

Pros

- Supports integration with multiple SaaS apps and on-premise solutions.
- Handles batch and streaming data pipelines.

Cons

- No on-premise version.
- Users may require Kubernetes knowledge, as the data pipelines run on top of it.

Pricing

The price for IBM StreamSets is discussed with their sales managers.

10. Apache Spark

Best for

Organizations and data teams with strong programming expertise looking for a powerful, flexible, and free open-source engine to build complex real-time and batch data pipelines. Ideal for users who need multi-language support (Python, Scala, Java, R, SQL) and require deep customization for data transformation and exploratory data analysis on large datasets. Not suitable for teams without coding skills or those needing dedicated vendor support.

Reviews

G2 rating: 4.3 out of 5 (based on 50+ reviews).

Apache Spark is an open-source data transformation engine. It can be integrated with a wide range of frameworks, supporting a wide variety of use cases.
Users can build data pipelines that process real-time as well as batch data with Apache Spark. They can also perform Exploratory Data Analysis (EDA) on large data volumes and run SQL queries by connecting to different storage services.

Pros

- It's free to use, as it is open-source.
- Offers support for multiple languages, such as Python, Scala, Java, R, and SQL.

Cons

- It requires extensive coding experience to implement data pipelines.
- Debugging is challenging, as there is no dedicated technical support, though there is an extensive community that can help address issues.

Pricing

Since Apache Spark is an open-source solution, it's available for free.

How to Choose a Data Pipeline Tool

Choosing the right data pipeline tool is a critical step that can impact your business's data efficiency and scalability. Beyond functionality, you need to evaluate various factors carefully to ensure the platform aligns with your technical needs, budget, and growth plans. Here are the key aspects to consider in depth:

Ease of use. Look for tools with an intuitive, user-friendly interface that allows both technical and non-technical users to design, monitor, and manage data pipelines visually. A low-code or no-code environment empowers business users to build and modify workflows without relying heavily on IT teams. Additionally, check for quality documentation, tutorials, and customer support that ease onboarding and reduce learning curves.

Scalability. Choose a platform that scales flexibly with your data volume and organizational growth. It should efficiently handle sudden spikes or drops in data flow without performance degradation. Consider whether the architecture supports horizontal scaling (e.g., distributed processing) and whether the pricing model accommodates scaling without exponential cost increases.

Integration Capabilities.
A strong data pipeline tool must support a broad and growing library of pre-built connectors to your key databases, cloud services, SaaS apps, and on-premises systems. Also assess the ability to create custom connectors or extend functionality via APIs or SDKs to future-proof your integration ecosystem.

Processing Speed and Latency. Depending on your use case, evaluate whether the tool supports real-time or near-real-time data ingestion and transformation, or whether [batch processing](https://skyvia.com/blog/batch-etl-processing/) suffices. For use cases like fraud detection or live customer personalization, low latency and streaming support are critical. For reporting and archival, batch processing may be more cost-effective.

Security and Compliance. Security features are non-negotiable when handling sensitive or regulated data. Ensure the platform offers encryption at rest and in transit, multi-factor authentication, fine-grained role-based access control, audit logging, and compliance with standards like GDPR, HIPAA, SOC 2, and others relevant to your industry.

Pricing Model and Total Cost of Ownership. Analyze the pricing structure closely – whether it is subscription-based, usage-based (per data volume or rows processed), or tiered. Consider hidden costs such as data transfer fees, connector add-ons, or support plans. A tool with transparent pricing aligned with your data patterns helps optimize expenses. Also factor in the cost of training, maintenance, and potential downtime.

Community and Vendor Support. A vibrant user community can accelerate problem-solving and provide valuable best practices. Check whether the vendor offers robust support channels, SLAs, and regular product updates. Open-source tools typically have active communities but may lack dedicated support, while commercial platforms offer professional assistance.

Reliability and Monitoring.
Look for features like error detection, retry mechanisms, alerting, and detailed logging to maintain pipeline health. Built-in monitoring dashboards help detect bottlenecks and ensure smooth operations, reducing downtime and manual intervention.

Flexibility and Customization. Consider how easily you can customize transformations, data routing, and scheduling. Platforms supporting scripting, expressions, or custom modules enable handling complex business logic and evolving requirements.

Types of Pipelining Software

There are many data pipeline tools available on the market. They are categorized by licensing type, purpose of use, and operational environment.

Open-source vs. Licensed Tools

Open-source solutions are available at no cost and allow users to modify the source code. Anyone can install and use them on their systems. Here are some examples of open-source services:

- Apache Airflow
- Airbyte
- Dagster

Licensed data pipeline tools require a valid subscription for access and use. Some companies offer a trial version for users to explore the functionality of the chosen service. Here are some examples of licensed solutions:

- Skyvia
- Hevo Data
- Fivetran

Cloud vs. On-premise Tools

Cloud data pipeline platforms are fully managed, meaning users don't have to install them or take care of the underlying infrastructure. All data transfer and processing happens on cloud servers. These tools can also scale easily when required. Here are some examples of cloud solutions:

- Skyvia
- Google Dataflow
- StreamSets

Many organizations don't want their data stored or processed in the cloud, to remain compliant with their privacy policies. On-premise tools are installed on servers maintained by a dedicated team within the organization. These solutions aren't as scalable as their cloud alternatives. Here are some examples of on-premise systems:

- Apache Airflow
- Oracle Data Integrator
- Informatica

Stream vs. Batch Tools

Stream services process data as soon as it arrives, in real time. Here are some of the tools supporting stream data flows:

- Apache Spark
- Apache NiFi
- Google Dataflow

Batch data pipeline tools run at regular intervals and extract data in chunks. Here are examples of platforms that support batch processes:

- Skyvia
- Apache Airflow
- Talend

How to Build Data Pipelines Using Skyvia Data Flow

If you decide to select Skyvia for building data pipelines, you can do that with one of the available solutions:

- [Import tool](https://skyvia.com/data-integration/import)
- [Replication tool](https://skyvia.com/data-integration/replication)
- [Synchronization tool](https://skyvia.com/data-integration/synchronization)
- [Data Flow tool](https://docs.skyvia.com/data-integration/data-flow/index.html)

Here, we'll explore how to build and set up pipelines with Data Flow in Skyvia. This solution allows you to create a diagram of connected components in a visual drag-and-drop interface.

Sample Task Description

Let's assume you have to create and configure an integration scenario involving three data systems:

- A source database. It has the Customers table, which includes the CompanyName and ContactName fields.
- A CRM. In our case, it's HubSpot CRM, which stores all the deal-related information and has the Number Of Opened Deals field in the Companies table.
- A target database. It contains the Contact table, which should store CompanyName, ContactName, and Number Of Opened Deals.

Prerequisites

Make sure you have a [Skyvia account](https://app.skyvia.com/) or create a new one. Note that you can use this tool for free to try out all the fundamental features.

The next step is to configure connections for the [databases](https://docs.skyvia.com/connectors/databases/) and [HubSpot](https://docs.skyvia.com/connectors/cloud-sources/hubspot_connections.html):

- In your Skyvia account, go to + Create New -> Connection.
- Choose the required connector from the list and click on it.
- Fill out the required credentials. See the instructions provided next to the setup form.
- Click Create.

Solution

Once you've created all the connections, you can start building a data pipeline with the help of Data Flow. To do so, go to + Create New -> Data Integration -> Data Flow in your Skyvia account. Add components to the board by dragging them from the panel on the left. Then, link the components by connecting the input and output circles on the diagram.

Source Setup

Source extracts data from the selected system and forms the starting point of the Data Flow diagram.

- Drag the Source component from the panel to the diagram and click on it.
- Select the system from the Connection dropdown in the right panel.
- Select Execute Command from the Actions dropdown list.
- In the Command Text box, create an SQL query:

SELECT CompanyName, ContactName FROM Customers

- Check the output results by clicking on the output arrow.

Lookup Setup

The Lookup component matches records in different data systems and adds the columns of the matched records to the scope. In our case, the source output consists of two columns: CompanyName and ContactName. Let's add the Number Of Opened Deals column from the Companies table in HubSpot to the scope with the help of the Lookup component.

- Click the Lookup component on the left panel and drag it to the diagram.
- Choose HubSpot from the Connection dropdown list.
- Choose Lookup from the Actions dropdown list.
- Select the table under the Table dropdown list. The Number Of Opened Deals column is stored in the Companies table.
- Select Company name in the Keys field.
- Select the Number Of Opened Deals column from the Result Columns dropdown list. It will be added to the output results.
- Open Parameters to map keys. In this example, we map the Company name in HubSpot to the CompanyName in the source database.
- Click on the output arrow to check the changes in the output results.
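Conceptually, the Source -> Lookup -> Target flow above behaves like the following sketch. It is plain Python with invented sample data: the table and field names come from the example, while the `lookup` helper and the rows themselves are hypothetical – Skyvia performs these steps visually, not through code.

```python
# Illustrative sketch of the Source -> Lookup -> Target pattern.
# Sample data and helper names are hypothetical.

# Source: rows extracted from the Customers table.
source_rows = [
    {"CompanyName": "Acme", "ContactName": "Ann Lee"},
    {"CompanyName": "Globex", "ContactName": "Bob Ray"},
]

# Lookup side: HubSpot Companies records keyed by company name.
hubspot_companies = {
    "Acme": {"Number Of Opened Deals": 3},
    "Globex": {"Number Of Opened Deals": 7},
}

def lookup(rows, companies, result_column):
    """Append the looked-up column to each source row, matching on CompanyName."""
    enriched = []
    for row in rows:
        match = companies.get(row["CompanyName"], {})
        enriched.append({**row, result_column: match.get(result_column)})
    return enriched

# Target: rows ready to INSERT into the Contact table.
contact_rows = lookup(source_rows, hubspot_companies, "Number Of Opened Deals")
print(contact_rows[0])
# {'CompanyName': 'Acme', 'ContactName': 'Ann Lee', 'Number Of Opened Deals': 3}
```

Unmatched companies simply get an empty value for the looked-up column, which mirrors how a lookup step passes rows through even when no match is found.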
Target Setup

The Target component defines the system to which you want to load your data.

- Click Target on the left panel and drag it to the diagram.
- Select the data system from the Connection dropdown on the right panel.
- Select the DML operation (INSERT, UPDATE, DELETE, UPSERT) from the Actions dropdown list.
- Select the Contact table to load data.
- In the Parameters field, map the Lookup output columns to the Contact table columns.

Case Studies of Data Pipelines in Action

Data pipelines vary in complexity: some contain two or three elements, while others comprise 10+ components. Skyvia is a data integration tool that effectively interconnects all these components. In fact, Skyvia sits at the center of a pipeline and coordinates its data flows much like the heart coordinates blood flow in the cardiovascular system. Let's look at several practical use cases where Skyvia was chosen as the data pipeline tool for dataflow management.

Enhanced Inventory Management at Redmond Inc.

The main challenge for [Redmond Inc.](https://skyvia.com/case-studies/redmond) was to synchronize information on inventory stocks between Shopify and their internal ERP system. What's more, they needed a unified view of orders and inventory stocks. Thanks to Skyvia's implementation, the company has improved operational management. They also enhanced customer satisfaction, since the Customer Service department obtained a complete overview of stock items.

Optimized Workflow at FieldAx

This company, a leading provider of training services, faced the challenge of managing a geographically dispersed workforce. It became demanding to coordinate assignments and track the progress of employees in different counties. What's more, buying licenses for each technician was expensive and often caused budget constraints.
In response to these challenges, [FieldAx](https://skyvia.com/case-studies/fieldax) imported installation details into a local database (MySQL) instead of purchasing individual licenses. Once the data was in place, Skyvia seamlessly synchronized it with the corresponding fields in the FieldAx software, such as technician names, job numbers, and job statuses. Overall, the collaboration between FieldAx and Skyvia has yielded transformative results for the company, enabling them to streamline job management processes while significantly reducing costs.

Automated Data Analytics Pipeline at TitanHQ

This company is a leading SaaS cybersecurity platform delivering a layered security solution to prevent user data vulnerability. Their management team wanted to obtain a 360-degree view of their customers, but that wasn't easy, since data was dispersed across different systems (data warehouse, Sugar CRM, Maxio, ticketing and payment services, etc.). With Skyvia, [TitanHQ](https://skyvia.com/case-studies/titanhq) engineers built data pipelines to gather data from CRM, payment, and ticketing systems into the Snowflake data warehouse. Then, this data was prepared and sent to Power BI to generate dashboards that provided the management team with valuable insights into their customer base.

Conclusion

Data pipeline tools come in different types and serve different purposes. In this article, we have explained the differences between them and given some hints on when each will be the most appropriate. To help you choose the right solution for your data needs, we have also presented popular services for data pipeline management, along with their features, advantages, and drawbacks. We have also shown some real-life examples of how such tools increase the effectiveness of data management within an organization with Skyvia. This platform supports data integration scenarios suitable for a wide variety of cases.
What\u2019s more, it offers regular scheduling options for automated data flows. Feel free to use Skyvia to organize your data and extract its value. FAQ for Data Pipeline Tools What is the purpose of a data pipeline? The main goal of each data pipeline is to convert raw data into meaningful insights. This information can help businesses make weighted decisions and optimize processes. Which tool is used for data pipelines? Skyvia is the data integration tool used to build and manage data pipelines. It links data from databases, CRM systems, e-commerce platforms, and other applications. In total, Skyvia supports 200+ data sources from which you can extract data and load data. Is data pipeline the same as ETL? Both data pipelines and ETL processes move data between various systems. In particular, ETL is a subtype of a data pipeline that performs the collection, transformation, and loading of data. Meanwhile, data pipelines in their generalized form also include other stages, such as sending data to end users, which means it\u2019s a more ample process with more stages except for data extraction, transformation, and transfer. What are the main differences between open-source and commercial data pipelines? Open-source solutions are free to use, while licensed software is available at a certain price. Open-source tools can be customized, so they provide more flexibility for businesses, while commercial options have a predefined set of tools, as a rule. Open-source tools don\u2019t provide support, so users have to rely on the community in case of issues, while commercial tools provide support teams usually accessible via chat, phone, or email to help users resolve their data integration issues. 
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Best Data Processing Tools and Applications: Full Guide

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), October 25, 2024
Our reality is flooded with data. However, raw data in its original form is often messy and unstructured; without processing, it is hard to make sense of or use meaningfully. This article considers various types of data processing tools, their benefits, and their impact on business processes and decision-making.

Table of Contents

- What is Data Processing?
- Data Processing Types
- Stages in Processing Data
- Types of Data Processing Tools
- Essential Features of Data Processing Tools
- Top 5 Data Processing Tools in the Market for 2025
  - Skyvia
  - Informatica PowerCenter
  - Talend
  - Apache Hadoop
  - Google BigQuery
- Future Trends in Data Processing
- Final Thoughts

What is Data Processing?

Data processing transforms raw, unorganized data into clear, actionable insights that help businesses make informed decisions. Whether it's as simple as sorting a list or as complex as analyzing huge datasets, data processing is the key to unlocking the actual value of data. Let's look at the numbers:

- A study by [McKinsey](https://www.mckinsey.com/capabilities/growth-marketing-and-sales/solutions/periscope/solutions/marketing-solutions/customer-insights) found that companies that leverage customer behavioral insights (a product of effective data processing) outperform peers by 85% in sales growth and more than 25% in gross margin.
- Research from [Forrester](https://www.forrester.com/report/DataDriven-DecisionMaking-In-Consumers-Everyday-Lives/RES133724?categoryid=2849204&discountcode=CX19S%3Fcategoryid%3D2849204) suggests that 58% of businesses rely on data processing tools to make more accurate and data-driven decisions, leading to an average productivity increase of 33%.
- According to a report by [IBM](https://www.ibm.com/topics/data-quality), the cost of poor data quality in the US alone is estimated at $3.1 trillion annually. Effective data processing can significantly reduce these costs by ensuring data accuracy and reliability.
- The global data processing and analytics market was valued at $31.8 billion in 2021 and is expected to reach $76.7 billion by 2026, according to [MarketsandMarkets](https://www.marketsandmarkets.com/report-search-page.asp?rpt=data-processing-market). This growth underscores the increasing reliance on data processing for business success.

Data Processing Types

Data processing usually depends on specific needs and the nature of the data. The three primary types of data processing are:

- Batch
- Real-Time
- Distributed

Each type has its unique approach and is suited to different tasks. Here's a quick overview of these data processing types.

| Data Processing Type | Definition | Quick Summary |
| --- | --- | --- |
| [Batch Processing](https://skyvia.com/blog/batch-etl-processing/) | Involves collecting data over a period of time and processing it all at once. This method is ideal for tasks that don't require immediate action, like payroll processing or generating monthly reports. | Best for periodic data handling. |
| Real-Time Processing | Data is processed immediately as it is collected, allowing for instant feedback and actions. This method is used in scenarios where time-sensitive data is crucial, such as online transactions, stock trading, or monitoring systems. | Essential for instant decision-making. |
| Distributed Processing | Involves processing data across multiple servers or machines simultaneously. This approach handles large datasets that require significant computational power, often seen in big data analytics and complex scientific computations. | Key for managing large-scale data tasks efficiently. |
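To make the batch vs. real-time distinction from the table concrete, here is a minimal Python sketch; the event data and function names are invented for illustration. The batch path accumulates events and processes them in one pass, while the real-time path reacts to each event the moment it arrives.

```python
# Illustrative contrast between batch and real-time processing.
# All names and sample data are hypothetical.

events = [{"amount": 120}, {"amount": 80}, {"amount": 200}]

# Batch: collect everything first, then process in one run
# (e.g. a nightly job computing a daily total).
def process_batch(collected):
    return sum(e["amount"] for e in collected)

# Real-time: act on each event as soon as it is ingested
# (e.g. flagging a large transaction immediately).
def process_event(event, threshold=150):
    return "ALERT" if event["amount"] > threshold else "ok"

daily_total = process_batch(events)
decisions = [process_event(e) for e in events]
print(daily_total, decisions)  # 400 ['ok', 'ok', 'ALERT']
```

Distributed processing would split the same workload across machines; the logic per chunk stays the same, only the execution is parallelized.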
Stages in Processing Data

Regardless of the type, data processing consists of a few crucial stages that work together to turn raw data into valuable insights, guiding better decision-making and helping organizations understand their data in meaningful ways:

- Data Collection.
- Data Cleaning.
- Data Transformation.
- Data Analysis.
- Data Visualization.

Data Collection. This is the first stage in the data processing journey. Imagine it as gathering all the ingredients to cook a meal. Data collection involves gathering raw data from databases, IoT devices, online surveys, or social media. The goal is to merge all the data users need to make sense of it. The quality and relevance of the collected data are crucial because they set the foundation for the entire process.

Data Cleaning. Once users have collected the data, it's time to clean it up, like washing and prepping the ingredients before cooking. This step is vital because raw data is often messy and unstructured. Clean data ensures that the analysis companies perform later is accurate and reliable. Data cleaning involves:

- Identifying and correcting errors.
- Removing duplicates.
- Filling in missing values.

Data Transformation. After the data is clean, it might still need a little tweaking to be usable, just like chopping veggies or marinating meat. Data transformation involves converting the data into a format or structure suitable for analysis. It can include normalizing data, aggregating information, or converting data types. The aim is to get the data into a consistent, standardized form that can easily be analyzed.

Data Analysis. This stage involves turning data into actionable insights to inform decisions, solve problems, or uncover opportunities. It's where users start to dig into the data to uncover patterns, relationships, and insights. Depending on the goals, this could involve:

- Statistical analysis.
- Machine learning models.
- Looking at trends over time.

Data Visualization.
Finally, it\u2019s time to serve the dish and show it off. Data visualization presents the findings in a clear, visual format, like charts, graphs, dashboards, or interactive reports. Good visualization helps make complex data more understandable and accessible to everyone, not depending on the users\u2019 expertise level. This stage is crucial because it turns the insights into a story others can easily grasp and act on. Types of Data Processing Tools Data processing tools are essential for transforming raw data into valuable insights. Different tools can be used to handle various aspects of data processing. These tools can be categorized into their functionality and use cases. Each type has its unique features, benefits, and examples. Let\u2019s walk through such tool types. Functionality and Use Case Description Examples Extract, Transform, Load (ETL) Tools [ETL tools](https://skyvia.com/blog/etl-tools/) are designed to extract data from multiple sources, transform it into a format that\u2019s compatible with the destination system, and then load it into that system. These tools are essential for data warehousing, data migration, and business intelligence tasks. ETL tools work well with structured data, preparing it for analysis or reporting. \u2013 [Skyvia](https://skyvia.com/) . A cloud-based tool for no-code ETL operations, enabling data migration and synchronization between platforms like Salesforce, Google Sheets, and databases. \u2013 [Informatica PowerCenter](https://www.informatica.com/) . A market-leading ETL platform known for its powerful data transformation and integration capabilities. \u2013 [Talend](https://www.talend.com/) . Open-source ETL tool with a strong emphasis on data integration and transformation, often used in complex data workflows and migration tasks. Streaming and Event-Driven Tools Streaming and event-driven tools process data continuously, handling real-time data as it\u2019s generated. 
These tools are essential for scenarios that require immediate analysis and action, such as detecting fraud, monitoring IoT devices, or customer engagement. They are designed to process and respond to events or data streams as they happen. Examples:

- [Apache Kafka](https://kafka.apache.org/). A popular platform for building real-time streaming applications. Kafka processes massive streams of data in real time, perfect for event-driven use cases.
- [Google BigQuery](https://cloud.google.com/bigquery?hl=en). Offers real-time analytics on large-scale datasets, often used in conjunction with streaming data pipelines.
- [Amazon Kinesis](https://aws.amazon.com/pm/kinesis/). A fully managed service for real-time data streaming, enabling real-time analytics for IoT devices, logs, and machine learning.

Essential Features of Data Processing Tools

Picking the perfect data processing tool is like choosing the right kitchen gadgets. We always want something fast, reliable, and easy to use, but it must also handle whatever we cook: a quick weeknight dinner or a multi-course feast. In data processing, as in the kitchen, certain features can make or break the workflow, determining how smoothly everything runs, how much users can whip up, and how safe their data stays. Here are the key features to look out for:

- Speed and Efficiency.
- Scalability.
- Security Features.
- User Interface and Experience.

Speed and Efficiency. Nobody likes a slow cooker when you're starving for insights. The speed and efficiency of a data processing tool are crucial. Users want a tool that can handle large volumes of data quickly, whether they're processing real-time transactions or simmering complex queries on a massive dataset. A fast tool means the business gets insights served hot and fresh, perfect for making timely decisions.

Scalability. This means preparing for growth. As the business grows, so does the data, which needs a tool that can scale with the company.
Whether the organization is dealing with small datasets today or planning to process terabytes of data tomorrow, the right tool should be able to expand its capacity without breaking a sweat.

Security Features. Data is valuable, and keeping it safe is non-negotiable. Security features in a data processing tool are like a secret family recipe: they protect you from potential risks. Whether it's encryption, user authentication, or secure data storage, these features ensure that data is handled safely and stays out of the wrong hands. In today's world, where data breaches are common, strong security is a must-have.

User Interface and Experience. Last but not least, the user interface (UI) and experience (UX) can make or break the relationship with a data processing tool. A tool with an intuitive, user-friendly interface makes it easy to navigate the data, run processes, and generate reports without needing a manual every time. It's like having a kitchen with perfectly organized drawers and tools, where you can expect smooth cooking.

Top 5 Data Processing Tools in the Market for 2025

Skyvia

[Skyvia](https://skyvia.com/data-integration) is a universal cloud-based platform that provides ETL, ELT, and reverse ETL data integration, backup, and data sync. It's well suited for businesses that need to move data between cloud services, synchronize data between systems, or create complex ETL workflows without extensive coding. Skyvia supports [190+](https://skyvia.com/connectors) data sources, including popular cloud apps like [Salesforce](https://www.salesforce.com/), [Google Sheets](https://workspace.google.com/products/sheets/), and [SQL](https://www.w3schools.com/sql/sql_intro.asp) databases, making it a flexible and powerful tool for cloud-based data processing.

Key Features

- Intuitive drag-and-drop interface for building data pipelines.
- Data integration, backup, and synchronization in a single platform.
- No-coding ETL process, making it accessible to non-developers.
- Super user-friendly: according to the [G2 Crowd](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) rating, it's among the top three easiest-to-use ETL tools.

**Key Challenges**

- Needs more video tutorials covering the key features.

**Pricing**

- [Paid plans](https://skyvia.com/pricing) start at $79/month, with more advanced features available at higher tiers.
- A free tier is available with limited features.

### Informatica PowerCenter

[Informatica PowerCenter](https://www.informatica.com/) is an enterprise-grade data integration platform that excels at transforming and moving large volumes of data across various systems. Known for its robust ETL capabilities, it's a go-to solution for organizations with complex data processing needs requiring high reliability, scalability, and data governance.

**Key Features**

- Comprehensive ETL tool with strong data transformation capabilities.
- High scalability, suitable for processing large volumes of data.
- Extensive connectivity options for various databases, applications, and mainframes.
- Advanced data governance and metadata management features.

**Key Challenges**

- Steep learning curve.
- High cost, making it less suitable for small to medium-sized businesses.

**Pricing**

Pricing is available upon request and typically varies based on deployment size and features.

### Talend

[Talend](https://talend.com/) is a popular data integration platform that provides a comprehensive suite of [tools for data management](https://skyvia.com/blog/best-data-management-tools/), including ETL, data quality, and big data processing. Its open-source roots give it a flexible and community-driven edge, making it an excellent choice for organizations that need customizable solutions and value a collaborative environment.

**Key Features**

- A robust open-source version that can be customized extensively, with a large community for support and development.
- Built-in tools for ensuring data accuracy and consistency, which is crucial for reliable analytics and reporting.
- Integration with big data platforms like Hadoop and Spark, making it suitable for processing large datasets.

**Key Challenges**

- Performance can be an issue with extremely large datasets.
- Some advanced features require the paid version.

**Pricing**

- A free, open-source version is available.
- Talend Data Fabric (the paid version) [pricing](https://talend.com/pricing/) starts at approximately $1,170 per user/month.

### Apache Hadoop

[Apache Hadoop](https://hadoop.apache.org/) is an open-source framework for the distributed processing of large datasets across clusters of computers. A cornerstone of big data processing, it offers scalability, fault tolerance, and the ability to handle vast amounts of unstructured data. Hadoop is a good fit for organizations that need to store and process large-scale data efficiently.

**Key Features**

- Handles large volumes of [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/).
- Distributed processing across multiple nodes, ensuring scalability.
- A strong ecosystem with tools like HDFS (Hadoop Distributed File System) and YARN.
- High fault tolerance, with automatic data replication and recovery.

**Key Challenges**

- Complex setup and management require significant expertise.
- Performance can vary depending on the configuration and hardware.

**Pricing**

Hadoop itself is free and open-source, but costs can arise from the hardware, storage, and expertise required to manage it.

### Google BigQuery

[Google BigQuery](https://cloud.google.com/bigquery?hl=en) is a fully managed, serverless data warehouse for fast SQL queries and real-time analytics on large datasets.
As part of the [Google Cloud Platform](https://cloud.google.com/?hl=en), it integrates seamlessly with other Google services, making it a powerful option for organizations that want to process and analyze big data without worrying about infrastructure management.

**Key Features**

- Serverless architecture with automatic scaling.
- Fast SQL queries on massive datasets.
- Real-time data analysis and integration with Google Cloud services.
- Built-in machine learning capabilities.

**Key Challenges**

- Costs can accumulate quickly with large amounts of data or frequent queries.
- Some users find the pricing model (based on data processed) challenging to predict.

**Pricing**

[Pricing](https://cloud.google.com/bigquery?hl=en#pricing) is based on a pay-as-you-go model, with storage starting at $0.02 per GB/month.

## Future Trends in Data Processing

The future of data processing is dynamic and full of promise, with trends like real-time processing, AI integration, edge computing, and heightened security leading the way. Let's consider these trends in more detail.

### Real-Time Data Processing Becomes the Norm

Waiting for data to be processed will soon feel as outdated as waiting for dial-up internet to connect. Real-time data processing, where data is analyzed and acted upon the moment it's generated, is quickly becoming the standard. This shift is driven by the growing need for instant insights in finance, healthcare, and e-commerce, where decisions must be made in the blink of an eye.

### Rise of AI and Machine Learning Integration

AI and machine learning are becoming the secret sauce of data processing. Almost every tool or service now offers AI in its products, and these technologies will only develop further. As they advance, they are being integrated into data processing tools to:

- automate complex tasks,
- predict trends,
- uncover insights that would be impossible to find manually.
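To make the "predict trends" idea above concrete, here is a minimal, hypothetical sketch: a moving-average forecast in plain Python. Real AI-powered tools use far more sophisticated models; the function and data below are purely illustrative and not taken from any product mentioned in this article.

```python
# Illustrative only: predict the next value of a metric as the mean of
# the last `window` observations. Real tools would use trained ML models.

def moving_average_forecast(values, window=3):
    """Forecast the next point from the average of the last `window` points."""
    if len(values) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(values[-window:]) / window

# Hypothetical daily sales figures for a growing metric.
daily_sales = [100, 104, 108, 112, 116, 120]
forecast = moving_average_forecast(daily_sales, window=3)
print(forecast)  # -> 116.0, the mean of the last three observations
```

Even this toy example shows the pattern behind trend prediction in data processing tools: summarize recent history, then extrapolate forward.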
In the future, we expect AI to play an even more significant role in optimizing data workflows, personalizing user experiences, and driving innovation.

### Edge Computing Gains Momentum

With the explosion of IoT devices and the need to process data closer to its source, edge computing is gaining momentum. Instead of sending data to a centralized cloud server for processing, edge computing allows data to be processed on-site, at the "edge" of the network. This approach reduces latency and speeds up decision-making, which is crucial in scenarios like autonomous vehicles, smart cities, and industrial automation.

### Increased Focus on Data Privacy and Security

As data breaches and privacy concerns continue to make headlines, the future of data processing will see a greater emphasis on data security and compliance. Stricter regulations like [GDPR](https://gdpr-info.eu/) and [CCPA](https://oag.ca.gov/privacy/ccpa) are just the beginning. Companies must adopt more advanced encryption techniques, privacy-preserving technologies like differential privacy, and robust access controls to protect sensitive data.

### Growth of Data-as-a-Service (DaaS)

The concept of Data-as-a-Service (DaaS) is set to take off, offering businesses on-demand access to data processing and analytics capabilities without the need to manage the underlying infrastructure. This trend will make advanced data processing accessible to companies of all sizes, enabling them to use big data, AI, and machine learning without the overhead of setting up complex systems.

### Data Processing with Sustainability in Mind

Sustainability is becoming a key consideration across all industries, including data processing. As data centers and processing workloads grow, so does their environmental impact. The future will bring a shift towards greener data processing practices, such as renewable energy sources, more efficient algorithms, and cooling technologies that reduce carbon footprints.
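The real-time trend described above is easiest to grasp with a small, self-contained sketch (not taken from Kafka, Kinesis, or any other tool in this article): events are inspected one at a time as they arrive, and an alert is produced the moment a suspicious one appears. The fixed-threshold rule is a deliberately naive stand-in for real fraud-detection logic.

```python
# Hedged sketch of stream processing: react to each event as it arrives
# instead of batching data for later analysis. The threshold rule below
# is purely illustrative, not a real fraud-detection algorithm.

def detect_fraud(events, threshold=1000):
    """Yield an alert immediately when a suspicious event appears in the stream."""
    for event in events:
        if event["amount"] > threshold:
            yield {"alert": "possible fraud", "id": event["id"]}

# A hypothetical transaction stream; in production this would be a Kafka
# or Kinesis consumer delivering events continuously.
stream = iter([
    {"id": 1, "amount": 250},
    {"id": 2, "amount": 4800},   # suspicious
    {"id": 3, "amount": 90},
])

alerts = list(detect_fraud(stream))
print(alerts)  # -> [{'alert': 'possible fraud', 'id': 2}]
```

Because `detect_fraud` is a generator, each alert is emitted as soon as its event is seen, which is exactly the property that distinguishes real-time pipelines from batch jobs.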
## Final Thoughts

To summarize, data processing tools are essential for transforming raw data into actionable insights, and the right choice depends on business needs, company size, users' skills, and how the organization works with its data. Whether you're looking for a user-friendly cloud solution like Skyvia, an enterprise powerhouse like Informatica PowerCenter, or an open-source framework like Apache Hadoop, these tools offer a range of capabilities to help businesses process and analyze their data effectively in 2025.

## FAQ for Data Processing Tools

**What are data processing tools, and why are they important?**

Data processing tools are software applications designed to collect, clean, process, and analyze large volumes of data. They transform raw data into actionable insights by sorting, filtering, aggregating, and analyzing it. They are essential because they help businesses make informed decisions, improve data accuracy, and streamline operations by automating data handling.

**How do I choose the best data processing tool for my business?**

When selecting a data processing tool, consider the following factors:

- Ease of use: Does the tool have an intuitive interface?
- Integration: Can it seamlessly integrate with your existing data sources and systems?
- Scalability: Will it grow with your business as your data needs expand?
- Cost: Is it within your budget, and does it provide value for the features offered?
- Support and community: Does it have robust customer support and an active user community for troubleshooting and improvements?

**What are the benefits of cloud-based data processing tools?**

Cloud-based data processing tools, like Google BigQuery or Skyvia, offer several benefits:

- Scalability: They can handle growing data volumes without expensive on-premise infrastructure.
- Accessibility: Teams can access data from anywhere with an internet connection.
- Cost-efficiency: You only pay for the resources you use, which is often cheaper than maintaining on-premise systems.
- Automatic updates: Cloud services provide regular updates and maintenance, reducing the need for manual intervention.

**Can data processing tools handle real-time data?**

Many modern data processing tools, such as Apache Kafka and Google BigQuery, are designed for real-time data processing. They allow businesses to collect, process, and analyze data as it is generated, which is essential for use cases like fraud detection, stock market analysis, and IoT (Internet of Things) applications.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
# Best Data Reporting Tools & Software in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) · October 11, 2024

Data reporting tools are like the dashboard of a car: they give users all the information needed to steer their businesses in the right direction. Imagine you've got tons of raw data from sales figures, customer feedback, website analytics, and more. On their own, these data points are just numbers on a page. Running the data through a reporting tool consolidates it into a single view and helps users create detailed reports, ranging from simple tables and charts to complex, interactive dashboards, allowing them to:

- Spot trends
- Identify issues
- Uncover opportunities

So, let's look at the types of data reporting tools and their benefits, and walk through the top solutions in 2025.
## Table of Contents

- Main Types of Reporting Tools
- Benefits of Using Data Reporting Tools
- Criteria for Selecting the Best Reporting Tools
- Top Data Reporting Tools for Better Project Visibility
  - Tableau
  - Power BI
  - Domo
  - Looker Studio
  - Zoho Analytics
  - Sisense
  - Qlik Sense
  - SAP Analytics Cloud
  - IBM Cognos Analytics
- Final Thoughts on Choosing the Best Reporting Tool

## Main Types of Reporting Tools

The table below summarizes the main types of reporting tools, each serving different user needs, from strategic decision-making to day-to-day operations.

| Tool Type | Description | Examples |
| --- | --- | --- |
| Business Intelligence (BI) Visualization Tools | Comprehensive platforms that help organizations collect, analyze, and present business data. They focus on transforming raw data into visual formats like charts, graphs, and maps, and often include features for data warehousing, advanced analytics, and reporting, providing a full suite of tools to support decision-making across the enterprise. | [Power BI](https://www.microsoft.com/en-gb/power-platform/products/power-bi/), [Tableau](https://www.tableau.com/), and [QlikView](https://www.qlik.com/us/products/qlikview) |
| Ad Hoc Reporting Tools | Tools that let users create reports on the fly without relying on pre-built templates or IT support. These flexible tools empower business users to generate custom reports to answer specific questions as they arise. | [Looker](https://cloud.google.com/looker/?hl=en) and [Microsoft Power BI](https://www.microsoft.com/en-gb/power-platform/products/power-bi/) |
| Embedded Reporting Tools | Tools that integrate reporting and analytics capabilities directly into other software applications, so users can access reports and insights within the context of the software they are already using, improving workflow and decision-making efficiency. | [Sisense](https://www.sisense.com/) and [Domo](https://www.domo.com/) |
## Benefits of Using Data Reporting Tools

Numbers are more honest than words, so let's allow the numbers to speak. According to [Gartner](https://www.gartner.com/en), 85% of Fortune 500 companies will use scalable data analytics tools by 2025 to manage their growing data needs.

**Collaboration and Sharing.** Data reporting tools facilitate easy sharing and collaboration, enabling teams to work together more effectively and ensuring everyone has access to the same data. Statistics: companies prioritizing collaborative reporting see a [30%](https://www.gartner.com/en) increase in project success rates.

**Better Data Accuracy.** Automated data reporting tools reduce the risk of human error, leading to more accurate and reliable reports. Statistics: organizations using automated reporting tools experience a [40%](https://www2.deloitte.com/ua/uk.html) decrease in data errors.

A few more numbers that speak for themselves:

- [Forrester](https://www.forrester.com/bold) states that businesses using real-time analytics can see a 5% to 10% increase in operational efficiency. It might seem like a small number, but over time it means a lot.
- [Deloitte](https://www2.deloitte.com/) research shows that organizations using data visualization tools are 28% more likely to find the information they need and 25% more likely to achieve higher ROI on analytics investments.
- According to [McKinsey](https://www.mckinsey.com/), companies that use automated data reporting tools show a 50% reduction in time spent on data-related tasks.
- [Harvard Business Review](https://hbr.org/) reports that data-driven companies are 5% more productive and 6% more profitable than their competitors. In business terms, the future is theirs.
- A survey by [PwC](https://www.pwc.com/gx/en/issues/workforce/hopes-and-fears.html) found that companies using data reporting tools are three times more likely to achieve above-average business performance.
- [Forrester](https://www.forrester.com/bold) also reports that businesses that customize their data reports are 60% more likely to meet their performance targets.

## Criteria for Selecting the Best Reporting Tools

Choosing the right reporting tool can feel like picking the perfect gadget: there are many options, and the best choice depends on each company's needs. We selected three key criteria to consider.

**User-Friendliness.** Robust reporting tools should be easy to use, even for those who aren't data experts. Look for tools with intuitive interfaces, drag-and-drop functionality, and clear instructions. The easier a tool is to navigate, the quicker users can start making data-driven decisions without getting bogged down by a steep learning curve. Also consider the community, tutorials, video channels, and similar resources that might be helpful.

Tip: Try out tools that offer free trials or demos to see if they fit well with your team's skill set and workflow.

**Customization Options.** Each business is unique, so its reports should be too. Customization options let companies tailor reports to highlight the data most important to their goals. Whether it's customizing dashboards, setting up specific KPIs, or creating unique visualizations, a tool with strong customization features helps users get the insights that matter most.

Tip: Look for tools that let you create custom templates, filters, and visualizations to match your business's needs.

**Integration Capabilities.** The reporting tool should play nicely with the software the organization already uses. Strong integration capabilities ensure that all data flows seamlessly into the reports, whether from a CRM, ERP, marketing automation platform, or other systems.
This approach saves time and improves data accuracy by eliminating manual data entry.

Tip: Check whether the tool supports integration with your existing systems and offers API access for custom integrations.

[Data integration platforms](https://skyvia.com/blog/data-integration-tools/) like [Skyvia](https://skyvia.com/data-integration), [Zapier](https://zapier.com/), [MuleSoft](https://www.mulesoft.com/), and [Talend](https://ua.talend.com/) become game changers at this step. Each is robust enough for the appropriate tasks. For example, Skyvia supports [190+](https://skyvia.com/connectors) integrations with popular databases, cloud apps, and CRM systems. It can connect Google Analytics, HubSpot, Salesforce, or other systems, ensuring data flows smoothly into your reporting tools without any hitches. The platform is in the top three of the 20 most user-friendly ETL tools according to the latest [G2 Crowd](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) rating and doesn't require extensive technical knowledge to set up.

## Top Data Reporting Tools for Better Project Visibility

Here's a quick overview of the top 10 data reporting tools, focusing on their key features, challenges, pricing, and what users say.

### Tableau

[Tableau](https://www.tableau.com/) is a powerhouse with a drag-and-drop interface that lets users create stunning, interactive visualizations. It supports various data sources and is known for its robust analytics capabilities. Tableau makes it easy to turn complex data into actionable insights regardless of the users' skill level, and its visual storytelling features help businesses present their findings in a way that everyone can understand and act on.

**Challenges.** It has a steep learning curve for beginners, and some users find it pricey for smaller teams.

**Pricing.** [Pricing](https://www.tableau.com/pricing) starts at $115/month per user for the Creator plan.
There are also options at $35 for the Viewer and $70 for the Explorer roles, making it flexible depending on users' needs.

**Users' Feedback.** Users love its visual capabilities but note that it might be overkill for simple reporting needs.

- [G2 Rating](https://www.g2.com/products/tableau/reviews): 4.4/5 (over 2,120 reviews)
- [TrustRadius Rating](https://www.trustradius.com/search?q=Tableau): 8.9/10 (over 1,400 reviews)
- [Capterra Rating](https://www.capterra.com/search/?query=Tableau): 4.6/5 (over 1,600 reviews)

### Power BI

[Power BI](https://www.microsoft.com/en-gb/power-platform/products/power-bi/) integrates natively with Microsoft products, so it's the number one choice for those with Microsoft as the backbone of their data stack. It's strong in data visualization, offers AI-powered analytics, and its ability to handle large datasets makes it a powerful tool for large enterprises.

**Challenges.** There is a steep learning curve for those diving into advanced features, as the interface could be clearer.

**Pricing.** [Power BI Pro](https://www.microsoft.com/en-gb/power-platform/products/power-bi/pricing) starts at $9.99/month per user. There's also a free version with limited features.

**Users' Feedback.** The solution is highly praised for its value and integration with Microsoft tools, though it can be challenging for non-technical users at first.

- [G2 Rating](https://www.g2.com/products/microsoft-microsoft-power-bi/reviews): 4.4/5 (over 1,130 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/microsoft-power-bi/reviews): 8.4/10 (over 2,300 reviews)
- [Capterra Rating](https://www.capterra.com/p/176586/Power-BI/): 4.6/5 (over 1,700 reviews)

### Coupler.io

Coupler.io is a reporting automation platform with AI features, focusing on marketing, sales, and financial reports. The tool offers in-app dashboards and automatically brings cross-channel business data into Looker Studio, Power BI, Tableau, or spreadsheets to craft live reports.
Using their pre-built templates, like the [accounting dashboard](https://www.coupler.io/dashboard-examples/accounts-receivable-dashboard) and others, is a good way to start with data reporting.

**Challenges.** Some users report complexity in setting it up from scratch, requiring effort to use the tool at full capacity.

**Pricing.** Pricing starts at $24 per month, with a set of plans for different team sizes and needs.

**Users' Feedback.** Many users enjoy Coupler.io's data transformation features and multi-channel reports, while some note its steep learning curve.

- [G2 Rating](https://www.g2.com/products/coupler-io/reviews): 4.8/5 (76 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/coupler-io/reviews): 10/10 (2 reviews)
- [Capterra Rating](https://www.capterra.com/p/202311/Coupler-io/): 4.9/5 (82 reviews)

### Domo

[Domo](https://www.domo.com/) is an all-in-one platform that combines BI with collaboration tools. It's robust in real-time data visualization and has many connectors for different data sources. Domo's user-friendly interface makes it easy for teams to collaborate and share insights across the company. While its extensive feature set can be overwhelming at first, it's a powerful tool for businesses looking to centralize their data and make informed decisions quickly.

**Challenges.** The cost can be prohibitive, especially for smaller companies.

**Pricing.** [Pricing](https://www.domo.com/pricing) is customized based on the number of users and data integrations, so it's best to request a quote.

**Users' Feedback.** Domo users love the range of data connectors and real-time dashboards but often mention the high cost and complexity of the platform.
- [G2 Rating](https://www.g2.com/products/domo/reviews#reviews): 4.2/5 (over 790 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/domo/reviews): 8.6/10 (over 300 reviews)
- [Capterra Rating](https://www.capterra.com/p/119119/Domo/reviews/): 4.3/5 (over 100 reviews)

### Looker Studio

[Looker Studio](https://lookerstudio.google.com/u/0/navigation/reporting) (formerly Google Data Studio) is a popular free tool in the Google ecosystem that makes creating custom dashboards and reports easy, especially for companies already using other Google products like Google Analytics. Its user-friendly interface is great for quick insights without a steep learning curve. While it may not have all the advanced features of some paid tools, its seamless integration with Google services and zero-cost access make it a go-to choice for small to mid-sized businesses.

**Challenges.** It's not as feature-rich as some paid tools, and it might not handle large datasets as smoothly.

**Pricing.** It's a free solution.

**Users' Feedback.** Users love its simplicity and seamless integration with Google tools, though it can be limited in advanced features and in handling large datasets.

- [G2 Rating](https://www.g2.com/products/looker-studio/reviews): 4.3/5 (over 420 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/looker-studio/reviews): 8.3/10 (over 50 reviews)
- [Capterra Rating](https://www.capterra.com/p/190616/Data-Studio/): 4.5/5 (over 320 reviews)

### Zoho Analytics

[Zoho Analytics](https://www.zoho.com/analytics/) is a versatile BI tool focused on easy-to-use dashboards and AI-powered analytics. It integrates well with the rest of the Zoho suite, which makes it a good choice for businesses already using Zoho's CRM, finance, or HR tools. While designed to be user-friendly, Zoho Analytics also offers powerful features for deeper data analysis, making it a solid option for both beginners and experienced analysts.
Its affordability and seamless integration within the Zoho ecosystem make it a popular choice for growing businesses.

**Challenges.** Some users report that it can be slow with large datasets, and the interface isn't as polished as some competitors'.

**Pricing.** Starts at [$24](https://www.zoho.com/analytics/pricing.html) per month for two users, with various plans depending on users' needs.

**Users' Feedback.** Users mostly like Zoho Analytics for its affordability and ease of use, especially within the Zoho ecosystem, though some find it less powerful with large, complex datasets.

- [G2 Rating](https://www.g2.com/products/zoho-analytics/reviews): 4.3/5 (over 300 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/zoho-analytics/reviews): 8.2/10 (over 80 reviews)
- [Capterra Rating](https://www.capterra.com/p/129749/Zoho-Analytics/): 4.4/5 (over 300 reviews)

### Sisense

[Sisense](https://www.sisense.com/) is known for its ability to handle large datasets and complex data modeling, and it's great for embedding analytics into users' applications. With its powerful data engine, Sisense can quickly process vast amounts of data, making it a good fit for businesses dealing with big data. While the platform is feature-rich and may require some technical expertise to leverage fully, its scalability and flexibility make it a top choice for companies that need robust, customizable analytics solutions. Its ability to integrate seamlessly with various data sources and tools also lets users build a comprehensive analytics environment tailored to their requirements.

**Challenges.** It's one of the most expensive options and can be complex to set up and manage.

**Pricing.** Custom [pricing](https://www.sisense.com/get/pricing/) is based on users' specific needs; contact the vendor for a quote.
**Users' Feedback.** Users love its scalability and performance with big data but note that it requires significant technical expertise.

- [G2 Rating](https://www.g2.com/products/sisense/reviews): 4.2/5 (over 1,000 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/sisense/reviews): 8.4/10 (over 350 reviews)
- [Capterra Rating](https://www.capterra.com/p/86955/Sisense/): 4.5/5 (over 380 reviews)

### Qlik Sense

[Qlik Sense](https://www.qlik.com/us/products/qlik-sense) offers powerful data visualization and self-service analytics. It provides an associative data model that helps discover hidden insights: users can explore data freely, without the limitations of traditional query-based tools, making it easier to uncover connections that might otherwise go unnoticed. Its flexibility and depth of analysis make it a favorite for businesses that want to empower users at all levels to make data-driven decisions.

**Challenges.** It can be challenging for beginners, and some users find the learning curve steep for advanced features.

**Pricing.** [Pricing plans](https://www.qlik.com/us/pricing/data-integration-products-pricing) are flexible and depend on the user's requirements. A free version with limited capabilities is also available.

**Users' Feedback.** Users often mention a steep learning curve but appreciate the depth of analysis it offers.

- [G2 Rating](https://www.g2.com/products/qlik-sense/reviews): 4.5/5 (over 660 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/qlik-sense/reviews): 8/10 (over 990 reviews)
- [Capterra Rating](https://www.capterra.com/p/209809/Qlik-Sense/): 4.5/5 (over 257 reviews)

### SAP Analytics Cloud

[SAP Analytics Cloud](https://www.sap.com/products/technology-platform/cloud-analytics.html) combines BI, predictive analytics, and planning in one platform. It's especially powerful for those already using SAP products.
The platform integrates seamlessly with other SAP tools, making it a natural choice for enterprises already invested in the SAP ecosystem. Its comprehensive approach to analytics, including AI-driven insights, makes it a robust option for companies looking to centralize their analytics, planning, and reporting in one place.

**Challenges.** It's not the most user-friendly tool, especially for users new to SAP.

**Pricing.** [Pricing](https://www.sap.com/products/technology-platform/cloud-analytics/pricing.html) starts at $36/month per user, with plans based on companies' specific needs.

**Users' Feedback.** Users mostly like the integration with SAP products and the combination of BI, planning, and predictive analytics in one tool, though some find it complex and not very user-friendly.

- [G2 Rating](https://www.g2.com/products/sap-analytics-cloud/reviews): 4.2/5 (over 500 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/sap-analytics-cloud/reviews): 8.5/10 (over 800 reviews)
- [Capterra Rating](https://www.capterra.com/p/249571/SAP-Analytics-Cloud/): 4.4/5 (over 110 reviews)

### IBM Cognos Analytics

[IBM Cognos Analytics](https://www.ibm.com/products/cognos-analytics) offers AI-assisted data exploration and reporting, making it easy to find insights in complex data sets. The platform is a go-to solution for businesses with extensive reporting and analytics needs and large-scale data challenges. It may have a steeper learning curve, especially for new users, but its extensive customization options make it robust for companies that require detailed, enterprise-grade analytics. IBM Cognos also integrates well with other IBM products, providing a cohesive experience for organizations already within the IBM ecosystem.

**Challenges.** Some users find the interface outdated, and the initial setup can be complex.
Pricing Plans

[Pricing](https://www.ibm.com/products/cognos-analytics/pricing) starts at $10.60/month per user, with options for different user roles.

Users' Feedback

Businesses like its powerful analytics capabilities but often mention that the interface could be updated.

- [G2 Rating](https://www.g2.com/products/ibm-cognos-analytics/reviews): 4.0/5 (over 370 reviews)
- [TrustRadius Rating](https://www.trustradius.com/products/ibm-cognos-analytics/reviews): 8.2/10 (over 570 reviews)
- [Capterra Rating](https://www.capterra.com/p/240583/IBM-Cognos-Analytics/): 4.2/5 (over 139 reviews)

Final Thoughts on Choosing the Best Reporting Tool

Not all reporting tools are created equal. Each platform has strengths and weaknesses, so let's highlight three key factors to help you make the best choice:

- Company requirements
- Company budget
- User expertise

Before selecting a data reporting tool, define the objectives that will guide the company in choosing the right data, metrics, and report types, whether that's tracking sales performance, monitoring customer behavior, measuring marketing ROI, or something else. Clear goals keep the selection focused and relevant. For instance, Looker Studio might be perfect for small businesses with simple needs, while tools like Tableau or Power BI might be more appropriate if a company deals with large datasets and requires advanced features. So, evaluate all the options and try a few demos before choosing. Ratings and feedback can also give potential users a good sense of how each tool is perceived in the market and how it can address your organization's business pains.
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

Best Data Transformation Tools for 2025 [Free & Paid]

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), March 13, 2023
Not all data are created equal. For example, both [HubSpot and Salesforce](https://skyvia.com/blog/hubspot-salesforce-integration/) are CRM platforms, but they don't have the same data structure and format. They may even use different naming conventions, APIs, and more. To integrate data from both, you need to do some data transformation, and that's where data transformation tools come in.

Data transformation is part of [data pipelines](https://skyvia.com/blog/10-best-data-pipeline-tools/) like [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/). If all data had the same structure, types, and formats, there would be no need for the 'T' in these pipelines. So, if you're on a quest for tools that offer easy but flexible data transformation, seek no more. This article contains curated information from reputable sources to give you options; we included only tools with very positive feedback from their users.

Here's what we are going to discuss:

- The Definition of Data Transformation
- The 10 Best Data Transformation Tools
  - Skyvia
  - Informatica
  - Matillion
  - dbt
  - Datameer
  - Denodo
  - Pentaho Data Integration
  - IBM DataStage
  - Keboola
  - Trifacta
- Conclusion

Let's dive in and compare data transformation tools.

The Definition of Data Transformation

Data transformation is the process of converting data from one format to another. To explain how it works, let's compare it to translating a book. An author may have written a masterpiece in English but wants to reach another language group, say, Spanish speakers. So the author hires a translator. The translation of the text is like converting data from one format to another: once done, a Spanish reader can understand the translated book.
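To make the "translation" idea concrete, here is a minimal sketch in plain Python. The file layout, field names, and mapping are invented for illustration; the point is simply that converting data from one format to another often means both changing the serialization (CSV to JSON here) and renaming fields to the target system's vocabulary.

```python
import csv
import io
import json

# Hypothetical source data in one system's CSV format.
SOURCE_CSV = """contact_name,contact_mail
Alice,alice@example.com
Bob,bob@example.com
"""

# Mapping from the source system's field names to the target's.
FIELD_MAP = {"contact_name": "FirstName", "contact_mail": "Email"}

def csv_to_json_records(csv_text: str, field_map: dict) -> str:
    """Convert CSV rows to a JSON array, renaming fields along the way."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = [{field_map[k]: v for k, v in row.items()} for row in reader]
    return json.dumps(records, indent=2)

print(csv_to_json_records(SOURCE_CSV, FIELD_MAP))
```

Real tools add connectors, scheduling, and error handling on top, but at their core they perform this same reshape-and-rename step.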
Likewise, a different app or system can use the transformed data.

Data transformation also helps in cleaning, filtering, summarizing, and reshaping data. For example, a stakeholder may not need every detail, only the summary, so you perform an aggregation like a sum or an average. Another example is redundant entries in your data: you don't want duplicates spoiling your stakeholder's data-driven decisions, so you remove them.

But what should you look for in a data transformation tool? The best tool for you is one that is simple and fast, yet flexible enough to handle your unique requirements. Some tools offer a graphical interface with components and templates; others use formulas; still others let you code. Some tools have it all. So, let's start our list of data transformation tools.

The Best Data Transformation Tools

Skyvia

[Skyvia](https://skyvia.com/) is a cloud-based data platform that helps you work with many data sources and formats. You can use it for data extraction, transformation, and loading. The standout feature is the easy-to-use graphical interface for data transformation. You can also apply a number of data transformations and load the results to different targets in one data flow. You'll feel at home whether your business works with data in the cloud, on-premises, or in a hybrid setup. It also handles [import](https://skyvia.com/data-integration/import), export, replication, and backup.

Check out some of Skyvia's data transformations: if needed, you can apply transformations using SQL to make use of aggregate functions. Both power users and data engineers will enjoy its no-code and low-code solutions.

Pros

- Allows scheduled bidirectional data integration between various sources.
- You can pick from [200+ connectors](https://skyvia.com/connectors/).
- Both graphical components and SQL transformations are easy to configure.
- Automatic mapping of fields with similar names.
- Perform SQL queries on cloud apps like [Salesforce](https://skyvia.com/blog/best-etl-tools-for-salesforce/) and HubSpot.
- All connectors and data transformation components are available on the free tier.
- It's not only for transformations; it's an end-to-end [data integration solution](https://skyvia.com/blog/data-integration-tools/).

Cons

- Limits on the free tier.
- Standard pricing or higher is needed for advanced mapping features and integration scenarios.

Pricing

Flexible [freemium model](https://skyvia.com/pricing/). Free plan for 10k records/month. Starts at $15/month for 100k records.

Informatica

[Informatica](https://www.informatica.com/) is another company that helps you integrate data from various sources. You can use Data Transformation to process complex files, including PDFs and Excel, with the Data Processor and other transformations. You can also use languages like Java or Python to write custom logic, or use a graphical interface for easier transformations. Available transformations include Aggregator, Filter, Joiner, Lookup, and more. Data experts in large enterprises will enjoy the power of Informatica's solutions.

Pros

- A comprehensive set of data transformation components and connectors.
- A graphical user interface for creating transformations.
- Supports complex data transformations with coding.

Cons

- No free tier, and the cost is high.
- Not easy for beginners to learn.

Pricing

30-day free trial. Prepaid subscription based on Informatica Processing Units (IPUs). Contact Informatica sales for more details about IPUs, or visit the pricing page [here](https://www.informatica.com/products/cloud-integration/pricing.html).

Matillion

[Matillion](https://www.matillion.com/) helps you transform data in the cloud with ease and speed. You can connect to different data sources and load data into your cloud warehouse.
You can also use a simple graphical interface to change data as you like, and write Python code for advanced tasks. With Matillion, you can make your data ready for analysis and insights. It's a good fit for data experts in small and medium-sized businesses.

Pros

- Over 150 pre-built connectors for various data sources and destinations.
- A graphical UI for creating orchestration jobs and sophisticated ETL pipelines.
- Native transformation capabilities using an expression editor.
- Supports custom Python scripts for complex transformations.

Cons

- The free tier does not include Matillion ETL.
- Limited Matillion Data Loader features on the free tier.

Pricing

Free tier, plus trials of the Basic, Advanced, and Enterprise plans. Pay with Matillion credits, starting at $2.00 per credit. Visit [Matillion pricing](https://www.matillion.com/pricing/) for more details.

dbt

[dbt](https://www.getdbt.com/) is a tool that helps you transform data with SQL and Python. You can write SQL code to build and manage data models, test your data quality, and document your work. dbt is not a graphical tool but a command-line tool that requires code, so its target users are those comfortable coding in SQL and Python. dbt does only the 'T' in ETL and ELT, so you can't extract or load with dbt alone; you can use it with other tools like [Snowflake](https://community.snowflake.com/s/article/Using-DBT-to-Execute-ELT-Pipelines-in-Snowflake).

Pros

- Modular data transformations with .sql and .py files; because they're modular, you can reuse them.
- Version-controlled code.
- Streamlines the creation of data documentation.
- Extends other ELT and [ETL tools](https://skyvia.com/blog/etl-tools/).

Cons

- Requires coding in SQL and/or Python.
- Simplicity and readability are in the hands of the data engineers.

Pricing

Forever free for 1 developer seat. Starts at $100/developer seat/month.
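The modular, reusable style that dbt encourages can be approximated in any language. The sketch below is plain Python with made-up data, not dbt's actual API: each transformation is a small, reusable function (analogous to a model file), and a pipeline is just their composition, covering only the 'T' while extraction and loading are left to other tools.

```python
from functools import reduce

# Each "model" is a small, reusable transformation over a list of row dicts,
# loosely analogous to dbt's one-model-per-file approach.
def deduplicate(rows):
    """Drop exact duplicate rows while preserving order."""
    seen, out = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

def add_total(rows):
    """Derive a total column from quantity and unit price."""
    return [{**r, "total": r["qty"] * r["price"]} for r in rows]

def pipeline(rows, *models):
    """Apply the transformations in order, like a chain of dependent models."""
    return reduce(lambda acc, model: model(acc), models, rows)

raw = [
    {"sku": "A", "qty": 2, "price": 5.0},
    {"sku": "A", "qty": 2, "price": 5.0},  # duplicate entry
    {"sku": "B", "qty": 1, "price": 3.0},
]
clean = pipeline(raw, deduplicate, add_total)
print(clean)  # two rows, each with a derived "total" field
```

Because each step is a standalone function, it can be tested, versioned, and reused across pipelines, which is exactly the property that makes dbt's model files attractive.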
Datameer

[Datameer](https://www.datameer.com/) makes it easy to work with your data on Snowflake. You can use SQL or no-code tools to explore, visualize, transform, and catalog your data. Datameer helps you get insights faster and rely less on IT. Data engineers and non-coding data analysts will like this tool. You can use Datameer alongside other ETL/ELT tools and let it handle the Snowflake data transformation part; for example, you can use [Datameer with Fivetran](https://www.datameer.com/fivetran-and-datameer/).

Pros

- No-code, graphical interface.
- Datameer can produce SQL queries in the form of views, making it accessible to users without SQL expertise.
- Claims native Snowflake transformations up to 12x faster with Visual SQL.
- Extends other ETL and ELT tools that can handle Snowflake.

Cons

- No free tier.
- Limited to Snowflake and CSVs.
- Can be difficult to integrate with some legacy systems or custom applications.

Pricing

14-day free trial. No pricing is posted on the website; when you upgrade, someone from Datameer will contact you.

Denodo

[Denodo](https://www.denodo.com/en) is a data management platform that lets you transform data without moving it. You can work with both structured and unstructured sources, and either code your data transformations with SQL or other services, or use a graphical interface to create virtual views that combine data from different sources. This is great for business users with a little technical background, but data experts will love it too.

Pros

- Increases business user productivity and reduces data preparation effort.
- Supports various data sources, including flat files, cloud-based databases, and on-premises databases.
- Good for massive data that is always in motion and constantly changing.

Cons

- Learning materials need improvement.
- Date conversions are not easy, according to [this review](https://www.peerspot.com/products/denodo-reviews#review_2956543).
- Not cheap and not suitable for smaller organizations, according to [this review](https://www.trustradius.com/reviews/denodo-2019-06-07-10-17-48).

Pricing

30-day free trial. Download and install Denodo Express with standard features for free. $6.27/hour after a 30-day license. Check [here](https://www.denodo.com/en/denodo-platform/denodo-cloud) for the pricing model.

Pentaho Data Integration

[Pentaho Data Integration (PDI)](https://help.hitachivantara.com/Documentation/Pentaho/9.1/Setup/Pentaho_Data_Integration_(PDI)_tutorial) helps you do ETL from different sources. You can use it to check, fix, and improve your data quality, and to analyze your data with various methods. PDI lets you use graphical tools or scripting languages for data transformations, and it has built-in templates for common tasks. Data experts with scripting skills will appreciate this tool.

Pros

- Imports data from many sources and different databases.
- Manages data in on-premises, hybrid, and cloud environments.
- Simple graphical user interface.

Cons

- Technical documentation needs improvement.
- Pricey, according to some reviews on Gartner Peer Insights.
- No built-in data masking for sensitive data, though a scripting transformation is possible.
- Initial configuration is complicated, according to [this review](https://www.trustradius.com/reviews/pentaho-2021-03-18-20-11-48).

Pricing

Try the [Pentaho Community Edition](https://www.hitachivantara.com/en-us/products/lumada-dataops/data-integration-analytics/pentaho-community-edition.html) for free, or take a 30-day free trial of Pentaho Enterprise Edition. $100/user/month to process 5 million rows; you can adjust your plan as you grow.

IBM DataStage

[IBM DataStage](https://www.ibm.com/products/datastage) is a data integration tool that helps you move and change data in different ways. It's good for both ETL and ELT patterns and can deliver transformed data to various destinations, including data warehouses, web services, and messaging systems.
It also provides a graphical framework for developing jobs. This is another tool that data experts will appreciate.

Pros

- Simple drag-and-drop graphical user interface.
- Handles large volumes of data and performs parallel processing.
- Various connectors, both relational and non-relational.
- Performs data quality and transformation tasks such as cleansing, standardization, and validation.

Cons

- Newer types of data sources are not yet available, according to [this review](https://www.peerspot.com/products/ibm-infosphere-datastage-reviews).
- Quite expensive, according to the same source.

Pricing

Charges by the Capacity Unit-Hours used for job runs; prices vary per country. Free for 15 Capacity Unit-Hours. Check [here](https://cloud.ibm.com/catalog/services/datastage) for more pricing details.

Keboola

[Keboola](https://www.keboola.com/) is a cloud data platform that helps you connect, analyze, and manage your data. You can use languages like SQL and Python to write your transformations, then automate them with orchestrations. You can also explore your data in analytical workspaces and use data templates and code patterns to simplify your workflows. This product is best for teams of technical data experts, including scientists, engineers, and analysts.

Pros

- 200+ data sources and targets supported.
- Transform your data with SQL, Python, R, dbt, etc.
- A graphical user interface for creating and managing workflows.
- Automate and schedule your transformations through orchestrations.
- Use data templates and code patterns to speed up your data workflows.
- Choose between AWS, Azure, or GCP to host Keboola.

Cons

- Not a data streaming service; does not offer continuous data extraction.
- Only one Keboola Connection project on the free tier.

Pricing

Includes a free tier. Contact Sales for the Enterprise plan.

Trifacta

[Trifacta](https://www.trifacta.com/) is a data transformation solution that requires no coding.
You can see your data and how it changes on the screen, and the tool can even suggest transformations using Predictive Transformation. You can use Trifacta to join data from different places, switch rows and columns, group data, and more. If you like Excel, this is like a more powerful version of it.

Pros

- Self-service no-code, low-code, or dbt transformation options for different users.
- A wide range of transformation operations.
- Visualize data transformations with predictive, real-time previews.
- Graphical user interface.

Cons

- Performance issues in the web interface when dealing with large datasets.

Pricing

30-day free trial. Starts at $80/user/month.

Conclusion

That's all for the top data transformation tools. The next step is to try them: most have free trials or free tiers, so what's stopping you? Skyvia is at the top of the list because we want you to try it first. Skyvia users remember it for its ease of use, so see for yourself by registering [here](https://id.skyvia.com/core/register). It's free.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
10 Best Data Warehouse Tools for 2025: Boost Your Analytics

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), May 24, 2023

Have you noticed that the amount of data coming from IoT services, OLTP databases, and enterprise applications is constantly growing? That isn't just an illusion: [Forbes](https://www.forbes.com/sites/gilpress/2021/12/30/54-predictions-about-the-state-of-data-in-2021/) reports that the generated data volume increased by around 5,000% between 2010 and 2020. This growth rate is tremendous, and we are already in 2025, so what's next? As the amount of data continues to grow, extracting value from it becomes crucial. At this point, data warehouse tools come in handy!
There are various types of data warehouse applications, from pure [ETL tools](https://skyvia.com/blog/etl-tools/) to more specialized analytical ones. ETL services are responsible for extracting heterogeneous data from multiple sources, transforming it, and loading it into a data warehouse, whereas analytical tools, such as dbt, offer data modeling features for real-time business decisions. In this article, you'll get an overview of data warehouse tools and decide which one best suits your specific data-related tasks.

Table of Contents

- Introduction to Data Warehouse Tools
- What Are the Types of Data Warehouses?
- Top Data Warehouse Tools
  - Skyvia
  - Amazon Redshift
  - Microsoft Azure
  - Google BigQuery
  - IBM Data Warehouse Tools
  - Oracle Autonomous Data Warehouse
  - Snowflake
  - Teradata
  - SAP Datasphere
  - SAS Cloud
- How to Choose the Best Data Warehouse Solution
- Conclusion

Introduction to Data Warehouse Tools

Before exploring the range of tools associated with data warehouses, it's worth recalling what a data warehouse actually is. In brief, a [data warehouse](https://www.ibm.com/topics/data-warehouse) (DWH) is a powerful tool that supports business intelligence (BI). Together with top data warehouse tools, it transforms raw data into meaningful information for analysis and further business decision-making. Here are some examples of what data warehouses are used for:

- Inventory control and sales prediction in commerce.
- Fraud detection in the financial sector.
- Vehicle management in logistics.
- Customer profile analysis.

A DWH integrates data from various sources: logistics, sales, marketing, support, and other business departments.
The data loaded into a data warehouse is heterogeneous:

- [Structured](https://skyvia.com/blog/structured-vs-unstructured-data/) (from relational databases)
- Semi-structured (XML files)
- [Unstructured](https://skyvia.com/blog/structured-vs-unstructured-data/) (multimedia materials)

This is where data warehouse ETL tools come in: they help to properly extract, transform, and load data into the storage repository. All these operations are essential to make data from various sources consistent and, thus, easier to analyze. The amount of data stored in a DWH is enormous, so the best data warehouse tools are needed to manage it properly. Those tools also need to support aggregating data, including via [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/), into so-called [data marts](https://www.ibm.com/topics/data-mart): the building blocks of a DWH. Each data mart serves a specific business area, corporate department, or user group.

What Are the Types of Data Warehouses?

The classification of DWH types depends on various factors and aspects: purpose of use, sphere of application, size, structure, etc. In this article, we classify data warehouses by their physical location.

On-premises

Physical clusters and servers are usually located at the company's operational address. An on-premises data warehouse provides instant access to data for the company's admins, managers, and employees, depending on their access level. Such a solution is perfect for businesses that require high data security and protection. At the same time, on-premises DWHs have some drawbacks, which can be addressed and mitigated with open-source data warehouse tools.

Cloud-based

Gathered data is stored on the servers of cloud storage providers and managed with cloud data warehouse tools.
This guarantees high quality-of-service (QoS) rates: high availability of stored data, scaling opportunities, pay-per-use, and other tangible advantages for businesses.

Hybrid

Some companies implement hybrid data warehouses, meaning that on-premises and cloud-based resources are used together for data analysis. A hybrid data warehouse accommodates the company's ongoing data flow: instead of adding new on-premises clusters, cloud solutions come in handy. Another reason for the hybrid concept is to separate data by purpose: storage-only, business intelligence, backup, etc.

Top Data Warehouse Tools

Like DWH types themselves, paid and free data warehouse tools can be classified. Solutions for [data integration](https://skyvia.com/blog/data-integration-tools/), data export, and data analysis are common types of data warehouse tools. Applications for data integration, also known as ETL tools, are widely used across different DWH types. Their purpose is to ingest data from various data sources, [transform the obtained data](https://skyvia.com/blog/best-data-transformation-tools/) (from ordinal to numerical type, for instance) for further analysis, and then load it into a DWH. At that point, the data warehouse becomes a foundation for business intelligence decisions. As data gets more complex and voluminous, the [ELT approach evolved](https://skyvia.com/blog/elt-vs-etl/) and has practically replaced ETL. The main difference between ELT and ETL is that in ELT, data transformation happens inside the data warehouse itself. This removes the risk of damaging data during the loading stage and can be extremely convenient when companies deal with tons of data. Below you'll find descriptions of the top [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) data warehouse tools that address complex data-related tasks and provide a holistic solution for business performance.
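The ETL-versus-ELT distinction can be sketched in a few lines of Python, using an in-memory SQLite database as a stand-in warehouse (the table and column names here are invented for illustration): the ETL route transforms rows before loading them, while the ELT route loads raw rows first and then transforms inside the warehouse with SQL.

```python
import sqlite3

rows = [("alice", "42"), ("bob", "17")]  # raw source extract; amounts as text

db = sqlite3.connect(":memory:")  # stand-in for a real data warehouse

# ETL: transform (cast text to integer) BEFORE loading into the warehouse.
db.execute("CREATE TABLE etl_sales (name TEXT, amount INTEGER)")
db.executemany(
    "INSERT INTO etl_sales VALUES (?, ?)",
    [(name, int(amount)) for name, amount in rows],  # transform in the pipeline
)

# ELT: load the raw data first, then transform INSIDE the warehouse with SQL.
db.execute("CREATE TABLE raw_sales (name TEXT, amount TEXT)")
db.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)
db.execute(
    "CREATE TABLE elt_sales AS "
    "SELECT name, CAST(amount AS INTEGER) AS amount FROM raw_sales"
)

# Both routes end with the same analysis-ready result.
total_etl = db.execute("SELECT SUM(amount) FROM etl_sales").fetchone()[0]
total_elt = db.execute("SELECT SUM(amount) FROM elt_sales").fetchone()[0]
print(total_etl, total_elt)
```

Note how the ELT route keeps an untouched `raw_sales` copy in the warehouse, which is why loading raw data first is safer when transformations need to be re-run or revised.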
Skyvia

[Skyvia](https://skyvia.com/) is a cloud integration tool perfect for data ingestion: it extracts the needed data from CRMs, social media, sales systems, and other applications. Since all this data needs to be standardized before loading into the data warehouse, Skyvia automatically transforms the extracted data into the required format. Finally, the service loads data into the DWH so everything is ready for analysis and reporting. Skyvia provides the following fundamental data integration scenarios:

- [Import (ETL and reverse ETL tool)](https://skyvia.com/data-integration/import)
- [Replication (ELT tool)](https://skyvia.com/data-integration/replication)

The platform also provides other tools for numerous data-related tasks and scenarios; [see here for more information](https://skyvia.com/platform/).

Data Import

Skyvia allows you to import data from local CSV files, cloud apps, databases, and data warehouses. The import scenario is a fully featured ETL and reverse ETL tool, ideal when tables are already created in the DWH. For instance, you may need to load data into an existing database structure, or you may need a reverse ETL tool to load activated data back into operational systems. To implement the import scenario, simply [indicate the source](https://docs.skyvia.com/data-integration/import/) from which the data will be extracted and the target DWH to which it will be transferred. To define the logic of the data import, create a task by specifying the data mapping rules and a schedule for automatic data loading or updates.

Data Replication

This scenario is ideal for creating a copy of data in the DWH and keeping it up to date automatically. The system uses the ELT approach for data replication: it designs a data structure and puts the extracted data there. To benefit from Skyvia data replication, choose the source database or cloud app along with the objects you want to replicate, then select your data warehouse as the target.
Make sure to also select [the Incremental Update option](https://docs.skyvia.com/data-integration/replication/) so that new or modified data is transferred from the source application to the DWH.

Advantages of Skyvia

Apart from being a holistic data warehouse tool offering all the necessary data management features, Skyvia has other benefits:

- No extra software installation needed. As a cloud-based platform accessible via any web browser, there's no need to install or configure anything on your computer.
- Suitable for any business. The platform offers solutions for SMBs as well as large enterprises.
- No technical knowledge required. The service is extremely user-friendly: no coding experience is required to fulfill complicated tasks.
- Offers a free plan. Every Skyvia solution offers a free plan, and there are plans with additional options and storage capacities to choose from: see [Skyvia pricing](https://skyvia.com/pricing/) for details.

Amazon Redshift

[Amazon Redshift](https://aws.amazon.com/redshift/) is a cloud data warehouse tool designed to provide advanced capabilities for data-based reporting and analytics. In simple words, Amazon Redshift lets you manage large amounts of structured data using SQL queries. This data warehouse tool is easy to set up: it takes only a few steps to configure. Then you can decide whether to query data in Amazon S3 without loading it into Amazon Redshift, or work with data lakes by importing and exporting data from them.

Advantages of Amazon Redshift

- Suitable for automated data monitoring and data management processes.
- Allows for scaling a data warehouse.
- Provides high-performance computing services for data processing.

Price: Amazon offers a free trial for Redshift, after which pay-as-you-go pricing applies. The overall cost of Amazon services largely depends on the amount of computing and storage capacity utilized.
Microsoft Azure

[Microsoft Azure](https://azure.microsoft.com/en-us/) is a cloud platform that provides computing power for companies, particularly Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions. There's also a Software-as-a-Service (SaaS) offering, though it is more suitable for individual users than for companies. Microsoft Azure contains more than 200 tools, but we'll highlight only those designed for data warehouse management:

- Azure Data Factory. This tool contains all the necessary instruments to build a DWH by migrating the necessary data using the ETL approach. The loaded data is then transferred to the Azure Synapse Analytics tool.
- Azure Synapse Analytics. This cloud application employs [data mining](https://skyvia.com/blog/data-mining-tools/) and machine learning algorithms to extract the key points from the loaded data.

Advantages of Microsoft Azure

- Provides a variety of solutions within the same platform.
- Has functions for managing on-premises and cloud data warehouses.
- Grants high availability of data.

Price: Microsoft Azure is a public cloud platform; payment depends on the storage and computing power used.

Google BigQuery

[Google BigQuery](https://cloud.google.com/bigquery) is an enterprise DWH on the Google Cloud platform. It contains instruments for migrating data from relational databases and on-premises data warehouses to BigQuery.
The same goes for data transfer from other cloud-based solutions: you can migrate everything into BigQuery to obtain precise analytical insights. Google BigQuery is very effective at providing real-time and predictive analytics based on AI algorithms. It is particularly suitable for analyzing marketing data: it connects to Google Ads and other Google products, loads data from them, and supports business decisions based on that data.

**Advantages of Google BigQuery**

- Provides flexibility in selecting the feature set that suits your business workflow.
- Implements machine learning so that data analysts can perform tasks with minimal effort.
- Works with structured, semi-structured, and [unstructured data types](https://skyvia.com/blog/structured-vs-unstructured-data/) within a data warehouse.

**Price:** Google BigQuery offers a free usage tier covering up to 1 TB of queries processed per month; beyond that, you pay for greater storage, higher computing power in use, and/or multiple data streaming sessions.

## IBM Data Warehouse Tools

IBM has its own public cloud platform as well as standalone tools for data-related operations in cloud and on-premises warehouses. Let's look at the most popular tools particularly suited to DWH creation and management:

- [Netezza](https://www.ibm.com/products/netezza?lnk=flatitem). A performance server used to run complex queries that ingest data and analyze it for business purposes. Netezza is available for cloud warehouses as well as on-premises and hybrid ones.
- [IBM Business Analytics Enterprise](https://www.ibm.com/products/business-analytics-enterprise?lnk=flatitem). Like Netezza, this tool handles data processing for BI. However, it is mainly focused on predictive analytics, which helps to pick the right content ideas or adjust organizational plans.
- [IBM Db2 Warehouse](https://www.ibm.com/products/db2/warehouse). This platform helps to unify data from various sources in a single data warehouse and standardize it to the required format.

**Advantages of IBM Data Warehouse Tools**

- Enables real-time decision-making for businesses.
- Unifies data from various sources.
- Offers a hands-off data management experience.

**Price:** IBM data warehouse tools use the pay-as-you-go pricing model.

## Oracle Autonomous Data Warehouse

[Oracle Autonomous Data Warehouse](https://www.oracle.com/autonomous-database/autonomous-data-warehouse/) is a comprehensive solution that helps both seasoned data scientists and non-experts extract value from data. Its core objective is to gather data from various databases, cloud applications, data lakes, and other sources into a single data hub. The ingested data is then optimized and processed according to the predefined purposes of its use.

Oracle Autonomous Data Warehouse offers both automated and manually managed solutions. Users can operate on the data themselves, loading, cleansing, and analyzing it to discover outliers or hidden patterns. Alternatively, they can apply automated solutions that greatly reduce the human error common in manual data processing.

**Advantages of Oracle Autonomous Data Warehouse**

- Lowers data administration costs.
- Boosts query speed and performance for data extraction.
- Reduces the time needed for reporting derived from data patterns.

**Price:** Oracle Autonomous Data Warehouse uses a complex pricing model that depends on whether the focus is on computing power, storage, or networking capabilities.

## Snowflake

[Snowflake](https://www.snowflake.com/en/) is a completely cloud-based solution that sits on top of the most popular and reliable providers, such as AWS and Microsoft Azure. Snowflake is therefore an option for those who want to create and manage DWHs in a cloud environment.
With Snowflake, you can gather IoT data, OLTP databases, enterprise applications, and data from other sources, and organize it all properly within a single platform.

**Advantages of Snowflake**

- Provides effective scaling depending on your workload.
- Ensures high security levels for your data.
- Makes data sharing easy.

**Price:** Snowflake offers a free trial period during which you determine the storage and computing power you use on average. The Snowflake sales team then sets up a monthly pricing model for your business.

## Teradata

[Teradata](https://www.teradata.com/) is a data analytics tool that uses multiple AI and ML algorithms to process data. Teradata can be deployed on top of AWS, Microsoft Azure, or Google Cloud, extracting and processing the data located within these cloud providers. Teradata also works with hybrid and on-premises data warehouses.

**Advantages of Teradata**

- Suitable for large enterprises, as it handles enormous data workloads.
- Requires little maintenance effort.
- Uses parallel data processing to obtain analytics results faster.

**Price:** The company offers various pricing packages (Enterprise, Enterprise+, and Optimized Cloud), starting from a $9,000 monthly payment.

## SAP Datasphere

[SAP Datasphere](https://www.sap.com/products/technology-platform/datasphere.html) is a professional data warehouse solution that helps drive business decisions with advanced yet accessible technology. SAP Datasphere resides and operates exclusively in the cloud to deliver services to clients instantly.

SAP Datasphere integrates data from various locations and AI applications into a single environment. This greatly helps to analyze data in real time and obtain instant business insights critical for the company's operations.

**Advantages of SAP Datasphere**

- Provides dynamic scalability for your data warehouse.
- Ensures real-time analytical solutions based on business data.
- Grants effective cost management.

**Price:** SAP Datasphere offers a dynamic calculator where you enter the approximate number of data warehouse blocks needed along with the expected computing power. Scaling is always available, and the price is recalculated accordingly.

## SAS Cloud

[SAS Cloud](https://www.sas.com/en_us/solutions/cloud/sas-cloud.html) is designed specifically for setting up cloud data warehouses according to particular business needs. SAS Cloud uses a SaaS model to manage cloud DWHs, making it easy to configure infrastructure and operating systems.

SAS offers a range of products suitable for businesses in different industries. Below are two solutions that fit effective data warehouse management particularly well:

- **SAS Data Quality.** Checks data and provides suggestions on whether it should be cleaned, transformed, or remapped.
- **SAS Analytics Pro.** Uses statistical approaches to analyze data and represent it visually.

**Advantages of SAS Cloud**

- Offers a range of products for cloud-based data warehouses.
- Provides numerous high-quality educational materials to help you get started with SAS Cloud.

**Price:** You need to contact the SAS sales team directly for pricing details.

## How to Choose the Best Data Warehouse Solution

There is a myriad of data warehouse tools on the market; the important thing is to select the ones that suit your business perfectly. First, estimate your budget for data warehouse tools, define which objectives you plan to achieve with them, and determine the approximate data volume to be processed. Then weigh the following key criteria to pick the best data warehouse solution:

- **Type.** Depending on the security requirements for your data, select the DWH type: on-premises, cloud, or hybrid.
- **Tasks to solve.** The variety of tasks data warehouse tools can perform is enormous, and some solutions even have different components dedicated to various data-related operations. Pay close attention to the feature set each application provides.
- **Amount of data.** Almost every data warehouse application bases its pricing on the amount of storage and computing capacity used, so evaluate your current data workflow to select tools that fit your budget.
- **Compatibility with infrastructure.** According to [Inc.](https://www.inc.com/dana-severson/24-must-have-tools-for-running-a-growing-company-today.html), companies use 37 different digital tools on average in their workflow. Make sure those tools are compatible with the data warehouse applications you plan to implement.

Skyvia is a strong data warehouse tool choice because it meets all the criteria mentioned above. In particular, it:

- Suits various DWH types and sizes.
- Implements all the necessary data-related operations and procedures: import, replication, synchronization, streaming, export, querying, and backup.
- Offers a free plan for non-intensive workflows.
- Boasts around 180 connectors, so you can load and transfer data to and from cloud applications and data warehouses.

## Conclusion

The necessity of implementing data warehouse tools in your business workflow is undeniable. Luckily, there is an abundant choice of data warehouse tools compatible with cloud, hybrid, and on-premises infrastructures.

With Skyvia, you can streamline your business processes by aggregating data from more than 160 platforms into a DWH and analyzing it. Having Skyvia at hand also allows you to back up your data and recover it when needed. Check out Skyvia for free and enjoy the benefits this service brings to your business: [try it now](https://app.skyvia.com/register)!
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/). With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Top E-commerce Integration Software for Businesses

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), April 18, 2025
In the fast-paced e-commerce landscape, staying competitive means having the right tools in place to ensure smooth operations, streamline workflows, and deliver excellent customer experiences. One of the most powerful ways to achieve this is through e-commerce integration software, which serves as the backbone of modern online businesses.

E-commerce integration platforms bring together various systems, such as CRMs, ERPs, and marketing tools, allowing them to communicate and share data seamlessly. Whether you're managing inventory, processing orders, or running marketing campaigns, integration tools ensure that all your systems work in harmony. In this article, we'll explore the top e-commerce integration solutions that can elevate your business operations and enable scalable growth.

**Table of Contents**

- What is an E-commerce Integration Platform?
- Why E-commerce Integration is Essential for Online Businesses
- Website Platforms: Shopify, WooCommerce, Comparison
- Customer Support Software Integrations: Zendesk, Freshdesk, Comparison
- Email Marketing Integrations: Mailchimp, Klaviyo, Comparison
- Payment Gateways: Stripe, PayPal, Comparison
- CRM Management Tools: Salesforce, HubSpot, Comparison
- Shipping Integrations: ShipStation, Easyship, Comparison
- Benefits of Using Skyvia for E-commerce Integration
- Conclusion

## What is an E-commerce Integration Platform?

An e-commerce integration platform is a centralized solution that connects the various tools and systems in your business ecosystem, helping them work together seamlessly.
Whether it's your online store, CRM, ERP, payment gateways, inventory management, or shipping systems, such a platform automates the data flow between all these applications. This ensures that critical information, such as orders, customer details, product availability, and inventory levels, is synchronized across your entire business in real time. Think of it as the backbone of operations, ensuring everything from the front-end e-commerce store to back-end systems communicates smoothly.

By handling all the data exchanges automatically, these platforms eliminate the need for manual input, reducing the chance of errors. Instead of juggling multiple disconnected systems, businesses can manage everything from a single, unified platform. With everything connected and in sync, e-commerce businesses can streamline their workflows, improve productivity, and focus on what matters most: growth. Whether you're a small business or an enterprise-level operation, integration platforms help you save time, reduce errors, and operate at scale without the growing pains of manual data entry or disconnected systems.

## Why E-commerce Integration is Essential for Online Businesses

In the competitive world of online marketplaces, e-commerce integration isn't just a nice-to-have: it's a must. The digital landscape constantly evolves, and businesses that fail to integrate their systems quickly fall behind. Here's why e-commerce integration is essential:

- **Operational Efficiency.** With everything automated, manual tasks like order management, inventory updates, and customer communications are handled more efficiently. This reduces human error, speeds up operations, and ensures that your systems are always up to date.
- **Real-Time Data Synchronization.** Whether it's pricing, inventory levels, or order status, integrated systems ensure that every platform (website, CRM, ERP, etc.) reflects the latest information.
  Customers benefit from seeing real-time product availability, making their shopping experience smoother and more reliable.
- **Enhanced Customer Experience.** A seamless integration between front-end and back-end systems means a better overall customer journey. Customers get accurate product details, quick order processing, and up-to-date shipping information. This enhances trust, reduces friction, and increases the likelihood of repeat purchases.
- **Scalability and Growth.** You don't need to worry about adding new software or dealing with complex integrations. Automated workflows and centralized management help businesses scale: easily expand into new sales channels, marketplaces, or geographic regions.
- **Better Business Insights.** Integrated data from multiple sources gives businesses a comprehensive, holistic view of their operations. From sales analytics to customer behavior, having all your data in one place lets businesses optimize marketing strategies, improve inventory management, and, again, provide better customer service.
- **Competitive Advantage.** Businesses that can adapt quickly to changes have a distinct advantage. E-commerce integration enables this agility, allowing companies to respond faster to market shifts, offer new services, and maintain high standards of customer satisfaction.

In short, e-commerce integration is a must for staying competitive, enhancing customer experiences, and boosting growth. In a fast-paced business world, it's the key to reaching your full potential. Now, let's dive into some popular e-commerce tools and explore how they power different aspects of your business.

## Website Platforms

E-commerce platforms, when integrated with website builders, serve as the cornerstone of any successful online store. These powerful tools streamline critical operations by automating tasks such as:

- inventory management,
- order processing,
- product listing updates.
Businesses can synchronize data in real time across their online storefronts, third-party marketplaces, and even physical retail locations, ensuring accuracy in stock levels, pricing, and order statuses. This connectivity eliminates time-consuming manual processes, reduces the risk of errors like overselling, and enhances the overall customer experience. By simplifying these day-to-day operations, integrated website platforms free up valuable time and resources.

### Shopify

Shopify is one of the most popular e-commerce platforms, known for its ease of use and extensive app ecosystem. It integrates seamlessly with a wide array of services, making it a go-to solution for businesses of all sizes. Shopify's marketplace features thousands of apps, letting users connect everything from inventory management and shipping to marketing automation and analytics. Whether you're just getting started or scaling to an enterprise level, Shopify's robust capabilities let you customize and expand your store with minimal technical expertise.

**Main Features**

- User-friendly drag-and-drop website builder
- Integrated order and inventory management
- Multi-channel selling (online store, social media, marketplaces, POS)
- Vast app marketplace for additional functionality (shipping, marketing, accounting, etc.)
- Secure payment processing with Shopify Payments
- Real-time analytics and reporting

**Price**

- Starter: $5/month
- Basic: $29/month (billed annually)
- Shopify: $79/month (billed annually)
- Advanced: $299/month (billed annually)
- Plus: starts at $2,300/month

Transaction fees range from 2.9% + 30¢ to 2.4% per sale, with lower rates for using Shopify Payments.

**Best for**

Shopify's seamless integrations and low entry price point make it ideal for those looking for an all-in-one solution that's easy to set up and scale.

### WooCommerce

WooCommerce is a powerful, open-source e-commerce plugin designed to integrate seamlessly with WordPress.
This platform allows businesses to transform any WordPress site into a fully functional online store. WooCommerce stands out for its flexibility, offering a wide variety of extensions that cater to different business needs. Whether you're looking for a basic online store or a more complex setup, WooCommerce's vast customization options let you build a store that fits your unique requirements. Being open-source, WooCommerce is perfect for developers who want complete control over their site's functionality.

**Main Features**

- Deep integration with WordPress for streamlined content and e-commerce management
- Support for essential payment gateways and comprehensive order/customer management
- Flexible product listing and inventory management tools
- Extensive library of free and paid extensions (subscriptions, bookings, memberships, etc.)
- Open-source and highly customizable for developers

**Price**

- Core WooCommerce plugin: free
- Additional costs for premium extensions, advanced payment gateways, and specialized features, depending on store requirements

**Best for**

WooCommerce offers the ultimate flexibility, ideal for businesses that want to tailor every aspect of their store and are comfortable with WordPress's open-source environment. While it's free to start, additional costs for premium features can add up depending on the customization required.

### Website Platforms Comparison

Both Shopify and WooCommerce provide efficient ways to manage inventory and orders. Shopify is the go-to for businesses seeking an out-of-the-box, managed solution, while WooCommerce is perfect for those wanting more control and flexibility, especially if they are already familiar with the WordPress ecosystem. The choice between the two depends largely on your business needs and level of technical expertise.
| Feature | Shopify | WooCommerce |
| --- | --- | --- |
| Platform Type | Hosted, all-in-one e-commerce platform | Open-source plugin for WordPress |
| Ease of Use | User-friendly, drag-and-drop builder, minimal technical skills needed | Requires WordPress knowledge and more technical expertise for customization |
| Key Features | Multi-channel selling (online, social, marketplaces, POS); vast app marketplace; integrated order/inventory management; secure Shopify Payments | Deep WordPress integration; flexible product/inventory tools; extensive extensions library; highly customizable |
| Customization | Extensive via apps, limited code-level control | Full control, highly customizable for developers |
| Pricing | Starter: $5/mo; Basic: $29/mo; Advanced: $299/mo; Plus: $2,300+/mo; transaction fees: 2.4%-2.9% + 30¢ | Core plugin: free; additional costs for extensions, themes, and hosting |
| Best For | Businesses wanting an easy-to-use, scalable, all-in-one solution | Developers or businesses needing flexibility and WordPress integration |
| Notes | Ideal for quick setup and scalability with minimal technical effort | Suits those comfortable with WordPress and seeking deep customization |

## Customer Support Software Integrations

Delivering outstanding customer service is essential for building lasting loyalty and driving repeat business. Integrating customer support software with your e-commerce platform transforms how you manage customer interactions, significantly enhancing the overall experience. These integrations consolidate inquiries from multiple channels, such as email, live chat, social media, and phone, into a single, unified system, enabling support teams to respond with greater speed and precision. With direct access to critical data like customer orders, purchase histories, and profiles, support managers can address issues faster and more accurately.
This reduces resolution times, minimizes customer frustration, and creates a seamless, personalized experience. By automating repetitive tasks and centralizing communication, these tools free up your team to focus on building stronger relationships, fostering trust, and turning one-time buyers into loyal advocates for your brand.

### Zendesk

Zendesk is one of the most robust and popular customer support platforms. It integrates seamlessly with e-commerce platforms, allowing businesses to deliver consistent, high-quality support across multiple channels. It centralizes all customer inquiries, from email to chat, social media to phone, into a single dashboard, making it easier for support teams to track, manage, and respond to requests quickly. This integration not only improves response times but also boosts customer satisfaction by providing timely solutions tied directly to the customer's orders and account details.

**Main Features**

- Omnichannel ticketing system (email, chat, social media, voice)
- Automated ticketing and workflow management to streamline operations
- Advanced reporting and analytics to track team performance and customer satisfaction
- Seamless integration with e-commerce platforms for real-time access to order and customer data
- Add-ons for enhanced capabilities, including AI-powered triage, workforce management, and quality assurance

**Price**

- Zendesk Support: starts at $5/agent/month for basic plans
- Zendesk Suite: $49/agent/month for comprehensive omnichannel support
- Add-ons: Workforce Management at $25/agent/month, Advanced AI at $50/agent/month, Quality Assurance starting at $25/agent/month

**Best for**

Zendesk is ideal for businesses of all sizes that need to scale their customer service operations and provide a unified experience across all channels.

### Freshdesk

For small businesses or those just starting to scale, Freshdesk offers a powerful and cost-effective customer support solution with a strong suite of integration options.
Freshdesk automates ticketing, provides multi-channel support, and enables businesses to set up workflows that streamline support processes. Additionally, with over 1,000 apps available through its marketplace, Freshdesk offers significant flexibility to extend functionality as needed. It's an affordable option for businesses that want to automate their support systems and offer responsive, personalized service without a hefty price tag.

**Main Features**

- Integrated ticketing system supporting email, social media, and live chat
- Powerful automation and customizable workflows to reduce manual work
- Knowledge base to assist both customers and agents with self-service
- SLA management to ensure timely response and resolution
- Marketplace with over 1,000 integrations to extend functionality

**Price**

- Free plan: for up to 10 agents, basic support features
- Growth: $18/agent/month, with added features for team collaboration and automation
- Pro: $59/agent/month, with advanced reporting, SLA management, and custom automations
- Enterprise: $95/agent/month, with enterprise-level customization and analytics

**Best for**

Freshdesk's pricing structure makes it a great choice for growing businesses, with the flexibility to scale as your customer service needs increase.

### Customer Support Software Comparison

Whether you're looking for an omnichannel solution like Zendesk or a flexible, budget-friendly option like Freshdesk, both platforms offer robust features to help you meet your customer service goals. The right choice depends on your business's size, scale, and specific needs, so weigh the pros and cons and choose the tool that best aligns with your support strategy.
| Feature | Zendesk | Freshdesk |
| --- | --- | --- |
| Platform Type | Robust omnichannel customer support platform | Flexible, cost-effective customer support solution |
| Ease of Use | Intuitive, centralized dashboard for all channels | User-friendly, ideal for small businesses with simpler setups |
| Key Features | Omnichannel ticketing (email, chat, social, voice); automated workflows; advanced analytics; e-commerce platform integrations | Multi-channel ticketing (email, chat, social); automation and workflows; knowledge base; 1,000+ marketplace integrations |
| Customization | Extensive add-ons (AI, workforce management, quality assurance) | Highly flexible via marketplace apps and custom workflows |
| Pricing | Support: $5/agent/mo; Suite: $49/agent/mo; add-ons: $25-$50/agent/mo | Free: up to 10 agents; Growth: $18/agent/mo; Pro: $59/agent/mo; Enterprise: $95/agent/mo |
| Best For | Businesses needing scalable, omnichannel support with advanced features | Small to growing businesses seeking affordable, flexible support solutions |
| Notes | Excels for businesses prioritizing omnichannel support and advanced analytics | Ideal for cost-conscious businesses with simpler needs and scalability |

## Email Marketing Integrations

Email marketing remains a cornerstone for building meaningful connections with customers and driving consistent sales growth. Integrating email marketing platforms with your e-commerce system unlocks powerful opportunities to automate campaigns, deliver highly personalized content, and engage your audience with timely, relevant messages tailored to their needs. By leveraging data such as shopping behaviors, purchase histories, abandoned carts, and browsing patterns, these integrations enable businesses to craft targeted emails that resonate deeply with customers, enhancing their experience at every touchpoint.
This seamless connection streamlines the entire customer journey, from nurturing leads to re-engaging past buyers, all while reducing the need for manual intervention. The result is increased engagement, higher conversion rates, and stronger, more loyal customer relationships.

### Mailchimp

Mailchimp has long been a go-to platform for businesses looking to integrate email marketing seamlessly with their e-commerce operations. Known for its user-friendly interface and powerful automation features, Mailchimp allows businesses to design personalized campaigns based on customer data and interactions. It integrates smoothly with many popular e-commerce platforms, providing tailored messaging for everything from welcome emails to product recommendations and cart recovery. Whether you're a small startup or a growing enterprise, Mailchimp offers the tools to optimize your email marketing and drive sales.

**Main Features**

- Drag-and-drop email builder for easy design
- Automated campaigns and customer journeys for streamlined communication
- Audience segmentation and personalization to target specific groups
- E-commerce integrations for abandoned cart reminders, product recommendations, and more
- In-depth analytics and reporting to track campaign performance

**Price**

- Free plan: available for businesses with up to 500 subscribers
- Essentials: from $13/month, for basic email features
- Standard: from $20/month, includes advanced automation and reporting
- Premium: from $350/month, for large businesses with more advanced needs

**Best for**

Mailchimp's flexibility and scalability make it a popular choice for businesses of all sizes, whether you're just starting or looking to optimize your email marketing strategy.

### Klaviyo

For businesses that need advanced email automation with deep e-commerce integrations, Klaviyo is the platform to consider. Klaviyo is a powerful tool known for its robust segmentation capabilities and real-time customer data analytics.
It offers personalized messaging based on real-time interactions, allowing businesses to send highly relevant emails and SMS messages to customers. With Klaviyo, you can create sophisticated automation flows and dynamic segments, and leverage predictive analytics to anticipate customer behavior and optimize campaigns.

**Main Features**

- Advanced automation flows designed for e-commerce businesses
- Dynamic segmentation and predictive analytics to drive personalized messaging
- Integration with over 350 platforms, including major e-commerce platforms like Shopify, Magento, and BigCommerce
- Ability to run both email and SMS marketing campaigns from a single platform
- Comprehensive reporting and insights to optimize marketing performance

**Price**

- Free for up to 250 active profiles
- Email plan: starts at $20/month for 251-500 profiles
- Email & SMS: starts at $35/month for 251-500 profiles, includes SMS credits

**Best for**

Klaviyo's rich feature set and advanced capabilities make it an excellent choice for businesses looking to take their email marketing to the next level, especially those that rely heavily on customer segmentation and data-driven insights.

### Email Marketing Comparison

Both Mailchimp and Klaviyo offer powerful email marketing integrations for e-commerce businesses, but they serve different needs. While Mailchimp is ideal for companies looking for a user-friendly, scalable solution with solid automation features, Klaviyo offers deeper integrations and advanced segmentation capabilities for businesses that require more personalization and real-time, data-driven insights.
| Feature | Mailchimp | Klaviyo |
| --- | --- | --- |
| Platform Type | User-friendly email marketing platform | Advanced email and SMS marketing with a deep e-commerce focus |
| Ease of Use | Intuitive drag-and-drop builder, ideal for beginners | More complex, suited for data-driven marketers |
| Key Features | Drag-and-drop email builder; automated campaigns; audience segmentation; e-commerce integrations (abandoned cart, recommendations); detailed analytics | Advanced automation flows; dynamic segmentation; predictive analytics; email & SMS campaigns; 350+ integrations |
| Customization | Flexible templates and segmentation, limited advanced analytics | Highly customizable with real-time data and predictive tools |
| Pricing | Free: up to 500 subscribers; Essentials: $13/mo; Standard: $20/mo; Premium: $350/mo | Free: up to 250 profiles; Email: $20/mo (251-500 profiles); Email & SMS: $35/mo (251-500 profiles) |
| Best For | Businesses seeking an easy-to-use, scalable email marketing solution | E-commerce businesses needing advanced segmentation and real-time insights |
| Notes | Great for beginners and businesses prioritizing simplicity and affordability. | Excels for data-driven e-commerce businesses requiring sophisticated automation and personalization. |

## Payment Gateways

A frictionless checkout experience is paramount for maximizing conversion rates, enhancing customer satisfaction, and fostering long-term loyalty. Payment gateway integrations empower businesses to offer various payment options, accommodate multiple currencies, and adhere to rigorous security standards, ensuring a seamless and trustworthy customer experience worldwide. By prioritizing ease of use and robust security, payment gateways not only build credibility but also strengthen customer relationships, laying the foundation for sustained business growth.
Let\u2019s dive into two of the most trusted and widely used payment gateway solutions, Stripe and PayPal, and explore how they drive e-commerce success. Stripe For businesses looking for a flexible, developer-friendly solution to integrate payments, Stripe stands out as a top choice. Its powerful API allows for custom [e-commerce payment setups](https://skyvia.com/blog/stripe-salesforce-integration/) , giving businesses complete control over payment processing and checkout flows. Whether you\u2019re offering subscriptions, one-time purchases, or handling complex payment scenarios across multiple currencies, Stripe has the tools to accommodate. Known for its seamless integration capabilities, Stripe ensures that transactions are secure, quick, and straightforward. Main Features Developer-friendly API for custom integrations and flexible payment solutions Supports various payment methods, including credit/debit cards, wallets, and local payment options Built-in real-time fraud prevention tools to protect against malicious transactions Advanced reporting and analytics to track payment performance and business insights Secure and PCI-compliant infrastructure for enhanced security Price No setup or monthly fees Standard processing fee: 2.9% + 30\u00a2 per successful card transaction (US rates; fees may vary by region and payment method) Best for Stripe\u2019s versatility and robust security make it an excellent choice for growing e-commerce businesses or those needing highly customized payment processing. Whether you\u2019re just starting out or expanding internationally, Stripe provides the tools and scalability required for success. PayPal PayPal has long been one of the most trusted names in the global payments space, offering seamless integration with most e-commerce platforms. It\u2019s a go-to solution for businesses that want to provide a familiar and secure payment option for customers worldwide. 
PayPal\u2019s broad reach \u2014 available in over 200 markets \u2014 combined with buyer and seller protection programs, makes it a popular choice for merchants who want to provide peace of mind to their customers while keeping their payment processing simple. Main Features Easy integration with major e-commerce platforms like Shopify, WooCommerce, and Magento Accepts payments from PayPal accounts, credit/debit cards, and local payment methods Robust buyer and seller protection programs to ensure secure transactions Supports international transactions in over 200 markets and more than 100 currencies Quick and easy setup with no hidden fees Price No setup or monthly fees Standard processing fee: Typically 2.9% + a fixed fee per transaction (varies by country) Best for PayPal remains a staple in the e-commerce world because of its ease of use, strong customer protection, and global presence. For businesses seeking a straightforward, widely recognized payment solution, PayPal offers a solid choice with minimal hassle. Payment Gateways Comparison Choosing the right payment gateway is critical for any business aiming to provide a smooth, secure checkout experience. While Stripe is ideal for companies needing a customizable, developer-friendly solution, PayPal shines with its vast global reach and simple integration process. Both platforms offer robust features, making them top contenders for businesses looking to elevate their payment processing capabilities. 
| Feature | Stripe | PayPal |
| --- | --- | --- |
| Platform Type | Developer-friendly payment gateway | Globally trusted payment gateway |
| Ease of Use | Requires technical expertise for custom setups | Simple, quick integration for non-technical users |
| Key Features | Flexible API for custom integrations; multiple payment methods; real-time fraud prevention; advanced analytics; PCI-compliant | Easy platform integrations; PayPal, card, and local payments; buyer/seller protection; 200+ markets, 100+ currencies |
| Customization | Highly customizable via API | Limited customization, standardized checkout |
| Pricing | 2.9% + 30¢ per transaction (US; varies by region) | 2.9% + fixed fee per transaction (varies by country) |
| Best For | Businesses needing custom payment flows and scalability | Businesses seeking simple, trusted, global payment solutions |
| Notes | Ideal for tech-savvy businesses requiring tailored payment solutions. | Excels in ease of use, global reach, and customer trust. |

## CRM Management Tools

Integrating Customer Relationship Management (CRM) systems with your platform is a game-changer for building and sustaining meaningful customer relationships. By centralizing data from diverse touchpoints, such as purchase histories, website interactions, support tickets, and marketing campaigns, CRMs provide a comprehensive, 360-degree view of each customer. This holistic perspective enables businesses to track interactions seamlessly, deliver highly personalized experiences at every stage of the customer journey, and craft targeted marketing strategies that resonate deeply. The benefits are transformative: enhanced customer loyalty, improved sales performance, more efficient operations, and the ability to anticipate customer needs with precision.
### Salesforce

Salesforce is a powerhouse in the world of customer relationship management, offering a robust set of tools designed to help businesses engage with customers, automate sales processes, and leverage data for actionable insights. By integrating Salesforce with your e-commerce platform, you can unify customer data, streamline workflows, and offer highly personalized experiences across channels. It’s particularly beneficial for companies looking to scale and automate customer service and marketing tasks, thanks to its seamless integration with popular e-commerce platforms.

**Main Features**

- Centralized customer database that consolidates data across all touchpoints
- Sales, marketing, and service automation for increased efficiency
- Advanced analytics and reporting to track customer behavior and business performance
- Seamless integration with leading e-commerce platforms like Shopify, Magento, and WooCommerce
- Customizable workflows tailored to specific business needs

**Price**

- Pricing varies depending on edition and features
- The Essentials plan starts around $25/user/month (pricing may vary by region and features)

**Best for**

Salesforce’s powerful automation capabilities, combined with its deep integration options, make it an ideal choice for businesses looking to enhance their customer relationships and streamline their operations.

### HubSpot

HubSpot offers an intuitive, all-in-one CRM platform designed to help businesses connect with their customers, automate marketing campaigns, and drive sales growth. Its seamless integration with e-commerce platforms makes it a top choice for businesses seeking to engage customers and track their interactions across all stages of the buying journey. HubSpot’s CRM allows you to manage contacts and deals, automate email campaigns, and even integrate with payment gateways to streamline sales processes.
**Main Features**

- Contact and deal management with personalized customer tracking
- Marketing automation tools for targeted email campaigns and lead nurturing
- Integration with e-commerce platforms and payment gateways for a unified view of customer interactions
- Free CRM with paid add-ons for advanced features (Marketing, Sales, and Service Hubs)
- Customizable dashboards and reporting tools

**Price**

- Free CRM available
- Paid plans (Marketing, Sales, and Service Hubs) start at $20–$50/month per user (pricing varies by features and scale)

**Best for**

HubSpot’s user-friendly interface and scalable functionality make it an excellent choice for businesses of all sizes looking to grow and maintain meaningful customer relationships. Whether you’re just starting or expanding, HubSpot’s CRM tools can help you take your e-commerce business to the next level.

### CRM Management Tools Comparison

In summary, integrating CRM systems like [Salesforce and HubSpot](https://skyvia.com/blog/hubspot-salesforce-integration/) with your e-commerce platform enhances customer relationship management, improves sales conversion, and enables businesses to deliver personalized marketing strategies. By streamlining workflows and automating key processes, these CRM tools help businesses build stronger, long-term customer relationships while driving growth.
| Feature | Salesforce | HubSpot |
| --- | --- | --- |
| Platform Type | Robust, enterprise-grade CRM | Intuitive, all-in-one CRM |
| Ease of Use | Complex, suited for larger businesses with technical expertise | User-friendly, ideal for businesses of all sizes |
| Key Features | Centralized customer database; sales, marketing, and service automation; advanced analytics; e-commerce platform integrations; customizable workflows | Contact/deal management; marketing automation; e-commerce/payment integrations; customizable dashboards; free CRM option |
| Customization | Highly customizable, tailored for complex needs | Flexible, with scalable add-ons for growing businesses |
| Pricing | Essentials: ~$25/user/mo (varies by edition) | Free CRM; paid plans: $20–$50/mo per user (varies by features) |
| Best For | Large businesses needing advanced automation and scalability | Small to mid-sized businesses seeking simplicity and scalability |
| Notes | Ideal for enterprises with complex needs and robust integration requirements. | Excels for smaller businesses or those prioritizing ease of use and affordability. |

## Shipping Integrations

Shipping integrations are essential for delivering exceptional customer experiences and optimizing order fulfillment. By seamlessly connecting your online store with leading carriers and logistics providers, these integrations automate key shipping tasks, such as label generation, rate calculation, and delivery scheduling, while enabling faster delivery times and providing real-time tracking updates to customers. This streamlined, technology-driven approach minimizes manual effort, reduces errors, and ensures transparency throughout the delivery process, fostering trust and reliability. Ultimately, these tools empower e-commerce brands to create a seamless post-purchase experience, strengthen customer loyalty, and lay a robust foundation for long-term growth.
### ShipStation

ShipStation is a powerful tool for automating and streamlining your e-commerce shipping operations. Its seamless integrations with top e-commerce platforms like Shopify, WooCommerce, and BigCommerce make it easier to manage orders, print labels, and track deliveries from one centralized dashboard. Whether you’re shipping domestically or internationally, ShipStation helps you stay on top of your shipping tasks, improving efficiency and reducing manual work. It’s an excellent choice for businesses that need to simplify their shipping process while scaling their operations.

**Main Features**

- Multi-carrier shipping management for flexibility and cost-effectiveness
- Batch label printing and customizable automation rules to save time
- Branded tracking pages to enhance the customer experience
- Integration with major e-commerce platforms and marketplaces for streamlined operations

**Price**

- Plans start at $9.99/month (pricing scales based on shipment volume and features)

**Best for**

ShipStation’s versatility and user-friendly interface make it a great fit for businesses looking to optimize their shipping workflows while maintaining full control over order fulfillment and tracking.

### Easyship

Easyship offers a global shipping solution tailored to e-commerce businesses of all sizes. It simplifies international logistics by integrating with over 250 shipping solutions worldwide, allowing businesses to access real-time shipping rates, taxes, and customs documentation within one platform. By automating many of the tedious shipping tasks, Easyship makes it easier to scale your business globally while keeping costs in check. Whether you’re shipping locally or internationally, Easyship provides the tools to help businesses stay competitive in the global marketplace.
**Main Features**

- Real-time shipping rates and taxes for accurate pricing
- Integration with over 250 shipping solutions worldwide, including major carriers
- Automated customs documentation for faster international shipments
- Delivery tracking and analytics for full visibility into shipment status

**Price**

- Free plan available
- Paid plans start at $29/month (pricing varies by shipment volume and features)

**Best for**

Easyship stands out for its global shipping capabilities and cost-effective solutions, making it a top choice for e-commerce businesses seeking to expand their reach and optimize their shipping processes.

### Shipping Integrations Comparison

Integrating shipping solutions like ShipStation and Easyship into your e-commerce platform not only optimizes the order fulfillment process but also enhances the overall customer experience. By automating shipping tasks and ensuring timely deliveries, these tools help businesses reduce operational costs, improve shipping efficiency, and increase customer satisfaction. With real-time tracking and seamless integrations, these shipping solutions help businesses scale smoothly while maintaining a high level of service.
| Feature | ShipStation | Easyship |
| --- | --- | --- |
| Platform Type | Centralized shipping management tool | Global shipping solution |
| Ease of Use | User-friendly dashboard for streamlined operations | Intuitive platform with a global focus |
| Key Features | Multi-carrier management; batch label printing; automation rules; branded tracking pages; e-commerce integrations | Real-time rates and taxes; 250+ shipping solutions; automated customs docs; delivery tracking and analytics |
| Global Reach | Strong domestic and international support | Extensive global focus with 250+ carriers |
| Pricing | Starts at $9.99/mo (scales with shipment volume) | Free plan; paid plans start at $29/mo (varies by volume) |
| Best For | Businesses needing efficient, scalable domestic/international shipping | Businesses focused on global expansion and cost-effective shipping |
| Notes | Ideal for businesses prioritizing automation and a centralized shipping workflow. | Excels for global e-commerce with extensive carrier options and customs automation. |

## Benefits of Using Skyvia for E-commerce Integration

[Skyvia](https://skyvia.com/) is a versatile, cloud-based data integration platform that simplifies the complex task of connecting different services. Whether you’re integrating your e-commerce platform with a CRM, ERP, marketing tools, or payment gateways, Skyvia provides a seamless solution without the need for complex coding or specialized developers. This makes it a good fit for businesses of all sizes and technical skill levels, from small startups to large enterprises.

**Key Benefits of Skyvia for E-commerce Integration**

- **No-Code, User-Friendly Integration**: With an intuitive drag-and-drop interface, setting up and managing integrations becomes a breeze. There’s no need for programming expertise, allowing your team to focus on what matters most: running and growing the business.
This user-friendly setup reduces onboarding time and removes the need for hiring or relying on developers for integrations.
- **Comprehensive Connectivity**: Skyvia supports a wide array of data sources and destinations ([over 200 connectors](https://skyvia.com/connectors)), including popular e-commerce platforms like Shopify, WooCommerce, and Magento, as well as databases, cloud applications, and file storage services. This ensures smooth data flows across the business ecosystem.
- **Real-Time Data Synchronization**: The platform offers real-time or scheduled data synchronization, ensuring critical business data such as inventory, customer information, and order statuses is always up to date. This helps prevent issues like stockouts, overselling, and customer dissatisfaction, which is especially important during peak periods like Black Friday and Cyber Monday.
- **Secure Data Backup and Reliability**: Besides streamlining integration, Skyvia offers automated, secure backup features, ensuring your valuable data is protected against loss. All data transfers are encrypted, and the platform complies with industry security standards, supporting data reliability and business continuity.
- **Enhanced Operational Efficiency**: Skyvia automates routine data management tasks, reducing the time spent on manual updates. This reduces human error and cuts operational costs. By automating these tasks, your team can focus on scaling the business and improving the customer experience.
- **Personalized Customer Experience**: With Skyvia, your data remains unified and up to date, allowing you to craft personalized marketing campaigns, product recommendations, and targeted customer support. This enables more engaging customer interactions, driving higher loyalty and conversion rates.
- **Scalability and Flexibility**: As your business grows, Skyvia scales with you.
The platform is designed to handle large volumes of data seamlessly, offering the flexibility to grow without constant reconfiguration or infrastructure upgrades.

Experience next-level e-commerce integration with Skyvia: seamlessly connect your storefront, CRMs, marketing tools, and databases in just a few clicks. [Watch our step-by-step video tutorials](https://www.youtube.com/@SkyviaPlatform/videos) to see how easily you can set up and maximize Skyvia’s e-commerce integration capabilities! Watch the [Zendesk and Salesforce](https://skyvia.com/blog/salesforce-zendesk-integration/) integration below, or look for more tutorials on the Skyvia YouTube channel.

## Conclusion

With integration, businesses gain a 360-degree view of their customers, enabling personalized experiences, smarter marketing, and proactive service. Real-time inventory synchronization prevents overselling and stockouts, while automated data management frees teams to focus on growth instead of troubleshooting disconnected systems. Solutions like Skyvia stand out by making integration accessible to all skill levels through a no-code, cloud-based platform that connects over 200 data sources and automates even complex workflows, without the need for IT resources or custom development. This empowers businesses to scale confidently, adapt quickly to market changes, and deliver seamless customer experiences across every channel. If you’re ready to move beyond manual processes and unlock the full potential of your e-commerce data, Skyvia offers a comprehensive, user-friendly solution to help you thrive.

## F.A.Q. for E-commerce Integration

**What is e-commerce integration software?**

E-commerce integration software connects your online store with other business systems, such as ERP, CRM, inventory, shipping, and marketing tools, to automate data flow, synchronize information, and streamline operations across your entire tech stack.
**Why is e-commerce integration important for my business?**

Integration eliminates manual data entry, reduces errors, ensures real-time data accuracy, and enables you to deliver a seamless customer experience. It also supports omnichannel selling, helps scale your business, and allows for faster, data-driven decision-making.

**What are common types of e-commerce integrations?**

Common integrations include connections with:

- Inventory management systems
- Payment gateways
- Shipping and fulfillment solutions
- [CRM and ERP](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) platforms
- Email marketing and analytics tools

**How does e-commerce integration improve customer experience?**

By synchronizing data across systems, integration ensures accurate product availability, order tracking, and personalized marketing, resulting in faster service and higher customer satisfaction.

**Can e-commerce integration software scale as my business grows?**

Yes, many integration platforms are designed to scale with your business, allowing you to add new sales channels, systems, or features as needed without major disruptions.

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/). With years of experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers.
---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# 9 Best ETL Tools for Salesforce in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | March 24, 2025

Salesforce is only a CRM tool; the true lifeblood of any business is its data. To make that data work to your benefit, it must be accurate and consistent across all your systems. That’s where Salesforce ETL tools come in. In this article, we’ll show:

- Why integrate data to and from Salesforce?
- What role does the data warehouse play in turning data into practical insights?
- How [ETL tools](https://skyvia.com/blog/etl-tools/) for Salesforce can overcome data integration challenges.
- Which is the best ETL tool for Salesforce?

**Table of Contents**

- What is Salesforce?
- Why Salesforce Is Not the Same As a Data Warehouse
- How Data Integration from Salesforce to a Data Warehouse Can Help
- Common Salesforce Data Integration Challenges
- What are ETL Tools?
- What are ETL Tools in Salesforce?
- 9 Best ETL Tools for Salesforce in 2025
  - Skyvia
  - ETLeap
  - Stitchdata
  - Workato
  - Fivetran
  - Lyftron
  - Segment
  - Integrate.io
  - Matillion ETL
- How to Choose the Right Salesforce ETL Tool
- Best Practices for Salesforce ETL
- Conclusion

## What is Salesforce?

It’s a robust cloud-based customer relationship management (CRM) platform that, for 20+ years, has gone far beyond this concept and turned into a unique, self-sufficient ecosystem. The service includes products for various purposes, such as:

- Sales and marketing automation.
- Application development.
- Advanced data analysis.

Thanks to its infrastructure and agility, the company has remained the #1 CRM provider for the eighth year in a row.

## Why Salesforce Is Not the Same As a Data Warehouse

Your business data must be consolidated and easily retrievable to get a 360-degree view of customer interactions. Unfortunately, despite all of Salesforce’s apparent strengths, it has data storage and management limitations.

### Multi-Tenant Nature

Since Salesforce is a cloud-based technology, many organizations share the same platform resources. To make this coexistence possible, each company is limited in the data storage capacity of its system. As a result, the default allocation is usually not enough for medium and large organizations, and expanding this storage space costs extra.

### Slow Data Transfer Time

Since the amount of data each company stores varies significantly, response times to specific queries may also vary. Some calls can take hours to pull the necessary data and create a comprehensive report.

### Per-Transaction Limits

Salesforce has query limits that restrict manipulations with extensive data flows within the CRM. So, if you need to import or export many records daily, you can quickly hit these limits.
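These per-call caps are why data loaders typically batch records client-side before calling the API. The sketch below is illustrative (the helper is hypothetical, and the 200-record cap reflects Salesforce’s sObject Collections request limit; exact limits depend on the API and your edition, so check the docs).

```python
from typing import Iterator, List

# Hypothetical helper: split records into batches so each API call stays
# under a per-request record cap (e.g., 200 records for Salesforce
# sObject Collections; verify the limit for your API and edition).

def batched(records: List[dict], batch_size: int = 200) -> Iterator[List[dict]]:
    """Yield successive batches of at most `batch_size` records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

records = [{"Name": f"Account {i}"} for i in range(450)]
batches = list(batched(records))
print(len(batches), [len(b) for b in batches])  # 3 [200, 200, 50]
```

Each batch would then be sent as one API request, keeping daily call counts predictable instead of issuing one call per record.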
### Complex Data Relationships

Salesforce organizes data using a structured relational model built around objects, relationships, and dependencies. This structure is optimized for CRM operations like managing leads, opportunities, and activities. However, performing complex analytics across multiple objects is challenging: running multi-level joins or aggregations across related objects often requires custom reports, workarounds, or external tools. In contrast, data warehouses are designed with query performance and analytical flexibility in mind, allowing for simplified relationships, denormalization, and schema optimization. Salesforce data often needs to be extracted, transformed, and restructured to make it analytics-ready, especially when combining it with other data sources or running advanced BI and machine learning workflows.

### Storage Costs and Limits

Salesforce imposes strict storage limits on records, files, and attachments, making it costly to retain large datasets within the platform. Although additional storage is available, it comes at a significant premium, especially for high-growth businesses. These constraints often force companies to offload historical or less frequently accessed data to external storage or cloud data warehouses. As a result, ETL and data integration solutions become a practical way to manage storage efficiently while maintaining access to valuable business data.

## How Data Integration from Salesforce to a Data Warehouse Can Help

The good news is that data warehousing solutions can prevent the issues mentioned above. Amazon Redshift, Snowflake, and Google BigQuery are the most famous examples. They are designed to accumulate and analyze large data volumes from databases and other sources. Thanks to this, you can:

- Get a unified view of customer information by eliminating unnecessary data layers.
- Increase efficiency and become a true master of your business data.
- Improve data quality, thanks to the consolidation of sources.
Since Salesforce wasn\u2019t primarily created as a data repository, you need to build a proper Salesforce warehouse synchronization strategy and data integration model to leverage the full power of these two software types. In this way, you should deal with specific challenges. Common Salesforce Data Integration Challenges Salesforce users often consider data integration as their primary challenge. Before\u00a0 launching this process, you should see the risks you may face, such as: No understanding of the data integration scope. Lack of clear integration strategy. Misselection of the integration solution. Poor data quality and data gaps. Mismatch of data and field types in systems. No data validation rules. Lack of understanding of objects\u2019 relationship within solutions. Underestimation of systems and processes complexity. As mentioned earlier, you can mitigate most troubles by creating a data quality policy and choosing the best ETL tool for Salesforce. What are ETL Tools? These systems are a must-have in modern data management and integration. Extract, Transform, and Load (ETL) tools help organizations: Gather data from various sources. Transform it into a consistent and usable format. Load it into target systems such as databases, data warehouses, or cloud storage. ETL platforms automate and streamline this multi-step process, making it easier for businesses to maintain clean, organized, and analytics-ready data. They support business intelligence, reporting, machine learning, and operational efficiency. What are ETL Tools in Salesforce? Now, we\u2019re talking about the most popular ones when connecting a DWH with the Salesforce platform. They help automate the management of the ETL pipelines and standardize data for further analysis. Simply put, these platforms extract the data from several sources, integrate it with the database, and store it with/without transformations in a data warehouse. 
Moreover, ETL tools can test the data pipelines and analyze the reports of the performed runs. Over 150 ETL tools on the market can work with Salesforce. Some are designed for beginners with no programming background; others require some coding skills. We’ve carefully analyzed the pros and cons of the market leaders to present a detailed review below.

## 9 Best ETL Tools for Salesforce in 2025

### Skyvia

[Skyvia](https://skyvia.com/data-integration) is a powerful integration platform for solving different data-related tasks, including [ETL](https://skyvia.com/learn/what-is-elt?) from Salesforce to a data warehouse and vice versa, [ELT](https://skyvia.com/learn/what-is-elt) from Salesforce to a data warehouse, and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl). It allows users to quickly copy Salesforce data to a data warehouse as is or configure an ETL process with data transformations to fit an existing schema. When needed, the data can be uploaded back to Salesforce or even published as OData and linked to Salesforce as external objects.

**PROS**

- Easy no-coding tools to copy cloud data to a database or data warehouse.
- Over [200](https://skyvia.com/connectors) connectors.
- Advanced transformation capabilities with powerful mapping.
- A number of different data-related tools in one platform.
- Ability to load data between cloud apps and databases in any direction.
- Comprehensive documentation and tutorials.

**CONS**

- No support for NoSQL databases like MongoDB.
- No 24/7 support.
- No on-premise option.

### ETLeap

[ETLeap](https://etleap.com/) is also one of the top Salesforce [data extraction tools](https://skyvia.com/blog/top-data-extraction-tools/) that helps connect data from various sources to data warehouses like Redshift or [Snowflake](https://skyvia.com/blog/snowflake-to-salesforce-integration). It’s designed for users with no development skills, so any data analyst can easily configure it without heavy coding.
Like many other leading big data tools on the market, it’s cloud-based, so you don’t have to involve IT resources to install and support it.

**PROS**

- Has 75+ connectors.
- No-code ETL functionality is analyst-friendly.
- Allows pre-load transformation via Data Wrangler and SQL.
- Has an on-premise option (AWS VPC).

**CONS**

- Some custom API connectors (via user-defined API) or advanced use cases may require light scripting or manual configuration.
- Enterprise users may face a learning curve when dealing with advanced data modeling or integrations.

### Stitchdata

[Stitchdata](https://www.stitchdata.com/) by Talend is a robust, cloud-first ETL service for Salesforce users of all business sizes. It enables high-volume data transfers from cloud-based apps and databases to data warehouses, letting users leverage transformed and unified data for reporting and better decision-making. With its simple setup and prebuilt connectors, the platform makes it easy to launch data pipelines without extensive engineering resources.

**PROS**

- Has 130 nearly-native connectors.
- Can be integrated with ten popular data warehousing solutions.
- Allows adding new data sources via Singer.
- Has detailed documentation.

**CONS**

- Programmatically managing a Stitch account or building custom integrations requires the Stitch Connect API.
- Certain APIs (e.g., QuickBooks) perform hard deletes of objects, and Stitchdata replicates them.

### Workato

[Workato](https://www.workato.com/) is a cloud [integration platform as a service](https://skyvia.com/blog/what-is-ipaas/) (iPaaS) that helps organizations integrate thousands of apps with Salesforce and consolidate data to turn it into actionable insights. This low-code tool enables fast business workflow automation and massive data aggregation across these apps.
It supports real-time and event-driven triggers, which suits time-sensitive operations like lead routing or order processing. With built-in AI and hundreds of prebuilt recipes, Workato empowers both IT teams and business users to create scalable integrations without deep coding expertise.

**PROS**

- Convenient UI/UX and easy implementation.
- Maintenance-free functionality.
- Reusable connectors and managed file transfers.
- Easy and transparent job monitoring.

**CONS**

- Workato supports API endpoint caching and lookup tables to enhance performance, but there are limits on the volume of data that can be effectively cached within the platform.
- Mastering advanced features and complex workflows may require a significant learning investment.

### Fivetran

Like the other Salesforce ETL tools reviewed above, [Fivetran](https://www.fivetran.com/) allows automatic data copying from Salesforce to the data warehouse. In addition, it offers many pre-built connectors to meet the real-world needs of data analysts. This [data integration software](https://skyvia.com/blog/data-integration-tools/) doesn’t require maintenance on your side and can be easily deployed and adjusted. It also supports schema drift handling, ensuring new fields in Salesforce are automatically captured without manual reconfiguration. The solution is especially suited for teams looking for a reliable, set-it-and-forget-it data pipeline.

**PROS**

- Over 500 out-of-the-box connectors.
- Allows post-load data transformation via SQL.
- Can synchronize with many typical data warehouses.
- Allows adding new data sources via Cloud Functions.

**CONS**

- No sync scheduling at a fixed time.
- Bulk field addition/deletion controls need improvement.
- The REST API, needed for advanced operations or custom integrations, is available only on Standard, Enterprise, and Business Critical plans (and during trial periods).
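The schema drift handling mentioned for Fivetran can be illustrated with a small sketch: compare an incoming record’s fields against the columns the warehouse already knows, and generate DDL for anything new instead of failing the sync. This is a simplified illustration, not Fivetran’s actual implementation; the table name and the custom field are invented:

```python
def detect_drift(known_columns, incoming_record):
    """Return ALTER statements for source fields the warehouse lacks.
    A drift-aware loader would run these before inserting the record."""
    new_fields = [f for f in incoming_record if f not in known_columns]
    return [
        f'ALTER TABLE contacts ADD COLUMN "{field}" TEXT'
        for field in sorted(new_fields)
    ]

# Columns the warehouse table already has (hypothetical).
columns = {"id", "email", "created_at"}

# A record where a new Salesforce custom field has appeared.
record = {"id": "003A1", "email": "ann@example.com",
          "created_at": "2025-01-15", "lead_score__c": "87"}

ddl = detect_drift(columns, record)
```

The same idea scales up: managed pipelines track the known schema per table and apply such DDL automatically whenever the source adds a field.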
### Lyftron

[Lyftron](https://lyftron.com/) is a Salesforce data extraction and load tool for the streamlined processing of large sets of information and the removal of data pipeline roadblocks. It enables data professionals to create and manage data flows within one platform and increase productivity. With built-in data virtualization and automatic schema detection, the system allows real-time querying across multiple data sources. It also supports ELT transformations using ANSI SQL, making it accessible for both analysts and engineers.

**PROS**

- Has 300+ pre-built connectors.
- Provides an on-premise integration option.
- Offers 19 data destinations.
- Allows pre-load and post-load data transformation via SQL.

**CONS**

- Limited community and user feedback.
- Steeper learning curve for new users.
- Few third-party reviews or benchmarks.
- Potential concerns around long-term vendor maturity and support.

### Segment

The [Segment](https://segment.com/) (aka Twilio Segment) ETL solution also helps streamline the processing of data from Salesforce and other data sources, with an additional focus on event tracking. This platform allows businesses to accumulate, unify, and map customer data to and from Salesforce. Its real-time pipelines ensure that customer behavior data is immediately routed to analytics tools, marketing platforms, or data warehouses. The tool is especially useful for building personalized user experiences, tracking customer journeys, and syncing behavioral data across multiple systems.

**PROS**

- Has an extensive list of 800 out-of-the-box connectors.
- Allows adding new data sources via the sources available on the Segment platform.
- Can integrate with six data warehouses.
- Enables data transformation and customization for a particular destination.

**CONS**

- Has no on-premise option.
- May require extra development tools for custom data collection and integration.
- Custom integrations may require a solid understanding of the platform’s API and data structures.
### Integrate.io

[Integrate.io](https://www.integrate.io/) enables integration of Salesforce with major data warehouses. This no-code software helps create a single source of truth via real-time data replication and a plug-and-play interface. In addition, the platform (formerly Xplenty) allows data analysts and engineers to deploy custom transformations in a few minutes. It supports pre-built connectors for a wide range of sources, making setup fast and efficient. With built-in scheduling, monitoring, and error handling, users can automate workflows and maintain data integrity with minimal oversight.

**PROS**

- Has over 200 pre-built connectors.
- Enables data transformation and filtering.
- Doesn’t require coding skills for tool setup and management.
- Easy job progress tracking.
- No-code API generation.

**CONS**

- Limited advanced customization for complex workflows.
- Some connectors may require manual schema mapping.
- Pricing can become high for large-scale data volumes.
- Occasional delays in real-time sync performance.

### Matillion ETL

[Matillion ETL](https://www.matillion.com/) helps overcome data inconsistencies that may arise in your systems and builds higher confidence in your data. This cloud-based tool enables easy data integration and transformation, helping teams reach new levels of efficiency and productivity. It offers deep integration with major cloud data warehouses like Snowflake, Redshift, and BigQuery. With its visual interface and support for both no-code and SQL-based transformations, teams can accelerate development without sacrificing flexibility.

**PROS**

- Has over 150 nearly-native connectors.
- Can connect on-premise sources via Progress DataDirect Hybrid Data Pipeline.
- Enables pre-load data transformation via SQL.
- Allows creating custom connectors with the API Extract functionality in Matillion ETL.

**CONS**

- Requires strong SQL knowledge for building complex workflows.
- Limited support for advanced data science or ML integrations.
- Costs can be high for smaller teams or startups.
- UI may become cluttered when managing large numbers of jobs.
- Lacks native support for unstructured or semi-structured NoSQL data sources.

## How to Choose the Right Salesforce ETL Tool

Selecting the right ETL tool for Salesforce depends on the company’s size, technical expertise, and integration goals. Let’s review the key factors:

- **Business Requirements.** Define whether you need simple one-way data extraction, bi-directional sync, or complex multi-system integration. The choice should align with how often and how much data you move.
- **Ease of Use vs. Flexibility.** No-code platforms like Skyvia or Integrate.io are great for non-technical teams. However, if the workflows demand deeper control and customization, tools like Matillion or Fivetran offer more power at the cost of a steeper learning curve.
- **Real-Time vs. [Batch Processing](https://skyvia.com/blog/batch-etl-processing/).** If near-instant updates are essential (e.g., lead-to-order processes), look for tools with real-time sync capabilities like Workato or Segment. For periodic data loads, batch-focused tools such as Stitch or Lyftron may be sufficient.
- **Scalability and Performance.** As the data grows, so do integration demands. Choose a tool that can scale horizontally, handle schema changes automatically, and maintain high throughput, especially for large Salesforce orgs.
- **Data Transformation Needs.** Some tools only support post-load transformation (i.e., ELT), while others allow pre-load transformations via SQL or visual mapping. Consider how much control you need over cleaning and reshaping data before it hits the destination.
- **Cost and Licensing.** Evaluate both initial and ongoing costs, especially for tools priced by volume or connector usage. Free trials and transparent pricing (like [Skyvia’s](https://skyvia.com/pricing)) help with budgeting and comparisons.
- **Security & Compliance.**
Ensure the tool meets compliance standards (GDPR, HIPAA, SOC 2, etc.) and offers role-based access, audit logs, and secure data transfers.

## Best Practices for Salesforce ETL

To get the most value from the Salesforce ETL process, follow these best practices to ensure accuracy, performance, and maintainability:

- **Understand Your Data Model First.** Before extracting anything, map out Salesforce objects and relationships: custom vs. standard fields, lookup fields, and object dependencies. This avoids broken joins and incomplete records downstream.
- **Use Incremental Loads.** Avoid pulling the full dataset every time. Instead, use filters like LastModifiedDate or CDC (Change Data Capture) to extract only what changed, reducing load times and API usage.
- **Maintain Data Quality Early.** Apply transformations, validations, and deduplication rules as early as possible in the pipeline. Clean data at the start prevents bad data from propagating through the systems.
- **Test in Sandbox Before Production.** Always develop and test ETL flows in a Salesforce Sandbox or staging environment to prevent accidental data loss or corruption in production.
- **Monitor and Audit Your Jobs.** Set up alerts and logging to monitor failed jobs, API call usage, and sync delays. Many tools, like Fivetran and Skyvia, offer built-in job monitoring features.
- **Document and Version Your Workflows.** Keep a record of the data pipelines, schedules, field mappings, and logic. Use version control or naming conventions to manage changes over time.
- **Secure Your Data Transfers.** Ensure encryption is used for data in transit and at rest. Use OAuth for authentication where possible and limit access to sensitive fields.

## Conclusion

[Salesforce data integration](https://skyvia.com/blog/salesforce-integration-tools/) can be technically complex and time-consuming. Using a reliable ETL tool reduces manual effort and simplifies the process. Skyvia is a no-code, affordable platform built for businesses of all sizes.
With flexible [pricing](https://skyvia.com/pricing) starting at a freemium plan, you can explore its ETL capabilities and scale as your company grows. Whether you’re replicating Salesforce data to the cloud or to on-premises systems for added security, Skyvia makes it easy with fast setup and plug-and-play functionality.

Ready to try it yourself? Schedule a demo today and see [Skyvia](https://skyvia.com/schedule-demo) in action.

## FAQ for ETL Tools for Salesforce

**What is an ETL tool, and why do I need one for Salesforce?**
ETL (Extract, Transform, Load) tools help you move data from Salesforce to other systems like data warehouses. They clean and reformat the data so it’s ready for reporting, analytics, or syncing with other apps without manual effort.

**What’s the best ETL tool for non-technical users?**
Platforms like Skyvia, Integrate.io, and Workato offer intuitive no-code interfaces ideal for non-developers who need to build data pipelines quickly.

**Can these tools handle real-time data sync?**
Yes, some tools like Segment, Workato, and Fivetran offer real-time or near real-time sync options. Others operate on scheduled batch updates. Your choice depends on how fresh your data needs to be.

**Which tool is most budget-friendly for small businesses?**
Skyvia stands out with a free plan and affordable paid tiers. Stitch also offers transparent, usage-based pricing that scales with your needs.

**What if I need to integrate with on-premises data sources?**
Tools like Matillion and Lyftron support hybrid environments, allowing users to securely connect both cloud and on-premise systems.

**Can I customize or transform data during integration?**
Yes, most tools support data transformations: some via visual builders (e.g., Skyvia, Integrate.io), others with SQL or scripting (e.g., Matillion, Fivetran).

**How do I choose the best ETL tool for my business?**
Consider your team’s technical skills, real-time needs, data volume, security requirements, and budget. Our article breaks down each tool’s strengths to help you decide.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

---

# MySQL ETL Tools: Top Choices for Use in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), April 15, 2025
## Summary

- [ETL tools](https://skyvia.com/blog/etl-tools/) are essential for managing data in MySQL, as they automate the process of extracting, transforming, and loading data from various sources.
- MySQL is a relational database management system that is user-friendly and scalable but is not specifically designed for big data or real-time analytics.
- The ETL process consists of three main steps: Extract (pulling data from sources), Transform (cleaning and shaping data), and Load (loading data into MySQL).
- There are both free and paid ETL tools for MySQL, catering to different needs, from technical users requiring complex transformations to non-technical users needing simple, no-code solutions.
- Choosing the right ETL tool depends on factors like team skills, data complexity, and growth goals, with options ranging from open-source solutions to fully managed services.

Managing data in MySQL sounds simple until users juggle dozens of sources, messy formats, and constant updates. Whether importing customer records, syncing data from cloud apps, or preparing it all for analytics, manual work can eat up hours and leave room for costly errors. That’s why we need ETL tools. They help businesses extract data from different systems, then clean, transform, and load it into MySQL without writing endless scripts or chasing down file formats. From automating recurring imports to building full-scale pipelines for reporting or machine learning, ETL tools simplify companies’ lives and scale with data.
In this guide, we’ll look at some of the best free and paid ETL solutions for MySQL so you can spend less time wrangling spreadsheets and more time making data work for the business.

## What Is MySQL?

It’s a relational database management system (RDBMS) that stores data in row-and-column tables. Businesses of all sizes use it to manage everything from customer info and sales records to app data and logs.

- It is available as open-source software and under a premium license.
- It is relatively easy to use compared to other relational databases like Postgres.
- It is well-known for its high performance and scalability.
- It is compatible with multiple operating systems, such as Windows, Linux, and macOS.
- It has GUI support.

While it’s great for traditional transactional workloads, you might wonder: Is MySQL good for big data? Sort of. It handles large datasets well, especially with proper indexing and optimization, but it isn’t built for unstructured data or massive real-time analytics at scale. Is MySQL a data warehouse? Not exactly. It’s more of an operational/transactional database, not a purpose-built analytical store. So, some companies use MySQL for lightweight analytics, but for serious reporting and BI, it’s often paired with a proper data warehouse and an ETL tool to handle the heavy lifting.

## MySQL ETL Process

The process consists of three main steps: Extract, Transform, Load.

### Extract

This step pulls data from one or more sources, including cloud apps like Salesforce, files like CSVs, other SQL or NoSQL databases, or even APIs. The key is to collect the data as-is, regardless of format or quality.

### Transform

This is where the real magic happens. Transformation is about cleaning, shaping, and preparing the data for MySQL.
You might:

- Convert formats (like dates or currency).
- Remove duplicates or errors.
- Join or split tables.
- Apply business rules.

This step ensures the info fits your schema and is analytics-ready.

### Load

Finally, the transformed information is loaded into the MySQL database. Depending on the tool, this could be:

- A full reload.
- An incremental update.
- A real-time sync.

Once loaded, the data is ready for queries, dashboards, or whatever insights you need. [ETL tools](https://skyvia.com/blog/etl-tools/) help automate this whole flow, saving time, reducing errors, and simplifying the management of large and complex datasets.

## Free MySQL ETL Tools

### Talend Big Data Open Studio

It’s a free, open-source ETL tool that simplifies data integration in large-scale environments. Built on Java and the Eclipse platform, it provides a visual interface to create, schedule, and manage data pipelines without writing everything from scratch. Here, users can:

- Extract data from multiple sources.
- Transform it with built-in components.
- Load it into destinations like MySQL.

It supports file management, data profiling, and even Hadoop job orchestration through a simple drag-and-drop canvas. While it’s built with big data in mind, it’s flexible enough for traditional ETL tasks, too, especially for users comfortable with technical workflows.

**Review**

G2 Rating: 4.0 / 5

What users like: flexible component library, open-source model, integration with [big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/).

What users don’t like: steep learning curve, UI performance lags on large projects, limited support for non-technical users.

**Pros**

- Free and open-source.
- Strong support for Hadoop and big data ecosystems.
- Visual drag-and-drop UI with a robust component library.
- Customizable with Java for advanced transformations.
- Suitable for batch data processing and scheduled jobs.

**Cons**

- Not beginner-friendly; requires technical knowledge.
- UI can be slow for complex or large projects.
- No built-in cloud deployment or automation.
- Lacks real-time streaming support out of the box.

**Best For**

- Data engineers and developers working with big data tools like Hadoop, Spark, or Hive.
- Companies needing complete control over their ETL logic in an on-premise or hybrid environment.
- Teams that prefer open-source flexibility over vendor lock-in.

### Airbyte

This data integration platform is also open-source and built for modern teams that want control, scalability, and extensibility. It focuses on ELT, supporting a wide range of connectors that can be customized or created from scratch. Airbyte aims to centralize all the data pipelines, whether users are syncing data from SaaS apps, databases, or files into destinations like MySQL, Snowflake, BigQuery, or Redshift. It runs locally or in the cloud, and its modular architecture makes it an excellent fit for data engineers looking to automate and scale integrations with minimal vendor lock-in.

**Review**

G2 Rating: 4.4 / 5

What users like: strong open-source community, customizable connectors, ease of deployment with Docker.

What users don’t like: the platform is still maturing, some connectors are unstable or under development, and there are limited scheduling features out of the box.

**Pros**

- Open-source with a growing library of pre-built connectors.
- Easy to deploy locally or in your own cloud.
- REST API and CLI support for automation.
- Connector templates and an SDK for building your own sources/destinations.
- Strong community and frequent updates.

**Cons**

- Some connectors lack stability or are in beta.
- Not fully no-code; requires some technical knowledge.
- Built-in scheduling and orchestration features are limited (though it can integrate with Airflow, Prefect, etc.).
- No official support unless you go with the paid enterprise plan.

**Best For**

- Data teams and engineers who want to build or customize their own connectors.
- Companies with cloud infrastructure looking for self-hosted or open-source alternatives.
- Use cases where ELT is preferred over traditional ETL (e.g., transformations done in the database after loading).

### Singer

It’s one more open-source ETL framework, built around a simple idea: data pipelines as code, using reusable “Taps” (for extracting data) and “Targets” (for loading it). Instead of a GUI, Singer uses standardized JSON-based scripts and command-line tools to define how data flows from source to destination. The solution is widely adopted by developers and engineers who want lightweight, composable pipelines that can be integrated into their own orchestration systems like Airflow or Prefect. Singer supports MySQL as both a source and a destination, and it plays nicely with other open-source tools like Meltano and Airbyte.

**Review**

G2 Rating: 4.1 / 5

What users like: simplicity, code-first design, and flexibility in combining tools.

What users don’t like: no UI, not user-friendly for non-developers, inconsistent tap/target maintenance.

**Pros**

- Open-source and free to use.
- Lightweight and flexible architecture using reusable taps/targets.
- Easily integrates into CI/CD pipelines or custom ETL systems.
- Great for teams that want infrastructure-as-code for data.
- Backed by the Singer Spec, which makes community-built connectors more consistent.

**Cons**

- No GUI; command line only.
- Some community-built taps/targets are outdated or poorly maintained.
- No built-in orchestration or monitoring.
- Requires technical skills and setup from scratch.

**Best For**

- Developers and data engineers who prefer scripting and infrastructure-as-code over visual interfaces.
- Teams looking for maximum control and flexibility in how data pipelines are built and deployed.
- Companies building custom ETL frameworks or integrating with orchestration tools like Apache Airflow or dbt (Data Build Tool).
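To show what the tap/target contract looks like, here is a toy sketch in the spirit of the Singer Spec: the tap emits SCHEMA, RECORD, and STATE messages as JSON lines, and a minimal target parses them and collects the records. The stream and its fields are hypothetical, and a real target would write to MySQL instead of a list:

```python
import json

def tap_users():
    """Yield Singer-style messages for a tiny hypothetical 'users' stream."""
    yield {"type": "SCHEMA", "stream": "users",
           "schema": {"properties": {"id": {"type": "integer"},
                                     "name": {"type": "string"}}},
           "key_properties": ["id"]}
    yield {"type": "RECORD", "stream": "users", "record": {"id": 1, "name": "Ann"}}
    yield {"type": "RECORD", "stream": "users", "record": {"id": 2, "name": "Bob"}}
    # STATE lets the next run resume incrementally from a bookmark.
    yield {"type": "STATE", "value": {"bookmarks": {"users": {"last_id": 2}}}}

def target_collect(lines):
    """Toy 'target': parse JSON lines and collect the RECORD payloads."""
    rows = []
    for line in lines:
        msg = json.loads(line)
        if msg["type"] == "RECORD":
            rows.append(msg["record"])
    return rows

# In a real pipeline this is `tap-something | target-mysql` over stdout/stdin.
pipeline_output = [json.dumps(m) for m in tap_users()]
loaded = target_collect(pipeline_output)
```

Because both sides agree only on this JSON-lines format, any tap can be piped into any target, which is what makes Singer pipelines composable.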
## Paid MySQL ETL Tools

### Skyvia

This no-code, cloud-based [data integration](https://skyvia.com/data-integration) platform simplifies [ETL](https://skyvia.com/learn/what-is-etl), [ELT](https://skyvia.com/learn/what-is-elt), [reverse ETL](https://skyvia.com/learn/reverse-etl-tools), replication, and synchronization processes. It allows companies to easily connect cloud apps, databases, and file storage platforms without coding. With a drag-and-drop interface and over [200](https://docs.skyvia.com/connectors) built-in connectors, including MySQL, Skyvia empowers technical and non-technical users to:

- Automate data pipelines.
- Migrate records.
- Unify their tech stack effortlessly.

It’s especially popular among small to mid-sized businesses for its simplicity, flexibility, and affordable pricing model.

**Review**

G2 Rating: 4.8 / 5

What users like: easy to use, powerful automation, broad connector library, reliable scheduling.

What users don’t like: no real-time triggers, occasional limitations in advanced filtering.

**Pricing**

Skyvia offers flexible [pricing](https://skyvia.com/pricing) to fit different use cases, from small-scale tasks to enterprise-level integrations:

- Free plan: basic functionality with limited runs/month (great for testing or small jobs).
- Basic plan: starts at $79/month, ideal for light data import/export.
- Standard and Professional plans: from $159 to $199/month, with more rows, connectors, and automation features.
- Enterprise: custom pricing for high-volume, multi-user, or advanced integration needs.

**Pros**

- Fully cloud-based; no installation or updates needed.
- No-code interface accessible for business users and analysts.
- Supports MySQL, PostgreSQL, Salesforce, HubSpot, and many more.
- Handles ETL, ELT, reverse ETL, replication, API creation, and scheduling.
- Affordable and scalable, from free usage to enterprise deployment.
- Includes logging, monitoring, and error-handling features.

**Cons**

- Doesn’t support NoSQL databases (e.g., MongoDB).
- Runs on scheduled syncs only, with no real-time triggers.
- Advanced custom logic may require additional tools or APIs.
- No on-premise deployment option.

**Best For**

- Small and medium businesses that want a simple, no-code ETL solution.
- Teams that need to sync MySQL with cloud apps like Salesforce, Dropbox, Google Sheets, etc.
- Companies looking for affordable automation without developer dependency.
- Users who want to create, monitor, and schedule data flows without writing scripts.

### Hevo Data

Hevo Data is a cloud-native data pipeline platform that moves data in real time from 150+ sources to popular data warehouses like Snowflake, BigQuery, and Redshift, as well as to MySQL. Designed with automation and ease of use in mind, Hevo lets data teams sync and transform info without coding. Its intuitive UI, real-time streaming capability, and built-in data quality checks make it ideal for operational analytics and business intelligence. Users can also schedule transformations, monitor pipelines, and resolve issues quickly with automated alerts and logs.

**Review**

G2 Rating: 4.6 / 5

What users like: fast setup, clean UI, reliable performance, responsive support.

What users don’t like: limited flexibility for highly custom transformations; it can get expensive at scale.

**Pricing**

Hevo offers transparent pricing based on event volume:

- Free plan: up to 1 million events/month, including core features and basic support.
- Starter plan: begins at $239/month, includes 20 million events, with incremental pricing after that.
- Business plan: custom pricing that includes advanced features, API access, and priority support.

**Pros**

- Supports both batch and real-time streaming ETL.
- Automatic schema mapping and transformation options.
- Excellent monitoring and logging for visibility into pipeline health.
- Built-in data quality checks and alerts.

**Cons**

- Limited custom logic; more advanced transformations may require external tools.
- Pricing may scale quickly with high data volume.
- No on-premise version; cloud-only.
- Supports fewer destinations compared to some larger competitors.

**Best For**

- Startups and growing companies needing fast, reliable data syncing into MySQL or cloud warehouses.
- Ops and marketing teams that want to enable self-service analytics with minimal IT support.
- Data engineers and analysts looking for a streaming ETL solution with strong automation and alerting.

### Pentaho Kettle

It’s a robust open-source ETL tool developed by Hitachi Vantara, also known as Pentaho Data Integration (PDI). Its visual drag-and-drop interface lets users design complex data workflows without heavy coding. PDI supports extracting, transforming, and loading data from various sources, including relational databases like MySQL, flat files, cloud services, and more. Pentaho is available in two editions:

- The Community Edition is free and open-source, offering full access to core ETL capabilities.
- The Enterprise Edition (paid) includes additional features like clustered execution, advanced security, integration with Pentaho BI tools, and enterprise support, making it more suitable for mission-critical, large-scale deployments.

**Review**

G2 Rating: 4.1 / 5

What users like: flexible visual design, powerful transformations, and rich functionality.

What users don’t like: outdated UI, learning curve, and missing cloud-native features.

**Pricing**

- Community Edition: free and open-source.
- Enterprise Edition: custom pricing (contact Hitachi for a quote).

**Pros**

- Visual, code-optional interface for building pipelines.
- Supports advanced transformations and scripting via Java.
- Good for data cleansing, migration, and batch jobs.
- Connects to a wide variety of structured and unstructured sources.
- Enterprise version includes BI tools, clustering, and support.

**Cons**

- Not cloud-native; best suited for on-premise or self-managed cloud environments.
- The user interface feels outdated compared to newer tools.
- Steep learning curve for beginners.
- Community updates are less frequent.

**Best For**

- Enterprises needing on-premise or hybrid ETL infrastructure.
- Technical teams building complex data flows across legacy systems.
- Organizations that want open-source flexibility with the option to scale into enterprise-grade capabilities.
- Companies integrating and transforming large datasets into systems like MySQL, Hadoop, or cloud warehouses.

### Fivetran

It’s a fully managed, cloud-native ELT platform designed to automate data integration at scale. It focuses on extracting information from hundreds of sources, like SaaS tools, databases, and cloud systems, and loading it directly into your MySQL database or data warehouse. Fivetran shines in its “set-it-and-forget-it” approach: once connected, it handles schema changes, sync schedules, and error handling automatically. While it doesn’t offer deep pre-load transformation like traditional ETL tools, it’s perfect for teams focused on speed, scalability, and minimizing data engineering overhead.

**Review**

G2 Rating: 4.4 / 5

What users like: zero-maintenance setup, high reliability, wide connector coverage.

What users don’t like: limited transformation tools, steep pricing at scale.

**Pricing**

Fivetran uses a consumption-based pricing model, charging based on monthly active rows (MAR): the number of new or updated rows synced.

- Free plan: available for up to 500K MAR/month.
- Starter plan: from ~$300/month, based on usage.
- Enterprise plans: custom pricing for high-volume and advanced use cases.

**Pros**

- Fully managed, no-maintenance pipelines.
- Supports over 500 connectors for apps, databases, and files.
- Auto-detects schema changes and adjusts pipelines accordingly.
- Optimized for fast setup and high-volume replication.
- Excellent for ELT workflows with downstream tools like dbt.

**Cons**

- No built-in pre-load transformation; transformations are post-load only.
- Expensive at scale, especially with high MAR usage.
- Lacks flexibility for custom connectors or deeply tailored pipelines.
- No on-premise or hybrid deployment options.

Best For
- Data teams and analysts who want fast, reliable access to raw data for reporting or modeling.
- Companies using modern data stacks (e.g., dbt + Snowflake + Fivetran).
- Organizations with growing connector needs and limited engineering capacity.
- Teams focused on replication and ELT, not custom logic or scripting.

Blendo
Blendo is a cloud-based ELT platform that helps businesses connect cloud apps, databases, and services to data warehouses or databases like MySQL. Known for its ease of use and fast setup, Blendo specializes in syncing data from popular tools like HubSpot, Google Ads, Zendesk, and Stripe into your analytics environment. It focuses on reliability and simplicity, and users don't need to write code or manage infrastructure. While it's less customizable than some competitors, it's ideal for small to mid-sized teams looking to centralize data for reporting and analysis.

Review
G2 Rating: 4.3/5
What users like: quick to set up, consistent syncing, clean UI.
What users don't like: limited connectors and transformation features compared to larger platforms.

Pricing
Blendo offers usage-based pricing, primarily based on the number of connected sources and synced rows.
- Plans start at ~$150/month.
- Pricing increases based on volume and data destination types.
- Custom plans are available for larger teams.

Pros
- Easy to use; no technical expertise required.
- Fast setup and pre-built connectors for major SaaS tools.
- Syncs to major databases and warehouses like MySQL, Redshift, and BigQuery.
- Offers basic transformations and scheduling features.
- Secure and GDPR-compliant.

Cons
- Smaller connector library than competitors like Fivetran or Airbyte.
- Limited transformation capabilities (ELT-focused only).
- Fewer features for large-scale enterprise use cases.
- Lacks complex orchestration or error-handling logic.

Best For
- Startups and SMBs looking for fast, reliable ELT into MySQL or cloud warehouses.
- Teams that want to automate SaaS data sync without coding.
- Companies that need a straightforward tool to power dashboards or reports.
- Organizations focused more on analytics-ready data than on deep transformations.

Conclusion
Choosing the right ETL tool for your MySQL workflows comes down to:
- Your team's skills.
- Data complexity.
- Growth goals.

From powerful open-source solutions like Talend Open Studio and Pentaho to fully managed platforms like Fivetran, Skyvia, and Hevo Data, there's something for every use case and budget. Tools like Airbyte, Singer, and Blendo are great for teams that want flexibility without a steep learning curve. At the same time, Skyvia stands out for its no-code accessibility and broad connector coverage.

F.A.Q.

What is ETL, and how does it relate to MySQL?
ETL stands for Extract, Transform, Load: a process that helps you move data from various sources into a centralized system like MySQL. You extract data from apps, databases, or files, clean and format it (transform), and then load it into MySQL for storage, analysis, or reporting.

What are the key benefits of using an ETL tool with MySQL?
ETL tools save time, reduce manual errors, and automate data movement. They help you pull data from different systems, format it consistently, and keep your MySQL database up to date, whether for analytics, reporting, or syncing data across platforms.

What are the differences between [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/), and which is better for MySQL?
In ETL, data is transformed before it's loaded into MySQL. In ELT, the raw data is loaded first, and transformations happen inside MySQL. ETL is more common for smaller datasets or when you want control over cleaning before the load. ELT works better for large-scale operations if your MySQL instance can handle in-database transformations.
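The extract, transform, and load steps described in the FAQ above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not a production pipeline: the sample CSV data is invented, and sqlite3 stands in for MySQL only so the example is self-contained. In practice, the load step would open a MySQL connection instead (for example, via the mysql-connector-python package) and use the same INSERT logic.

```python
import csv
import io
import sqlite3

# Invented sample data standing in for an app export or API dump.
RAW = """id,email,amount
1, Alice@Example.com ,100.5
2,bob@example.com,
3,carol@example.com,42
"""

def extract(text):
    """Extract: read rows from a CSV source (file, API export, etc.)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize emails, coerce types, drop incomplete rows."""
    out = []
    for r in rows:
        if not r["amount"].strip():
            continue  # skip records with no amount
        out.append({
            "id": int(r["id"]),
            "email": r["email"].strip().lower(),
            "amount": float(r["amount"]),
        })
    return out

def load(rows, conn):
    """Load: insert the cleaned rows into the target database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, email TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (:id, :email, :amount)", rows)
    conn.commit()

# sqlite3 in-memory database as a stand-in for a MySQL connection.
conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # the incomplete row is dropped during transform
```

The same three functions map directly onto what a managed ETL tool automates for you: connectors do the extract, mapping rules do the transform, and the scheduler runs the load.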
How do I choose the right MySQL ETL tool for my specific needs?
Start by evaluating your data sources, team skill level, budget, and how complex your data workflows are. If you're non-technical or need a quick setup, go for a no-code platform like Skyvia. If you're technical and want full control, look at open-source options like Airbyte or Talend. Fivetran or Hevo Data might be ideal for zero-maintenance pipelines.

Does Skyvia support MySQL ETL, and what are its advantages?
Yes, Skyvia fully supports ETL to and from MySQL. It's a no-code, cloud-based platform that lets you connect MySQL with over 200 other apps and databases. Skyvia is ideal for users who want an easy setup, powerful automation, and flexible pricing without writing code.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
" }, { "url": "https://skyvia.com/blog/best-ipaas-solutions/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/)

Best Integration Platform as a Service (iPaaS)
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), November 25, 2024

Many companies use dozens of different applications across departments, and it often becomes necessary to share certain data between services. This is where the challenges of data integration and management arise. An iPaaS platform can juggle all those applications within the company's ecosystem and integrate data between them. In this article, you'll learn the [iPaaS definition](https://skyvia.com/blog/what-is-ipaas/), discover iPaaS integration features, and find popular iPaaS solutions.

Table of Contents
- What Is iPaaS?
- Benefits of iPaaS for Businesses
- Top iPaaS Providers in 2025: Skyvia, SnapLogic, Boomi, MuleSoft, Workato
- How Does an iPaaS Work?
- Finding the Best iPaaS for Your Needs
- Conclusion
- FAQ

What Is iPaaS?
The [iPaaS](https://skyvia.com/blog/what-is-ipaas/) abbreviation stands for integration platform as a service. Like other -aaS solutions, it's based in the cloud. A typical iPaaS platform contains a suite of services for the development, execution, and governance of various integration processes, including application connectivity, API management, and data integration. Modern iPaaS platforms, presented in this article, usually have a friendly UI that requires little to no development work. As a result, companies get a hands-off experience for connecting both on-premises and cloud applications, services, systems, and data sources with minimal effort.

Here are some functions a typical iPaaS platform carries out:
- Contains a library of pre-built connectors to SaaS apps, on-premises systems, databases, and other data sources.
- Moves data between different applications and systems connected to the iPaaS.
- Builds, schedules, and monitors integration workflows.
- Transforms and maps data.

Very often, iPaaS platforms are perceived as [ETL tools](https://skyvia.com/blog/etl-tools/) for [data integration](https://skyvia.com/data-integration). In fact, there are many similarities between these solutions, and iPaaS can even be considered the next generation of ETL tools. Still, there are three notable differences between iPaaS and ETL solutions:
- ETL tools usually work with batch data, while iPaaS can also move data across systems in real time.
- ETL tools integrate only data, while iPaaS platforms can connect applications via APIs and integrate their data.
- iPaaS platforms are much more scalable and flexible than ETL solutions.

Benefits of iPaaS for Businesses

1. Centralization
Obtain centralized control over all data integrations through iPaaS. Instead of configuring [point-to-point integrations](https://skyvia.com/blog/point-to-point-integration-pros-cons/), rely on the capabilities for centralized management offered by an iPaaS platform.

2.
Monitoring
Thanks to centralization, you can enjoy real-time monitoring of all connections and integrations. Glance at the main dashboard to detect errors and performance issues and take action in a timely manner.

3. Automation
Thanks to pre-built connectors and a visual interface, iPaaS platforms allow you to build and automate workflows. This promotes digital transformation initiatives and minimizes the effort spent on workflow management.

4. Ease of Use
Modern iPaaS platforms don't require programming skills or software architecture knowledge. Instead, they offer a drag-and-drop interface with a no-code approach, empowering businesses to orchestrate integration without involving IT teams.

5. Connectivity
As a rule, iPaaS vendors are in charge of maintaining and updating connectors to SaaS apps, databases, and other data sources. You get support for existing connectors and the creation of custom ones.

6. Scalability
Once you start using a new tool within your enterprise environment, it's easy to add it to the integration platform. You can add as many connectors as needed, which promotes scalability and handles workload increases.

Top iPaaS Providers in 2025

Skyvia
[Skyvia](http://skyvia.com) is a universal cloud data platform suitable for a wide variety of data-related tasks, including data integration, workflow automation, SaaS backup, OData and SQL endpoint creation, and data querying. Now, let's review the list of functions that an iPaaS platform is expected to carry out and see how Skyvia handles them.

- Contains a library of pre-built connectors: Skyvia supports [200+ data sources](https://skyvia.com/connectors), including apps, databases, and data warehouses.
- Moves data between different applications and systems:
Skyvia offers three products that allow for data transfer between services: [Data Integration](https://skyvia.com/data-integration), [Connect](https://skyvia.com/connect), and [Automation](https://skyvia.com/automation).
- Builds, schedules, and monitors integration workflows: the above-mentioned Skyvia products allow you to build and automate workflows. They also provide detailed [monitoring](https://docs.skyvia.com/data-integration/package-run-history.html) and logging.
- Transforms and maps data: with [Data Integration tools](https://skyvia.com/blog/data-integration-tools/), it's possible to perform standard or complex [data transformations](https://skyvia.com/learn/what-is-data-transformation) and [field mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/index.html).

Let's do the same for the benefits by exploring the typical advantages an iPaaS brings to businesses and how Skyvia aligns with them.
- Centralization. All the integrations and connectors can be managed from the Skyvia account.
- Monitoring. The status and outcome of each integration can be tracked in the monitoring dashboard. You will also get an email notification if there are any integration issues.
- Automation. All workflows can be scheduled for execution.
- Ease of use. According to [G2 reviews](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), Skyvia is among the most user-friendly integration tools. Thanks to its visual interface and no-coding requirements, users find it easy to operate.
- Connectivity. The platform contains [200+ pre-built connectors](https://skyvia.com/connectors) to various data sources, including apps, databases, and data warehouses.
- Scalability. With Skyvia, it's possible to scale as far as needed, adding as many tools and setting up as many integration scenarios as you require. The degree of scalability depends on the selected pricing plan.
Pricing: You can start using Skyvia with a free tier and then switch to the plan that best matches your workload. It's also easy to scale up or down at any time.

SnapLogic
[SnapLogic](https://www.snaplogic.com/) is another iPaaS solution that allows companies to connect their apps, both online and offline, and build integrations. This product simplifies data ingestion and movement across pipelines with a no-code/low-code interface. It also contains an AI engine called Iris that assists users in designing and deploying integration scenarios. As with the Skyvia overview, let's observe how SnapLogic implements the iPaaS functionality.

- Contains a library of pre-built connectors: this solution embeds 350+ connectors.
- Moves data between different applications and systems: SnapLogic supports batch, real-time, and streaming integration scenarios.
- Builds, schedules, and monitors integration workflows: integrations are created using so-called Snaps in the control pane called IIP and are monitored within the SnapLogic Dashboard.
- Transforms and maps data: there are dedicated Snaps for these operations, Mapper and Transformer.

Now, let's see whether SnapLogic delivers the advantages expected of a typical iPaaS platform, and how.
- Centralization. This tool offers an Intelligent Integration Panel, or simply IIP, where all the integrations can be created and managed.
- Monitoring. Within that same IIP, a SnapLogic dashboard lets users track integrations in real time, see the history of executions, and detect anomalies.
- Automation. SnapLogic allows businesses to automate their processes from a general perspective, and automation is also applied during the creation of integrations. The Iris AI engine gives hints on data pipeline building, offering suggestions on Snap components to be included and configured. Once a Snap is configured, it's automatically validated.
- Ease of use.
SnapLogic provides a visual pane where users can perform all the actions for their integrations. There is also a visual builder for expressions when mapping fields of input and target schemas, though JavaScript coding can be used as an alternative.
- Connectivity. Like other iPaaS solutions, this platform makes it possible to connect both on-premises and cloud-based apps.
- Scalability. SnapLogic offers a high degree of scalability and may be suitable for companies with moderate and high workloads.

Pricing: The Project Edition plan might be a good option for companies with moderate workloads, with the price starting at $48,000. The Enterprise Edition plan starts at $100,000 and would work for enterprises with large-scale operations and high workloads. SnapLogic also provides custom solutions for real-time integrations and mission-critical applications.

Boomi
[Boomi](https://boomi.com/) is a platform for integrating and orchestrating applications, APIs, and data. Few iPaaS solutions in the industry, even the leading ones, can compare to Boomi in terms of security. It implements additional protocols, such as ISO 27701, IRAP, and StateRAMP, adding extra security to network infrastructure, applications, and data. As with the tools above, let's explore the functionality of Boomi.

- Contains a library of pre-built connectors: Boomi's library contains 600+ pre-built connectors.
- Moves data between different applications and systems: this platform provides AtomSphere, MDM, and API Management modules for integration implementation. They support cloud, on-premises, and hybrid systems.
- Builds, schedules, and monitors integration workflows: Boomi contains a number of dashboards showing the status of executions, HTTP status codes, a summary of integration activities, recent errors, and execution trends.
- Transforms and maps data:
There are mapping options and complex transformations available.

Advantages of Boomi:
- Centralization. Boomi positions itself as a master data management (MDM) solution, centralizing records about projects, CRM, HR, and other systems.
- Monitoring. With this service, you can monitor executions on the dashboard, as well as web server status and health.
- Automation. This tool is known for its high degree of automation for data mapping, connector configuration, error resolution, and even regression testing.
- Ease of use. Boomi comes with pre-built modules and drag-and-drop for easy integration.
- Connectivity. This iPaaS platform supports hundreds of on-premises, cloud, and hybrid data systems.
- Scalability. Its cloud-based nature allows Boomi to be highly scalable.

Pricing: This solution is suitable mostly for SMBs and enterprises performing data-intensive tasks. The price depends on business requirements and can be discussed with Boomi sales representatives.

MuleSoft
[MuleSoft](https://www.mulesoft.com/) is an all-in-one integration platform that heavily relies on AI to deliver the best experience to businesses. It also offers seamless connectivity, automation, and control to users. Now, it's time to review the main features of MuleSoft.

- Contains a library of pre-built connectors: there are 250+ out-of-the-box connectors to databases and enterprise applications.
- Moves data between different applications and systems: MuleSoft's Anypoint Exchange platform is aimed at connecting applications, devices, and data. It also offers a wide range of data integration templates to accelerate pipeline building.
- Builds, schedules, and monitors integration workflows: it's possible to catalog and manage all integrations under defined central governance rules.
- Transforms and maps data: MuleSoft offers data transformation and mapping features.

MuleSoft's benefits:
- Centralization.
All the integrations are built and managed within the central dashboard.
- Monitoring. There is a unified interface to monitor APIs and integrations.
- Automation. MuleSoft allows business users to automate end-to-end integration processes with templates, coding, and natural-language prompts.
- Ease of use. This platform is used in no-code and low-code development environments to build automations, connect data, and manage APIs effectively.
- Connectivity. MuleSoft allows connections to various enterprise applications and databases via pre-built connectors.
- Scalability. This tool allows users to move millions of records between applications, which makes it a highly scalable solution.

Pricing: MuleSoft is available at a custom price and is mostly suitable for enterprises.

Workato
[Workato](https://www.workato.com/) is an integration and automation platform for businesses. Workato's digital-native architecture and AI-based engine help businesses design and manage automations with ease. Now, it's time to review the main features of Workato.

- Contains a library of pre-built connectors: this tool is equipped with 600+ connectors for databases, on-premises systems, cloud apps, etc.
- Moves data between different applications and systems: Workato allows users to transfer data in batch and in real time. It comes with 225,000+ community recipes that address popular business workflows. You can use these recipes or create new ones from scratch with the help of the Recipe IQ machine-learning engine.
- Builds, schedules, and monitors integration workflows: it's possible to keep track of events in real time and keep a log of recipe jobs.
- Transforms and maps data: Recipe IQ helps users not only create integrations and automations but also transform data with built-in ETL processes.

Does Workato offer all the advantages expected from an iPaaS platform? Let's find out.
- Centralization.
This tool provides a single dashboard containing all the integrations and automations grouped into projects, so users can find the needed workflow faster.
- Monitoring. Workato also contains a dashboard with the statuses of all recipes and version control.
- Automation. It's possible to automate workflows using the scheduling options and the Recipe IQ smart engine.
- Ease of use. Workato offers an ample set of integration features that are accessible and understandable even for non-tech professionals.
- Connectivity. This service contains hundreds of pre-built connectors and can transfer data between various kinds of devices, applications, and tools.
- Scalability. Workato implements a platform fee and a usage fee that scale with your growing business requirements.

Pricing: Workato targets different economic segments and companies, from startups to enterprises.

How Does an iPaaS Work?
We describe how an iPaaS works using Skyvia tools as examples. Each of them performs integration in a different way, and each example contains simple steps to build, configure, deploy, and maintain the integration.

Data Integration
Setting up data pipelines with the [Data Integration](https://skyvia.com/data-integration) product:
1. Select the type of integration (Replication, Import, Sync, Data Flow).
2. Select the source and target applications for the integration.
3. Configure transformation and mapping if applicable.
4. Set scheduling parameters for running the integration at a specific time.
5. Start the integration.
6. Monitor the integration execution and logs.

Automation
Setting up automated workflows with [Automation](https://skyvia.com/automation) takes only a few easy-to-implement steps:
1. Create a new automation flow.
2. Select the trigger type.
3. Add components to the automation flow.
4. Define actions by selecting the needed data source and command for execution.
5. Test and run the automation.
6. Monitor the execution results.
Connect
With the [Connect](https://skyvia.com/connect) product, it's possible to create OData and SQL endpoints in less than 5 minutes. Let's look at the example of OData endpoint creation:
1. Create a new OData endpoint from the menu.
2. Select Simple mode for endpoint configuration.
3. Select the connection to your data sources.
4. Specify the data objects that you want to expose.
5. Configure endpoint security settings.
6. Specify other required settings for the endpoint.

Finding the Best iPaaS for Your Needs
As you can see, the presented iPaaS platforms have more or less the same feature set. To help you choose the right solution, we focus on the G2 Crowd rating, the number of connectors, the price, and the businesses each tool suits.

| iPaaS Tool | Rating on G2 | Price | Suitable for | Connectors |
| --- | --- | --- | --- | --- |
| Skyvia | 4.8/5 | Free tier available; pricing starts at $79/month | Any business | 200+ |
| SnapLogic | 4.3/5 | Starts from $48,000 | Enterprises | 350+ |
| Boomi | 4.4/5 | Starts from $50 per feature per month | SMBs and enterprises | 600+ |
| MuleSoft | 4.5/5 | Custom pricing | Enterprises | 250+ |
| Workato | 4.7/5 | Custom pricing with a usage-based model | Any business | 600+ |

Conclusion
An iPaaS platform allows businesses to integrate applications and data with ease. With hundreds of pre-built connectors, a visual no-code interface, and AI assistants, connecting apps becomes low-hanging fruit. Skyvia is a universal cloud data platform with all the benefits and features expected from iPaaS solutions. Benefit from a free tier to try Skyvia out and see how it seamlessly brings together Salesforce and other CRM systems, QuickBooks and other accounting software, ERP tools, and other corporate apps!

FAQ for iPaaS Solutions

Is iPaaS Secure?
Securing data during integration and transfer is the top priority for all iPaaS providers. They implement all modern security mechanisms that ensure data protection according to modern standards.
Data encryption and compliance with regulatory requirements are among the most common practices ensuring data safety.

What Are Some Common Business Use Cases for iPaaS?
- IoT management. Connect and manage data from IoT devices.
- CRM integration. Connect a CRM system with other applications used by the organization. This enhances marketing automation, user experiences, and all other customer-related processes.
- HR integration. Sync information between HR systems and accounting software to improve payroll, administration, and hiring processes.
- Financial management. Send data from online stores, CRM systems, and inventory management tools to accounting software for better financial analysis, management, and planning.

Can an iPaaS Be Used by Non-Technical Users?
iPaaS solutions implement a visual interface and no-code functionality, allowing even non-tech users to build and deploy data pipelines. Such an approach helps reduce the burden on the IT departments within corporations and gives other departments a bit more freedom to extract the needed data and generate reports.

Are There Any Issues Associated with Using iPaaS?
There are two main challenges associated with iPaaS: complexity and performance. Even though modern platforms offer user-friendly GUIs, the complexity of pipeline design still exists. Companies need to be precise in deciding which data needs to be transferred, how it should be processed, and with which tools it needs to be synchronized. Another issue relates to external factors, such as network latency and throughput. This concern might be of high importance for real-time data transfers.
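Once an OData endpoint like the one created with Connect above exists, any OData-capable client can query it over HTTPS using standard $-prefixed query options. Here is a minimal sketch of composing such a query URL in Python; the base URL and the Orders entity are hypothetical placeholders (the real endpoint URL is shown in the tool after you create it), and the sketch only builds the URL rather than performing the request:

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint URL and entity set; replace with your own.
BASE = "https://connect.skyvia.com/your-endpoint/odata/Orders"

def odata_url(base, **options):
    """Build an OData query URL from $-prefixed query options."""
    params = {f"${k}": v for k, v in options.items()}
    return f"{base}?{urlencode(params)}"

# Standard OData options: project two columns, filter, and limit the rows.
url = odata_url(BASE, select="Id,Amount", filter="Amount gt 100", top=10)
print(url)
```

The resulting URL can then be fetched with any HTTP client (curl, `urllib.request`, Excel's OData connector, etc.), subject to whatever security settings the endpoint was configured with.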
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

" }, { "url": "https://skyvia.com/blog/best-mysql-reporting-tools/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/)

Best 10 MySQL Reporting Tools for Data Analysis in 2025
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), December 30, 2024
MySQL reporting tools are the bridge between storage and storytelling: they make sense of all the data stored in MySQL databases. They pull out raw information and turn it into clear visual reports, dashboards, and charts. With such systems, you don't need to be a data wizard; just a few clicks can produce clear, shareable reports. They often come with handy features like:
- Drag-and-drop editors.
- Custom filters.
- Real-time data updates.

Whether tracking sales, monitoring website traffic, or analyzing user behavior, these solutions help you transform complex data into easy-to-read insights. In this guide, we'll explore the top MySQL reporting tools for 2025 that help businesses create dynamic reports, track KPIs, and make data-driven decisions faster.

Table of Contents
- Considerations for Assessing MySQL Tools
- Top 10 MySQL Reporting Tools for 2025: Tableau, Datadog, Integrate.io, Microsoft Power BI, Zoho Analytics, Domo, Knowi, RIB BI+ (ex datapine), Sisense, SAP BusinessObjects
- Improve Your MySQL Reporting With Data Integration Tools
- Comparative Analysis for MySQL Reporting Tools
- Kickstarting MySQL Reporting
- Conclusion
- FAQ

Considerations for Assessing MySQL Tools
When choosing the right tool, a few key factors make all the difference in how well it fits your needs and delivers value. The table below compares them based on the essential criteria for efficient, secure, and scalable data reporting.

| Criteria | Key Considerations | Samples |
| --- | --- | --- |
| Compatibility | Check support for different MySQL versions, compatibility with other databases, and cross-platform availability. | [Microsoft Power BI](https://www.microsoft.com/en-us/power-platform/products/power-bi), [Zoho Analytics](https://www.zoho.com/analytics/) |
| Performance | Look for features like data caching, real-time updates, and performance optimization for big data loads. | [Sisense](https://www.sisense.com/), [Knowi](https://www.knowi.com/), [Domo](https://www.domo.com/) |
| Features | Consider if it has drag-and-drop reporting, filtering, drill-down capabilities, and scheduled reporting. | [Tableau](https://www.tableau.com), [Datapine](http://datapine.com), [SAP BusinessObjects](https://www.sap.com/products/technology-platform/bi-platform.html) |
| Scalability | Evaluate if the tool supports expanding queries, multi-source reporting, and handling increased data complexity. | [Integrate.io](http://integrate.io), [Domo](https://www.domo.com/), [Datadog](https://www.datadoghq.com/) |
| Security | Look for encryption options, role-based access, secure connections, and compliance with security standards. | [SAP BusinessObjects](https://www.sap.com/products/technology-platform/bi-platform.html), [Sisense](https://www.sisense.com/) |

Top 10 MySQL Reporting Tools for 2025

Tableau
[Tableau](https://www.tableau.com) is a data visualization and BI tool designed to simplify the creation of interactive, shareable dashboards. It connects easily with MySQL databases, allowing users to transform raw data into detailed charts, graphs, and maps. Known for its intuitive drag-and-drop interface, Tableau enables users to perform deep analysis without advanced technical skills. Tableau's robust community and extensive resources make it a favorite among beginners and advanced data professionals alike. With support for real-time analytics, it's a good fit for organizations needing frequent data insights.
**Rating**

- [G2](https://www.g2.com/products/tableau/reviews) score: 4.4/5
- [Capterra](https://www.capterra.com/p/208764/Tableau/) score: 4.5/5

**Pricing**

[Pricing](https://www.tableau.com/en-gb/pricing) starts at $70/user/month. A free plan is available.

**Pros**

- Rich visualization options with drag-and-drop features.
- Excellent support and active community resources.
- Handles large datasets and complex queries effectively.

**Cons**

- Relatively high cost per user.
- Limited data manipulation capabilities outside visualizations.

### Datadog

[Datadog](https://www.datadoghq.com/) is a cloud-based monitoring and analytics platform with powerful data reporting capabilities for applications and infrastructure, particularly MySQL databases. It helps track database performance, providing real-time metrics, automated alerts, and in-depth analytics. Though primarily used for monitoring, Datadog offers flexible dashboards and visualizations, enabling users to correlate MySQL performance with application and infrastructure metrics. Its cloud-native setup works well for DevOps and IT teams managing large-scale systems.

**Rating**

- [G2](https://www.g2.com/products/datadog/reviews) score: 4.3/5
- [Capterra](https://www.capterra.com/p/135453/Datadog-Cloud-Monitoring/) score: 4.6/5
- [TrustRadius](https://www.trustradius.com/products/datadog/reviews) score: 8.4/10

**Pricing**

[Pricing](https://www.datadoghq.com/pricing/) is flexible and starts at $15/host/month. A free trial is available.

**Pros**

- Strong monitoring and alerting capabilities for MySQL.
- Real-time metrics with historical data insights.
- Easy integration with other infrastructure monitoring tools.

**Cons**

- Less suitable for traditional BI reporting.
- Higher costs when monitoring large numbers of hosts.

### Integrate.io

[Integrate.io](https://www.integrate.io/) is an all-in-one ETL platform tailored for cloud-based data integration, well suited to managing MySQL data workflows.
It simplifies data pipeline creation, supporting MySQL integration with various applications and services. Integrate.io allows easy data transformation and scheduling, making data accessible for analysis and reporting. The platform is optimized for scalability and offers real-time data integration, enabling businesses to automate data movement as they grow.

**Rating**

- [G2](https://www.g2.com/products/integrate-io/reviews) score: 4.3/5
- [Capterra](https://www.capterra.com/p/153780/Xplenty/) score: 4.5/5
- [TrustRadius](https://www.trustradius.com/products/integrate.io/reviews) score: 7.5/10

**Pricing**

Custom [pricing](https://www.integrate.io/pricing/) based on data volume and usage.

**Pros**

- Scalable platform supporting complex ETL operations.
- High compatibility with cloud databases and apps.
- Simplified drag-and-drop interface for workflow design.

**Cons**

- Limited visualization options without third-party tools.
- Custom pricing may make costs less predictable.

### Microsoft Power BI

[Microsoft Power BI](https://www.microsoft.com/en-us/power-platform/products/power-bi) is a business intelligence system that allows users to analyze and visualize MySQL data through a suite of powerful visualization tools. It supports data integration from various sources, including MySQL, providing real-time insights and interactive dashboards. With built-in AI capabilities, Power BI can uncover trends and patterns within MySQL data. The platform's integration with Microsoft products makes it popular among companies already within the Microsoft ecosystem.

**Rating**

- [G2](https://www.g2.com/products/microsoft-microsoft-power-bi/reviews) score: 4.5/5
- [Capterra](https://www.capterra.com/p/176586/Power-BI/) score: 4.6/5
- [TrustRadius](https://www.trustradius.com/products/microsoft-power-bi/reviews) score: 8.4/10

**Pricing**

A free version is available. The [Pro plan](https://www.microsoft.com/en-us/power-platform/products/power-bi/pricing) starts at $10/user/month.
**Pros**

- Easy integration with other Microsoft products.
- Robust visualization and analytical capabilities.
- Cost-effective for small to medium-sized teams.

**Cons**

- Some learning curve for non-technical users.
- Limited to cloud or desktop; the mobile version is less robust.

### Zoho Analytics

[Zoho Analytics](https://www.zoho.com/analytics/) is a user-friendly BI tool that offers strong reporting and data visualization features for MySQL users. With various connectors and data integration options, Zoho Analytics allows businesses to create reports and dashboards without heavy technical expertise. It includes AI-powered forecasting and advanced analysis features, which makes it a practical solution for small to mid-sized businesses. Its scalability options and intuitive interface help companies start with basic reporting and grow from there.

**Rating**

- [G2](https://www.g2.com/products/zoho-analytics/reviews) score: 4.3/5
- [Capterra](https://www.capterra.com/p/129749/Zoho-Analytics/) score: 4.4/5
- [TrustRadius](https://www.trustradius.com/products/zoho-analytics/reviews) score: 8.2/10

**Pricing**

[Paid plans](https://www.zoho.com/analytics/pricing.html) start at $24/month. A free version is also available.

**Pros**

- User-friendly interface with AI-powered insights.
- Flexible data connectors and scalability.
- Affordable pricing with a range of features.

**Cons**

- Fewer advanced visualization options compared to Power BI.
- Limited customization for complex reports.

### Domo

[Domo](https://www.domo.com/) is a cloud-native BI platform for high-speed data integration, visualization, and analysis across MySQL and other sources. It provides pre-built connectors, interactive dashboards, and real-time insights. It's popular with enterprises due to its scalability and powerful data handling capabilities, supporting IT and business users with customizable dashboards and mobile access.
**Rating**

- [G2](https://www.g2.com/products/domo/reviews) score: 4.3/5
- [Capterra](https://www.capterra.com/p/119119/Domo/) score: 4.3/5
- [TrustRadius](https://www.trustradius.com/products/domo/reviews) score: 8.4/10

**Pricing**

Custom [pricing](https://www.domo.com/pricing) based on features and users. A free trial is available.

**Pros**

- Real-time, cloud-based data access.
- Collaborative features for team-based reporting.
- Scalable for large enterprises with multiple departments.

**Cons**

- Higher cost, with pricing only available upon request.
- Complex setup and configuration for new users.

### Knowi

[Knowi](https://www.knowi.com/) is a modern BI platform with native MySQL support that provides real-time analytics with a focus on cloud-based data. It offers SQL-based data extraction, flexible visualization options, and machine learning integration. Knowi's ability to handle both SQL and NoSQL databases makes it versatile, and it's a good fit for businesses that require advanced analytics with minimal setup. It is known for its simplicity and speed and is popular for real-time data monitoring and interactive dashboards.

**Rating**

- [G2](https://www.g2.com/products/knowi/reviews) score: 4.8/5
- [Capterra](https://www.capterra.com/p/166145/Knowi/) score: 4.8/5

**Pricing**

Custom [pricing](https://www.knowi.com/plans/), with free trials available.

**Pros**

- Real-time data analysis with MySQL and NoSQL support.
- Machine learning capabilities for predictive analytics.
- Intuitive dashboard design for quick insights.

**Cons**

- Limited customization in dashboard design.
- Pricing can be high for advanced features.

### RIB BI+ (ex datapine)

[RIB BI+ (ex datapine)](http://datapine.com) is a self-service BI platform offering intuitive data visualization and reporting for MySQL databases. It provides an easy drag-and-drop interface and a variety of connectors and visualization types for real-time data reporting and interactive dashboards.
Its powerful analytics and forecasting capabilities make it popular with businesses looking to understand trends and improve decision-making.

**Rating**

- [Capterra](https://www.capterra.com/p/133176/datapine/) score: 4.9/5
- [G2](https://www.g2.com/products/datapine-datapine/reviews) score: 4.8/5
- [TrustRadius](https://www.trustradius.com/products/datapine/reviews) score: 9.7/10

**Pricing**

Pricing starts at $249/user/month. A free trial is available.

**Pros**

- User-friendly, with drag-and-drop report building.
- Real-time data access for MySQL databases.
- Supports advanced analytics with forecasting.

**Cons**

- Higher starting price.
- Limited customization options for advanced users.

### Sisense

[Sisense](https://www.sisense.com/) is a BI platform that helps users connect, analyze, and visualize MySQL data in real time with minimal latency, thanks to its in-chip technology and data processing capabilities. It supports a wide range of connectors, allowing for seamless integration with MySQL and other databases. The solution is highly scalable and suitable for companies that anticipate growing data needs. Its security features fit industries with stringent compliance standards, and its embedded analytics let companies share insights across departments and customer-facing applications.

**Rating**

- [G2](https://www.g2.com/products/sisense/reviews) score: 4.2/5
- [Capterra](https://www.capterra.com/p/86955/Sisense/) score: 4.5/5
- [TrustRadius](https://www.trustradius.com/products/sisense/reviews) score: 8.4/10

**Pricing**

Pricing is custom, based on features and user numbers, with a free trial.

**Pros**

- Optimized for large datasets with quick data processing.
- Highly customizable dashboards and reports.
- Wide range of data connectors, including MySQL and cloud services.
- Embedded analytics for seamless data sharing within applications.

**Cons**

- Higher cost compared to other BI tools, with pricing available upon request.
- Initial setup and configuration can be complex for new users.
- Advanced features may require a learning curve.

### SAP BusinessObjects

[SAP BusinessObjects](https://www.sap.com/products/technology-platform/bi-platform.html) is an enterprise-grade BI tool with reporting, data visualization, and data management capabilities for MySQL and other databases. It offers a highly customizable platform with advanced features suited to large organizations. The tool supports real-time analytics, data governance, and powerful ad-hoc reporting. With deep security controls and extensive reporting features, SAP BusinessObjects is popular in sectors with strict compliance needs.

**Rating**

- [G2](https://www.g2.com/products/sap-businessobjects-business-intelligence-bi/reviews) score: 3.8/5
- [Capterra](https://www.capterra.com/p/92075/SAP-BusinessObjects/) score: 4.3/5
- [TrustRadius](https://www.trustradius.com/products/sap-businessobjects-business-intelligence/reviews) score: 8/10

**Pricing**

Custom pricing for enterprise clients.

**Pros**

- Highly customizable and well suited to complex reporting needs.
- Strong security and compliance features.
- Scalable for large enterprises with complex data.

**Cons**

- High cost, targeted at enterprise users.
- Steep learning curve and setup complexity.

## Improve Your MySQL Reporting With Data Integration Tools

Reporting tools are only as powerful as the data they process. However, feeding information directly from scattered sources into reports can overwhelm your dashboards and lead to inaccuracies. That's where integration comes in. By combining reporting tools with data integration solutions like Skyvia, you can automatically consolidate data from various sources into a single, organized destination such as a data warehouse (DWH). This streamlined flow ensures clean, up-to-date information for your reports while enabling alerts and error monitoring to keep everything running smoothly.
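To make the consolidation idea concrete, here is a minimal pure-Python sketch of merging two scattered sources into one warehouse-style table before reporting. The sources, field names, and values below are invented for illustration; in practice, an integration platform such as Skyvia would do this against live connectors rather than in-memory lists.

```python
# Minimal sketch: consolidate records from two hypothetical sources
# (a CRM export and a billing export) into one normalized table.
# All data and field names here are made up for illustration.

crm_rows = [
    {"Email": "ana@example.com", "Plan": "Pro"},
    {"Email": "bo@example.com", "Plan": "Free"},
]
billing_rows = [
    {"email": "ANA@example.com", "mrr": 49.0},
]

def normalize_email(value):
    # Consistent join keys are what keep combined reports accurate.
    return value.strip().lower()

def consolidate(crm, billing):
    # Index billing amounts by normalized email for a fast lookup join.
    mrr_by_email = {normalize_email(r["email"]): r["mrr"] for r in billing}
    warehouse = []
    for r in crm:
        email = normalize_email(r["Email"])
        warehouse.append({
            "email": email,
            "plan": r["Plan"],
            "mrr": mrr_by_email.get(email, 0.0),  # 0.0 when no billing record
        })
    return warehouse

print(consolidate(crm_rows, billing_rows))
```

The key detail is normalizing join keys before merging; mismatched casing or whitespace across sources is a common cause of the reporting inaccuracies described above.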
### Skyvia

[Skyvia](https://skyvia.com/data-integration) is a versatile, cloud-based data integration platform that connects seamlessly with MySQL and supports all major integration scenarios: ETL, ELT, reverse ETL, data replication, sync, and more. Besides integration, it provides other data-related products, such as backup, query, and automation. With its user-friendly, no-code interface, Skyvia connects to [200+](https://skyvia.com/connectors) sources, allowing you to consolidate data into a single location seamlessly. It also offers features to transform, map, and refine data as needed, making it easy to extract what you require from a unified source, and it supports [OData endpoints](https://skyvia.com/connect/sql-endpoint) and an API. According to reviews on rating platforms such as G2 Crowd, TrustRadius, and Capterra, customers recommend the platform as very easy to use for handling simple and complex data transformations and scheduling regular data syncs.

**Rating**

- [G2](https://www.g2.com/products/skyvia/reviews) score: 4.8/5
- [Capterra](https://www.capterra.com/p/146167/Skyvia/) score: 4.8/5
- [TrustRadius](https://www.trustradius.com/products/skyvia/reviews#overview) score: 9.8/10

**Pricing**

Pay-as-you-go [pricing](https://skyvia.com/pricing) starts at $79/month. A free plan is available.

**Pros**

- Easy-to-use interface with no coding required.
- Flexible scheduling for automated data integration.
- High compatibility with multiple data sources and systems.

**Cons**

- Some advanced features require a paid plan.
- Real-time data integration may be limited by source API usage.

## Comparative Analysis for MySQL Reporting Tools

The table below provides a comprehensive view of each platform's key features, ideal use cases, strengths, and weaknesses to help identify the best fit based on organizational requirements for MySQL reporting and security standards.
| Tool | Key Features | Best For | Key Pros | Key Cons | Security and Compliance |
| --- | --- | --- | --- | --- | --- |
| Tableau | Advanced visuals, drag-and-drop reports, data blending. | Advanced visualizations and data exploration. | Excellent visualization options, easy drag-and-drop. | Higher cost per user, limited data manipulation. | Role-based permissions, SSL/TLS encryption. |
| Datadog | Real-time monitoring, automated alerts, performance metrics. | Real-time database performance tracking. | Real-time insights, customizable dashboards. | Limited BI reporting features, high cost for multiple hosts. | Encrypted data transmission, secure user controls. |
| Integrate.io | Drag-and-drop workflows, multi-source support, scheduling. | Scaling data pipelines for large businesses. | Scalable platform, high compatibility with cloud services. | Limited visualizations without add-ons. | SOC 2, HIPAA compliance for healthcare data. |
| Microsoft Power BI | Real-time data access, AI-powered analytics, mobile app. | Interactive dashboards and Microsoft integrations. | Affordable, strong analytical capabilities. | Limited mobile app capabilities, learning curve. | Compliant with GDPR, encryption standards. |
| Zoho Analytics | AI-powered insights, customizable dashboards, data connectors. | Small to mid-sized business analytics. | User-friendly, scalable with AI-driven analysis. | Limited report customization for complex needs. | Role-based access, SSL data encryption. |
| Domo | Real-time data access, custom dashboards, mobile access. | Enterprise collaboration and scalable insights. | Scalable, team-friendly collaboration features. | Higher cost for large teams, complex setup. | Data encryption, user permissions, SOC 2 compliance. |
| Knowi | Machine learning, SQL/NoSQL compatibility, real-time reports. | Real-time analytics and quick deployment. | Machine learning capabilities, quick setup. | Limited dashboard customization. | Data encryption, secure API access. |
| Datapine | Advanced analytics, trend forecasting, interactive dashboards. | Quick data insights and trend forecasting. | User-friendly interface, forecasting tools. | High initial cost, limited customization. | Encrypted connections, multi-factor authentication. |
| Sisense | In-chip data processing, embedded analytics, custom dashboards. | Large datasets and high scalability needs. | Quick data processing, flexible dashboards. | High cost, complex initial setup. | SOC 2, ISO 27001 compliance, secure user roles. |
| SAP BusinessObjects | Custom reporting, ad-hoc analysis, data governance. | Large enterprises with complex data needs. | Strong compliance features, customizable reports. | High learning curve, high cost. | SOC 2, ISO 27001, GDPR compliant. |

## Kickstarting MySQL Reporting

Getting started with MySQL reporting can feel like a big step, but the right solutions make it a smooth and rewarding process. Whether you're managing data for the first time or looking to improve existing reports, these platforms easily turn raw data into actionable insights. The steps below show how to do it:

1. **Choose the right tool for your needs.** Each system has its strengths. Some, like Tableau and Power BI, excel at visualization, while others, such as Datadog and Sisense, are optimized for handling large data volumes or real-time monitoring. Tools like [Skyvia](https://skyvia.com/) are well suited to flexible integration, connecting data across multiple sources.
2. **Define your reporting goals.** Think about what you want to achieve. Are you tracking sales performance or monitoring real-time database performance? Defining your goals upfront allows you to tailor your reports to deliver the necessary insights.
3. **Design your dashboards and reports.** Organize reports according to your goals. Use drag-and-drop report builders, pre-built templates, or AI-driven insights to start building visualizations.
4. **Automate and schedule your reports.** Many MySQL reporting tools support automated report scheduling, making it easy to get updates regularly.
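The last two steps above can be sketched in a few lines of code: a script that runs a summary query and writes a CSV report, which a scheduler such as cron (or the reporting tool's own scheduler) can then run on a timetable. This is an illustrative sketch, not any tool's actual API; it uses Python's built-in sqlite3 as a self-contained stand-in for a MySQL connection (with real MySQL you would swap in a driver such as mysql-connector-python), and the table and rows are made up.

```python
import csv
import io
import sqlite3

# Stand-in database: sqlite3 instead of a live MySQL server so the
# sketch is runnable as-is. Table and sample rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("US", 200.0)],
)

def build_report(connection):
    # The reporting goal you defined up front becomes the query you automate.
    rows = connection.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["region", "total_sales"])
    writer.writerows(rows)
    return buf.getvalue()

report = build_report(conn)
print(report)
```

Pointed at a cron entry or a scheduled job, a script like this becomes the automated, regularly refreshed report described in step 4.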
## Conclusion

Choosing the right MySQL reporting tool in 2025 depends on what a company needs for its data analysis journey. Each tool offers unique strengths. Tableau and Power BI are the go-to options for powerful, visually rich dashboards. If real-time performance tracking is crucial, Datadog and Sisense provide high-speed, data-driven insights for immediate action. Zoho Analytics and Knowi offer user-friendly experiences with solid visualization and analytics for businesses looking for affordability and ease of use.

Consider your primary needs to find a platform that aligns with your goals and technical comfort:

- Scalability.
- Ease of use.
- Advanced reporting and security.

## FAQ for MySQL Reporting Tools

**Which MySQL reporting tool is best for beginners?**

Zoho Analytics and Microsoft Power BI are beginner-friendly, with intuitive interfaces and straightforward drag-and-drop features. They provide excellent visualization tools without requiring deep technical skills, making them ideal for those just starting with data reporting.

**What's the best tool for real-time MySQL reporting?**

Datadog and Sisense excel at real-time reporting. Datadog provides live monitoring, which is especially useful for tracking database performance, while Sisense offers high-speed data processing, enabling quick, real-time insights for decision-making.

**How do I choose the right MySQL reporting tool for my business?**

Consider factors like compatibility with MySQL, performance needs, scalability, and security requirements. For example, if you need automated data syncing, Skyvia offers flexible scheduling and integration features that may suit you well.

**Are these MySQL reporting tools secure?**

Many tools, such as SAP BusinessObjects, Sisense, and Skyvia, include robust security features like data encryption, role-based access, and compliance with standards like GDPR and SOC 2. Always review each tool's security capabilities to ensure it aligns with your business's data protection needs.

**Can I use multiple MySQL reporting tools together?**

Yes. Many organizations use a combination of tools. For example, Skyvia can handle data integration and syncs between systems, while Tableau or Power BI handles the visualization aspect, giving you the best of both worlds for your reporting and analysis needs.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
---

# The Best ETL Tools for Redshift in 2025

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), January 19, 2024

Data warehouses are the absolute protagonists for storing and managing data in the modern tech world. They can be seen as a replacement for traditional RDBMS and a platform for analytics and machine learning. Amazon Redshift is among the most popular and widely used data warehouses today. To see a DWH in action and reveal its potential, you first need to populate it with data. Doing that manually is tedious and error-prone, so consider using [ETL tools](https://skyvia.com/blog/etl-tools/) to bring data into a DWH promptly. This article discusses the best ETL tools for Redshift and how they assist in data integration and management.
With the Redshift data warehouse, companies of any size and industry can streamline business operations and make data-driven decisions.

Table of contents

- Key Advantages of Amazon Redshift
- What is ETL? How It Transforms Data Management in AWS Redshift
- Top ETL Tools for Redshift in 2025 (Skyvia, AWS Glue, Apache Spark, Talend, Hevo, Integrate.io, Stitch, Fivetran)
- Optimize Your Redshift ETL: Achieving Speed, Efficiency, and Cost-Effectiveness
- Conclusion

## Key Advantages of Amazon Redshift

Redshift is a cloud data warehouse that is part of the huge Amazon Web Services infrastructure. It's designed with large-scale data sets in mind, to store and analyze them effectively. Amazon Redshift is based on [PostgreSQL](https://skyvia.com/blog/top-etl-tools-for-postgresql/), so it uses a traditional relational database schema for data storage. This guarantees efficient performance and fast querying, which is crucial for teams that aim to find answers quickly.

Key features:

- **Cloud-based.** Being hosted in the cloud, Amazon Redshift doesn't require any on-premises installations.
- **Scalability.** The DWH can handle any workload, even an unpredicted data spike or a large data set.
- **Elasticity.** The capacity of the data warehouse is automatically provisioned depending on current needs.
- **Integration.** Redshift easily connects to other AWS services and can integrate with other apps via Redshift ETL tools.

## What is ETL? How It Transforms Data Management in AWS Redshift

ETL stands for Extract, Transform, Load: an approach originally used for converting transactional data into formats supported by relational DBs. Now it's mainly applied for better data management in digitalized settings by moving data between apps or consolidating it within a DB or DWH. Redshift uses ETL scripts to populate a data warehouse, but those are complex.
Therefore, it makes sense to use AWS Redshift ETL tools, which usually offer a user-friendly graphical interface. ETL tools are directly associated with the ETL concept and bring each step to life:

- **Extract**: ingesting data from a certain app or database.
- **Transform**: applying filtering, cleansing, validation, masking, and other transformation operations to data.
- **Load**: copying the extracted and transformed data into a destination app or data warehouse.

All the above-mentioned steps make up a so-called [ETL pipeline](https://skyvia.com/learn/etl-pipeline-meaning). It also includes source and destination, data transformations, mapping, filtering, and validation functionalities. Even though ETL was the pioneering data integration concept, its successors, such as [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) and ELT, are gaining momentum now. Both notions are also applicable to Redshift:

- Reverse ETL loads data back from a DWH into apps for data activation and robust operability.
- ELT is a faster alternative to ETL, as it loads data into a DWH before transforming it. This is crucial for accommodating large and fast-flowing data sets in a DWH.

## Top ETL Tools for Redshift in 2025

As ETL has two modern variations, some ETL tools have built-in ELT and reverse ETL algorithms. Below is a list of Redshift ETL tools with their features and pricing models.

### Skyvia

[Skyvia](https://skyvia.com/) is a no-code universal data cloud platform for a wide set of operations on data. It has ETL, ELT, and Reverse ETL functionality for building simple and complex data pipelines, with 200+ connectors available. This service also offers backup, workflow automation, and data querying options.

**Key features**

Skyvia is a strong ETL tool for Redshift because it offers the spectrum of functions and attributes that make data warehousing simple but powerful.
- The [Replication](https://skyvia.com/data-integration/replication) component embodies the ELT concept: it copies data from the preferred app or database into a DWH and can even automatically create tables for that data. This scenario is ideal for working with AWS Redshift and other DWHs because it allows scheduling regular incremental data updates.
- The [Import](https://skyvia.com/data-integration/import) component connects AWS Redshift to other sources and allows loading data into a DWH (ETL) and vice versa (Reverse ETL).
- **Data Flow and Control Flow tools.** Data Flow enables the creation of complex data integration scenarios with compound data transformations. Control Flow orchestrates tasks by creating rules for the data integration task order, considering specific conditions.

Skyvia also has other data integration components as well as tools for workflow automation and creating web API endpoints with no coding. The biggest advantage of this service is the possibility of building all the data integration pipelines with no coding, just with the drag-and-drop visual interface.

**Pricing**

Skyvia offers a free plan with all principal features and connections available, though there are certain limitations on integration frequency and the amount of data operated on. The Basic plan extends these limits slightly, while the Advanced and Enterprise plans practically wipe them away. The latter even offer complex integration scenarios where you can include more than three sources and specify task execution conditions.

### AWS Glue

[AWS Glue](https://aws.amazon.com/glue/) is a modern serverless data integration service developed by Amazon. It connects natively to 70+ sources on one side and to AWS Redshift on the other, carrying out data preparation and transfer operations in between. This tool simplifies data discovery and cleansing for developers by supporting coding solutions, as well as for analysts through a visual interface.
**Key features**

- Supports ETL, ELT, batch, and streaming workloads.
- Connects to 70+ sources.
- Automatically recognizes the data schema.
- Uses ML algorithms to point out data duplicates.
- Schedules jobs.
- Scales on demand.

[See the comparison between Skyvia and AWS Glue](https://skyvia.com/etl-tools-comparison/aws-glue-alternative-skyvia)

**Pricing**

The cost of the AWS Glue service is based on a progressive model, where pricing depends on the duration of the job run. Try the [pricing calculator](https://calculator.aws/#/addService) to estimate the approximate cost or [get a quote from Sales](https://aws.amazon.com/contact-us/sales-support/).

### Apache Spark

[Apache Spark](https://spark.apache.org/) is an open-source distributed system primarily dedicated to big data processing for analytical purposes. It uses the MapReduce programming paradigm to handle large-scale data with massively parallel operations. If you need to process big data stored in Amazon Redshift, you can use AWS Glue or other third-party ETL tools for that.

**Key features**

- Supports real-time data streaming and batch data.
- Executes ANSI SQL queries and builds ad-hoc reports.
- Trains machine learning models with large datasets.
- Works with [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/).

**Pricing**

Being an open-source solution, Apache Spark doesn't have an exact pricing model. The total cost depends on the installation and configuration work performed by specialists.

### Talend

Talend is another solution for data integration as well as data management and quality. Its Data Integration product connects to multiple sources, including AWS Redshift, creates ETL/ELT pipelines, and makes sure that data arrives at the final destination in good condition. Talend works with both batch and streaming data, which makes it possible to obtain real-time insights.

**Key features**

- Ingests, transforms, and maps data.
- Designs and deploys pipelines with the possibility to reuse them.
- Prepares data either automatically or with self-service tools.
- Ensures data monitoring and advanced reporting.

[See how Talend is different from Skyvia](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)

**Pricing**

### Hevo

Another AWS Redshift ETL tool is [Hevo](https://hevodata.com/), an automated platform that extracts data from ready-to-use popular sources and pushes it into a data warehouse. All this can be done without a predefined schema, since Hevo can apply automatic mapping settings on export. This tool also allows users to create data pipelines with no coding and transfer data in real time.

**Key features**

- Provides complete visibility of how data flows across the data pipelines.
- Cleans and transforms data before it's loaded into a DWH.
- Ensures data security with end-to-end encryption and two-factor authentication.
- Complies with SOC 2, GDPR, and HIPAA standards.
- Sends notifications about any delays and errors.
- Pre-aggregates and denormalizes data for faster analytics.

[See how Hevo is different from Skyvia](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia)

**Pricing**

### Integrate.io

[Integrate.io](https://www.integrate.io/) is another solution that makes it possible to consolidate data from multiple SaaS apps and databases in a DWH. This service practically provides a 360-degree view of your data with constant observability and monitoring. It can also prepare data by loading it into a DWH, so teams can focus on deriving insights while Integrate.io works on data quality for analysis.

**Key features**

- Data transformation with a low-code approach, suitable even for non-engineers.
- The Integrate.io REST API allows users to connect to any other sources that expose a REST API.
- Data security is provided by SSL/TLS encryption and firewall-based access control.
- Compliance with SOC 2, GDPR, CCPA, and HIPAA.
[See how Integrate.io differs from Skyvia](https://skyvia.com/etl-tools-comparison/integrateio-alternative-skyvia)

Pricing

The monthly cost of the plan for building ELT pipelines starts at $159.

Stitch

Another AWS Redshift data warehousing tool is [Stitch](https://www.stitchdata.com/), a cloud-based tool for data integration and management. This solution helps move data from preferred sources to a DWH without specific IT knowledge or coding expertise.

Key features

- API key management for adding connections programmatically.
- Logs and notifications on any errors.
- Advanced scheduling options.
- Automatic scaling that adjusts to the actual data load.

[See how Stitch is different from Skyvia](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia)

Pricing

Fivetran

[Fivetran](https://www.fivetran.com/) is an automated platform that uses [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) approaches to move data between sources. This tool helps businesses gather data from diverse datasets and centralize it in a DWH for further analytics or BI purposes.

Key features

- Automatic data cleansing and duplicate removal.
- Preliminary data transformations before the load into a DWH.
- Data synchronization on schedule.
- Masking for protecting sensitive data.

[See how Fivetran is different from Skyvia](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia)

Pricing

Optimize Your Redshift ETL: Achieving Speed, Efficiency, and Cost-Effectiveness

Even though refined ETL pipelines bring already prepared data into AWS Redshift, they also need to account for this DWH's architecture for better results. Here are several pieces of advice for optimizing your Redshift ETL in order to speed up decision-making and make the process more efficient and cost-effective.

Load Data in Bulk

Amazon Redshift is a DWH designed to operate on petabytes of data with the massively parallel processing (MPP) approach.
This refers not only to the compute nodes but also to Redshift Managed Storage (RMS). Loading data in bulk ensures that the DWH resources are used efficiently, with no nodes standing idle. So, consider this when ingesting data from other sources directly within AWS Redshift or when using external ETL tools.

Perform Regular Table Maintenance

With incremental updates, fresh data is added to the tables, while replaced data is moved to the unsorted region of the table and only marked for deletion. However, the space previously occupied by that data isn't reclaimed, which can significantly degrade user query performance as the unsorted region grows. To overcome this, run the VACUUM command against the table.

Use Workload Manager

Amazon has a built-in solution for monitoring the workload queues inside the DWH: Workload Manager (WLM). It shows the number of queues, each dedicated to a specific workload. The overall number of queues shouldn't be greater than 15 to ensure the effective execution of all processes. In particular, WLM helps improve ETL runtimes.

Choose the Right ETL Tool

The key to quick, high-performing Redshift ETL is picking the [right ETL tool](https://skyvia.com/blog/top-no-code-etl-tools/). Data load frequency, transformations, and the number of sources are the most critical criteria for making that choice. [Skyvia](https://skyvia.com/) could be a perfect choice, as it designs simple and complex ETL pipelines with no code, applies advanced transformations to data, performs data filtering, and ensures constant monitoring of pipeline execution!

Conclusion

Amazon Redshift is the preferred data warehouse choice for many companies. Its cost-effectiveness, scalability, speed, and built-in tools ensure a favorable environment for data storage and management for any company. Even though Redshift has its own ETL mechanisms, they might not always be the best option for populating a DWH.
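The bulk-load and table-maintenance tips above boil down to a couple of SQL statements. Here is a minimal Python sketch that assembles them as strings; the table name, S3 prefix, and IAM role are hypothetical placeholders, and in a real pipeline you would run the statements through a Redshift client (for example, psycopg2) rather than printing them.

```python
def build_bulk_copy(table: str, s3_prefix: str, iam_role: str) -> str:
    """Build a Redshift COPY statement that loads a whole batch of
    S3 files in one parallel pass instead of row-by-row INSERTs."""
    return (
        f"COPY {table} "
        f"FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP;"
    )

def build_vacuum(table: str) -> str:
    """Build a VACUUM statement that reclaims space held by deleted
    rows and re-sorts the table's unsorted region."""
    return f"VACUUM FULL {table};"

# Hypothetical table, staging prefix, and IAM role for illustration.
copy_sql = build_bulk_copy(
    "analytics.orders",
    "s3://my-etl-staging/orders/",
    "arn:aws:iam::123456789012:role/RedshiftCopy",
)
vacuum_sql = build_vacuum("analytics.orders")
print(copy_sql)
print(vacuum_sql)
```

Scheduling the VACUUM step after each bulk load keeps the unsorted region small, so user queries don't degrade as the table grows.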
Here come professional ETL tools that connect to apps and databases, extract data, transform it, and copy it into a data warehouse. Skyvia is a universal Redshift ETL tool with everything necessary to populate your DWH.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
Table of Contents

What is a Reverse ETL Tool?
Main Types of Reverse ETL Tools
10 Best Reverse ETL Tools
Factors to Think About When Selecting a Reverse ETL Tool
Why Skyvia Should Be Your Organization's Go-To for Reverse ETL Solutions
Reverse ETL: To Wrap Up

What is a Reverse ETL Tool?

A Reverse ETL tool transfers transformed data from a data warehouse, data lake, or other storage to CRM systems, marketing automation platforms, analytics dashboards, etc., allowing real-time data activation. It means taking the fresh data that's constantly being prepared in the data warehouse and immediately using it to update the various business applications and tools the company uses.

The main difference between [ETL (Extract, Transform, Load) and Reverse ETL](https://skyvia.com/learn/) tools and processes is the direction of data flow and the focus of their operations. ETL extracts data from multiple sources, transforms it to fit the needed data structure, and loads it into the warehouse for analysis. Reverse ETL reverses this flow: it takes cleaned and processed data from the data warehouse and ingests it into the company's business applications.

Main Types of Reverse ETL Tools

The choice of a reverse ETL tool mostly depends on specific business needs, the existing tech stack, data strategy, and scalability requirements. Such tools can be grouped into five categories based on infrastructure and the supporting organization or vendor. Each type of tool comes with its own set of features and trade-offs, such as:

- Costs.
- Ease of use.
- Integration abilities.
- Maintenance requirements.

Here's an overview of the main types of Reverse ETL tools:

Cloud-Native Reverse ETL Tools operate in the cloud, handle large data volumes, and scale depending on business needs. Cloud-based tools often provide APIs for integrating various SaaS products and work seamlessly with existing cloud data warehouses.
Open-Source Reverse ETL Tools provide a flexible and cost-effective option for companies willing to set up and maintain them. These tools might require more configuration and manual management but allow extensive customization to fit specific business needs. Enterprise Reverse ETL Tools offer advanced features like data governance, security, and full-scale data integration abilities. They suit large companies with complex data environments and strict compliance requirements. Enterprise tools typically provide extensive support and integration capabilities with various data sources and business applications. Specialized Reverse ETL Tools are focused on specific use cases or industries, providing tailored functionalities that address particular business processes or sector-specific needs. For instance, tools might specialize in marketing automation, customer relationship management, or financial services, ensuring that the reverse ETL process aligns closely with industry standards and practices. Real-Time Reverse ETL Tools focus on real-time data transfer from the data warehouse to operational systems. They are crucial for applications needing instant data updates, like e-commerce or customer service. Real-time reverse ETL tools ensure that the operational systems reflect the most current data. 10 Best Reverse ETL Tools Now, let\u2019s review the top ten Reverse ETL tools, describing their abilities, pros, and cons, and determining the price of each solution. Skyvia [Skyvia](https://skyvia.com/) is a cloud data platform for data integration, backup, and management. It offers [180+](https://skyvia.com/connectors/) connectors and a robust reverse ETL capability. What\u2019s it like? Think of a Swiss Army knife for your data needs. It\u2019s super user-friendly, connecting data sources and destinations with just a few clicks. 
According to the latest [G2 rating](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), the platform is second among the top 10 easiest-to-use ETL tools. Plus, it backs up your data, like having a safety net for your digital assets.

Pros

- User-friendly interface.
- No coding is required for data integration.
- The solution offers data backup features.
- The convenient and flexible schedule allows integration processes to be set according to the user's needs.

Cons

- Having more video tutorials for customer support would be a great idea.
- The platform provides a wide range of connectors, but having some new ones is always good.

Pricing

Skyvia offers pay-as-you-go [prices](https://skyvia.com/pricing/) that start at $15 per month and scale with features and data volume. The paid functionality provides users with the following:

- Maximum execution frequency.
- Scheduled integrations.
- Source and expression lookup mapping and synchronization.
- Data import and splitting, etc.

You may also choose a free plan with basic features.

Census

[Census](https://www.getcensus.com/pricing) syncs data from data warehouses to business apps, making it actionable for operational teams. What's it like? Census is like a bridge between your data warehouse and your marketing or sales tools. Imagine having a superpower that automatically updates your customer tools with the latest insights so your teams always have the freshest data at their fingertips.

Pros

- Robust synchronization supports many SaaS applications and ensures data consistency.

Cons

- It can be expensive for smaller businesses and has a complex initial setup.

Pricing

The solution starts with a free trial; [paid plans are custom-priced](https://www.getcensus.com/pricing) based on usage and features needed.

Hightouch

[Hightouch](https://hightouch.com/) is a data integration platform that streams changes from the data warehouse into business apps. What's it like? Imagine a delivery service for your data.
It takes the latest and greatest info from your warehouse and delivers it straight to your operational apps in real time. It's all about ensuring your data doesn't just sit around; it gets where it needs to go, fast.

Pros

- The platform provides real-time data syncing.
- Broad support for different data warehouses and SaaS apps.

Cons

- Pricing can escalate quickly with increased data volumes and sync frequencies.

Pricing

The solution offers a free version; monthly premium [plans](https://hightouch.com/pricing) start at $150.

Grouparoo

[Grouparoo](https://www.grouparoo.com/) is an open-source reverse ETL tool that simplifies the process of syncing your data warehouse with business tools and services. What's it like? It's open-source, so you can tinker with it, tweak it, and tailor it to exactly what your business needs. It's perfect for those who love to have their hands on the controls and dive deep into customization.

Pros

- Highly customizable.
- Community-driven.
- The system supports real-time data processing.

Cons

- It requires more technical skills to set up and manage.

Pricing

It's free to use as it is open-source, with [paid](https://www.grouparoo.com/pricing) support and enterprise features available on request.

Hevo Activate

[Hevo Activate](https://hevodata.com/) unifies disparate data sources into actionable insights and operational workflows. What's it like? Hevo Activate is like having a smart assistant who takes care of all the tedious work of moving data. It automates the process, ensuring that all your systems are talking to each other without you lifting a finger.

Pros

- Automated data pipelines.
- Pre-built integrations.
- The solution supports an extensive set of sources and destinations.

Cons

- It might be complex for non-technical users to set the system up.

Pricing

[Plans](https://hevodata.com/pricing/pipeline/) start at $249 per month after a free trial.
Stitch

[Stitch](https://stitchdata.com/) is a straightforward ETL service that consolidates all your data sources into a single data warehouse. What's it like? Stitch is no-nonsense. If you want to get your data from point A to point B without any fuss, Stitch is your go-to. It's like the reliable bus service for your data, getting it where it needs to go on schedule.

Pros

- Easy to use.
- It supports a wide array of data sources.
- It's a part of the Talend data fabric.

Cons

- Limited in-built data transformation capabilities.

Pricing

Free tier available; standard [pricing](https://stitchdata.com/pricing/) plans start from $100 per month based on data volume.

Airbyte

[Airbyte](https://airbyte.com/) is an open-source data integration engine that can handle both batch and real-time data movements. What's it like? Airbyte is for adventurers. It's built to handle any data challenge you can throw at it, from the simplest tasks to the most complex workflows. If you like exploring and creating custom solutions, Airbyte is your map and compass.

Pros

- The platform is highly flexible and customizable.
- It's community-supported and continuously evolving.

Cons

- It may require more technical overhead to manage and scale.

Pricing

Free if self-hosted; managed services [pricing](https://airbyte.com/pricing) varies based on the scale and support.

Fivetran

[Fivetran](https://www.fivetran.com/) provides automated data integration from sources like databases, applications, and event logs into your data warehouse. What's it like? It's a high-speed train for your data, designed to be set up quickly and run in the background, automating data transfers with high efficiency. The solution is great for those who want to set it and forget it.

Pros

- Strong automation and maintenance.
- High reliability.
- It supports extensive data connectors.

Cons

- The solution can be costly for small to medium-sized businesses.
Pricing

The [pricing](https://www.fivetran.com/pricing) model is based on data volume and connectors used. Please get in touch with sales for detailed pricing. A free plan is included.

Astera

[Astera](https://www.astera.com/) is about simplifying complex data integration tasks with a visual design interface and a focus on data quality. What's it like? Astera makes dealing with complex data feel like a breeze. Its visual interface lets you drag and drop elements to create workflows, which is about as close as you can get to watching your data move with your own eyes.

Pros

- Strong support for data transformation and integration without coding.
- It's suitable for handling complex data scenarios.

Cons

- It isn't suitable for complex reverse ETL tasks.

Pricing

There is no public pricing. It's only available on request.

Matillion

[Matillion](https://www.matillion.com/) is explicitly built for cloud data warehouses and offers data transformation and integration abilities. What's it like? It's the powerhouse of data tools. The solution is about handling big, complex jobs with ease. It's like having a robust and smart giant on the team for a large enterprise with heavy-duty data needs.

Pros

- High performance.
- Scalability.
- It's designed for enterprise use with a strong emphasis on security.

Cons

- The platform is primarily suited for large companies.
- High cost for startups and small teams.

Pricing

The [pricing](https://www.matillion.com/pricing) operates on a usage-based credit system, so it's better to contact sales for pricing. A free trial is also available.

Factors to Think About When Selecting a Reverse ETL Tool

Choosing a specific reverse ETL tool depends on business needs, budget, technical requirements, features, etc. It's not only about looks and feature lists. It's about whether the tool fixes the problems, how fast it fixes them, and how much it would cost the company.
Here's how to make a wise choice:

- Tech Stack Compatibility. Make sure the tool plays nicely with what you already have. It's like getting a phone charger that fits the phone model. The reverse ETL tool has to connect easily with the existing data warehouse, databases, and the company's SaaS apps.
- Ease of Use. No one wants to spend weeks learning how to use the tool. Look for something as easy as your favorite app. A good user interface and straightforward setup can save a lot of headaches and eye rolls.
- Feature Set. Think about what people need the tool to do. It's like picking a phone with a great camera if you love photography. Does the tool support real-time data syncing? Can it handle the necessary data transformations? Make the must-have list.
- Scalability. Choose a tool that can grow with the business. Just like you might opt for more storage on a new phone if you take a lot of photos or download tons of apps, the reverse ETL tool should be able to handle increasing amounts of data as the business grows.
- Pricing. Budget is always key. Nobody would buy the most expensive smartphone without checking if it fits the budget. Look at the pricing models of different tools. Some charge by data volume, others by the number of connectors or updates. Pick the one that gives you the most bang for your buck.
- Security and Compliance. Just like people want a secure phone that protects their data, the reverse ETL tool must be secure and compliant with regulations like GDPR or HIPAA, if applicable. Check what security levels the tool offers.
- Support and Community. Good support can be a lifesaver, like having a warranty for the phone. Check out what kind of support is offered. Is it 24/7? Is there good documentation and a vibrant community around the tool? These can be incredibly helpful when businesses run into issues or want to learn new tricks.
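Checklist aside, it can help to picture the flow a reverse ETL tool actually performs: read prepared rows from the warehouse, map them into the shape a business app expects, and push them out. The minimal Python sketch below simulates that loop. All names here (the fields and the in-memory CRM stand-in) are hypothetical; a real tool would query the warehouse and call the target app's API instead.

```python
# Minimal reverse ETL sketch: warehouse rows -> CRM-shaped payloads.
# The warehouse query result is simulated with a list of dicts, and
# the CRM "API" is a list we append to; both are stand-ins.

warehouse_rows = [  # pretend result of a warehouse query
    {"customer_id": 1, "ltv_usd": 1250.0, "segment": "enterprise"},
    {"customer_id": 2, "ltv_usd": 90.5, "segment": "self-serve"},
]

def to_crm_payload(row: dict) -> dict:
    """Map a warehouse row to the shape the CRM expects."""
    return {
        "external_id": str(row["customer_id"]),
        "properties": {
            "lifetime_value": row["ltv_usd"],
            "tier": row["segment"],
        },
    }

crm_records: list = []  # stand-in for POSTing to a CRM API

def sync(rows: list) -> int:
    """Push each row to the CRM; return how many records were synced."""
    for row in rows:
        crm_records.append(to_crm_payload(row))
    return len(crm_records)

print(sync(warehouse_rows))  # prints 2
```

The mapping step is where most real-world complexity lives: renaming fields, casting types, and deciding which warehouse columns the operational app should see.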
Why Skyvia Should Be Your Organization's Go-To for Reverse ETL Solutions

Skyvia could be your go-to because it's not just about doing one thing well. It's about doing many things well and making them as easy as possible. It's the tool that works for you, ensuring your data tasks are as painless as possible. This allows you to focus more on the insights your data provides rather than the management of the data itself.

Flexibility and Versatility

Skyvia isn't just about moving data. You can also use it for backups, creating reports, or managing data across different platforms. It's all about making data do what you want, how you want.

Intuitive UI

It's like your favorite shortcuts: intuitive, with no complex manuals needed, and you can start using it almost right out of the box, which makes it perfect for teams without heavy technical resources.

Cloud-Based Convenience

Being cloud-based, Skyvia offers the convenience of accessing your data operations from anywhere, just like you'd expect to access your cloud photo album or documents on the go. Plus, you don't have to worry about installing software or managing infrastructure; it's all handled in the cloud.

Connectivity

Skyvia supports a wide range of data sources and SaaS applications, from [Salesforce](https://skyvia.com/connectors/salesforce) to [Dropbox](https://skyvia.com/connectors/dropbox), making it easy to integrate the various aspects of your data ecosystem. It's like a universal remote control for all your apps and data sources.

Affordable Pricing

Skyvia offers a very attractive pricing model. There's even a free tier, which is great for small businesses or startups just dipping their toes into the world of data integration. The paid plans are competitively priced for larger enterprises and scale with usage.

Strong Security Measures

With Skyvia, your data's safety is a top priority.
It has robust security protocols, so you don't have to lose sleep over data breaches or compliance issues. It's like having a top-notch security system for your digital house.

Responsive Support

Quick and helpful support can make a huge difference, just like good customer service when dealing with a tricky issue.

Skyvia to the Rescue

Let's look at a Skyvia client's story to understand how the solution works in real life.

Reverse ETL: To Wrap Up

Choosing a suitable reverse ETL tool is all about finding the right fit for a modern business's specific needs and ensuring it'll help achieve its data goals without adding extra stress. It's a tool that should make any company's life easier, not harder. So, select yours according to your business-specific criteria, but consider the tool's simplicity, connectivity, security compliance, and the price you're ready to pay.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
Top TIBCO Alternatives & Competitors
Skyvia
SAS Data Management
Informatica
Talend
Boomi
IBM InfoSphere
Mulesoft
Summary

Introduction to TIBCO

[TIBCO](https://www.tibco.com/) is a software company that develops multiple solutions for data integration. Its flagship product is called TIBCO Platform, a GUI-based enterprise-level tool that helps businesses obtain a unified overview of their data-related operations.

TIBCO Key Products

TIBCO Platform is a multi-functional solution that comprises the following components:

- Process Management enables automation and digitization of daily business operations.
- Data Grid provides highly scalable database solutions that support real-time operations, such as transactions.
- Event Processing monitors the enterprise environment and detects changes in databases, IoT devices, and applications, helping businesses take timely action in response.
- Messages enables real-time data distribution between disparate endpoints, serving as the backbone of event-driven and streaming architectures.
- Integration accelerates corporate-level connectivity with a no-code experience.

In this article, we mainly focus on the Integration component and its capabilities to bring enterprise application data into a single space.

Key Features of TIBCO

Now, let's review the core characteristics of each TIBCO product.
| Product | Features |
| --- | --- |
| Process Management | Flexible and independent organizational models; elastic organizational structures; built-in business process patterns; ad-hoc activities supporting regular processes |
| Data Grid | In-memory data caching; high availability and fault tolerance; support for multi-datacenter deployment; platform and language independence |
| Event Processing | Event monitoring; data management with clustering, built-in caching, and persistent storage; cloud deployment; machine learning capabilities |
| Messages | Fine-grained service control; hybrid deployments; secure communication channels |
| Integration | Integration of traditional and modern enterprise tools; development of APIs and microservices for integrations; B2B process automation; secured file transfer |

Most Typical TIBCO Use Cases

TIBCO software is currently used in Banking, Healthcare, Manufacturing, Retail, and other sectors of the economy. Companies implement this tool to accomplish their business goals effectively and rapidly. We've prepared some of the most typical cross-industry use cases for the Integration product.

- Data integration. Extract data from multiple sources, blend it, apply transformations, and map it to the structure of the destination system. This is particularly useful for getting a general overview of the company's performance and making predictions for the future of the business.
- API management. Create and share APIs with a low-code approach for B2B app integration. This helps enterprises improve data visibility across departments and enhance overall collaboration.
- Event-driven integration. Embed event-driven flows into applications using a visual flow designer. This provides better control and monitoring over your regular business operations.
- Centralized file management. Automate the process of sending and receiving files with scheduling options.
This helps companies accelerate data exchange between various entities, partner organizations, and stakeholders.

Why Might You Need to Migrate from TIBCO?

Even though TIBCO grants enterprise-level security on data integration along with other benefits, it might not be an ideal solution for every company. It also has some drawbacks, which might be notable for certain businesses. Therefore, many organizations opt for TIBCO competitors instead. The most common reasons for switching from TIBCO Integration to alternative solutions are:

- A limited number of pre-built connectors for cloud-based data sources.
- An endpoint-based pricing model that can be hard to understand.
- A steep learning curve compared to similar tools.

Top TIBCO Alternatives & Competitors

While TIBCO might be relevant for large enterprises with high data loads, it isn't likely the right fit for small companies. Therefore, it makes sense to start exploring TIBCO alternatives, paying attention to their key features and benefits. In this article, we'd like to explore several TIBCO competitors:

- Skyvia
- SAS Data Management
- Informatica
- Talend
- Boomi
- IBM InfoSphere
- Mulesoft

The table below gives a brief description of each solution, along with its principal characteristics. We'll also present tips on selecting the one that would suit your company.

| Tool | G2 Rating | Pros | Cons | Best for | Pricing |
| --- | --- | --- | --- | --- | --- |
| Skyvia | 4.8/5.0 | Ease of use; suitable for anyone; connectivity; detailed error messages | Only one scheduled integration on a free plan; email notifications need to be configured | Businesses of any size and any industry | Free plan available; paid plans start at $79/month |
| SAS | 4.2/5.0 | Connectivity; unified metadata environment; SQL queries | High costs; initial training needed; infrastructure updates required | Banking, Finance, and Retail companies | Custom |
| Informatica | 4.3/5.0 | Microservice-based; advanced encryption; advanced reporting | No detailed message descriptions; occasional issues | Insurance, Healthcare, and Education companies | Volume-based pricing model |
| Talend | 4.0/5.0 | Large number of connectors; low-code approach; pipeline reuse | No concurrent licenses; occasional bugs | Telecommunications, Government, and Healthcare companies | Cost discussed with the Sales team based on four pricing plans |
| Boomi | 4.4/5.0 | Integration testing; onboarding courses; 99.9% uptime | Many connectors come at an extra cost; unclear error messages | Manufacturing and Education industries | Cost discussed with the Sales team based on five pricing plans |
| IBM InfoSphere | 4.1/5.0 | Holistic data management; connection with other IBM products; native metadata support | Technical expertise required; high costs; difficult initial setup | Automotive, Insurance, Telecommunications, and Travel sectors | A demo is scheduled first, then a Sales representative discusses the cost |
| Mulesoft | 4.4/5.0 | Advanced data transformations; legacy system support; API usage analytics | Steep learning curve; API portal limits | Mid-size and large enterprises that use Salesforce, in any industry | Feature-based pricing model |

Skyvia: Universal Alternative to TIBCO

[Skyvia](https://skyvia.com/) is among the notable TIBCO software alternatives on the market, as it's easy to use and affordable. It offers a universal multi-functional cloud platform suitable for simple and complex data integration scenarios. Skyvia contains products for automating daily business operations, creating API endpoints without coding, querying data with SQL, and information integration.
The latter function is provided by the [Data Integration](https://skyvia.com/data-integration) product, which we'll review in detail. It can easily gather data from multiple cloud applications, databases, data warehouses, and flat files and send it to the desired system. The [Data Integration solution](https://skyvia.com/blog/data-integration-tools/) supports both [ELT and ETL](https://skyvia.com/blog/elt-vs-etl/#What-is-ELT-(Extract,-Load,-Transform)?) approaches to data transfer and integration.

Reviews

G2 rating: 4.8 out of 5.0. G2 reviews state that users value Skyvia's ease of use, along with its comprehensive data integration and management capabilities.

Best for

This data integration solution is an excellent choice for businesses of any size operating in any industry. Skyvia has [subscription plans](https://skyvia.com/pricing) that cover various feature sets and data volumes.

Key Benefits of Skyvia

- Ease of use. The service is accessed via a browser, so no desktop installation is required. Tips and hints throughout the account environment make it easy to configure and monitor data-related processes.
- Suitable for anyone. Skyvia doesn't require knowledge of any programming language, even for complex data integration scenarios and pipelines. A smart visual wizard helps establish a connection between apps in seconds.
- Data connectivity. The platform offers [200+ pre-built connectors](https://skyvia.com/connectors) to various data services, including databases, data warehouses, cloud applications, storage systems, and many others.
- Detailed error descriptions. If any issues occur during integration, detailed log messages are provided.
- Freemium plan available. Skyvia offers a free, fully featured version so that users can try it out and decide whether this solution is the right one for them.
There are also no limits on the data connectors available for use; the only restrictions are the number of scheduled integrations and the amount of data processed. Skyvia relies on a volume- and feature-based pricing model, so feel free to select the plan that matches the scale of your data integration operations.

Pricing

Skyvia has four pricing tiers, including a free one. It also offers a custom plan with the cost calculated individually for each particular case, which is especially suitable for enterprise-grade companies.

| | Free | Basic | Standard | Professional |
| --- | --- | --- | --- | --- |
| Price starts at | $0 | $79 | $79 | $199 |
| Records per month | Max 10K | Min 500K | Min 5M | Min 5M |
| Scheduling | Once a day | Once a day | Once an hour | Every minute |
| Integration scenarios | Basic | Basic | Basic and advanced | Basic and advanced |
| Scheduled integrations | 2 | 5 | 50 | Unlimited |

SAS Data Management

SAS Data Management integrates information from various sources, ranging from legacy systems to modern data lakes. It supports Spark and MapReduce compute tiers, S3 and Parquet file systems, as well as Cassandra and MongoDB databases. The software supports practically any data type and can apply any transformation to it. Data from multiple platforms is extracted, blended, and prepared with SAS Data Management, making it available to analytical tools that support informed business decisions.

Reviews

G2 rating: 4.2 out of 5.0. G2 user reviews note initial training and robust data management features as the major strengths of this tool.

Best for

SAS Data Management functionality enables timely detection of fraudulent transactions. Therefore, this solution could be particularly suitable for businesses operating in the Banking, Insurance, Retail, and Life Sciences spheres.

Advantages

The main benefits of SAS Data Management are:

- Connectivity between digital platforms, apps, databases, etc.
- A unified metadata environment for a clear understanding of data from any source.
- Support for SQL queries and operations.
See the comparison between the [Skyvia vs. SAS Data Management](https://skyvia.com/etl-tools-comparison/) features.

Cons

The most notable limitations of this solution are:

- High cost of licensing and maintenance.
- Initial training required due to the platform's complexity.
- Ongoing support and infrastructure updates needed to ensure optimal performance.

Pricing

SAS Data Management provides individual quotes for each business, from private entrepreneurs to international enterprises. It also offers a free trial that lets users see how the platform functions within their corporate infrastructure.

Informatica

Informatica offers multiple solutions for enhanced data visibility from different perspectives. We'll mainly focus on the Intelligent Cloud Data Management tool, which builds [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) processes involving high data volumes. The tool also allows businesses to scale up or down quickly according to their current needs, using a pay-as-you-go pricing model. Informatica considerably simplifies developers' work, as it lets them automate routine tasks and schedule integrations, all accompanied by transparency into workloads and pipelines.

Reviews

G2 rating: 4.3 out of 5.0. Users of Informatica Cloud Data Integration value its ability to handle scaling workloads and its high number of data connectors.

Best for

Intelligent Cloud Data Management is usually applied in Insurance, Healthcare, Education, Life Sciences, and Financial Services companies. Thanks to this platform, digital transformation in the insurance industry becomes easy to manage, as it unites all the components of the value chain within the data cloud. In Life Sciences, Informatica makes it possible to respond quickly to public health emergencies by gathering research results from various sources and providing a complete overview of the situation.
Advantages

Here are some of the most notable benefits of Informatica Cloud Data Integration:

- Microservices implementation for optimal performance of data pipelines.
- Advanced encryption mechanisms for secure information transfer over the network.
- Accurate information on current compliance regulations.
- Advanced reporting and auditing for enhanced governance.

See the [Skyvia vs Informatica](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia) features comparison.

Cons

This tool is often associated with the following limitations:

- No detailed error message descriptions.
- High prices compared to similar data integration solutions.
- Occasional issues.

Pricing

The tool adheres to a volume-based pricing model, with the cost defined in a quote based on your specified needs. Informatica applies an AI engine for resource and cost optimization.

Talend

Talend provides a Data Integration solution that collects information from dispersed sources at high speed. The tool ensures automatic data quality checks, transformation, and mapping options. It supports ELT/ETL and change data capture (CDC) approaches, making it appropriate for both batch and streaming flows. Despite being a cloud-native solution, it also supports certain on-premises applications and systems.

Reviews

G2 rating: 4.0 out of 5.0. Talend Data Integration users particularly value the ease of setup and use, along with the overall quality of data integration and automation.

Best for

Talend Data Integration is particularly efficient for companies in the Telecommunications, Government, Healthcare, and Financial industries. For instance, Healthcare organizations might benefit from Talend gathering and systematizing all the patient data in clinical trials to make precise analyses of the research findings. In the governmental sector, the solution can help gather incoming information from various public apps and ensure its compliance with GDPR regulations.
Advantages

The core advantages of this tool are:

- Mechanisms for building and reusing data pipelines in the cloud environment.
- A low-code approach to working with data.
- Data migration from legacy systems into modern applications.
- An extensive connector library.

See the [Skyvia vs Talend](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia) features comparison.

Cons

Here are the drawbacks of Talend Data Integration:

- No concurrent licenses, only seat licenses on the cloud.
- Occasional unidentified bugs.
- Low responsiveness of technical support.

Pricing

There are four basic pricing plans, which mainly differ in features, deployment options, and connector types. The cost of Talend solutions is not available to the public and should be discussed with their Sales team.

Boomi

Boomi is an iPaaS (Integration Platform as a Service) solution that contains a set of tools for data-related tasks. We'll focus on one of its components, called Integration, which allows businesses to blend data from different applications using pre-built connectors. Boomi enables users to spend less time on data maintenance and put more effort into business development. The service also dramatically reduces manual work and minimizes human-generated errors.

Reviews

G2 rating: 4.4 out of 5.0. Boomi users appreciate the tool's data integration features, along with its high number of pre-built connectors to popular applications, databases, and other sources.

Best for

Boomi is used across different teams within companies in the Manufacturing and Education industries, as well as within public sector institutions. For instance, many renowned universities use the solution to gather information from official academic sources and custom student apps to provide up-to-date classroom locations, exam dates, and job opportunities for graduates.
Advantages

Boomi is known for the number of benefits it provides to users:

- Integration testing helps explore data flows before implementing them in a production scenario.
- 99.9% uptime for the integration processes, keeping data flows reliably available.
- Onboarding courses are available both for those with no experience in data integration and for seasoned technical experts.

See the [Skyvia vs Boomi](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia) feature comparison.

Cons

Boomi is known to have the following limitations:

- Many connectors come at an extra cost.
- The user interface is slightly outdated.
- Misleading error messages.

Pricing

Boomi has five major subscription plans, which mainly differ in connector quantity, support, and security features. The prices are not publicly available, so you need to contact their Sales team for details.

IBM InfoSphere

IBM InfoSphere Information Server is a modern tool for data cleaning, management, transformation, and integration. All these operations can be performed at high speed thanks to its parallel processing and load-balancing features, which helps draw meaningful conclusions in a timely manner and contributes to a company's development and competitiveness. The solution supports various cloud environments and can be deployed in public, private, or hybrid settings, which makes IBM InfoSphere well suited for scaling workloads.

Reviews

G2 rating: 4.1 out of 5.0. Users value the tool for its outstanding data management and analytics capabilities.

Best for

IBM InfoSphere Information Server is primarily popular in the Automotive, Insurance, Telecommunications, and Travel sectors. For instance, Automotive companies need to keep track of new product features and their testing results.
Since these operations are carried out by various departments, there is a need to align them across the organization. This is where IBM InfoSphere can greatly help.

Advantages

Here are the main advantages of this tool:

- A comprehensive set of solutions for holistic data management.
- Seamless connection with other IBM products.
- Native metadata querying and analysis.

See the [Skyvia vs IBM InfoSphere](https://skyvia.com/etl-tools-comparison/) features comparison.

Cons

Here are some of the known drawbacks of this tool:

- Technical expertise is required.
- Difficult initial setup.
- High costs.

Pricing

Before getting pricing information, you need to book a demo with a data integration consultant. If you have buying intent after exploring the product, a Sales Manager will get back to you with a quote.

Mulesoft

Mulesoft is an all-in-one software platform for data integration. It connects SaaS applications, legacy systems, and on-premises databases within a single environment. The tool was acquired by Salesforce in 2018, so it has advanced integration features for connecting this CRM to other systems. Together, MuleSoft and Salesforce unlock data across systems, build scalable integrations, and design automation frameworks to create holistic client profiles and ensure better customer experiences.

Reviews

G2 rating: 4.4 out of 5.0. Most Mulesoft users like its API management and integration features.

Best for

The tool is suited for mid-size to large enterprises across industries. It helps businesses connect diverse data sources, especially those heavily reliant on Salesforce.

Advantages

Here are the key benefits of Mulesoft:

- Advanced data mapping and transformation.
- Containerization for cross-platform integration.
- Support for legacy systems.
- API usage analytics.

Cons

Here are some tangible limitations of this solution:

- API portals can have a maximum of 1,000 APIs.
- A steep learning curve.
Pricing

Mulesoft provides feature-based pricing plans, the cost of which is discussed individually with their Sales representatives.

Summary

Choosing the appropriate data integration tool can be a real challenge. Companies have to check that the software matches their integration needs and doesn't exceed budget limits. While TIBCO is an excellent tool for establishing connectivity between cloud and on-premises services, data merging, and transformations, it isn't suitable for every business. There are many [alternative services](https://skyvia.com/etl-tools-comparison/), such as IBM InfoSphere, Skyvia, Talend, and others, that provide excellent data management features.

We'd suggest exploring [Skyvia](https://skyvia.com/) as a universal platform for data-related tasks. It's a multifunctional solution with features for data ingestion, blending, and transformation. What makes the tool even more attractive is the variety of pricing plans, allowing companies of any size and activity type to take advantage of data integration.

FAQ on TIBCO Software Alternatives

How does Skyvia compare to TIBCO as a data integration solution?

Both Skyvia and TIBCO are multi-module solutions with several components for different purposes. Comparing their [data integration tools](https://skyvia.com/blog/data-integration-tools/), Skyvia is more user-friendly and requires less time to get started. Skyvia's pricing model is also more transparent than TIBCO's.

Is Skyvia a suitable alternative to TIBCO for complex enterprise integrations?

Since Skyvia offers tools (Data Flow and Control Flow) for building complex integration scenarios, it can be considered a decent TIBCO alternative. For instance, Data Flow can involve several data sources in one integration and apply complex multi-stage data transformations.

Does Skyvia support real-time data integration like TIBCO?
Skyvia doesn't support real-time data streams, but it offers 1-minute intervals for [batch processing](https://skyvia.com/blog/batch-etl-processing/), which is close to a real-time pace.

Can I migrate my existing TIBCO integrations to Skyvia?

As TIBCO and Skyvia are separate tools developed by different providers, existing TIBCO integrations can't be migrated to Skyvia. Instead, you'll need to recreate them in Skyvia as new integration scenarios using the pre-built connectors you're interested in.

What kind of support does Skyvia offer compared to TIBCO?

Both tools offer similar support options: a knowledge base for self-service and a contact form to report a technical issue and get assistance over email. In addition, Skyvia offers a forum where popular questions are discussed.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
For instance, the reports and analytics in Facebook Ads are very basic, but if you send the appropriate data to BigQuery, you'll get robust data analytics capabilities, like blending data with ad information from other sources and insightful cross-channel reports. You can also create an interactive dashboard by connecting BigQuery to BI tools. This article is about integrating Facebook Ads and Google BigQuery and how to do it smoothly and easily. We'll go through the most common methods and then investigate the top benefits and challenges of such an integration.

Table of Contents

- 3 Methods to Send Facebook Ads Data to Google BigQuery
- Manual Transfer of Facebook Ads Data to BigQuery
- Send Facebook Ads to BigQuery Using API
- Connect Facebook Ads to BigQuery Automatically
- What's the Best Method for Transferring Data from Facebook Ads to BigQuery?
- Conclusion

3 Methods to Send Facebook Ads Data to Google BigQuery

There are several methods to transfer Facebook Ads data to BigQuery: you can do it manually, via an API, or with no-code solutions. Let's consider each one's capabilities.

Note: You need Facebook Ads and Google BigQuery accounts to perform transfers with any of these methods. If you don't have one yet, sign in to the [Google Cloud Console](https://console.cloud.google.com/) and create an account and a project.

Manual transfer

Manual transfer sounds simple, but in reality, it's hard to scale, may lead to data errors, and, as a result, wastes your time. In short, you manually export data from Facebook Ads Manager into a CSV file in local storage and then upload it to BigQuery. This method may be acceptable for small businesses that perform the operation only occasionally and have limited Facebook Ads data volumes.

Facebook Ads API

Compared to the manual method, the Facebook Ads API is faster, handles large data volumes, and is more reliable. Here you can:

- Automate the Facebook Ads data transfer process.
- Set up a schedule to update the data regularly.

This method is not suitable for non-tech users because coding skills are required.

Automated Transfer (No-Code Integration)

Automated transfer, or no-code integration, means using a third-party data integration tool to send Facebook Ads data to BigQuery automatically. "Automatically" is the key word here. This method is simple, scalable, and convenient, and it doesn't require coding skills. It may be costly depending on the solution selected, but for businesses operating on large data volumes, such a choice saves time and money. Now, let's dive deeper into each method.

Manual Transfer of Facebook Ads Data to BigQuery

Follow the steps below to transfer Facebook Ads data into BigQuery manually.

Step 1: Export Data from Facebook Ads

- Log in to Facebook Ads Manager.
- Navigate to the Reports section or wherever you access detailed reporting.
- Customize the report to include all the data fields you need for your analysis in BigQuery.
- Click Export Table Data to export the report. Facebook typically allows you to export data in formats like CSV or Excel. Choose CSV for compatibility with BigQuery, and save the file to your local device.

Step 2 (Optional): Prepare the Data

Depending on your needs, you might have to preprocess or clean the data using a spreadsheet tool like Microsoft Excel. Ensure that the data types in the CSV file match those in your BigQuery table schema.

Step 3: Create a Dataset in BigQuery

- Go to the [BigQuery Console](https://console.cloud.google.com/bigquery).
- Select your project and click Create Dataset on the right.
- Fill in the Dataset ID and set the data location and other options as needed.

Step 4: Create a Table in BigQuery

- In the newly created dataset, click + Create Table.
- Choose Upload under Create table from, and upload the CSV file exported from Facebook Ads.
- Specify the table name and select the appropriate options for your file format.
- Manually specify the schema, or let BigQuery detect it automatically if the CSV file includes a header row. Ensure the schema matches the data in the CSV.
- Click Create table.

Step 5: Upload the Data

The data upload begins as soon as you create the table. BigQuery imports the data from your CSV file into the new table.

Step 6: Verify the Data Import

Once the upload is complete, run a simple query in BigQuery to ensure your data has been imported correctly.

Send Facebook Ads to BigQuery Using API

To automate the transfer of Facebook Ads data to Google BigQuery via API, you must be familiar with API usage, Google Cloud services, and scripting languages like Python. It may sound a bit scary, but let's start.

Note: To enable the BigQuery API for your project, ensure you have an account with permission to create and manage BigQuery datasets and tables.

Step 1: Set Up the Facebook Ads API

- Obtain an access token from Facebook for the Graph API: create an app in the Facebook Developer portal, get an App ID and App Secret, and then use these to generate an access token.
- Ensure your access token has permission to access Ads data (ads_read).

Step 2: Set Up a Google Cloud Project

- Create a Service Account.
- Download its key file.

Step 3: Prepare Your Environment

- Choose a server or local environment where your script will run.
- Ensure you have libraries installed for making HTTP requests, like requests in Python, and for interacting with the BigQuery API (the [Google Cloud SDK](https://cloud.google.com/sdk) or libraries like [google-cloud-bigquery in Python](https://cloud.google.com/python/docs/reference/bigquery/latest)).

Step 4: Fetch Data from Facebook Ads API

Determine what data you need from Facebook Ads. Consider including campaign performance metrics, audience insights, ad spending, conversion rates, etc. Knowing what you need will guide the extraction process.

Use Facebook Ads Manager:

- Go to your Facebook Ads Manager dashboard.
- Select Reports to customize the type of data and metrics you're interested in.
- Once you've configured your report, download it in the needed format (e.g., CSV, Excel) for further processing or integration.

Use the Facebook Graph API for advanced or automated data needs:

The Facebook Graph API allows for processing detailed ad performance data. This method is suitable for extracting large volumes of data or automating data extraction processes.

- Ensure you have developer access to the Facebook app associated with the ad account.
- Create an access token with permission to access ads insights (ads_read).
- Use the Graph API endpoint for ad insights, specifying the necessary fields and parameters. The API call might look like this: `https://graph.facebook.com/v12.0/{ad-account-id}/insights?fields=impressions,clicks,spend&access_token={your-access-token}`
- Process the API response, typically JSON, and extract the needed data.
- Automate the data extraction process for recurring needs using scripts or [data integration platforms](https://skyvia.com/blog/data-integration-tools/) to schedule and execute the data fetch regularly. Write a Python script to fetch and process the data using the Facebook SDK or direct API calls, then schedule the script to run at the desired intervals using task schedulers (e.g., cron jobs on Linux, Task Scheduler on Windows). Alternatively, use data integration services that support Facebook Ads and can automate data extraction and preparation.

Note: Keep [Facebook's API rate limits](https://developers.facebook.com/docs/graph-api/overview/rate-limiting/) in mind to avoid disrupting your access, and ensure your data handling practices comply with Facebook's terms and the data privacy regulations relevant to your region or industry.
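The Graph API call above can be sketched in Python with the requests library. This is only an illustration, not an official SDK snippet: the helper names are our own, the API version and field list mirror the example URL, and you should substitute your real ad account ID and token.

```python
# Sketch: fetch ad insights from the Graph API insights endpoint shown
# above. Function names are illustrative; v12.0 and the field list
# simply mirror the example call.
import requests

def insights_url(ad_account_id, api_version="v12.0"):
    """Build the ad insights endpoint URL for an ad account."""
    return f"https://graph.facebook.com/{api_version}/{ad_account_id}/insights"

def fetch_insights(ad_account_id, access_token,
                   fields=("impressions", "clicks", "spend")):
    """Return all insight records, following cursor-based pagination."""
    url = insights_url(ad_account_id)
    params = {"fields": ",".join(fields), "access_token": access_token}
    rows = []
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()  # surface auth and rate-limit errors early
        payload = resp.json()
        rows.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")  # None when done
        params = {}  # the "next" URL already embeds the query string
    return rows
```

Note the pagination loop: insights responses are cursor-paged, so a single GET only returns the first page of records.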
Step 5: Prepare Data for BigQuery

Prepare your Facebook Ads data for Google BigQuery to ensure it is clean, consistent, and structured for efficient querying, insightful analysis, and data-driven decisions based on comprehensive advertising performance data.

- Determine what information you need from your Facebook Ads for analysis in BigQuery. Standard metrics include impressions, clicks, conversions, and spending. Dimensions might include campaign ID, ad set details, demographics, and time of day.
- Ensure the data format aligns with BigQuery's requirements. BigQuery supports CSV, JSON, and other formats, but CSV is often the most straightforward for tabular data like ad metrics. For CSV files, include headers in the first row and quote text fields that may contain commas or newlines. For JSON, structure your data as an array of objects, each representing a record.
- Clean your data to ensure accuracy. Remove any irrelevant or incomplete records and correct any discrepancies. This might include aggregating data at the desired granularity (e.g., daily metrics), splitting columns if a single column contains multiple pieces of information (e.g., full names into first and last names), and ensuring data types match BigQuery requirements (e.g., converting string dates to DATE format).
- Create a schema that reflects your dataset. For each column, define Name (the column name), Type (the data type, e.g., STRING, INTEGER, FLOAT, DATE), and Mode (whether the field can be null (NULLABLE), must have a value (REQUIRED), or can contain multiple values (REPEATED)).
- Consider partitioning your table based on a date or timestamp column to improve query performance and manage costs more effectively. Use clustering to organize your data further based on frequently queried columns.
- Choose your upload method. You can upload data directly through the BigQuery UI, use the bq command-line tool, or automate uploads via the BigQuery API.
- Execute the upload. For CSV files, ensure you specify options like the delimiter, the encoding, and whether the file contains a header row. For JSON, make sure the file adheres to BigQuery's format requirements.
- After uploading, run test queries to check the integrity and accuracy of your data. Look for any import errors or schema mismatches, and adjust your preparation process as needed based on these findings to ensure reliable results.

Step 6: Upload Data to BigQuery

- Use the service account key file to authenticate your script with Google Cloud.
- Initialize a BigQuery client using the Google Cloud library in the script.
- Create the dataset and table if they don't already exist, using the BigQuery client.
- Use the client to upload your formatted data to the BigQuery table. You can stream the data directly or upload a file.

Step 7: Automate and Schedule

- Use tools like Google Cloud Scheduler and Cloud Functions to automate data fetching and uploading.
- Schedule the script to run at regular intervals that match your reporting needs.

A simplified script illustrating part of this process assumes you have the [google-cloud-bigquery](https://cloud.google.com/bigquery/docs/reference/libraries) and [requests](https://cloud.google.com/storage/docs/reference/libraries) libraries installed.

Note: This is just a guide and example. Real-world implementations require securely handling authentication, managing API rate limits, dealing with data transformation complexities, and handling errors from both the Facebook API and the BigQuery API.

Connect Facebook Ads to BigQuery Automatically

Automatically connecting Facebook Ads to BigQuery is the easiest and least time-consuming way to save costs and keep you calm. This method is also called no-code integration.
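Before turning to the no-code route, here is a minimal sketch of the upload side of the API-based method (Steps 5 and 6) using the google-cloud-bigquery client library. The table ID, column names, and file path are illustrative assumptions, not part of any real project.

```python
# Sketch: load the prepared ad-metrics CSV into a date-partitioned
# BigQuery table with an explicit schema. Table ID, columns, and the
# CSV path are illustrative assumptions.

# Plain (name, type, mode) tuples mirroring the schema fields from Step 5.
FB_ADS_COLUMNS = [
    ("report_date", "DATE",    "REQUIRED"),
    ("campaign_id", "STRING",  "REQUIRED"),
    ("impressions", "INTEGER", "NULLABLE"),
    ("clicks",      "INTEGER", "NULLABLE"),
    ("spend",       "FLOAT",   "NULLABLE"),
]

def load_csv(csv_path, table_id="my_project.fb_ads.daily_metrics"):
    """Run a BigQuery load job for the CSV produced in the earlier steps."""
    # Imported lazily so the schema definition above stays usable even
    # without the client library installed (pip install google-cloud-bigquery).
    from google.cloud import bigquery

    client = bigquery.Client()  # authenticates via the service account key
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # the header row
        schema=[bigquery.SchemaField(n, t, mode=m) for n, t, m in FB_ADS_COLUMNS],
        # Day partitioning on the date column helps control query costs (Step 5).
        time_partitioning=bigquery.TimePartitioning(field="report_date"),
    )
    with open(csv_path, "rb") as f:
        job = client.load_table_from_file(f, table_id, job_config=job_config)
    job.result()  # wait for completion and surface any load errors
```

Defining the schema explicitly, rather than relying on autodetect, keeps the load deterministic when a CSV column happens to contain only empty values.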
Let's consider how to do it using [Skyvia](https://skyvia.com/data-integration/), a no-code, cloud-based data integration platform supporting [180+](https://skyvia.com/connectors/) sources and destinations, including [Facebook Ads](https://skyvia.com/connectors/facebook-ads) and [Google BigQuery](https://skyvia.com/connectors/google-bigquery). Here are the supported scenarios:

- Data import and export between Facebook Ads and CSV files.
- Replication of Facebook Ads data to relational databases or data warehouses.
- Synchronization of Facebook Ads data with other cloud apps and relational databases.

The solution's [user-friendly UI](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) allows data transfer from Facebook Ads to BigQuery in just a few steps. The guide below shows how easy it is.

The Step-by-Step Guide on How to Connect Facebook Ads and Google BigQuery

First, you need to create connections to [Facebook Ads](https://skyvia.com/connectors/facebook-ads) and [Google BigQuery](https://skyvia.com/connectors/google-bigquery).

- Click + Create New in the Skyvia top menu and select Connection in the list on the left.
- Select the connector or type keywords in the search filter.
- Click Sign in with Facebook, enter your Facebook credentials, and click Log In.
- Specify the Ad Account Id and click Create Connection. You can find the Ad Account Id in your Facebook Ads Manager: the value is located above the search and filter bar, in the account dropdown menu, or in the page URL.
- In the same way, create the BigQuery connection; see [the documentation](https://docs.skyvia.com/connectors/databases/google_bigquery_connections.html#establishing-connection) for more details. Note: You must provide your Google Cloud credentials and a service account key file for authentication.

When the connections are set up, you can start the integration.
In the Skyvia dashboard, click + Create New in the top menu, navigate to the Integration section, and select the appropriate scenario. For instance, choose Import to move data from Facebook Ads to BigQuery.

1. For the source, select Source Type – Data Source – Facebook Ads.
2. Choose Google BigQuery as the Target (don’t forget to create a new dataset in BigQuery where the Facebook Ads data will be stored).
3. Create a task and select the specific ad data (e.g., ad performance metrics, click-through rates, costs) you wish to transfer to BigQuery.
4. Map the data fields from Facebook Ads to the corresponding fields in your BigQuery dataset. Skyvia offers automatic mapping, but you can customize the mappings to your needs. Note: all Facebook Ads data is immediately available here without special preparation, and it is straightforward to manage: you can configure special logic in a few clicks, filter data, add conditions, and so on.
5. Decide whether the data transfer should occur immediately or on a schedule. Skyvia lets you set up a recurring schedule for automatic data updates.
6. Save the integration package. If immediate data transfer is needed, run the package manually, or wait for the scheduled run if you’ve set one up.

Note: Use Skyvia’s monitoring tools to check the status of your data integration tasks. If necessary, adjust your mappings or schedule based on the initial results to ensure your data is transferred accurately and efficiently.

Benefits

Enhanced Data Analysis and Insights. Combining Facebook Ads data with other data sources in BigQuery allows for comprehensive analytics. You can derive more nuanced insights into ad performance, customer behavior, and ROI across multiple channels.

Historical Data. By keeping all raw data in the data warehouse, you can be sure that historical data won’t be lost, deleted, or sampled, and you can pull it into a report at any time.
Real-time Decision Making. By automating data flows into BigQuery, businesses can analyze data in near real time. This timely access supports quicker, more informed decisions on ad adjustments and marketing strategies.

Scalability. BigQuery’s infrastructure scales seamlessly from small datasets to petabytes of data. As your advertising efforts and data grow, BigQuery can accommodate this growth without significant re-architecture.

Advanced Data Processing Capabilities. BigQuery offers powerful data processing features, including machine learning models directly within the platform, enabling sophisticated analysis such as predictive modeling and customer segmentation based on ad data.

Automation and Efficiency. Automating data transfer from Facebook Ads to BigQuery reduces the manual effort and errors associated with data handling, increasing operational efficiency.

Challenges

Complexity of Integration. Setting up a reliable, automated pipeline from Facebook Ads to BigQuery can be technically complex, requiring knowledge of APIs, data formats, and possibly third-party integration tools.

Data Consistency and Quality. Ensuring that the transferred data retains its integrity and is correctly formatted for analysis in BigQuery can be challenging. Discrepancies or errors in the data can lead to inaccurate analytics.

Cost Management. BigQuery charges for data storage, streaming inserts, and queries. High volumes of ad data and frequent querying without optimization can lead to unexpected costs.

Compliance and Privacy. Transferring and analyzing personal data from Facebook Ads necessitates adherence to data protection regulations (e.g., GDPR, CCPA). Ensuring compliance adds another layer of complexity.

Dependency on External Platforms. Any changes to Facebook’s API or BigQuery’s features may require adjustments to the integration pipeline. Maintaining the pipeline means keeping up with updates and modifications from both platforms.
Data Latency. While near real-time analysis is the goal, latency can arise depending on the data transfer method and processing, affecting timely decision-making.

What’s the Best Method for Transferring Data from Facebook Ads to BigQuery?

Choosing the best method for transferring data from Facebook Ads to BigQuery depends on data volume, frequency of updates, technical expertise, and specific business needs. Here’s a comparison of the most valuable methods to help identify the best approach for your situation.

| Feature/Aspect | Manual Export/Import | Custom Integration (APIs) | Third-party [ETL Tools](https://skyvia.com/blog/etl-tools/) |
| --- | --- | --- | --- |
| Technical Skill Required | Low | High | Low |
| Automation | None | Full | Full |
| Real-time Data Transfer | No | Yes | Depends on the tool |
| Scalability | Low | High | High |
| Customization | Low | High | High |
| Initial Setup Complexity | Low | High | Low |
| Maintenance Effort | High (for repeated use) | Medium | Low |
| Cost | Free | Variable (development cost) | Subscription-based |
| Data Transformation Capabilities | Manual | Manual or Automated (coding required) | Automated (tool-dependent) |
| Best for Use Cases | One-time or infrequent transfers | High-volume, real-time analytics | Regular syncs without heavy coding |

Conclusion

In practice, the best tool for integrating Facebook Ads data into Google BigQuery varies with each company’s specific needs, technical capabilities, budget, and strategic goals. Third-party ETL tools often provide a good balance between ease of use and functionality, while organizations with specific needs and the capacity to build custom integrations might opt for API-based solutions. And, of course, you can design your own scenario; just keep each step simple enough not to confuse the user.
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Integration](https://skyvia.com/blog/category/integration/) 10 Best YouTube Channels To Learn Data Analytics for 2025
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | July 4, 2024
The human brain processes images much faster than text, so watching videos can be more effective for learning than reading a book. This helps explain YouTube’s popularity not only for entertainment but also for education. Thousands of educational YouTube channels help people learn different subjects, including data analysis. But how do you decide which one is best for you? How do you select videos to gain new skills and discover recent trends? This article addresses these questions by presenting a selection of solid YouTube channels on data analytics and adjacent fields. You’ll get a brief overview of each channel along with a list of recommended videos.

Table of Contents
- Importance of Data Analytics
- Why YouTube Is a Great Learning Platform
- Top YouTube Channels for Data Analytics
- How to Choose the Right Channel
- Final Thoughts

Importance of Data Analytics

More and more organizations are hiring data analysts and other data professionals because of their contribution to business decision-making. This is reflected in the [U.S. Bureau of Labor Statistics](https://www.bls.gov/careeroutlook/2023/data-on-display/data-occupations.htm) report, which projects that employment of data scientists will grow by 36% between 2021 and 2031.

Why YouTube Is a Great Learning Platform

If you’d like to become a data analyst or upgrade your professional expertise, consider YouTube. Videos explain complex concepts in clear language, accompanied by images and real-world examples.
Also, you\u2019ll find many YouTube videos with best practices and current trends. According to [Semrush](https://www.semrush.com/blog/youtube-stats/) , YouTube is the second-most visited website in the world, with around 2.49 billion users regularly visiting it monthly. Even though it was long perceived as an entertainment platform, its role was rethought, so now it\u2019s also seen as a powerful learning platform due to: Accessibility. Videos can be accessed from any country and any device. Only a high-speed internet connection and a computer, tablet, or smartphone are needed. Availability. Learning on YouTube is an experience available at no cost. Diversity. Given that 500+ hours of content are uploaded each minute on YouTube, there\u2019s a wide selection of content on topics related to data analysis and other fields. Individual pace. Users can access videos at any time, pause a video, replay certain parts of it, etc. Overall, each person can study with YouTube videos at their own pace. Live streaming. YouTube supports live streaming, allowing teachers and content creators to conduct online lessons and webinars. Interactivity. Comments under videos are like product reviews, helping to understand its value. Discussions also take place in the comment section, so you can talk about subjects related to the video that interest you with other geeks. Top YouTube Channels for Data Analytics 1. Simplilearn Link to channel: [https://www.youtube.com/@SimplilearnOfficial](https://www.youtube.com/@SimplilearnOfficial) Subscribers: 4.2 M Simplilearn is the world\u2019s biggest online boot camp based in San Francisco and Bangalore. This platform provides training courses in the areas where technology is crucial. You can find programs ranging from Digital Marketing Specialist to Azure Cloud Architect, each of which contains extensive training materials, practical exercises, and final exams. Simplilearn has a YouTube channel with thousands of videos and millions of subscribers. 
The videos explain difficult topics in simple, understandable language, so this channel is particularly suitable for beginners in data analysis.

Popular videos:
- [Data Analytics Full Course 2024](https://youtu.be/ZUdlc5LsmHA?si=qPadrbOH43Q8jahx)
- [Top 5 Reasons to Become a Data Analyst](https://www.youtube.com/watch?v=JD1PhiIbC8w)
- [ChatGPT and Excel for Data Analysts](https://youtu.be/Z2D1hQKo_-k?si=0xfiGN_Qf811TPoD)

2. Alex The Analyst
Link to channel: [https://www.youtube.com/@AlexTheAnalyst](https://www.youtube.com/@AlexTheAnalyst)
Subscribers: 801K

This channel is managed by Alex Freberg, who has more than a decade of practical experience in data analytics. In his videos, Alex explains how to become an expert in the field and shares data analysis tips. He also gives recommendations on using Excel and Power BI, introduces the basics of SQL and Python, and reveals useful tricks for keeping a data analyst’s daily workflow engaging.

Popular videos:
- [SQL Basic Tutorial for Beginners](https://youtu.be/PyYgERKq25I?si=ee1hjxi65UkMmqFi)
- [Cleaning Data in Excel](https://youtu.be/_jmiEGZ6PIY?si=nVkKXI9bR_EqvWOP)
- [Using DAX in Power BI](https://youtu.be/vcijg0gUXSg?si=w1QbmBapOS0ox7He)
- [Data Analyst vs Data Scientist](https://www.youtube.com/watch?v=fUpChfNN5Uo)

3. Data School
Link to channel: [https://www.youtube.com/@dataschool/videos](https://www.youtube.com/@dataschool/videos)
Subscribers: 240K

Data School is an online platform providing in-depth learning materials about data science, including courses, blog articles, and YouTube videos. If you’re a data analyst aiming to take your career to the next level, Data School is the right place for you. The channel welcomes anyone interested in data science, regardless of educational or professional background, and its videos cover essential topics in an easy-to-understand manner.
Popular videos:
- [Making Predictions with scikit-learn](https://youtu.be/RlQuVL6-qe8?si=buO0f2SAYj3H1-Qq)
- [Fixing Bad Data](https://youtu.be/8U8ob9bXakY?si=VSJTbAY0wYh3PSGw)
- [Hands-on dplyr Tutorial for Faster Data Manipulation in R](https://youtu.be/jWjqLW-u3hc?si=Q0X6ayEIv6i05aj_)

4. 365 Data Science
Link to channel: [https://www.youtube.com/@365DataScience](https://www.youtube.com/@365DataScience)
Subscribers: 317K

365 Data Science is an online learning platform with dozens of courses and certification programs. It suits those who want to start or change a career in the data field, and it also offers plenty of courses for those eager to refresh their knowledge and qualifications in data analytics and data science. The channel provides materials for data scientists as well as data analysts and business analysts, including thematic playlists on starting a career path in each profession.

Best videos:
- [How to Become a Data Analyst in 2024](https://www.youtube.com/watch?v=1SJUKm3JkCk&list=PLaFfQroTgZnyMYwwvXSC8bnB7SF5JU9kt&index=2)
- [The Best Chart Types](https://youtu.be/qGaIB-bRn-A?si=eh5dH8q38rBQ0c-P)
- [From Economist to Data Scientist](https://youtu.be/3gssAG0agO8?si=fbt9yASo08SQHkOt)
- [Why Is Linear Algebra Useful](https://youtu.be/X0HXnHKPXSo?si=yZ7KwI2tMSQ26kvu)

5. Ken Jee
Link to channel: [https://www.youtube.com/@KenJee_ds](https://www.youtube.com/@KenJee_ds)
Subscribers: 260K

Ken Jee is a professional data scientist interested in artificial intelligence and sports analytics. He produces fun, informative content that helps others better understand complex concepts, along with videos on news and trends in the data market.

Best videos:
- [Data Science Advice for College Students](https://youtu.be/xjhW1rSQeik?si=utTxNA-QOLXaJ5ow)
- [How to Go from Data Analyst to Data Scientist](https://youtu.be/EMq4PH7PCeA?si=GMgX5WJ4TbygomDY)

6.
DataCamp
Link to channel: [https://www.youtube.com/@DataCamp/videos](https://www.youtube.com/@DataCamp/videos)
Subscribers: 165K

DataCamp is a popular learning platform that provides courses in data analysis, data manipulation, software development, data engineering, AI and machine learning, statistics, and more. Its materials suit both individuals and companies seeking to advance their employees’ skills. The DataCamp YouTube channel offers some educational materials and video courses, but most of its content consists of videos on current trends in the data field, best practices from companies in different industries, and short tutorials on narrow topics.

Best videos:
- [A Beginner’s Guide to Data Analysis with SQL](https://www.youtube.com/watch?v=CBQzaLYBueA&list=PLjgj6kdf_snYc7OjL3P-QnUorZHBsTqMJ)
- [The Future of Marketing Analytics](https://youtu.be/5ac5vAZ8000?si=YCZF0GduN_kpn8hp)

7. Skyvia
Link to channel: [https://www.youtube.com/@SkyviaPlatform/videos](https://www.youtube.com/@SkyviaPlatform/videos)

[Skyvia](https://skyvia.com/) is a universal data platform designed for a variety of data-related tasks, such as workflow automation, querying, backup, data integration, and OData endpoint creation. It is an ideal solution for data analysts, supporting them all the way from data collection to loading data into [BI and analytics tools](https://skyvia.com/blog/top-data-analysis-tools/). Skyvia’s YouTube channel is one of the company’s informative and educational resources. Even though it isn’t focused solely on data analytics or data science, it contains plenty of videos that such professionals may find useful.
The most popular video categories on the channel are:
- Webinars covering data-related topics
- Video tutorials on Skyvia functionality
- Examples of data integration scenarios

Best videos related to data analytics:
- [Skyvia Data Platform Overview](https://www.youtube.com/watch?v=iaPn8s8PVuc)
- [In-Browser SQL Client and Visual Query Builder](https://www.youtube.com/watch?v=l1Cz7clFh_c)

8. StatQuest with Josh Starmer
Link to channel: [https://www.youtube.com/@statquest/videos](https://www.youtube.com/@statquest/videos)
Subscribers: 1.19M

If you’ve taken the first steps toward upgrading your data analyst career to a data scientist path, you may have noticed how much statistics it requires. Don’t worry: Josh Starmer explains complex statistical formulas and their use in machine learning algorithms. He started this channel to help his research colleagues understand what to do with the huge amounts of data generated by experiments, and he now owns a gold YouTube Play Button, helping others tame statistics.

Best videos:
- [Machine Learning Fundamentals: Bias and Variance](https://youtu.be/EuBBz3bI-aA?si=FNpfx8HddjvSuNRg)
- [Machine Learning Fundamentals: Cross Validation](https://youtu.be/fSytzGwwBVw?si=OckUlE1ia7cwG9DI)
- [Linear Regression Clearly Explained](https://www.youtube.com/watch?v=nk2CQITm_eo)
- [K-Means Clustering Method](https://youtu.be/4b5d3muPQmA?si=cfazZviMy_heU2Ij)

9. freeCodeCamp.org
Link to channel: [https://www.youtube.com/@freecodecamp](https://www.youtube.com/@freecodecamp)
Subscribers: 9.6M

Even though a data analyst doesn’t need the skills of a professional developer, some understanding of and basic experience with programming languages is necessary. The freeCodeCamp.org platform offers courses that help you learn to code, from HTML to Python for data analysis. Its YouTube channel has more than a thousand videos covering different programming languages.
Enjoy a full video course on using Python for data analysis, SQL tutorials, and Microsoft Excel materials.

Best videos:
- [Data Analysis with Python Playlist](https://www.youtube.com/watch?v=EsDFiZPljYo&list=PLWKjhJtqVAblvI1i46ScbKV2jH1gdL7VQ)
- [SQL Tutorial – Full Database Course for Beginners](https://www.youtube.com/watch?v=HXV3zeQKqGY)
- [Microsoft Excel Tutorial for Beginners](https://www.youtube.com/watch?v=Vl0H-qTclOg)

10. Power BI
Link to channel: [https://www.youtube.com/@MicrosoftPowerBI/videos](https://www.youtube.com/@MicrosoftPowerBI/videos)
Subscribers: 432K

Power BI is widely used software for BI and data analysis, holding around 33% of the BI tools market share. It’s popular for its Microsoft-style interface and its integration with other Microsoft products. What’s more, it suits non-technical users with no coding experience or programming knowledge, and the Power BI mobile application makes it a preferred choice among many data analysts. Power BI’s YouTube channel has more than 500 videos and thousands of subscribers, with content about recent updates to the tool and detailed feature descriptions with examples.

Best videos:
- [Power BI Update – June 2024](https://youtu.be/fbw09nHOm-c?si=YjjD6hfnN_lRN2tH)
- [Copilot in Power BI Overview](https://youtu.be/Pmt9TyvNsQM?si=zLR7gJRflB51mUwl)

How to Choose the Right Channel

Each channel presented in this article is worth your attention, though some may suit you more than others. The right choice depends on your background and professional goals. If you want to start a career as a data analyst, the [Simplilearn](https://www.youtube.com/@SimplilearnOfficial), [DataCamp](https://www.youtube.com/@DataCamp/videos), and [Alex The Analyst](https://www.youtube.com/@AlexTheAnalyst) channels will help you.
In case you\u2019d like to switch your career path from data analyst to data scientist, feel free to focus on [Ken Jee](https://www.youtube.com/@KenJee_ds) , [365 Data Science](https://www.youtube.com/@365DataScience) , and [Data School](https://www.youtube.com/@dataschool/videos) channels. If you\u2019d like to master your programming skills, [freeCodeCamp.org](http://freecodecamp.org) would be the best channel for that. To deepen your knowledge of statistics in data science, have a detailed look at the channel [StatQuest with Josh Starmer](https://www.youtube.com/@statquest/videos) . To gain more knowledge about [tools for data preparation](https://skyvia.com/blog/data-preparation-tools/) and analysis, look into [Skyvia](https://www.youtube.com/@SkyviaPlatform/videos) and [Power BI](https://www.youtube.com/@MicrosoftPowerBI/videos) channels. Final Thoughts YouTube channels provide not only a great learning base but also highlight news and trends in the data field. You can watch videos at your own pace completely for free and review them at any time, which is a great advantage of YouTube. While YouTube channels could be a great starting point for your data analyst journey, you should also consider supplementary learning materials. Books, courses, and practical assignments in the live setting will help you become a highly qualified specialist and boost your proficiency. 
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Loader](https://skyvia.com/blog/category/data-loader/) BigCommerce CSV Import & Export
By [Amanda Claymore](https://skyvia.com/blog/author/amandac/) | August 3, 2022
Table of contents
- Introduction
- Updating Products
- Export a CSV File
- Export Templates
- Edit Exported CSV File
- Import a CSV File
- Import Product Images via CSV
- Automated Import and Export of BigCommerce CSV Data Using Skyvia

Introduction

BigCommerce uses CSV (comma-separated values) and XML (Extensible Markup Language) files for all import and export operations. However, users lean toward CSV, since most platforms and editors support this plain-text format. BigCommerce CSV import and export functionality covers all the common data transfer needs.

You can import:
- Products and Product Options
- Product SKUs
- Customers
- Tracking Numbers
- 301 redirects

You can export:
- Products and Product Options
- Product SKUs
- Customers
- Orders
- Analytics Reports
- 301 redirects
- Newsletter Sign-ups

In this article, we cover common use cases of CSV import and export operations in BigCommerce and show how to automate these processes.

Updating Products

Updating products is the most commonly used import/export operation, since it lets you make adjustments in bulk. In short: export your data from BigCommerce in CSV format, open it in a convenient editor, make your changes, and import it back into BigCommerce. Let’s have a closer look.

Export a CSV File

To perform a BigCommerce CSV export:
1. In BigCommerce, navigate to Products -> Export.
2. Choose the Bulk Edit export template. This built-in template pre-sets the column headers of your CSV file to match the headers used by BigCommerce, letting you skip manual field mapping when you import the updated data back.
3. Click Continue.
4. Wait until the export is complete and click Download.
Note: You can always create a new [Custom Export Template](https://support.bigcommerce.com/s/article/Creating-a-Custom-Export-Template-for-Data-Transfer?language=en_US) or edit an existing one.

Export Templates

To export data from BigCommerce, you first need to choose an export template: a set of settings that dictates which data fields are included in the CSV file and how they are formatted. There are several built-in BigCommerce CSV templates, such as the Bulk Edit template we just mentioned, but you can always create a custom one to match your needs.

To create a BigCommerce custom export template:
1. Navigate to Settings -> Export Templates and click Create Export Template.
2. Enter a name for your template.
3. Choose the data type for the data you are exporting (Products, Orders, or others).
4. (Optional) Configure the Field Formatting Settings and Advanced CSV Form. The BigCommerce CSV template generator has these fields set to default values.
5. At the top of the Settings page, next to Template Details, click the data type(s) you chose in Step 3 and select the fields you want to export. Sometimes you need to exclude fields to match the destination store’s data structure.
6. Click Save and Exit to create the template.

Edit Exported CSV File

To edit an exported CSV file, open it in any convenient editor. The most commonly used editors are Microsoft Excel and Apple Numbers, though even Notepad supports the CSV format. Navigate to the column you want to edit, apply your changes, and save the file.

Import a CSV File

To import the CSV file you exported with the Bulk Edit template:
1. Navigate to Products -> Import.
2. Set the location of your file. If you are importing a CSV file exported with the Bulk Edit template, select File Was Exported Using the ‘Bulk Edit’ Template in the Import Options. This will save you time during the field mapping process.
3. Set the other Import Options according to your preference.
Click Next to continue to the Link Import Fields page. Here you match field titles to the headings used in your CSV. If you used the Bulk Edit template, these are already mapped for you. If you are using a CSV from your manufacturer or distributor, you may need to map the fields manually. For example, if your product name column is named Brand Name instead of Product Name, select Brand Name from the Match Product Name dropdown. Click Next and Continue to start importing your products. When the import is complete, you will see the import summary. Click More Information to check whether any warnings were generated during the import process. To check your imported products, click View imported products in the control panel.

Import Product Images via CSV

CSV import is not limited to orders, products, customers, and other plain-text data: you can also import product images. Before starting the image import, you need to [upload your images to the BigCommerce server](https://support.bigcommerce.com/s/article/Using-the-Image-Manager?language=en_US#:~:text=fail%20to%20upload.-,Uploading,additional%20image%2C%20then%20click%20Upload.) or to any other third-party cloud storage. Once your images are uploaded, open your CSV file and navigate to the Product Image File – 1 column. Paste the link to your image into the corresponding product field and save the file. The product image will appear after the CSV import. If you want to add more images, create more columns named Product Image File – 2, Product Image File – 3, and so on.

Automated Import and Export of BigCommerce CSV Data Using Skyvia

If you want to extend the capabilities BigCommerce provides out of the box, you can use a third-party solution such as Skyvia. Skyvia lets you import and export CSV data between BigCommerce and another e-commerce system, database, CRM, or cloud storage automatically, on a schedule.
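The manual bulk-edit workflow described earlier (export, change a column, re-import) can also be scripted when a spreadsheet editor becomes tedious. Below is a minimal sketch using Python’s standard csv module; the Product Name and Price column names are assumptions for illustration, so check the headers your own Bulk Edit export actually contains.

```python
import csv
import io


def bulk_edit(csv_text, column, transform):
    """Apply `transform` to every value in `column` of an exported CSV string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)                     # read all product rows
    for row in rows:
        row[column] = transform(row[column])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()                    # keep the original headers intact
    writer.writerows(rows)
    return out.getvalue()


# Example: raise every price by 10% before re-importing.
# "Price" is a hypothetical column name; real Bulk Edit exports may differ.
exported = "Product Name,Price\nT-Shirt,10.00\nMug,5.00\n"
updated = bulk_edit(exported, "Price", lambda p: f"{float(p) * 1.1:.2f}")
print(updated)
```

The same approach extends to the image-import step: append a Product Image File – 1 entry to the fieldnames and set the image URL on each row before saving.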
Set Up a Connection Between Skyvia and BigCommerce

To start working with BigCommerce in Skyvia, you first need to create a [connection](https://docs.skyvia.com/connectors/cloud-sources/bigcommerce_connections.html):
1. Log in to your [Skyvia account](https://app.skyvia.com/).
2. Click New and choose Connection.
3. Choose BigCommerce from the list of connectors.
4. Fill in the authentication credentials.
5. Click Create Connection.

Import CSV Data to BigCommerce with Skyvia

Once you have created a connection between Skyvia and BigCommerce, you can import CSV data exported from BigCommerce or another data source. To run an import, you need to create an Import package:
1. Click New and choose Import.
2. Set the preferred package name at the top of the page.
3. Select CSV file as the Source and the BigCommerce connection as the Target.
4. Add a Task to the package. You can import several CSV files by adding more tasks to the package.
5. Choose your CSV file, optionally modify the CSV file format, and proceed to the next step.
6. Choose the object (table) you want to update in BigCommerce and one of the supported actions: Insert, Upsert, Update, or Delete.
7. Map the required fields and save your progress.
8. Click Create to create the Import package, then click Run to execute it.

Export CSV Data from BigCommerce with Skyvia

To export files from BigCommerce using Skyvia, you need to create an Export package. The process is similar to creating an Import package:
1. Click New and choose Export.
2. Set the preferred package name at the top of the page.
3. Select CSV file as the Target and the BigCommerce connection as the Source.
4. Add a Task to the package. You can export several CSV files by adding more tasks to the package.
5. Choose the object (table) you want to export from BigCommerce.
6. Optionally, modify the column names for the exported file and click Save.
7. Click Create to create the Export package, then click Run to execute it.
Automate the Process

When you run Import and Export packages for the first time, we recommend running them manually to ensure everything is set up properly. After that, you can use Skyvia to automate the process and run it on a schedule. To schedule a package run:
1. Open your Import or Export package.
2. Click Schedule.
3. Select the time, time zone, and frequency.
4. Click Save to confirm the changes.

[Amanda Claymore](https://skyvia.com/blog/author/amandac/), Content Marketer

[Data Loader](https://skyvia.com/blog/category/data-loader/) How to Set Up Box and Salesforce Integration
By [Babu Tharikh](https://skyvia.com/blog/author/tharikh/) | December 14, 2022
Cloud adoption is on the rise, and most businesses are migrating to the cloud. Enterprises primarily use Salesforce's cloud-based CRM (Customer Relationship Management) platform to manage customer records and engagement. Salesforce also offers many other services, like Sales Cloud, Marketing Cloud, and mobile connectivity. It empowers organizations to improve their relationships with clients, customers, and partners, providing companies with tremendous adaptability, scalability, and a fully connected workforce. As businesses expand their use of Salesforce and seek centralized, secure control that facilitates seamless team collaboration, integrating Box with Salesforce is an optimal solution: it combines secure, accessible document storage with increased collaboration opportunities. In this article, you'll see how to integrate Box with Salesforce to automate the secure storage of documents in two different ways: one through the native Box app, and the other through Skyvia, a no-code automated integration platform. Before diving into the specifics, let's briefly review Box and Salesforce.

Table of contents:
- About Box
- Benefits of Box and Salesforce Integration
- Integration via Native App
- Integration via Skyvia
- Conclusion

About Box

Box is a cloud content management service that serves as a secure content repository for all of your company's data, documents, presentations, and so on.
Box transforms how businesses operate digitally by providing a safe and simple-to-manage workspace that drives employee productivity and team collaboration from anywhere. Founded in 2005, Box centralizes content online and enables secure and easy collaboration from any device. Employees can effortlessly access and collaborate on content using Box's various collaboration tools, and customers retain complete control over their content, allowing them to securely share huge files or folders with people outside their company in seconds. The following characteristics contribute to Box's popularity:

- The Box application keeps your files in the cloud and gives you control over who can view and edit them.
- Box ensures seamless collaboration. It offers a centralized location for collecting feedback and automating approval operations.
- Box's encryption, centralized controls, and reporting make it simple for businesses to connect and communicate.
- It supports integration with more than 1,500 applications, including Google, Salesforce, and others.
- The Box Sign functionality enables the collection of electronic signatures.
- The Box Sync feature allows you to work offline, enabling access to files from the cloud on your desktop.

Benefits of Box and Salesforce Integration

- Easy point-and-click configuration of the Box app; no coding necessary.
- Box can handle Lead conversion actions and reassign folders from Lead to Account, Contact, and Opportunity.
- The standard application manages the automatic sharing of the Box application between users.
- The native functionality allows you to review Box files directly within Salesforce. You can also add new files, assign tasks to team members, add comments to files, and edit documents.
- You can exchange files with remote users who aren't Salesforce customers.
Customers can exchange files and folders using the native application's regular sharing capabilities. This makes it possible to request documents from external clients straight from Salesforce. Box integration with Salesforce is the quickest and most cost-effective way to meet these requirements.

Integration via Native App

Note: Before setting up the Box for Salesforce integration, you must have two Box accounts and two Salesforce accounts:

- A Box Service Account and a Box User Account
- A Salesforce Admin Account and a Salesforce User Account

A Box Service Account can be any managed user unless the Restrict Content Creation enterprise setting is enabled. If this setting is enabled, the service account must be either the primary administrator or a co-administrator with the Manage Groups or Manage Users permission. Create a Box Service Account if you don't already have one, and add xyz@example.com as a managed user. It's advisable to keep this account solely for integration purposes and not assign it to any specific user. This account:

- Owns the Salesforce content on Box.
- Sends API requests from Salesforce to Box on behalf of the entire organization.

A Box User Account connects a person's Salesforce and Box accounts.

On the Salesforce side, you need:

- A Salesforce Admin Account for deploying packages in Salesforce and connecting them to the Box Service Account.
- A Salesforce User Account to connect to a Box User Account and access Box content.

Step 1: Install Box for Salesforce

The initial step in this integration process is to install Box for Salesforce:

1. As the Salesforce Admin, sign in to your account.
2. Click the dots in the upper left corner of your site, followed by the Visit AppExchange icon.
3. Search for the Box for Salesforce integration.
4. Click the Get It Now button and then choose either Install in Production or Install in Sandbox as needed.
5. Allow access to third-party websites and proceed with the installation.
6. Enable access for all users.

The Box for Salesforce integration will be listed in your Salesforce App Launcher once the installation is complete.

Step 2: Create or Define a Salesforce Folder in Your Box

You may now create a Box folder to hold all the data and files you wish to access in Salesforce. You can define this under the Companies folder, for example, so that the file path looks like: All Files > Companies > Your Company > File.pdf. You may also notice a folder called Your Salesforce Org, which is a duplicate of your Salesforce Org. For the time being, keep it in mind and go to the next step.

Step 3: Reviewing Box Files in Salesforce

Return to your Salesforce account and click the Accounts tab at the top to review your Box files. Click the name Your Company to go to the Box tab. By default, you will see all the saved content and files. To change this, choose the Your Company folder from the Box file hierarchy by clicking All Files.

Step 4: Adding a File to Box

Now you can upload files and sync them between both platforms. To upload a file in Salesforce's Box tab, choose the Salesforce record you want and navigate to the Box tab. Click Upload and then File. Select the file to upload and press Open. To ensure that the sync is functioning correctly, return to your Box account and refresh the folder to see the newly uploaded file.

Step 5: Embed Box into Standard Objects

Salesforce objects can now include Box capabilities. The Salesforce standard objects supported by the Box for Salesforce integration are:

- Accounts
- Cases
- Opportunities
- Contacts
- Leads

To embed Box in standard [Salesforce Lightning](https://skyvia.com/blog/salesforce-connect-guide/) objects, navigate to the page of the Salesforce object to which you wish to add Box functionality.
It\u2019s shown here for a Salesforce Account. Click on the gear icon in the top right corner of your Account page, then select Edit Page. It activates the Edit mode, which allows you to add the Box module to the website layout. Search or scroll down to locate Box under Custom \u2013 Managed components on the left panel. Drag and drop the Box element to the desired location on the page layout. Save your settings. Reload the Account page to view the Box functionality added to your page. Your Box Salesforce integration is currently operational. Integration via Skyvia With Skyvia, you can start exchanging data between Box and Salesforce in a few easy steps. Let\u2019s check how to do that based on the example of exporting a CSV file from Salesforce to Box. There are three steps involved: Create a Box connection. Create a Salesforce connection. Create and run Export package. The instructions below assume that you have already created a Skyvia account. In case you haven\u2019t, you can do that for free by visiting the [Skyvia](https://app.skyvia.com/) app page. Create Box Connection To create a Box connection, go to +New > Connection , select Box, and do the following: Click Sign In with Box . Enter and confirm your credentials. Click Create Connection . Create Salesforce Connection To create a Salesforce connection, go to +New > Connection , select Salesforce, and do the following: Click Sign In with Salesforce . Enter and confirm your credentials. Click Create Connection . Create and Run Export Package Once connections to Box and Salesforce are created, you can start creating an Export CSV package. To create an Export CSV from Salesforce to Box package, do the following: Go to New > Export . Choose Salesforce connection as Source . Choose CSV to storage service and Box connection as Target . Optionally choose a Box folder to store your CSV file from the Folder dropdown. Click Add New to add a task. You can add any number of tasks to a package. 
5. Enter the name for the exported CSV file at the top.
6. Choose the object you want to export and click Next Step.
7. Optionally, configure the column names in your CSV file and click Save.
8. Name your package and click Create. Your package will appear under Objects > Packages.
9. To execute the package, click Run.

Conclusion

In this quickly changing world, organizations increasingly rely on cloud-based applications, so it's essential to have safe, centralized access to your files and data. As workloads grow, businesses must facilitate teamwork, which is one of the keys to success. In this article, you have learned about Salesforce, Box, and how to integrate the two. However, working with the native Box Salesforce connector can be somewhat laborious. If you wish to automate the data flow between Box and Salesforce, you may want to investigate Skyvia and other no-code automated integration options.

[Babu Tharikh](https://skyvia.com/blog/author/tharikh/) Salesforce Technical Writer" }, { "url": "https://skyvia.com/blog/business-intelligence-conferences-and-events/",
"product_name": "Unknown", "content_type": "Blog", "content": "[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/) [Data Integration](https://skyvia.com/blog/category/data-integration/) Top 15 Business Intelligence Events to Attend in 2025 By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) August 16, 2024

Business intelligence is a dynamic discipline that greatly contributes to smart decision-making. It continuously evolves along with new information technologies, so new approaches and best practices in BI keep appearing. But how do you find them and understand whether they suit your business? Business intelligence conferences and other BI events aim to introduce such new approaches and best practices. Scholars, researchers, analysts, and business leaders come to BI conferences to share their findings and experiences. Such events are also great for networking and collaboration. A remarkable number of business intelligence conferences are scheduled for the 2025 season in offline and hybrid formats. Here, you can find the top BI events with speakers from world-famous companies and educational institutions.

Table of Contents
- Benefits of Attending Business Intelligence Conferences
- Top 15 Business Intelligence Conferences in 2025
- How to Choose the Right BI Conference for You?
- Preparing for a Business Intelligence Conference
- How Can Skyvia Help Prepare for BI Events?
- Summary
- Bonus: 5 Business Intelligence Conferences in 2025

Benefits of Attending Business Intelligence Conferences

If you're skeptical about BI conferences, it's time to reveal their numerous advantages. If you're already a fan of BI events, we'll just remind you how useful they are.

Benefit 1. Professional Development

Most BI conferences offer the option to participate as a speaker or a listener. The latter option is advantageous for both seasoned professionals and newcomers: specialists with years of experience can enhance their skills, while beginners can build their expertise. BI events also suit those interested in starting a career as an analyst or data expert. They reveal an insider's view of business intelligence and help people decide whether this field would work for them.

Benefit 2. Networking Opportunities

Like many other IT industry events, business intelligence conferences have returned to the offline format. This format allows people to communicate face-to-face and create new professional connections. Most BI events also offer a hybrid format, where listeners can participate online. Although this might be convenient from the logistics perspective, it slightly deprives participants of the networking benefits.

Benefit 3. Presentation of the Latest Trends and Technologies

The primary purpose of any BI conference is to introduce the latest findings in the field. This refers to technologies used for analysis and data visualization, as well as methodological approaches to problem-solving. For instance, speakers may discuss new [big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/) and their functionality for BI tasks.

Benefit 4. Case Studies and Real-World Applications

Some BI conferences offer practical sessions, such as workshops and case studies.
This is where you can try new technologies or techniques right on the spot. As a result, you'll have a better idea of how they work and whether they would suit your business setting.

Top 15 Business Intelligence Conferences in 2025

1. International Conference of Business Intelligence and Analytics (ICBIA)

Date: September 12-13, 2024
Location: Tokyo, Japan, and online
Participation fee: starting from $100
Event website: [link](https://www.researchfora.net/event/index.php?id=2446821)

In fact, multiple ICBIA conferences are held in different countries throughout the year. This one in Japan is one of the biggest, with around 500 participants. The primary purpose of this conference is to bring together academics, practitioners, and professionals at the crossroads of the business and analytics spheres. The event will feature a number of lectures, presentations, symposia, and workshops. They aim to present the current trends and new developments in business intelligence and analytics and discuss them in dedicated sessions.

2. International Conference on Business Analytics, Operational Research and Management (ICBAOPRM)

Date: September 12-13, 2024
Location: Rome, Italy
Participation fee: available upon registration
Event website: [link](https://waset.org/conferences-in-september-2024-in-rome/program#human-and-social-sciences-research)

This multidisciplinary conference gathers researchers and scholars from different fields of study. A section is dedicated to engineering and physical sciences research, where findings in business analytics and operational management are also discussed. In particular, plenty of papers discuss big data analytics and its application in different industries.

3.
ANA Measurement & Analytics Conference

Date: September 16-18, 2024
Location: Radisson Blu Aqua Hotel, Chicago, US, and online
Participation fee: starting from $1799
Event website: [link](https://www.ana.net/content/show/id/ms-roi-sep24)

This event is presented and sponsored by Google. The speakers are professionals with multi-year experience in analytics and marketing from well-known companies. For instance, [Christine Turner](https://www.linkedin.com/in/christine-turner-2966772/), the managing director of measurement and analytics at Google, will talk about how AI shapes the future of data and analysis. The [Senior Director of the LEGO Group](https://www.linkedin.com/in/in%C3%A9s-nadal-8b544a2/) will explain the short-term and long-term impact on marketing activations based on the company's experience.

4. Big Data LDN

Date: September 18-19, 2024
Location: Olympia, London, UK
Participation fee: free for visitors
Event website: [link](https://www.bigdataldn.com/)

This two-day event on data, analytics, and AI is open to data enthusiasts. It provides an exceptional opportunity to learn and share best practices in the field. Big Data LDN presents a great number of tools and effective solutions that help organizations shift towards data-driven decisions. There will also be 300+ speakers discussing various facets of data and analytics.

5. Chief Data & Analytics Officers (CDAO) Summit

Date: October 1-2, 2024
Location: Washington DC, US
Participation fee: starting from $795
Event website: [link](https://dc.cdosummit.com/)

This is a two-day event featuring Chief Digital, Data, and Analytics Officers from renowned companies as speakers. There will be a Head of Data & Analytics from PepsiCo, a Chief Data Officer from the US Securities Commission, and many others. The first day is dedicated to data analytics and digital transformation, while the second day's agenda focuses on AI.

6.
AI & Big Data Expo

Date: October 1-2, 2024
Location: RAI, Amsterdam, Netherlands
Participation fee: from €0 to €899
Event website: [link](https://www.ai-expo.net/europe/)

AI & Big Data Expo is the leading event for machine learning, deep learning, enterprise AI, ethical AI, and data ecosystems. The directors of data analytics from IKEA and Henkel are among the major speakers of this conference. There will be dozens of speakers from companies in different industries discussing big data and analytics of the present and future.

7. The 11th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2024)

Date: October 6-10, 2024
Location: San Diego, US
Participation fee: starting from $350
Event website: [link](https://dsaa2024.dsaa.co/)

This event is dedicated to the synergy of statistics and computing in data science and analytics. It aims to promote cross-domain interactions between academia and business, so the conference will host teachers, government officials, and developers of big data solutions. If you want to present your data science and analytics findings, feel free to submit a paper on the event's official website. Make sure to provide innovative research findings and their possible commercial applications.

8. International Conference on Business Analytics, Operational Research and Management (ICBAOPRM)

Date: October 12-13, 2024
Location: New York, US
Participation fee: available upon registration
Event website: [link](https://waset.org/conferences-in-october-2024-in-new-york/program#human-and-social-sciences-research)

This is another event in the series of ICBAOPRM conferences, like the one in Rome presented above. Researchers and scholars from engineering and physical sciences attend this conference to share their findings in business analytics and operational management.

9.
TDWI Transform

Date: October 20-25, 2024
Location: Orlando, US
Participation fee: starting from $1765
Event website: [link](https://tdwi.org/events/conferences/orlando/home.aspx)

This is a unique event comprising a series of lectures and master classes from leaders in modern analytics. They will share best practices on how to build a data strategy and provide a range of AI use cases. During the master class sessions, participants will be able to apply theoretical knowledge to real business scenarios.

10. Marketing Analytics & Data Science Conference

Date: October 21-23, 2024
Location: San Diego, US, and online
Participation fee: starting from $899
Event website: [link](https://madsconference.com/)

This conference aims to bring together professionals from the marketing and data science fields. There will be a series of workshops and lectures, where speakers discuss marketing analytics and its role in data-driven organizations and provide hints on how to adopt GenAI within marketing strategies.

11. International Conference of Business Intelligence Reasoning and Modeling (ICBIRM)

Date: November 7-8, 2024
Location: Istanbul, Turkey
Participation fee: starting from €250
Event website: [link](https://waset.org/business-intelligence-reasoning-and-modeling-conference-in-november-2024-in-istanbul)

This event is organized for scientists and researchers in business intelligence modeling. They will present innovative solutions for organizing large amounts of data within enterprises and governmental organizations. You can participate in this conference as a presenter or a listener.

12. Digital Identity Innovation Summit

Date: November 7-8, 2024
Location: Radisson Blu Hotel, Amsterdam Airport, Netherlands
Participation fee: starting from €499
Event website: [link](https://www.confx-identity.com/)

This summit welcomes innovators and leaders interested in shaping the future of digital identity.
It presents the latest trends in identity verification and explains their impact on customer experiences. During this summit, you will also explore how the mix of biometrics and AI creates more secure and user-friendly authentication solutions, and discover the ethical implications of AI-driven identity systems.

13. International Conference of Business Intelligence, Analytics, and Knowledge Management

Date: November 18-19, 2024
Location: Reykjavik, Iceland
Participation fee: starting from €250
Event website: [link](https://waset.org/business-intelligence-analytics-and-knowledge-management-conference-in-november-2024-in-reykjavik)

This conference is the last in the pool of ICBIAKM events of the 2024 season. Academic scientists, scholars, and researchers come to this event to share their findings in business intelligence, analytics, and knowledge management. To present research findings, an academic paper must be prepared and submitted in advance through the website. Those who want to participate as listeners just need to register for the event.

14. Machine Learning Week Europe

Date: November 18-19, 2024
Location: Munich, Germany
Participation fee: starting from €1395
Event website: [link](https://machinelearningweek.eu/)

This conference explores the practical application of predictive and prescriptive analytics, machine learning, [data mining](https://skyvia.com/blog/data-mining-tools/), and generative AI for business and government. Sessions will present real case studies, and workshops will provide in-depth knowledge and practical advice.

15. Big Data Conference Europe

Date: November 20-22, 2024
Location: Vilnius, Lithuania, and online
Participation fee: starting from €730
Event website: [link](https://bigdataconference.eu/)

This event comprises technical discussions in the areas of big data, machine learning, and AI. A series of workshops and sessions will bring IT professionals and users together to share experiences.
This conference will be informative and inspiring for everyone passionate about data.

How to Choose the Right BI Conference for You?

With this myriad of BI conferences, how do you select the right one to attend? Making the right choice is always harder in an ocean of options. Ask yourself these four simple questions to decide which BI conference to choose.

Do the speakers and agenda interest me? In most cases, the speakers and topics covered matter most. This information is provided in a dedicated section of the conference website and is usually available well in advance. By exploring the topics covered and the presenters, you can decide whether they align with your professional interests.

Can I afford to attend this BI event? As you see, the participation fee for some BI conferences exceeds a thousand dollars. If the event of your interest takes place far from where you live, consider the extra costs of traveling, transportation, accommodation, and food, and check how all this spending fits your budget.

Do I have enough time to visit this event? Most BI conferences last at least two days, though some offer one-day passes. Decide how much time you can dedicate and how it fits your personal and work schedules.

Is the feedback from previous years positive? Even though an event can look promising, with lots of famous speakers and trending topics, you should also consider the opinions of attendees from previous years.

If the answer is 'yes' to all four questions, there you go! If there are several winners, it's up to you to decide: see which speakers you like more, which city is more convenient for you to visit, or evaluate other criteria that matter to you.

Preparing for a Business Intelligence Conference

Going to a business intelligence conference isn't like going to the cinema or a concert. Such events require some effort and preliminary planning.
Here are some tips to help you prepare for a BI conference.

- Have a look at the event website. See all the details about the conference, such as the agenda, speakers, location, dates, etc.
- Pay attention to dates. This is particularly important for those who want to present their findings at scholarly conferences, as there are multiple deadlines for application and paper submission.
- Register for the conference. Fill in the required information on the registration form and pay the participation fee.
- Explore BI blogs. Conferences are designed not only for presenters who share valuable insights and experience with the public; they also assume the active participation of listeners in the discussions. To prepare well, read business intelligence blogs and watch YouTube channels to discover current trends in the field.
- Watch highlights from the previous year's event. Most conferences dedicated to BI subjects recur annually or quarterly. As a rule, conference organizers post highlights of past events on the official website, so you can look through the available materials to get a general idea of the event's vibe.
- Book accommodation and flights. Take care of your stay during the conference in advance. Select the flight and accommodation early, as there are usually more affordable rates for early booking. If a conference takes place in a hotel, special fares for participants may be available.

How Can Skyvia Help Prepare for BI Events?

After being inspired by new discoveries and approaches, it's tempting to bring them into practice. Business intelligence needs lots of data, so it's worth using dedicated tools to collect it from various sources for further [data analysis](https://skyvia.com/data-analysis). This is possible with the [Skyvia](https://skyvia.com/) universal cloud platform, which is suitable for a wide range of data-related tasks.
For instance, you can consolidate data from multiple sources in a single database or data warehouse using the [Import](https://skyvia.com/data-integration/import) or [Replication](https://skyvia.com/data-integration/replication) tool. Skyvia can also regularly send updated data almost in real time. Then, analyze or visualize the consolidated data with your preferred BI tool. Skyvia is accessed directly from a browser and is ranked among the [most user-friendly ETL tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1). You can start using Skyvia completely for free and then select the plan that best suits your business needs.

Summary

Business intelligence conferences are excellent events for discovering new trends and creating new professional connections. They are also great for professional development and experience exchange. Luckily, dozens of BI events take place every year in different parts of the world. To select the ones that work best for you, check whether the agenda meets your professional goals and the expected spending aligns with your budget. For a deeper insight, look at reviews from event attendees of previous years. Once you have made your choice, register for the event and explore the conference website for details. Also, take a look at BI blogs and modern tools, such as Skyvia, that assist BI professionals in their daily workflows.

Bonus: 5 Business Intelligence Conferences in 2025

As more than half of 2024 is already behind us, let's take a look at upcoming BI conferences in 2025. Here is a selection of the most anticipated BI events at the beginning of next year.

1. International Symposium on Computational and Business Intelligence (ISCBI)

Date: February 14-16, 2025
Location: Macau, China
Participation fee: starting from $220
Event website: [link](https://www.iscbi.com/)

During this conference, scientists will present their latest findings in the field of computational and business intelligence.
Presenters and participants will be able to share their experiences and ideas in face-to-face discussions. ISCBI also provides great opportunities for networking and collaboration.

2. Gartner Data and Analytics Summit

Date: March 3-5, 2025
Location: Orlando, US
Participation fee: starting from $3825
Event website: [link](https://www.gartner.com/en/conferences/na/data-analytics-us)

This summit addresses data challenges along with contemporary issues in analytics and AI. The best practices in data management and governance are presented to address those challenges, so attendees get the technical know-how to drive organizational success.

3. Domopalooza

Date: March 18-21, 2025
Location: Salt Lake City, US
Participation fee: starting from $1395
Event website: [link](https://www.domo.com/domopalooza)

Domopalooza is a notable BI event organized by Domo, a company famous for its business intelligence solutions. This event unites thousands of analysts, executives, and CEOs from all over the world. Its main purpose is to discuss day-to-day digital transformation and the role of data analytics in it.

4. Tableau Conference

Date: Spring 2025
Location: San Diego, US
Participation fee: starting from $1550
Event website: [link](https://www.salesforce.com/tableau-conference/)

Key players from Tableau, along with partner organizations, run this conference. They share recent product updates and best practices for using Tableau in analytics and data visualization across industries. This event could be a great opportunity both for those already using Tableau and for those who want to switch to it to maximize the impact of business intelligence within an organization.

5. Big Data & Analytics Summit

Date: Spring 2025
Location: Toronto, Canada
Participation fee: starting from $1395
Event website: [link](https://www.bigdatasummitcanada.com/)

This event aims to bring together data experts from various companies in the US and Canadian markets.
It will include a series of lectures, practical workshops, case studies, and brainstorming sessions about extracting value from data with modern technologies. Meal times will also be provided for networking and communication.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
Table of contents

- Sample Table for PostgreSQL CSV Export and Import
- How to Import CSV to PostgreSQL Table
  - Method 1: Import from CSV File to PostgreSQL via Command Line Using COPY Statement
  - Method 2: Import CSV to PostgreSQL with pgAdmin 4
  - Method 3: Importing Data to PostgreSQL Using Skyvia
- How to Perform Postgres Export to CSV
  - Method 1: Copy Postgres Table to CSV File via Command Line
  - Method 2: Export Data from PostgreSQL to CSV with pgAdmin 4
  - Method 3: Exporting Data Using Skyvia
- Troubleshooting
- Conclusion

Sample Table for PostgreSQL CSV Export and Import

We will use a sample table with employee data to show the CSV export and import scenarios in PostgreSQL. Here is the structure of the table:

```sql
CREATE TABLE employees (
  emp_id SERIAL,
  first_name VARCHAR(20),
  last_name VARCHAR(20),
  date_of_joining DATE,
  email VARCHAR(255),
  PRIMARY KEY (emp_id)
);
```

How to Import CSV to PostgreSQL Table

Let's start with importing data from a CSV file into a Postgres table. There are several methods you can use, such as the command-line interface and third-party tools; we elaborate on them below. Note that these tutorials do not cover creating a table from a CSV file in Postgres. The table must be created first; otherwise, you will see an error while importing the data.

Method 1: Import from CSV File to PostgreSQL via Command Line Using COPY Statement

We can use the PostgreSQL COPY command to load data from a CSV file into Postgres. The command can import a file with or without a header into the table, provided that the file has the same columns of the same types. When exporting data, the COPY command uses the TO keyword; when importing data into a Postgres table, it uses the FROM keyword.
Both the server-side COPY command and the client-side \copy command can be used to import data to PostgreSQL tables. Here are examples for both:

```sql
-- Server-side
COPY employee_import FROM '/var/lib/postgresql/employee.csv' csv header;
-- Client-side (psql)
\copy employee_import FROM '/var/lib/postgresql/employee.csv' csv header
```

Method 2: Import CSV to PostgreSQL with pgAdmin 4

Let's show how to import CSV files to Postgres tables with pgAdmin. To import data to PostgreSQL with pgAdmin, perform the following steps:

1. Right-click the table and select Import/Export from the shortcut menu to open the wizard.
2. Click the Import/Export flag button to switch to import mode.
3. Select the file that needs to be imported.
4. Enter the delimiter of the file and enable the header option if the file has a header.
5. Click OK to start the import process.

After clicking OK, a window appears indicating that the process has completed successfully.

Method 3: Importing Data to PostgreSQL Using Skyvia

Skyvia and its connections to PostgreSQL and Dropbox are described in detail in the Export section below, so here we get straight to importing CSV to Postgres. To create an Import integration in Skyvia, perform the steps below:

1. Click NEW and select Import under Integration.
2. Under Source, select CSV from storage service, and then select the source Dropbox connection.
3. Move on to Target and choose your PostgreSQL connection.
4. Click Add new at the top right of the page to add an Import task. In the task, you specify the file and where and how to load it.
5. Select the file to load data from and click Next.
6. Select the employees table to load data to and click Next.
7. Map source and target columns to each other and click Finish.

Note that Skyvia also offers integration tools for scenarios other than PostgreSQL CSV import and export. It supports over 200 different cloud apps, databases, and file storage services and suits most data integration scenarios.
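To try the import commands above, you need a CSV file whose columns match the table definition. A minimal Python sketch that generates such a file (the file name and sample values are illustrative):

```python
import csv

# Columns mirror the employees table:
# emp_id, first_name, last_name, date_of_joining, email
rows = [
    {"emp_id": 1, "first_name": "Alex", "last_name": "Smith",
     "date_of_joining": "2023-01-15", "email": "alex.smith@example.com"},
    {"emp_id": 2, "first_name": "Maria", "last_name": "Jones",
     "date_of_joining": "2023-03-02", "email": "maria.jones@example.com"},
]

with open("employee.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["emp_id", "first_name", "last_name",
                    "date_of_joining", "email"],
    )
    writer.writeheader()   # header row, so use 'csv header' with COPY/\copy
    writer.writerows(rows)
```

A file produced this way can then be loaded with `\copy employees FROM 'employee.csv' csv header` from the psql prompt.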
How to Perform Postgres Export to CSV

This section demonstrates several methods for exporting data from PostgreSQL.

Method 1: Copy Postgres Table to CSV File via Command Line

First, we will show how to use the PostgreSQL COPY command to export a PostgreSQL table. You can use the command on the client side or on the server side: use it on the client side to export a PostgreSQL table to CSV and save it on a local computer, and on the server side to write the CSV file on the server. For both options, the export process from Postgres to CSV is straightforward, and it can be repeated to export all tables.

PSQL \copy Command for Client-Side Export

The psql \copy command exports data from a Postgres table to a CSV file on a client machine. To use this command, you need access to the psql prompt. The following examples illustrate it. To copy an entire table to a CSV file, use \copy. This copies the contents of the table to the client computer; the file will not contain the table headers.

```sql
\copy employees to '/var/lib/postgresql/emp.csv' csv
```

You can also export the results of a query instead of copying the entire table:

```sql
\copy (select * from employees where first_name='Alex') to '/var/lib/postgresql/emp_alex.csv' csv
```

As output, psql shows the total number of records copied.

Export Postgres Table to CSV with Header

By default, the result file exported by the client-side \copy command has no header. To get a CSV file with headers, add the 'header' keyword to the command:

```sql
-- Complete table
\copy employees to '/var/lib/postgresql/emp_header.csv' csv header

-- Specific records using a query
\copy (select * from employees where first_name='Alex') to '/var/lib/postgresql/emp_alex.csv' csv header
```

The above commands include headers in the result files.
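A header row makes the exported file self-describing: any CSV reader can map values to column names without knowing the table definition. A small Python sketch of consuming such a file (the inline string stands in for the content of an exported file):

```python
import csv
import io

# Simulated content of a file exported with: \copy employees to ... csv header
exported = "emp_id,first_name,last_name\n1,Alex,Smith\n2,Maria,Jones\n"

# DictReader uses the header row as the field names
reader = csv.DictReader(io.StringIO(exported))
names = [row["first_name"] for row in reader]
print(names)  # ['Alex', 'Maria']
```

Without the header, the reader would treat the first data row as field names, which is one reason to be explicit about the 'header' keyword on both export and import.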
PostgreSQL COPY Command for Server-Side Export

Let's show how to export data to a CSV file on the server using the PostgreSQL COPY command. This command copies the contents to a specified location on the server. First, connect to the server's database through a psql prompt. Then you can use the following COPY examples:

```sql
-- Complete table
COPY employees TO '/var/lib/postgresql/emp_server.csv' csv header;

-- Specific records using a query
COPY (select * from employees where first_name='Alex') TO '/var/lib/postgresql/emp_server_alex.csv' csv header;
```

You can read more about the command in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-copy.html).

Method 2: Export Data from PostgreSQL to CSV with pgAdmin 4

[pgAdmin](https://www.pgadmin.org/) is an open-source tool for administering, managing, and developing Postgres databases. In this section, we focus on exporting table contents. To export data using pgAdmin, follow the steps below:

1. Right-click the employees table and select Import/Export in the shortcut menu.
2. Specify the path where the file needs to be saved.
3. In the Miscellaneous section, select Headers to export the headers along with the data.
4. Click OK to start the export process.

The data will be exported into the file.

Method 3: Exporting Data Using Skyvia

Skyvia is a cloud-based solution that helps you [automatically import/export data from PostgreSQL to CSV files](https://skyvia.com/data-integration/postgresql-csv-file-import-and-export). Skyvia allows not only importing/exporting CSV files from a local computer but also automatically loading files to/from file storage services or FTP/SFTP servers. Below, we show loading CSV files between PostgreSQL and Dropbox. You need accounts on [Skyvia](https://app.skyvia.com/) and [Dropbox](https://www.dropbox.com/); both are free to register.
Creating Connections in Skyvia

After you register with Skyvia, you need to create connections to your PostgreSQL database and Dropbox. To create a connection in Skyvia, click NEW and select Connection in the menu on the left. Then, in the list of connectors, select the data source you want to connect to and configure the required parameters. For Dropbox, you just sign in with Dropbox and allow Skyvia to access your data. For a [PostgreSQL connection](https://docs.skyvia.com/connectors/databases/postgresql_connections.html), you need to specify the server address, port, username, password, database, and schema. Optionally, you can configure encryption parameters to set up a secure connection.

Creating an Integration for Data Export from PostgreSQL

1. Click NEW and select Export under Integration, similar to creating an import integration.
2. Under Source, select the PostgreSQL connection.
3. Move on to Target and select CSV to storage service. Then select the Dropbox connection you have created and choose the folder where the result file with the exported data will be placed.
4. Click Add new at the top right of the page to add an export task. In the task, you specify a table to export: in the Object list, select the table. Here, you can also configure filters to export only some of the records, clear checkboxes for fields you don't want to export, specify the file name and compression, etc.

Finally, you can schedule the export integration (as well as any other integration in Skyvia) for automatic execution on certain days and at a specific time. Enjoy the process with no more effort!

Troubleshooting

When importing or exporting data to PostgreSQL, the following common mistakes may occur.

Delimiter Is Incorrect

It's essential to specify the delimiter used in the CSV file correctly. All the solutions described, including Skyvia, the COPY command, and pgAdmin, support specifying a delimiter.
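If you're not sure which delimiter a file actually uses, you can detect it before running COPY. A quick diagnostic sketch using Python's standard csv.Sniffer (the sample content is illustrative):

```python
import csv

# A file that uses semicolons instead of commas
sample = "emp_id;first_name;last_name\n1;Alex;Smith\n2;Maria;Jones\n"

# Restrict the sniffer to plausible candidates for a reliable guess
dialect = csv.Sniffer().sniff(sample, delimiters=";,")
print(dialect.delimiter)  # ';'
```

The detected character can then be passed to COPY via its DELIMITER option, or entered in the delimiter field of the pgAdmin Import/Export dialog or the Skyvia task settings.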
Data Types Are Not Matching

Make sure the columns of the source file match the PostgreSQL table columns when using the COPY command or pgAdmin. Skyvia offers more flexibility, since it allows mapping columns manually and performing type conversions via expression mapping.

Not Enough Privileges

It's also important to configure privileges on PostgreSQL correctly. The user must have access to the corresponding table, as well as sufficient privileges for the directory the file is saved to or imported from (for the server-side COPY command).

Conclusion

In this article, we have demonstrated several ways to export and import CSV files to PostgreSQL tables, using both standard PostgreSQL tools and third-party no-coding solutions like Skyvia. Skyvia includes a free plan that allows importing/exporting up to 10,000 records per month, so such import/export tasks can be performed for free, as with the standard tools. Besides, it offers flexible mapping settings that let you transform data via various string, mathematical, and datetime functions and lookups, along with support for file storages and FTP, automatic file compression for export, etc. You are welcome to share which of the methods listed above you prefer in the Comments section. You can also suggest other methods not mentioned in this article.

FAQ on CSV Import and Export in PostgreSQL

How do I export a PostgreSQL table to a CSV file?
You can use the standard PostgreSQL COPY command, a no-coding on-premises tool like pgAdmin, or a cloud solution like Skyvia.

How do I import a CSV file into a PostgreSQL table?
You can use the same tools as for export: the native PostgreSQL COPY command, pgAdmin, etc. Note that these tools require the file to have the same structure as the table. Skyvia doesn't have such a requirement, as it provides detailed mapping settings.
How can I handle headers when exporting and importing CSV files in PostgreSQL?
The PostgreSQL COPY command has a 'header' keyword to indicate that the file has a header, and pgAdmin has a corresponding setting as well. In Skyvia, there is a corresponding setting for export; for import, the header is required.

Can I export/import CSV files with delimiters other than commas in PostgreSQL?
The PostgreSQL COPY command allows you to specify a delimiter via the DELIMITER keyword. pgAdmin and Skyvia offer corresponding settings as well.

[Sergey Bykov](https://skyvia.com/blog/author/sergeyb/)
Sergey combines years of experience in technical writing with a deep understanding of data integration, cloud platforms, and emerging technologies. Known for making technical subjects approachable, he helps readers navigate complex tools and trends with confidence.
It is important to integrate the two systems so you and your team have all the information needed when it matters most. Siloed data systems can have catastrophic effects on your business: inaccurate quotes, outdated data, incorrect predictions, lackluster client relations, disjointed operations, and slow manual processes. Now let us take a look at the main ERP and CRM integration approaches, challenges, benefits, and tools.

Table of contents

- Main Challenges of CRM and ERP System Integration
- General Integration Points Between ERP and CRM Systems
- Main Benefits of CRM/ERP System Integration
- Tools for ERP and CRM System Integration
- Conclusion

Main Challenges of CRM and ERP System Integration

The first challenge businesses often face when integrating their ERP and CRM systems is planning and managing this intensive project. Those involved need to choose what type of integration they want to attempt and whether to do it alone or hire a Software-as-a-Service (SaaS) ERP/CRM third party. Secondly, businesses must ensure data quality before CRM and ERP system integration. This means adhering to data integration architecture principles by cleaning up the data in both systems and standardizing the data format. Lastly, businesses often underestimate the amount and types of data they should integrate to create a successful unified system. Below, we highlight the main ERP/CRM integration points a business should consider synchronizing.

General Integration Points Between ERP and CRM Systems

Client and Account Information

Syncing your client and account information is probably the most common integration point and is fundamental to a successful ERP and CRM system integration. Capturing basic information such as order history, shipment details, contracts, financial records, and more provides both your factory floor and your sales force with easy access to valuable client insights.
Contact Data

Contact data changes frequently, as people move positions or leave companies regularly. That is why it is important that contact data flows freely between CRM and ERP systems. Team members in regular contact with your clients can update contact data as soon as new information comes to light, ensuring everyone in the company has the most up-to-date information about a client.

Product Information

Sales teams need product information to be available within the CRM, not just siloed in the ERP, to create accurate quotes. New items can easily be created within an integrated system, as the records in the ERP and CRM systems are already linked, and any updates can be made in either system.

Product Price

As already stated, sales teams need product information and, most importantly, product price. ERP and CRM systems can contain different information regarding prices and any discounts your company may offer. Sales teams can use this information to provide clients with accurate price quotes in either system immediately, allowing your sales team to close deals faster.

Sales Process

It is considered best practice to synchronize the process of generating quotes in both your ERP and CRM systems. Before CRM and ERP system integration, quotes were created in the ERP, and then someone had to manually enter the same information into the CRM. Once the client confirmed the quote, a sales order was generated in the ERP. The process was still not complete, as someone then had to update the opportunity stage in the CRM. This laborious workaround resulted in sales teams hopping from one system to another to gather the information needed to complete the sales process. Integrating ERP and CRM systems allows sales teams to work in one system, streamlining the process. It also gives your sales team the ability to forecast expected demand and supply.
Sales Order Synchronization doesn\u2019t end when a quote within the CRM transforms into a sales order in ERP. A sales order should be integrated back to the CRM to maintain a comprehensive view of the account. Changes can still occur within the fulfillment process whether it is on the part of the client, invoicing, or distribution. Sales History Having the sales history available in both the ERP and CRM systems is an incredibly powerful tool for your business. Integrating this touchpoint creates a valuable pool of data from which your team can predict future demand and track sales trends through the year. Accurate sales history influences your marketing techniques, client interactions and production. Payment Information Client payment information should not remain siloed within the ERP system. Sales teams can use this information found within the CRM to see payment patterns or overdue/credit amounts to better negotiate a quote for a client. Main Benefits of CRM/ERP System Integration CRM and ERP system integration provides you with a comprehensive view of your clients, saves you time on data entry and improves client relations. The ERP and CRM system integration also provides you with improved reporting and analytics, allowing you to better predict buying habits and track clients\u2019 relationships and history with your business. The benefits of CRM architecture in sales productivity are easy to see, but the synchronization between ERP and CRM systems can provide sales teams with valuable information when and where they need it, allowing them to close deals faster and more efficiently. ERP and CRM system integration not only benefits the sales team but also benefits departments throughout your business. A cross-departmental approach allows teams to collaborate easier as everyone has access to the same information in real-time resulting in departments working together to accomplish the same goals. 
A completely integrated ERP and CRM system provides employees with valuable information when and where they need it most, allowing them to effectively aid clients in their requests. Faster access to crucial information allows sales reps to quickly and accurately produce quotes. All departments from sales and support to finance and accounting benefit greatly from uninhibited access to information. ERP and CRM systems can contain very similar information, for example, contact data. Integrating these systems negates the need to input the same information twice and also prevents data duplication. Using one integrated system allows quotes generated within the CRM to be automatically updated into orders within the ERP. This integration reduces the time required for data management, therefore, increasing business efficiency. Sales teams can also easily track orders throughout the sales process and make any updates required by clients. It is easier for employees to familiarise themselves with one integrated system rather than learning the ins and outs of two individual systems. This reduces the time and money spent on training. Businesses often require trained individuals to separately manage their ERP and CRM systems as well as the data migration between the two. It is easier to maintain one unified platform which reduces IT, staffing and training costs. Tools for ERP and CRM System Integration Skyvia [Skyvia](https://skyvia.com/) is a universal SaaS (Software as a Service) cloud-based platform for data integration, management, backup and access. It provides quick and easy data integration, (import, export, replication, synchronization, migration, etc.) The platform requires little to no configuration and no programming skills. It supports a number of cloud applications and databases and requires no software except a web browser. 
Skyvia\u2019s data flow allows building integrations with powerful transformations between multiple data sources such as a CRM and cloud ERP integration. Xplenty [Xplenty](https://www.xplenty.com/) is an ETL (extract-transform-load) cloud-based solution that automates data flows. The solution allows users to create powerful data pipelines between CRM ERP tools with almost no coding. Xplenty systems require no configuration, maintenance or updates. If users require customization they can make use of Xplenty\u2019s Application Programming Interface (API). Celigo [Celigo](https://www.celigo.com/) is an [integration platform as a service](https://skyvia.com/blog/what-is-ipaas/) (iPaaS) designed to automate the integration of data into cloud applications including CRMs and ERPs. Their user-friendly no-code integration of CRM with ERP system and its approaches is ideal for less technical users. Celigo implements CRM and ERP system integration as the first step to data automating and scaling. Dell Boomi [Dell Boomi](https://boomi.com/) is also an iPaaS provider with a cloud-native data integration platform. It performs integrations within complex hybrid cloud ecosystems such as CRMs and ERPs providing a user-friendly end-to-end data integration experience. Most common integrations are automated within the platform. Other features include API management, data discovery and data quality governance. Conclusion We have tried to answer the question of whether CRM can be integrated with ERP. We hope we\u2019ve managed to do it successfully, but it is also clear that the ERP and CRM system integration is a complicated process but ultimately benefits any businesses that undergo it. The best practice is to leave such involved integrations to the experts in order to achieve optimal results. Contact\u00a0the [Skyvia team](https://skyvia.com/company/contacts) to learn more about its data integration platform for CRM and ERP system integration. Was our article useful? 
You are welcome to leave comments, suggestions, and questions if you have any.

[Amanda Claymore](https://skyvia.com/blog/author/amandac/), Content Marketer
Salesforce to SQL Server contributes to more accurate data analysis and efficient reporting. Moving data in the other direction helps companies enrich their customers' profiles. However, SQL Server to Salesforce integration has always been a challenging task. In this article, we describe 5 different methods to [connect Salesforce and SQL Server](https://skyvia.com/data-integration/integrate-salesforce-sql-server) with ease and no coding:

- Import and Export Wizard
- ODBC Drivers
- SSIS Data Flow
- Skyvia Data Integration
- Skyvia Connect

Table of Contents

- Salesforce to SQL Server Integration: Import and Export Wizards
- Salesforce to SQL Server Integration: ODBC Driver and Linked Server
- Salesforce to SQL Server Connection: SSIS Data Flow
- Code-free Salesforce and SQL Server Integration: Skyvia Data Integration
- Real-time Salesforce and SQL Server Connection: Skyvia and Salesforce Connect
- Use Cases for Salesforce to SQL Server Integration
- Conclusion
- FAQ

In this article, we pay close attention to methods 4 and 5 and describe in detail all the nuances of data migration with these methods.

Salesforce to SQL Server Integration: Import and Export Wizards

SQL Server has a whole ecosystem of native and third-party tools for importing data from other sources. Let's start by exploring the [SQL Server Import and Export Wizard](https://learn.microsoft.com/en-us/sql/integration-services/import-export-data/import-and-export-data-with-the-sql-server-import-and-export-wizard?view=sql-server-ver16), a standard tool for [data movement](https://skyvia.com/blog/top-data-movement-tools/). The SQL Server Import Wizard can extract data from different data sources:

- SQL Server databases
- Oracle
- Flat files like CSV
- PostgreSQL
- MySQL
- Azure Blob Storage

Importing data directly from Salesforce requires a third-party ADO.NET provider or an ODBC driver. However, there's a workaround involving a CSV file containing Salesforce data.
There are several tools for exporting Salesforce data to CSV files, both cloud-based, like [Skyvia Data Loader](https://skyvia.com/data-integration/salesforce-data-loader) or [Dataloader.io](http://dataloader.io), and locally installed, like the [Salesforce Data Loader](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/loader_install_mac.htm) tool. We'll explore the data import case using the Data Loader tool.

1. In your Salesforce account, go to Setup.
2. Under Platform Tools, click Data Management and select Data Loader.
3. [Download and install](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/loader_install_mac.htm) an appropriate version of Data Loader.
4. In your Salesforce account, select the necessary data objects to be exported into a CSV file.
5. Use the SQL Server Import Wizard to upload the Salesforce data into SQL Server tables.

Check more information on the SQL Server Import Wizard in the [Microsoft documentation](https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/start-the-sql-server-import-and-export-wizard?view=sql-server-ver15).

NOTE: This integration method is unidirectional; you cannot migrate data in the reverse direction.

Benefits:
- Built-in tool, no need for extra downloads and installations.
- Easy to use with Microsoft-typical interfaces.
- Compatible with other SQL databases and flat files.

Limitations:
- No direct connection to Salesforce.
- No way to compare lost or modified data.
- Data export limitations on data restoration, deduplication, etc.

Salesforce to SQL Server Integration: ODBC Driver and Linked Server

Linked servers allow SQL Server to read data from external sources and execute SQL commands against them. You can link another data source to SQL Server via an OLE DB or ODBC interface. There are quite a few ODBC drivers for Salesforce on the market.
Consider using Devart ODBC Driver for Salesforce, which is available for all popular operating systems: Linux, Windows, and macOS. Below is a short tutorial on how to create a linked server to Salesforce in SQL Server Management Studio on Windows.

1. [Download ODBC Driver for Salesforce](https://www.devart.com/odbc/salesforce/), double-click the .exe file, and follow the installation instructions.
2. Go to System and Security > Administrative Tools > ODBC Data Sources to open the ODBC Data Source Administrator tool.
3. Select the System DSN tab and click Add to create a new data source.
4. Select Devart ODBC Driver for Salesforce from the list of available drivers and click Finish.
5. Specify the data source name, your Salesforce account and credentials, and any other necessary information about the source.
6. In the ODBC Data Source Administrator tool, click Test to verify the connection to Salesforce.

Once the connection is successfully established, you can retrieve data from Salesforce to SQL Server. It's also possible to perform a range of data management tasks with INSERT, UPDATE, and DELETE operations.

NOTE: This method also provides only unidirectional integration, from the CRM to the database.

Benefits

- Can access external data from outside of SQL Server.
- Optimizes query performance by pushing processing to the remote data source rather than bringing all data into your local server.
- Addresses diverse data sources in a uniform way.

Limitations

- Performance overhead, which might significantly increase runtime.
- Running queries against a remote server can make query performance difficult to tune.
Salesforce to SQL Server Connection: SSIS Data Flow

[SQL Server Integration Services (SSIS)](https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) is a powerful [data integration solution](https://skyvia.com/blog/data-integration-tools/) designed for a broad range of [data migration tasks](https://skyvia.com/blog/data-migration-tools/). If you need more than a simple export of Salesforce data to SQL Server, [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) could be a good option. You can perform data transformation and build complex integration scenarios with this tool. To load data from Salesforce, you will also need third-party SSIS components, such as [Devart SSIS Data Flow Components for Salesforce](https://www.devart.com/ssis/salesforce.html). This solution helps to synchronize Salesforce with SQL Server, migrate data from/to Salesforce, and automate integration via SSIS Data Flow tasks.

To connect Salesforce and SQL Server using SSIS:

1. Download [Devart SSIS Data Flow Components for Salesforce](https://www.devart.com/ssis/salesforce.html).
2. Proceed with the on-screen instructions to install and configure this solution.
3. In SSIS, create an Integration Package.
4. Create a Data Flow task.
5. Add a Devart Salesforce Source component to the diagram and configure it thoroughly to get the necessary Salesforce data.
6. Add components to the diagram, linking them and configuring column mapping. If needed, add the respective transformation components and connect them with links.
7. Use the standard ADO.NET Destination component with an ADO.NET connection to load data to SQL Server.

Similarly, you can load data in the reverse direction or configure a bi-directional data flow with SQL Server Integration Services.

Benefits

- Can handle data from heterogeneous sources.
- Provides powerful transformations.
- Data can be loaded to many different destinations simultaneously.

Limitations

- Designed with tech professionals in mind, it might be difficult for non-developers.
- Associated with on-premises limitations, since it needs porting for cloud or hybrid setups.
- It functions fully only in the Microsoft environment and less so with other SaaS apps.

Code-free Salesforce and SQL Server Integration: Skyvia Data Integration

Skyvia offers several products to [integrate Salesforce and SQL Server](https://skyvia.com/data-integration/integrate-salesforce-sql-server) data. In this section, we will start with the Skyvia Data Integration product for building [ETL](https://skyvia.com/learn/what-is-elt) and [ELT](https://skyvia.com/learn/etl-pipeline-meaning) pipelines. In the next section, we will go on with Skyvia Connect (Web API Server), which works in conjunction with [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/) through the OData protocol. These methods don't require any coding skills and can be easily implemented by businesses of different sizes, from SMBs to large Fortune 500 enterprises.

[Skyvia Data Integration](https://skyvia.com/data-integration/) is a great option for loading data from one source to another in any direction. You can easily transfer data from Salesforce to SQL Server and vice versa by connecting to these systems from the Skyvia interface and setting up integrations. Skyvia is fully managed, so minimal setup is required. For convenience and better understanding, we describe use cases with the Skyvia Import (ETL) and Skyvia Replication (ELT) tools below.

Salesforce to SQL Server Integration with Data Import (ETL)

The [Import tool](https://skyvia.com/data-integration/import) is an ETL-based solution with a visual wizard that allows users to load data from SQL Server to Salesforce and in the opposite direction.
It also allows applying various data transformations and mapping to match the data structures of the two systems. In the sample integration scenario below, we copy data from the Customers table in SQL Server and send it to Accounts and Contacts in Salesforce. The main challenge of such an operation is to preserve the relations of the SQL Server data when importing them to Salesforce: on data import, the relation between the corresponding Account and Contact must be created in the Salesforce database. This is easy with Skyvia. When data is inserted from one table into multiple Salesforce objects, Skyvia builds such relations automatically.

To connect SQL Server to Salesforce, there are several essential steps to take:

1. Create connections for both tools in Skyvia.
2. Create an Import package to migrate data.
3. Schedule recurring data updates.
4. Run the integration and check the results.

1. CREATE A CONNECTION TO SQL SERVER AND SALESFORCE

- Click + Create New in the top menu and select Connection in the menu on the left.
- Select the Database category from the drop-down list on the left and choose SQL Server from the available databases.
- Select the Connection Mode (Direct, Agent, or Agent with Alias) and specify the other required parameters according to the selected mode. To learn more about SQL Server connection setup, [visit this page](https://docs.skyvia.com/connectors/databases/sqlserver_connections.html).
- Create a new connection to Salesforce in Skyvia by clicking + Create New in the top menu and selecting Connection in the menu on the left.
- Select the CRM category from the drop-down list on the left and choose Salesforce from the available apps.
- To connect to Salesforce, use either Salesforce User Name and Password or OAuth authentication. If you don't want to store your Salesforce credentials in Skyvia, we recommend trying the OAuth method. After clicking Sign in with Salesforce, the Salesforce login window pops up.
You sign in via a web browser and automatically receive an access token. After that, you save the connection. With both connections established, let's proceed to creating the integration scenarios to quickly and conveniently export data from SQL Server to Salesforce or vice versa.

2. CREATE AN IMPORT PACKAGE TO MIGRATE DATA

- In the top menu, click + Create New and select Import in the Integration column.
- In the opened package editor, select the Database or cloud app source type.
- In the Connection drop-down lists, select SQL Server as a source and Salesforce as a target.
- Create a new integration task by clicking Add new.

NOTE: A task is a unit of an ETL process (data extraction, transformation, and loading). When creating an import package, we need to add a task for each SQL Server table.

- In the task editor, select data from the SQL Server tables, set filters if needed, and select the DML operation (Skyvia supports not only the INSERT operation for data import but also UPSERT, UPDATE, and DELETE).
- Configure the mapping settings. Skyvia offers numerous mapping types, including Column, Expression, Lookup (Source and Target Lookup), Constant, Relation, External ID, and many others. In our case, we need to map the Customers table columns (SQL Server) to the target Account and Contact object fields (Salesforce). Some columns of the Customers table, such as Phone and Fax, are mapped automatically to Account object fields. For the others, we use simple column mapping, where each field of the source table is mapped to the corresponding target object field. We map the CompanyName column to Name, the Address column to BillingStreet, the City column to BillingCity, etc. To map Contact object fields, click the target table name (Account) and select Account.Contact from the drop-down list in the upper right corner. Below, you can see what the mapping process looks like.
- Save the task.
- Click Create to create the import package. Finally, click Run to start package execution.
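The simple column mappings described above amount to renaming source columns to target fields. As a rough illustration of what the configured mapping does (this is a hypothetical helper, not Skyvia's API), the Customers-to-Account mapping could be expressed as:

```python
# Illustrative only: mimics the column mapping configured in the task editor.
CUSTOMERS_TO_ACCOUNT = {
    "CompanyName": "Name",
    "Address": "BillingStreet",
    "City": "BillingCity",
    "Phone": "Phone",   # mapped automatically by matching name
    "Fax": "Fax",       # mapped automatically by matching name
}

def map_row(source_row, mapping):
    """Rename source columns to target fields, dropping unmapped columns."""
    return {target: source_row[source]
            for source, target in mapping.items() if source in source_row}
```

Each import task applies this kind of renaming to every source row before loading it into the target object.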
3. SCHEDULE RECURRING DATA UPDATES

Skyvia allows you to set a schedule to execute the import automatically. This might be useful for configuring data loading operations to run periodically or for delaying an operation to a later time. If automated data transfer is needed, click Schedule and set the time preferences for the import task run.

4. RUN INTEGRATION AND CHECK THE IMPORT RESULTS

Click Run to start the integration. You can check the results of the import by downloading an Excel file available in the Monitor tab, with detailed information about successful and failed records. An import is considered successful if all the records were loaded to the destination; it is considered failed either when at least one record has not been loaded successfully or when the integration has not been executed completely (for example, when its connection became invalid).

Load Salesforce Data to SQL Server with Data Replication (ELT)

With the [Replication tool](https://skyvia.com/data-integration/replication), you can copy data from cloud applications to databases. You don't need to create tables in the database yourself, since Skyvia can do that automatically. Note that replication doesn't allow any data transformations. It's also unidirectional, which means you can only [replicate data](https://skyvia.com/blog/top-data-replication-tools/) from Salesforce to SQL Server.

To replicate data from Salesforce to SQL Server, you will need to:

1. Create connections to both tools in Skyvia.
2. Create the replication package.
3. Schedule recurring data updates.
4. Run the replication and check the results.

1. CREATE CONNECTIONS TO SALESFORCE AND SQL SERVER

If you haven't connected these tools yet, follow the procedure mentioned previously in this article. Otherwise, proceed to the next step.

2. CREATE THE REPLICATION PACKAGE

- Click + Create New in the top menu and select Replication under the Integration column.
- Select Salesforce as a source and SQL Server as a target.
- As soon as you select Salesforce, the table with Salesforce objects appears on the right. Choose the data objects you want to replicate. For each of the selected objects, you can edit and filter its data according to specified conditions, so that it is replicated to the SQL Server database accordingly.
- You can also select the Incremental Updates option if needed. That way, Skyvia doesn't perform a full replication (copying all the data) each time the replication is executed. Instead, it performs a full replication only on the first run. During subsequent runs, Skyvia detects data that was changed in Salesforce and applies these changes to the SQL Server database.
- Click Create to create the replication package. Finally, click Run to start the data transfer.

3. SCHEDULE RECURRING DATA UPDATES

Skyvia allows you to set a schedule for regular replication with incremental updates. Click Schedule and set the time preferences for the replication run.

4. RUN INTEGRATION AND CHECK THE REPLICATION RESULTS

Click Run to start the integration. You can check the results of the replication by downloading a file available in the Monitor tab, with detailed information about successful and failed records. A replication is considered successful if all the records were loaded to the destination; it is considered failed either when at least one record has not been loaded successfully or when the replication has not been executed completely (for example, when its connection became invalid).

Benefits

- Skyvia is fully cloud-based, easy to set up, and requires no coding skills.
- Skyvia is among the [top user-friendly data integration tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1).
- Skyvia connects to [200+ data sources](https://skyvia.com/connectors).
- Skyvia can be used for free.

Limitations

- Limits on transferred data amounts, depending on the pricing plan.
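Conceptually, the Incremental Updates option works like a watermark-based sync: only records modified since the previous run are copied. A simplified sketch of the idea (not Skyvia's actual implementation; the field names are the usual Salesforce ones, used here illustratively):

```python
def incremental_sync(source_rows, target, last_run):
    """Copy only rows whose last-modified stamp is newer than the previous run.

    source_rows: iterable of dicts with 'Id' and 'SystemModstamp' keys.
    target: dict keyed by Id, standing in for the SQL Server table.
    Returns the new watermark to store for the next run.
    """
    new_watermark = last_run
    for row in source_rows:
        if row["SystemModstamp"] > last_run:   # changed since the previous run
            target[row["Id"]] = row            # upsert into the target copy
            new_watermark = max(new_watermark, row["SystemModstamp"])
    return new_watermark
```

The first run uses a watermark older than every record, which degenerates into the full replication described above; subsequent runs only touch changed rows.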
Real-time Salesforce and SQL Server Connection: Skyvia and Salesforce Connect

Now, let's explore Skyvia Connect (Web API Server), which works in conjunction with Salesforce Connect through the OData protocol. [Skyvia Connect](https://skyvia.com/connect/) is a great option for exposing only the necessary data from SQL Server to Salesforce, on request. Using this product, you can create a SQL Server OData endpoint and link SQL Server data to Salesforce via Salesforce Connect.

[Salesforce Connect](https://help.salesforce.com/articleView?id=sf.salesforce_connect.htm&type=5) is similar to SQL Server linked servers: linked servers allow you to work with external data from SQL Server as if it were in its own database, while Salesforce Connect allows you to work with external data from Salesforce as if it were Salesforce's own objects.

There are many ways to create an OData endpoint for SQL Server. Most involve developing a service, taking care of security, setting up hosting and a domain, obtaining the respective certificates, and then deploying, administering, and maintaining it. As a result, you need to take many preliminary steps to get an endpoint available from the Internet. With Skyvia, you don't need to build a web API manually; just take a few simple steps to connect Salesforce to SQL Server:

1. Create a connection to SQL Server (the database you want to publish data from).
2. Create an OData endpoint for SQL Server.
3. Link SQL Server data to Salesforce via Salesforce Connect (Salesforce Lightning).

You don't need to worry about hosting, deployment, or administration, since Skyvia automates the process. You can create both public and private endpoints and optionally limit the IP addresses from which the endpoint's data can be accessed. These options are available in the endpoint security settings.

STEP 1. CREATE SQL SERVER CONNECTION IN SKYVIA

If you haven't created a connection to SQL Server from Skyvia yet, follow the procedure mentioned earlier in this article.
Otherwise, proceed to the next step.

STEP 2. CREATE ODATA ENDPOINT TO SQL SERVER

- Log in to your account. Click + Create New in the top menu and select OData Endpoint under the Connect column.

NOTE: Skyvia offers two endpoint editor modes: [simple](https://docs.skyvia.com/connect/odata-endpoints/how-to-configure-odata-endpoint-in-simple-mode.html) and [advanced](https://docs.skyvia.com/connect/odata-endpoints/how-to-configure-odata-endpoint-in-advanced-mode.html). The advanced mode allows you to visually design your OData endpoint, adjusting entities and customizing associations between entity types. As an example, we select the advanced mode in the OData Endpoint Wizard.

- Select the SQL Server connection if you created it earlier in Skyvia. If not, create it by clicking the + Create New button on the right and specifying the requested SQL Server parameters in the opened window.
- Define which data to publish via the endpoint. You can add SQL Server tables to the endpoint by dragging them from the list to the diagram, and Skyvia will automatically create the corresponding entity set and entity type. Skyvia also automatically creates relationships (associations) with other entity types on the diagram. If necessary, you can edit or delete the generated relationships or even create custom ones.
- You can freely configure the OData entities in your endpoint by modifying the generated names of entity types, their properties, and entity sets. Additionally, you can exclude data source columns from entities so that they are not available via the endpoint.
- You can optionally change the endpoint security settings. Namely, you can add user accounts with passwords to make your endpoint data available only to authenticated users. Additionally, you can restrict access to your endpoint to specific IP addresses.
- Specify the new endpoint name and configure additional settings, such as the OData protocol version and the endpoint access mode.
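Once published, the endpoint is queried with standard OData conventions: the entity set name goes in the URL path, and options such as `$select`, `$filter`, and `$top` go in the query string. A sketch of composing such a request URL (the endpoint URL and entity names here are made up):

```python
from urllib.parse import urlencode, quote

def odata_url(endpoint, entity_set, select=None, filter_=None, top=None):
    """Compose an OData query URL for a published endpoint."""
    options = {}
    if select:
        options["$select"] = ",".join(select)
    if filter_:
        options["$filter"] = filter_
    if top is not None:
        options["$top"] = str(top)
    url = f"{endpoint.rstrip('/')}/{entity_set}"
    if options:
        # quote (not quote_plus) keeps $, commas, and quotes readable.
        url += "?" + urlencode(options, quote_via=quote, safe="$,'()")
    return url

# Hypothetical endpoint and entity set:
# odata_url("https://connect.example.com/odata/v4/mydb", "Customers",
#           select=["Name", "Phone"], filter_="Country eq 'USA'", top=5)
```

This is the kind of URL Salesforce Connect issues under the hood when you browse an external object backed by the endpoint.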
Skyvia Connect supports OData v1-v3 (with the ATOM format used for returned data and metadata) and OData v4 (with the JSON format used for returned data and metadata). After the endpoint has been created, copy its URL on the Overview tab to use it in Salesforce Connect.

STEP 3: LINK SQL SERVER DATA TO SALESFORCE VIA SALESFORCE CONNECT (SALESFORCE LIGHTNING)

- Sign in to your Salesforce account and click Setup.
- In the menu on the left, under Platform Tools, click Integrations and then select External Data Sources.
- In the opened window:
  - Specify your OData endpoint name.
  - Select the OData version.
  - Enter your endpoint URL copied from Skyvia (Step 7).
  - Select the corresponding checkbox if you use an endpoint to a writable data source.
  - Configure the authentication settings if you created user accounts with passwords for your endpoint in Skyvia.

NOTE: With Salesforce Connect (Salesforce Lightning), you can link SQL Server data obtained via the OData protocol in Skyvia to Salesforce as external objects and then work with this data as with usual SFDC objects.

- Select the exposed tables you want to synchronize. This will create the necessary external objects automatically.
- After defining the external data source and external objects, you may add tabs for the external objects in order to work with them via the Salesforce UI.

Benefits

- No need to build a web API manually.
- Simplified process of OData endpoint creation.

Limitations

- Requires technical expertise.

To try Skyvia Connect yourself, [register](https://app.skyvia.com/register?) on our platform and receive a [free plan](https://skyvia.com/pricing/) automatically. You may also turn to our [technical support](https://skyvia.com/company/contacts) for a no-cost consultation.

Use Cases for Salesforce to SQL Server Integration

This integration can be helpful in the following cases:

- Advanced analytics and reporting. If a company uses a SQL Server database, importing Salesforce data there will boost advanced analytics and reporting.
This is possible thanks to SQL Server Reporting Services (SSRS) and SQL Server's powerful querying capabilities, which allow you to combine data from other enterprise resources for comprehensive reporting.

- 360-degree customer view. If a company needs a complete overview of customer profiles, transferring data from SQL Server to Salesforce enriches client data. That way, sales and support teams gain full visibility into customer journeys.
- Historical data analysis. Since Salesforce imposes limits on historical data, sending it to SQL Server is a good option for archiving.
- Data backups. The integration ensures that backups of business-critical data from Salesforce are safely stored on external resources. That way, important information is protected from accidental deletion, and regular backups align with disaster recovery requirements.
- Data Science and Machine Learning. Applying ML and [data mining](https://skyvia.com/blog/data-mining-tools/) algorithms to Salesforce data stored in SQL Server grants insights into your business development.

Conclusion

In this article, we have focused on the most effective and reliable methods for connecting SQL Server and Salesforce. If you need to load data in both directions, then Skyvia and SSIS would work well; while SSIS is for tech-savvy specialists, Skyvia is accessible to non-technical users as well. Skyvia Connect is suitable for sending SQL Server data to Salesforce, while the Import and Export Wizard and Linked Servers are designed for integration in the opposite direction. It's up to you to decide which method fits you best!

FAQ for Salesforce to SQL Server

How to get Salesforce data into SQL Server?

For sending data from Salesforce to SQL Server, you can use different methods:

1. [Using the Skyvia Data Integration tool](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#Skyvia-Data-Integration)
2. [Data transfer with Linked Servers](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#ODBC-Driver-and-Linked-Server)
3. [Using Import and Export Wizard](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#Import-and-Export-Wizards)
4. [Data integration with SSIS](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#SSIS-Data-Flow)

How to import data from SQL Server to Salesforce?

For sending data from SQL Server to Salesforce, consider these methods:

1. [Skyvia Connect + Salesforce Connect](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#Skyvia-Connect-and-Salesforce-Connect)
2. [Using the Skyvia Data Integration tool](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#Skyvia-Data-Integration)
3. [Data integration with SSIS](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server/#SSIS-Data-Flow)

What data can be extracted from Salesforce?

It's possible to export any kind of data stored in Salesforce, including images, documents, attachments, and metadata. However, the availability of data objects for export also depends on the chosen destination, whether it's an app, a database, a flat file, or something else.

What data can you transfer to MS SQL Server?

Since MS SQL Server is an RDBMS, only structured data can be sent there. For instance, you can send Contacts, Accounts, Leads, etc., but you can't send documents, images, and other unstructured data.

Can you use SQL to query Salesforce?

No, Salesforce doesn't support SQL for querying. However, it has its own query language called SOQL, which is designed to query Salesforce data objects. SOQL syntax is very similar to SQL.
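To make the SQL/SOQL similarity concrete: a simple filtered query looks almost identical in both languages, with the main visible difference that SOQL has no `SELECT *` and queries objects rather than tables. A toy query builder illustrating that constraint (purely for illustration, not part of any Salesforce library):

```python
def soql(fields, sobject, where=None, limit=None):
    """Build a simple SOQL query string. SOQL requires an explicit field list."""
    if not fields:
        raise ValueError("SOQL has no SELECT *; list the fields explicitly")
    query = f"SELECT {', '.join(fields)} FROM {sobject}"
    if where:
        query += f" WHERE {where}"
    if limit:
        query += f" LIMIT {limit}"
    return query

# soql(["Id", "Name"], "Account", where="AnnualRevenue > 100000", limit=10)
# → "SELECT Id, Name FROM Account WHERE AnnualRevenue > 100000 LIMIT 10"
```

The resulting string would read the same as T-SQL apart from `LIMIT` (T-SQL uses `TOP`), which is roughly the level of difference you can expect for basic queries.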
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Loader](https://skyvia.com/blog/category/data-loader/)

How to Connect MySQL to FTP: 4 Simple Methods to Automate Data Transfers

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), June 3, 2025
Moving data from MySQL to an FTP server doesn't have to be a clunky, manual mess. If you've been stuck exporting CSVs, renaming files, and dragging them into FTP folders every week, or worse, every day, this guide's for you. Whether syncing reports to clients, backing up data off-site, or feeding files into another system, automating the MySQL-to-FTP flow can save you time, reduce human error, and keep things running like clockwork. In this guide, we'll walk through four simple methods to make it happen:

- Using scripts (hello, cron jobs and bash!).
- Setting up automation tools like Skyvia.
- Writing your own custom integration.
- Leveraging middleware platforms.

You'll learn what works best for your team, whether you're a solo dev, part of a data team, or just want to "set it and forget it."

Table of Contents

- Why Connect MySQL Database to FTP Server?
- Overview of FTP Server and MySQL Database
- 4 Proven Methods to Connect MySQL to FTP Server Efficiently
- Method 1: Connecting MySQL to FTP via phpMyAdmin
- Method 2: Connect MySQL to FTP via the Command Line
- Method 3: Connect MySQL to FTP Using MySQL Workbench
- Method 4: No-Code Connect MySQL to FTP
- Troubleshooting MySQL to FTP Integration
- Conclusion

Why Connect MySQL Database to FTP Server?

Let's face it: manually exporting data from MySQL and uploading it to an FTP server gets old fast. It's slow, repetitive, and one mistyped filename or missed step can throw the whole process off. If your team's still dragging files around every week, it's time for a smarter way. Connecting the MySQL database directly to an FTP server opens the door to easy automation.
You can schedule exports, push files to remote servers without lifting a finger, and create a hands-off workflow that just runs.

Common Use Cases

- Automated backups. Push daily or weekly data dumps to an FTP server for safe storage or compliance.
- Sharing data with clients or partners. Keep them in the loop without needing to manually send files.
- Feeding downstream systems. Transfer files to FTP so other platforms can pick them up and process them.

Why Manual Transfers Don't Cut It

- They're time-consuming. Even quick exports and uploads add up.
- They're error-prone. One wrong click or missed file, and your data flow breaks.
- They don't scale. As data volume grows, so does the chaos of managing it manually.

When you connect MySQL to FTP with automation in mind, you turn a recurring pain into a seamless process. No more babysitting; data transfers just run on schedule.

Overview of FTP Server and MySQL Database

Understanding what each tool does on its own makes it easier to see why connecting them can be such a win for your data workflows. So, before stepping into the "how," let's quickly cover the "what."

What is FTP?

FTP (File Transfer Protocol) is one of the old-school champs for moving files between systems. It's been around forever, and for good reason: it's simple, widely supported, and perfect for automating routine file transfers between servers, apps, or clients. People use FTP to:

- Store backups offsite.
- Share reports or datasets with partners.
- Feed files into other systems for processing.

What is MySQL Database?

MySQL is a powerhouse when it comes to managing structured data. It's one of the most popular relational database systems out there, used everywhere from small websites to enterprise-level apps. Think of it as your central hub for storing and organizing data, with fast querying, transactional integrity, and scalability built in.

Why Use Them Together?
When you combine MySQL with FTP, you create a clean pipeline for getting structured data out of the database and into the hands (or systems) that need it. This integration plugs a real-world gap, whether it's for backups, reporting, or syncing systems that don't play nicely with APIs. Put simply, MySQL holds the data; FTP moves it. Together, they make a solid pair for getting data from point A to point B without the manual grunt work.

4 Proven Methods to Connect MySQL to FTP Server Efficiently

There's more than one way to get your MySQL data onto an FTP server, and which one you choose depends on how hands-on you want to get, how often you need the data moved, and what kind of setup you're working with. Some methods are dead simple but manual. Others can run themselves like clockwork but take a bit more technical muscle to get off the ground.

- Method 1: phpMyAdmin + FTP. Easy and visual. Great for beginners who want to manually export and upload without touching a terminal.
- Method 2: Command Line (mysqldump + FTP). This approach is for those who like scripting and want to automate with cron. It's free, fast, and flexible if you know your way around bash.
- Method 3: MySQL Workbench + FTP. A GUI-based approach that adds more control over what and how you export. Still somewhat manual, but great for one-time or small batch jobs.
- Method 4: No-Code Tool (Skyvia). Ideal for scheduled, recurring data exports with a visual UI and minimal setup.

The comparison table below puts it all in perspective: how automated each method is, how technical you'll need to get, and what trade-offs come with each path.
| Method | phpMyAdmin + FTP | Command Line (mysqldump + FTP) | MySQL Workbench + FTP | No-Code Tool (Skyvia) |
| --- | --- | --- | --- | --- |
| Automation Level | Low (manual) | High (fully scriptable) | Medium (partly manual) | High (scheduled automation) |
| Pricing Overview | Free (with hosting) | Free | Free | Freemium (paid plans for higher usage or features) |
| Technical Skills Needed | Low | High | Medium | Low |
| Pros | Easy visual interface; no coding needed | Fast and flexible; fully automatable with cron; open source | GUI control over export; good for ad-hoc tasks | Fully visual; set-and-forget scheduling; easy setup |
| Cons | Manual process; not suitable for automation | Requires scripting skills; no GUI | Still manual; not built for automation | Limited flexibility; costs can grow with data volume |
| Real-World Use Cases | Beginners doing occasional exports; small-scale data backups | Automating nightly database backups; devs managing multiple environments | Analysts needing fine-tuned exports; one-off migrations | SMBs needing daily exports; non-tech users handling regular reporting workflows |

Method 1: Connecting MySQL to FTP via phpMyAdmin

This is the most straightforward no-code way to move data from MySQL to an FTP server. People use phpMyAdmin to:

- Export the data.
- Manually upload the file to the FTP server using a client like FileZilla.

It's simple, visual, and perfect if you only need to do the job occasionally, without setting up anything complex.

Best For

- Beginners or non-technical users.
- One-off data exports and backups.
- Quick troubleshooting or data migrations.
- Small databases that don't require automation.

Pros

- No coding or scripting needed.
- Built into most web hosting panels.
- Easy-to-use interface.
- Lets you choose specific tables or full DB exports.
- Multiple export formats are supported.
Cons

- Manual steps are required every time.
- No scheduling or automation, 100% hands-on.
- File size and timeout limits, especially on shared hosting.
- It's easy to forget a step if doing it regularly.

Step-by-Step Guide

Step 1: Export Data from MySQL Using phpMyAdmin

phpMyAdmin lets you transfer a MySQL database to FTP through its user interface. To export the data from MySQL, follow these steps:

1. Log in to phpMyAdmin via your hosting control panel.
2. Select the database you want to export from the left sidebar.
3. Click the Export tab at the top.
4. Choose your preferred format: SQL for a full structure + data backup, or CSV for spreadsheets and analytics tools.
5. Use Custom export mode to fine-tune which tables or rows you want.

Step 2: Upload Data to the FTP Server

Now that you've got your .sql or .csv file, it's time to move it to your FTP server.

1. Open a third-party FTP client like FileZilla, Cyberduck, or your hosting provider's built-in file manager.
2. Enter your FTP credentials (host, username, password).
3. Navigate to the target directory.
4. Drag and drop your export file onto the server.

Pro move: If you're doing this regularly, save your FTP credentials for faster access.

Step 3: Import Data Back to MySQL

1. Log in to phpMyAdmin on the destination server.
2. Create a new database or select an existing one.
3. Click the Import tab.
4. Browse for your file from the FTP directory (download it first unless your host supports direct import).
5. Choose the format (SQL, CSV, etc.) and click Go.

Ensure the file size doesn't exceed your host's upload limit. If it does, consider splitting it or switching to a script-based method. Once the import completes, you'll see that a table containing the imported data has been created in the database.

Method 2: Connect MySQL to FTP via the Command Line

This one's for the script-savvy.
Going command line is your power move if you're comfortable with a terminal and want total control over the process. You can automate MySQL exports, upload them to an FTP server, and even schedule the whole thing with cron. It's fast, flexible, and doesn't need any fancy tools, just a little shell scripting know-how.

Best For

- Developers and sysadmins who love automation.
- Scheduled database backups or regular syncs.
- Environments where GUI tools aren't available or practical.
- Teams that need tight integration with other scripts or systems.

Pros

- Fully automatable with cron or other schedulers.
- Zero UI dependencies; it works on any system with shell access.
- Fast and efficient for large datasets.
- More flexibility in how and what you export.

Cons

- Requires scripting knowledge.
- Easy to mess up if the script isn't carefully written.
- No visual feedback; you'll need to check logs for errors.
- Might need to secure credentials or use key-based authentication for FTP/SFTP.

Step-by-Step Guide

Step 1: Export Data from MySQL

Use mysqldump for full database or table-level backups:

```bash
mysqldump -u [user] -p[password] [database_name] > backup.sql
```

Or export a specific table to a CSV using SELECT INTO OUTFILE (note: this requires the FILE privilege and write access on the server):

```sql
SELECT * FROM your_table
INTO OUTFILE '/tmp/your_table.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```

Pro tip: Use a timestamp in your filename to avoid overwriting backups.

Step 2: Transfer Files Using FTP Commands

You can push the exported file to your FTP server with a shell script. Here's a minimal example using ftp in non-interactive mode, with `$HOST`, `$USER`, and `$PASS` holding your server details:

```bash
ftp -inv $HOST <<EOF
user $USER $PASS
put backup.sql
bye
EOF
```

To schedule the whole thing, wrap both steps in a script and add a crontab entry (e.g., `0 2 * * * /path/to/backup.sh` for a nightly 2 AM run).

Method 4: Connecting MySQL to FTP via Skyvia (No-Code)

Step 1: Configure MySQL Connection

Click +Create New > Connection. Select MySQL as the connector and enter its credentials (host, user, password, database).

Note: Use direct connection mode for MySQL servers available through the Internet.
If you are connecting to a MySQL server on your local computer, configure the appropriate firewall settings or use the [agent connection](https://docs.skyvia.com/connections/agent-connections.html). Test the connection and save it.

Step 2: Configure FTP Connection

Similarly to the previous case, create a new FTP connection within Skyvia (you may use the Select Connector link here). Enter your FTP server details and credentials. Test and create the connection.

Step 3: Automate Data Sync with Skyvia

After creating the connections, let's configure the integration. Depending on the business case, there are different scenarios; let's consider the most popular ones, [import](https://docs.skyvia.com/data-integration/import/) and [export](https://docs.skyvia.com/data-integration/export/).

Importing Data from FTP to MySQL Using Skyvia

Imagine a company's HR department that regularly receives employee data updates in CSV files from external payroll and benefits providers. These files land on an FTP server daily or weekly. Manually importing them into the company's MySQL-based HR system is tedious, error-prone, and slows down access to accurate employee records.

Skyvia automates the import of employee data files from the FTP server into MySQL, ensuring the HR database is always up to date without manual work. It supports incremental updates, so only changed records are processed, reducing overhead.

Let's use the MySQL and FTP connections we've already created and proceed to configuring the import.

1. Click +Create New > Integration > Import.
2. Optionally, name your integration to locate it easily in the future: click Untitled and enter a more suitable name.
3. Click CSV from storage service and select FTP as the source.
4. Select MySQL as the target.
5. Near Tasks, click Add new to add a new import task.
6. In the CSV Path box, select the CSV file on the FTP server to import, then click Next Step.
In the next step, select the target table to import data to, choose the desired DML operation, and click Next Step again. Then map the target and source columns; since the source CSV and the target table have columns with the same names, they are mapped automatically. Click Save, then click Create.

The integration is ready. Now you can click [Run](https://docs.skyvia.com/data-integration/package-run-history.html) to start the import.

Note: Skyvia automatically tracks sync status and logs errors via the Monitor and Log tabs. You can set email alerts or notifications for failures so you never miss a hiccup. The dashboard shows detailed reports so you can monitor performance and troubleshoot easily.

Exporting Data with Skyvia

The same company's HR database in MySQL stores all employee records. However, payroll providers, benefits platforms, and compliance auditors require regular access to it, usually via CSV files uploaded to their secure FTP servers. Manually exporting this info, formatting it correctly, and uploading it to multiple FTP locations takes time and risks mistakes, especially as the company scales.

Skyvia automates the export of employee data from MySQL directly to the FTP servers of external partners. Scheduled tasks ensure the latest data is delivered on time, without manual intervention. Let's see how to export data from MySQL to FTP.

1. Click +Create New, and under Integration, click Export.
2. Optionally, specify a meaningful name for the integration: click Untitled and enter a more suitable name.
3. Select MySQL as the source.
4. Click CSV to storage service and then select the target FTP connection.
5. Optionally, select the folder on the FTP server to export data to, or leave it empty to export to the root folder. Specify the CSV options if you need non-default ones.
6. Add an export task: near Tasks, click Add new. Select an object to export data from in the object list, then click Save.
In the same way, you can add multiple export tasks, exporting data from multiple MySQL tables to FTP. Finally, click Create.

The export is ready. Now you can click Run to start it. Click [Schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) to automate the export: choose how often the sync runs (hourly, daily, weekly, or at custom intervals) and set the exact time to fit your business needs (e.g., after business hours to avoid peak load).

Troubleshooting MySQL to FTP Integration

Getting MySQL and FTP to play nice together isn't always simple. Whether you're scripting your own solution, clicking through tools, or using a no-code platform, issues pop up. The good news? Most hiccups have straightforward fixes once you know what to look for. Here's a quick rundown of the usual suspects and how to handle them.

1. Connection Failures

What's going on? Your MySQL or FTP client can't connect to the server.

Why? Wrong hostname, port, or credentials are the top culprits. Firewalls or IP restrictions can also block access.

How to fix it:

- Double-check the server address, username, and password.
- Verify that the MySQL and FTP services are running and listening on the right ports (defaults: MySQL 3306, FTP 21).
- Ensure the IP is whitelisted or firewall rules allow your connection.
- Try pinging or telnetting to test connectivity.

2. Permission Issues

What's going on? You get errors like "Access denied" or "Permission refused."

Why? The database user or FTP user lacks the rights to read/write files or perform certain operations.

How to fix it:

- Ensure the MySQL user has the necessary SELECT or FILE privileges for exports/imports.
- Check FTP user permissions to upload/download files in the target directory.
- If using command-line tools, confirm file system permissions on the server (read/write access to export directories).
- When working with no-code tools, confirm the connection user has full access on both ends.

3. Timeout and File Size Limits

What's going on? Transfers fail midway, or you get errors about max file size or execution time.

Why? Shared hosts and FTP servers often impose limits on file size, script execution time, or connection duration.

How to fix it:

- Break large exports into smaller chunks (export tables individually or by date ranges).
- Adjust timeout and file size limits in your MySQL and FTP server configs if you control them.
- Use command-line tools or no-code platforms that support chunked uploads or resumable transfers.

4. Data Format and Encoding Problems

What's going on? Imported data looks garbled, characters are missing, or CSVs don't parse correctly.

Why? Mismatched character sets or improper export settings cause encoding issues.

How to fix it:

- Ensure MySQL exports and FTP clients use the same character set (UTF-8 is usually best).
- Check field delimiters, enclosure characters, and line endings when exporting CSV files.
- Test imports on a small sample before full transfers.

5. Sync Failures with No-Code Tools

What's going on? Scheduled syncs don't run, or only partial data transfers happen.

Why? Connectivity glitches, expired credentials, or platform-specific limits.

How to fix it:

- Check API tokens, passwords, and connection status in your no-code platform.
- Look at logs or dashboards for error messages or alerts.
- Reauthorize or refresh connections if needed.
- Contact platform support if problems persist.

Conclusion

When it comes to connecting MySQL to FTP, your choice boils down to what you need and how comfortable you are with tech. Want quick manual exports? phpMyAdmin's got your back. Love the command line and automation? mysqldump plus scripts will make you feel at home. Prefer GUI control? MySQL Workbench delivers. Want fast setup, zero coding, and smooth, scalable automation?
No-code tools like Skyvia are the true game changers. Ready to ditch the headaches of manual transfers and fragile scripts? Give Skyvia a spin and see how easy MySQL-to-FTP integration can really be.

F.A.Q. for MySQL to FTP

How to Secure MySQL to FTP Connections? Use secure protocols like SFTP or SSH instead of plain FTP. Encrypt your data in transit and at rest. Limit access with strong passwords and IP whitelisting. Always keep your software up to date to patch security vulnerabilities.

Can I Schedule Automatic Data Transfers? Absolutely. Command-line scripts with cron jobs offer full control. No-code tools like Skyvia let you set up automatic, recurring syncs with just a few clicks. No scripting needed.

What Tools Simplify MySQL-FTP Integration? Tools like Skyvia provide no-code visual interfaces to connect MySQL and FTP quickly. For script lovers, mysqldump combined with FTP clients is a classic, flexible approach.

How to Automate MySQL Backups via FTP? Use mysqldump to export your database, then script FTP or SFTP commands to upload the backups. Schedule these scripts with cron for hands-free, regular backups.

Can I Connect MySQL to SFTP Instead of FTP? Yes! SFTP is more secure and widely recommended. Most FTP clients and integration tools support SFTP, letting you safely transfer MySQL exports without exposing your data.
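The FAQ's backup recipe (mysqldump, then a scripted SFTP upload, scheduled with cron) can be sketched as one short bash script. Everything below is a hypothetical placeholder, not a value from this article: the `hr_db` database, `backup_user@ftp.example.com`, and the `/backups/` remote path. By default the sketch only prints the commands it would run (`DRY_RUN=1`), so you can inspect it without a live MySQL or SFTP server; set `DRY_RUN=0` to execute against real servers.

```shell
#!/usr/bin/env bash
# Minimal sketch, assuming placeholder hosts/credentials:
# dump a MySQL database, then push the file over SFTP.
set -euo pipefail

DB_NAME="${DB_NAME:-hr_db}"              # placeholder database name
STAMP="$(date +%Y%m%d_%H%M%S)"           # timestamp avoids overwriting backups
DUMP_FILE="/tmp/${DB_NAME}_${STAMP}.sql"

# Compose the two commands; DRY_RUN=1 (the default here) just prints them.
DUMP_CMD="mysqldump ${DB_NAME} > ${DUMP_FILE}"
UPLOAD_CMD="echo 'put ${DUMP_FILE} /backups/' | sftp -b - backup_user@ftp.example.com"

if [ "${DRY_RUN:-1}" = "1" ]; then
  printf '%s\n%s\n' "$DUMP_CMD" "$UPLOAD_CMD"
else
  eval "$DUMP_CMD"
  eval "$UPLOAD_CMD"
fi

# A crontab entry for nightly 2 AM runs could look like:
# 0 2 * * * DRY_RUN=0 /path/to/mysql_ftp_backup.sh
```

Dropping a script like this into cron gives the "hands-free, regular backups" the FAQ describes; swapping `sftp` in for `ftp` costs nothing extra and keeps credentials and data encrypted in transit.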
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Integration](https://skyvia.com/blog/category/integration/) 10 Best Data Aggregation Tools for 2025 for Your Business Needs By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) July 11, 2024
Modern tech stacks usually contain numerous hardware and software components with data scattered across them. Data aggregation helps bring all that information into a single view. The process resembles a puzzle, where you must organize hundreds of pieces into a general picture. Having that general picture of your data helps identify trends and gain insights: you can discover the average age of your audience, the cities with the highest sales, and other critical information about your business.

This article explains data aggregation and its application within the organizational environment. It also presents ten data aggregation tools for collecting and organizing data.

Table of Contents

- Introduction to Data Aggregation
- Importance of Data Aggregation Tools
- Overview of the Top 10 Data Aggregation Tools
- Comparison of the Data Aggregation Tools
- Criteria for Selecting Data Aggregation Tools
- Final Thoughts

Introduction to Data Aggregation

Let's first focus on the word aggregation itself. It comes from Latin, where it meant grouping items together; data aggregation, then, is about grouping various kinds of data. Why is it necessary? As data volumes grow, aggregation reduces the amount of data to handle. It also minimizes noise, which is critical for the correctness of data analysis.

Data aggregation also helps obtain value from collected data. For instance, you can discover the total number of work hours of each employee and detect overtime or underperformance. You can also calculate customer-related parameters, such as the average purchase amount, satisfaction rate, etc.
Importance of Data Aggregation Tools

Many processes stand behind data aggregation, and there is a long way between collection and analysis: grouping, transformation, transfer, and other operations on data. Some tools perform the aggregation process end to end, from data collection to visualization. However, most data aggregation tools focus either on data ingestion and preparation or on analysis and visualization.

Data aggregation tools are important because they automate data collection from scattered sources. You can then blend and group data, apply aggregation functions, create pivot tables, build reports, etc.

Overview of the Top 10 Data Aggregation Tools

1. Skyvia

[Skyvia](https://skyvia.com/) is a universal cloud platform for various data-related tasks, including data aggregation and data integration. Skyvia can collect data from [180+ sources](https://skyvia.com/connectors), group and standardize it, and send the refined data to a chosen destination. All of this can be done via a visual wizard, without programming knowledge.

The following Skyvia solutions are particularly useful for data aggregation:

- [Replication tool](https://skyvia.com/data-integration/replication). A visual wizard for creating [ELT pipelines](https://skyvia.com/learn/what-is-elt). Replication is a fast way to load data into a database or a data warehouse: it copies data from chosen apps and sends it to the preferred destination. You'll need one replication scenario per source.
- [Import tool](https://skyvia.com/data-integration/import). A visual wizard for building [ETL](https://skyvia.com/learn/what-is-etl) and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) pipelines. For instance, you can import data from your CRM, ERP, and other business tools to a database or a data warehouse.
Import allows you to execute queries for data selection and apply aggregation functions to the results. You'll need to create an import scenario for each [point-to-point integration](https://skyvia.com/blog/point-to-point-integration-pros-cons/) and apply the needed transformations to the copy of the source data.

- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/). Lets users build more complex data pipelines that include several sources and apply multistage data transformations like lookup and expression.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/). Used together with Data Flow to create task execution logic, perform pre- and post-integration activities, and configure automatic error processing.
- [Query](https://skyvia.com/query). Lets users gather data from the selected sources using a visual query builder or SQL statements, then apply aggregation functions to group the obtained data. Skyvia also offers an add-in for Excel and an add-on for Google Sheets, allowing you to query data directly from a spreadsheet.

Key features:

- Source diversity. Extract data from applications, databases, data warehouses, [legacy systems](https://skyvia.com/learn/legacy-system), storage services, and flat files.
- [Data integration and aggregation](https://skyvia.com/data-integration). Select data from the preferred cloud source via the SQL query builder, then group it by certain criteria. Skyvia supports the following [aggregation functions](https://docs.skyvia.com/query/configuring-queries-with-query-builder.html): COUNT, AVG, MIN, MAX, and SUM. Then load this data to the destination of your choice.
- [Data transformation](https://skyvia.com/learn/what-is-data-transformation). Cleanse data and standardize its format to match the destination requirements.

Pricing:

- A Free plan suits those who operate fewer than 10,000 records per month.
It offers one daily data sync between a source and a destination, plus standard data transformation options.

- The Basic plan starts at $79 per month and offers flexibility in the amount of data operated.
- The Standard plan starts at $159 per month; the overall cost depends on the amount of data processed. This plan includes hourly data synchronization and 50 scheduled integrations per month.
- The Professional plan starts at $199 per month; its price depends on the selected data amount. It offers a 1-minute interval for data synchronization and update, complex data transformations, and advanced integration scenarios.
- The Enterprise plan is a tailor-made offer that matches your data needs. Discuss the pricing and requirements with the Skyvia team.

Suitable for: startups, small companies, medium-sized businesses, enterprises, corporations, non-profit organizations, governmental institutions, universities

2. Alteryx

[Alteryx](https://www.alteryx.com/) is an AI platform with an intuitive interface for enterprise analytics. It enables users to aggregate data from around a hundred sources and blend it. Alteryx also has an embedded module for analyzing the aggregated data.

Key features:

- Data aggregation with the Summarize tool. Drag and drop fields to group them and choose the aggregation function (SUM, COUNT, AVG, etc.). Outcomes from the Summarize tool can be used in other tools on the Alteryx platform.
- Data blending. Combine data from several sources and elaborate on it.
- Data cleaning. Remove extra spaces, delete duplicates, and detect missing values.
- Predictive and spatial analytics. Develop AI- and ML-based models for advanced analytics insights.

Pricing:

- Designer Cloud starting from $4,950
- Designer Desktop starting from $5,195
- A 30-day trial is available.

Suitable for: enterprises

3. Talend

[Talend](https://www.talend.com/) offers a platform for data collection, integration, aggregation, management, and governance.
It\u2019s adaptable for various data architectures due to its connectivity with a wide range of sources and notable data cleansing capabilities. Key features: SQL templates. Use system SQL templates or create custom ones to streamline data querying. For instance, the Aggregate template already contains statements for data aggregation. Visual job builder. Compose data pipelines with drag-and-drop components in the visual builder. Data preparation. Standardize the aggregated data with available transformation options. Pricing: Talend Data Fabric can be deployed in the cloud, on-premises, or hybrid cloud environments. It\u2019s a customizable solution, so it could be set up in accordance with the specific organizational needs. Subsequently, the price is discussed with the Talend team representatives. It depends on the deployment method and the set of requested features. A free trial is available. Suitable for: Enterprises 4. Domo [Domo](https://www.domo.com/) is a platform for data aggregation that powers up business intelligence and data analytics. It pulls data from various sources, blends it, applies cleansing and transformation functions, and centralizes it in a unified dataset. The aggregated data can be used for building custom business apps for data presentation and workflow automation. Key features: Extended connectivity. Domo has 1000+ pre-built connectors for the on-premises services, databases, data warehouses, and cloud apps. Data blending. Combine and transform the aggregated data with commands and statements. Visualize data. Select the preferred visuals from a set of charts and maps to explore and analyze data. Mobile app support. Use an iOS or Android Domo app to view data and create reports. Pricing: The cost is defined individually for each Domo customer and depends on data storage, data update rates, number of queries, and users. A 30-day free trial is available. Suitable for: Small businesses Medium-sized companies Enterprises 5. 
Looker Studio

[Looker Studio](https://lookerstudio.google.com/), formerly known as Google Data Studio, is an online tool for data exploration and visualization. It allows users to create dashboards and reports quickly and conveniently. Looker Studio natively connects to 21 sources and offers more than 985 partner connectors for multiple online services.

Key features:

- Data blending and aggregation. The Blend Editor enables users to join tables visually instead of writing SQL statements.
- User-friendly interface. Looker Studio's interface is simple and understandable, allowing even non-tech users to elaborate on datasets.
- A wide choice of dashboard templates. Many free dashboard templates for Looker Studio are available on the web, covering practically any department and industry. Select the preferred one, aggregate data from the necessary sources, and explore it.

Pricing: Looker Studio is completely free to use for those who have a Google account.

Suitable for: small businesses, medium-sized businesses

6. Tableau

[Tableau](https://www.tableau.com/) is a powerful business intelligence and data visualization platform. It can connect to around 100 sources, including databases and cloud apps, and get data from files (CSV, JSON, PDF, etc.). Then, you can prepare the aggregated data in Tableau and start exploring it with built-in visualizations.

Key features:

- Data aggregation. Tableau offers a range of aggregation functions (SUM, AVG, MAX, MIN, COUNT, MEDIAN, VARIANCE, etc.).
- Revision history. Revert to any of the last 25 versions of the selected dataset.
- Querying data in plain language. The Ask Data feature lets users get answers from their data by asking questions, without any knowledge of programming languages.
- Mobile support. Access data and share insights using a mobile app.

Pricing:

- Tableau Viewer, for viewing and interacting with dashboards, at $15 per user per month.
- Tableau Explorer, for exploring data with self-service analytics, at $42 per user per month.
- Tableau Creator, for the end-to-end analytics workflow, at $75 per user per month.

Suitable for: startups, small and medium-sized businesses, enterprises, non-profit organizations, government organizations

7. Power BI

[Microsoft Power BI](https://www.microsoft.com/en-us/power-platform/products/power-bi/) is a widely used tool for business intelligence and analytics. It allows you to aggregate data manually by uploading it from separate files or by connecting to databases and online sources. As aggregation reduces data complexity, the data becomes simpler to visualize and analyze.

Key features:

- Data cleaning and preprocessing. Identify duplicates, incorrect values, or incomplete records that can degrade the quality of analysis.
- Data grouping. Group data by specific dimensions and apply aggregation functions (MIN, MAX, COUNT, AVG, SUM) via the Power Query Editor.
- DAX. Perform calculations and queries on data with the DAX formula expression language, which is similar to SQL.
- Copilot. Ask data-related questions in plain language and get AI-generated answers.

Pricing:

- Power BI Pro: $10 per user per month
- Power BI Premium: $20 per user per month

Suitable for: startups, small and medium-sized businesses, enterprises, non-profit organizations, government organizations

8. Apache NiFi

[Apache NiFi](https://nifi.apache.org/) is a data flow management system that can handle batch and real-time data streams. It collects, routes, aggregates, transforms, and processes data in a reliable and scalable manner.

Key features:

- Accelerated data availability. Apache NiFi acts as a catalyst for big data project execution.
- Integration with [big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/). NiFi easily connects to Apache Spark, Apache Kafka, Hadoop, Hive, MongoDB, Cassandra, etc., and collects and aggregates data from them.
Pricing: Apache NiFi is an open-source solution, so it's free to use. However, considerable investment is required to deploy the necessary underlying hardware resources.

Suitable for: enterprises, companies dealing with big data

9. Segment

[Segment](https://segment.com/) is a Customer Data Platform (CDP) that simplifies the aggregation of user data from various channels. The tool also transforms, transfers, and archives your first-party customer data. Overall, Segment helps activate customer data for ad personalization, purchase predictions, and the improvement of other marketing activities.

Key features:

- Source function. Collect and aggregate data from third-party applications.
- Transformation function. Fix bad data or standardize it to make it suitable for further processing.

Pricing:

- A Free plan with up to 500,000 Reverse ETL records and two sources.
- A Team plan starting at $120 per month with up to 1,000,000 Reverse ETL records and unlimited sources.
- A Business plan with advanced features and personalized volume at custom pricing.

Suitable for: startups, small businesses, medium-sized businesses, enterprises

10. Fivetran

[Fivetran](https://www.fivetran.com/) is an automated data movement platform for quick extraction of data from 160+ data connectors. To blend and transform the aggregated data within Fivetran, you apply SQL statements.

Key features:

- Real-time data movement. Fivetran supports real-time data replication, creating a solid base for the timely collection and aggregation of the most recent data.
- Data lake management. Aggregate cleansed and standardized data in a data lake.

Pricing: Fivetran offers a free plan with up to 500,000 monthly active rows (MAR). The prices of other plans depend on the data amount, number of users, sync interval, advanced security features, and other factors.
Suitable for: startups, small businesses, medium-sized businesses, enterprises

Comparison of the Data Aggregation Tools

Most of the above-mentioned data aggregation tools let you create data flows visually, which greatly simplifies and streamlines the daily work of data engineers, data analysts, and data scientists. Other factors may also impact data aggregation processes; some of the crucial ones are shown in the table below.

| Tool | Connectors | Pricing | G2 reviews |
|---|---|---|---|
| Skyvia | 180+ | starting from $0 | 4.8 out of 5 |
| Alteryx | 100+ | starting from $4,950 | 4.6 out of 5 |
| Talend | 140+ | custom pricing | 4.0 out of 5 |
| Domo | 1000+ | custom pricing | 4.4 out of 5 |
| Looker Studio | 1000+ | free | 4.4 out of 5 |
| Tableau | 100+ | starting from $15 | 4.4 out of 5 |
| Power BI | 150+ | starting from $10 | 4.5 out of 5 |
| Apache NiFi | custom code for adding connectors | free | 4.2 out of 5 |
| Segment | 450+ | starting from $0 | 4.5 out of 5 |
| Fivetran | 160+ | starting from $0 | 4.2 out of 5 |

For more details, you can also review a detailed comparison of data aggregation tools:

- [Skyvia vs Alteryx](https://skyvia.com/etl-tools-comparison/alteryx-designer-alternative-skyvia)
- [Skyvia vs Talend](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)
- [Skyvia vs Fivetran](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia)
- [Skyvia vs Apache NiFi](https://skyvia.com/etl-tools-comparison/apache-nifi-alternative-skyvia)
- [Fivetran vs Talend](https://skyvia.com/etl-tools-comparison/fivetran-vs-talend)
- [Fivetran vs Alteryx](https://skyvia.com/etl-tools-comparison/fivetran-vs-alteryx-designer)
- [Fivetran vs Apache NiFi](https://skyvia.com/etl-tools-comparison/fivetran-vs-apache-nifi)
- [Talend vs Apache NiFi](https://skyvia.com/etl-tools-comparison/talend-vs-apache-nifi)
- [Talend vs Alteryx](https://skyvia.com/etl-tools-comparison/talend-vs-alteryx-designer)

Criteria for Selecting Data Aggregation Tools

Each business has its own expectations regarding the aggregation of data.
Based on these expectations, the requirements for selecting a data aggregation tool are formed. Since price is usually the most significant criterion when choosing any tool for business operations, consider this parameter first. If several suitable options match your budget, review the supported connectors and make sure the sources you want to collect data from are covered. Also, check whether the selected solution is convenient to use and see what other users think about it.

Skyvia can be a win-win for your business because it satisfies all of the above-mentioned criteria. It offers pricing plans matching various budgets and data aggregation needs. Moreover, it allows you to collect data from more than 190 cloud apps, databases, and data warehouses. Skyvia is [one of the most user-friendly tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use&__cf_chl_rt_tk=YXFMUzuXffkH2PTPL.ZsxgBRXvgE_l7SvLtZr9TThYk-1720084056-0.0.1.1-5844#rank-1), and users rate its overall functionality at 4.8 on G2.

## Final Thoughts

Data aggregation tools automate data collection from apps, databases, legacy systems, and other sources. They also summarize the grouped data quickly, revealing more insights about employees, customers, and overall business performance. Selecting the data aggregation tool that best suits your business requirements largely determines how much value you extract from your data.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) — With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Data Analyst vs. Data Engineer: Roles and Real-Life Applications

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) · May 20, 2024

Data today is like the air businesses breathe, powering everything from decision-making to customer satisfaction. However, although many companies understand the importance of data, they struggle to use it properly for business needs. This requires a certain set of technical skills, which is where data analysts and data engineers come into play.

The future for data analysts and engineers looks exciting, with many technological advancements on the way. Both roles will see shifts that make their jobs more integral, strategic, and impactful. Keeping up with these changes means staying curious, continuously learning, and always being ready to adapt to the next big thing in the data world.

Let's start the journey into these roles and see the key differences, technologies used, and future trends that help businesses grow by taking the best from both roles.

Table of Contents
- What is a Data Analyst?
- What is a Data Engineer?
- Key Differences between Data Analysts and Data Engineers
- How to Transition from Data Analyst to Data Engineer
- Tools and Technologies Used by Each Role
- Career Path and Progression
- Future Trends Affecting Data Analysts and Data Engineers
- Conclusion

## What is a Data Analyst?

Imagine a huge pile of puzzle pieces (raw data) necessary to create the whole picture. That's where data analysts come in, like the storytellers of the business world. They take scattered, dispersed data from various systems, make it meaningful, and turn it into a story everyone in the company can understand.

**What They Do**
- Analyze data — using statistical tools and software, they crunch numbers to find trends, patterns, and anomalies.
- Visualize data — create visual representations like charts, graphs, and dashboards that make data easy to understand.
- Report insights — compile their findings into reports or presentations that answer business questions like "Which product is selling best?" or "Where are we losing money?"

**Skills**

Data analysts are usually wizards in various areas, including Excel, SQL, Python, Power BI, and data processing, reporting, and modeling software. Let's see more details.

**SQL Expertise.** SQL (Structured Query Language) is like the universal language for talking to relational databases. Whether retrieving specific data or managing massive datasets, SQL is a must-have skill for any data analyst.

**Data Visualization.** A good chart or graph can be worth a million words in data analysis.
Tools like Tableau, Power BI, or even advanced [Excel](https://skyvia.com/connectors/excel) are crucial for converting complex data into visual stories anyone can understand.

**Statistical Analysis.** Knowing how to apply statistical methods to data helps data analysts draw accurate conclusions and make predictions. Skills in statistical software like R or Python's stats libraries will make them a go-to person for insights.

**Data Wrangling.** Data rarely comes clean and ready to use. Data wrangling involves cleaning, restructuring, and enriching raw data into a usable format. Mastery of tools like Python's Pandas library or R can really set you apart.

**Critical Thinking.** Every great data analyst needs to think critically. This skill involves asking the right questions and not taking data at face value. It helps analysts make sense of the data and find the story behind the numbers.

**Machine Learning Basics.** As AI and machine learning evolve, understanding the basics of machine learning models is becoming increasingly important. Even basic knowledge helps you understand which models can be applied to the data to extract new insights.

**Business Context.** Understanding the business context around the data makes analyses more impactful. Knowing what drives the business, its key metrics, and how your work affects the bottom line can make your analyses invaluable.

## What is a Data Engineer?

If data analysts are the storytellers, data engineers are the builders and architects behind the scenes. They design, build, and oversee the architecture that handles data at scale. Data engineers also maintain the systems and infrastructure that allow all that data to be collected, stored, processed, and accessed efficiently. Without them, data analysts wouldn't have the clean, well-organized data they need to do their magic.
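Whichever role does it, the data-wrangling step described above is daily reality. Pandas or R is the usual tool, but the idea can be shown in plain Python; every field name and value below is made up for illustration:

```python
# Minimal data-wrangling sketch: clean, dedupe, and retype raw records.
# The fields ("name", "signup_date", "revenue") are hypothetical.

raw_records = [
    {"name": "  Alice ", "signup_date": "2024-01-15", "revenue": "1200"},
    {"name": "Bob", "signup_date": "", "revenue": "950"},         # incomplete
    {"name": "  Alice ", "signup_date": "2024-01-15", "revenue": "1200"},  # duplicate
]

def wrangle(records):
    seen, clean = set(), []
    for rec in records:
        name = rec["name"].strip()          # normalize whitespace
        if not rec["signup_date"]:          # drop rows missing required fields
            continue
        key = (name, rec["signup_date"])
        if key in seen:                     # drop exact duplicates
            continue
        seen.add(key)
        clean.append({"name": name,
                      "signup_date": rec["signup_date"],
                      "revenue": float(rec["revenue"])})  # fix the type
    return clean

print(wrangle(raw_records))
# [{'name': 'Alice', 'signup_date': '2024-01-15', 'revenue': 1200.0}]
```

In Pandas the same steps collapse into a few method calls (`str.strip`, `dropna`, `drop_duplicates`, `astype`), but the logic is identical.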
**What They Do**
- Build data pipelines — create and manage the pipelines that transport data between various sources, ecosystems, services, apps, DWHs, data lakes, etc. It's like setting up a sophisticated network of roads that ensures smooth traffic flow.
- Maintain the company's data ecosystems — ensure that databases are running smoothly and optimized for quick access and storage efficiency. They're the mechanics keeping the engines running.
- Ensure data quality — set up processes to clean and validate data, ensuring data analysts and business users get accurate and reliable data. They're a bit like the quality control inspectors in a factory.

**Skills**

Data engineers are masters of programming languages like Python and SQL, [Big Data technologies](https://skyvia.com/blog/top-big-data-analytics-tools/) like Hadoop and Spark, and cloud platforms, including AWS, Google Cloud, and Azure. They also work with tools for orchestrating workflows, like [Apache Airflow](https://skyvia.com/etl-tools-comparison/apache-airflow-alternative-skyvia). They must also communicate well to engage senior stakeholders, assess user requirements, and translate this information into valuable data products. Let's walk through more skills.

**Proficiency in Programming Languages.** Strong coding skills are non-negotiable for data engineers. Languages like Python and Java are staples because they're powerful, versatile, and widely used for building and managing data-intensive applications.

**Expertise in Big Data Technologies.** The ability to work with big data frameworks like Hadoop, Spark, and Kafka is crucial. These technologies help manage and process the vast amounts of data modern businesses generate, and knowing your way around them is vital.

**Cloud Platforms Mastery.** Cloud services like AWS, Google Cloud, and Azure are today's data engineering playgrounds.
As more companies move their operations to the cloud, cloud infrastructure management and service integration skills are vital.

**Database Management.** Data engineers need to be adept at both SQL and NoSQL database technologies. Understanding how to structure, query, and manage data efficiently is key to ensuring data accessibility and security.

**Data Pipeline and ETL Development.** Building robust data pipelines is at the heart of what data engineers do. Skills in designing, implementing, and maintaining ETL (extract, transform, load) processes ensure data flows smoothly and reliably from source to destination.

**Automation and Scripting.** Automating repetitive and time-consuming tasks is essential. Knowledge of scripting languages and automation tools like Airflow or Luigi can greatly increase efficiency and reduce the likelihood of errors.

**Problem-Solving Skills.** Data engineering is all about finding innovative solutions to complex technical challenges. Strong analytical and problem-solving skills are crucial for diagnosing issues, optimizing performance, and implementing scalable data solutions.

## Key Differences between Data Analysts and Data Engineers

Understanding the distinction between data analysts' and data engineers' roles is important because it helps businesses allocate the right resources and tools for each part of the data journey. For instance, if the data isn't well-organized or easy to access, the company needs a skilled data engineer to set up a better system. If a firm needs to understand what the data says about sales trends, that's a job for a data analyst. The table below displays the key differences between these two roles.
| Aspect | Data Analyst | Data Engineer |
| --- | --- | --- |
| Main Skills | Statistical analysis; data visualization; querying databases | Programming (Python, Java, Scala); database management; system architecture |
| Tools Commonly Used | SQL; Excel; visualization tools | Hadoop, Spark ([big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/)); cloud platforms; workflow management |
| Primary Goals | Interpret and analyze data to find insights; create reports and dashboards for decision-making | Build and maintain data pipelines and architectures; ensure data is clean, processed, and stored efficiently |
| Focus | Extracting and presenting data to influence business decisions | The infrastructure that collects, stores, and prepares data for use |
| Impact | Directly affects business strategies and operations through insights from data | Enables the business to be data-driven by providing robust, scalable, and efficient data systems |

## How to Transition from Data Analyst to Data Engineer

Switching between the data analyst and data engineer roles is definitely doable, and it's not uncommon in the tech world. It involves upskilling and strategic career maneuvering, but it's entirely within reach with the proper preparation and mindset. Whether the specialist is moving from crafting the story to building the stage or vice versa, each role offers unique challenges and rewards, making the switch not just a change in job title but a fresh perspective on data's power in the business world. Let's review how it's possible to jump from one role to the other and what you'd need to make it happen.
### Switching from Data Analyst to Data Engineer

Imagine you've spent a lot of time getting to know the stories data can tell as a data analyst, and now you want to get more involved in how that data is collected and processed. Moving into data engineering might be the next exciting step.

**What You'll Need**
- Enhanced Programming Skills — advance your programming, particularly in Python, Java, or Scala.
- Understanding of Big Data Technologies — start familiarizing yourself with big data tools like Hadoop or Spark, which are staples in a data engineer's toolkit.
- Knowledge of Data Systems — work on a deeper understanding of databases, data warehousing, and architecture (how data is structured, stored, and moved).

**How to Get There**
- Courses and Certifications — use online courses on [Coursera](https://www.coursera.org/), [Udemy](https://www.udemy.com/), or [edX](https://www.edx.org/) that can teach the necessary tech skills.
- Practical Experience — get involved in projects at your current job that touch on the engineering side, or work on personal projects that let you practice building and optimizing data pipelines.
- Networking — connect with current data engineers, join tech communities, and maybe find a mentor who can guide you through the transition.

### Switching from Data Engineer to Data Analyst

You may have been deep in the data infrastructure trenches, handling the flow and storage of data, and now you're itching to see more of the direct business impact your work supports. Shifting to data analysis could be your next exciting move.

**What You'll Need**
- Strong Analytical Skills — hone your ability to interpret data and extract meaningful insights.
- Proficiency in Data Visualization — learning tools like Tableau or Power BI will be crucial for presenting data in a way that's easy for stakeholders to understand.
- Business Awareness — understanding what business questions to ask and how data can answer them is key.

**How to Get There**
- Educational Resources — as with moving into engineering, use courses ([Udemy](https://www.udemy.com/), [Coursera](https://www.coursera.org), [Udacity](https://www.udacity.com/), etc.) focused on analytics. Data science courses often cover analysis techniques in depth.
- On-the-Job Experience — see if you can shift your role gradually by taking on more analysis tasks or collaborating with analysts on projects.
- Build a Portfolio — start analyzing datasets on your own and build a portfolio of case studies. This will be hugely beneficial in showing potential employers your skills.

## Tools and Technologies Used by Each Role

Data analysts typically focus on tools that help them slice and dice data to find trends and make sense of numbers. They often use:

- SQL for data querying. It's like the multitool for pulling specific data out of databases.
- Excel or similar spreadsheet tools, perfect for quick data manipulations and what-if analysis.
- Tableau or Power BI for painting data pictures. They turn complex data sets into understandable, visually appealing insights.
- Python or R, because these programming languages have robust statistical analysis and data visualization libraries.

Data engineers, on the other hand, are more focused on the infrastructure that handles large volumes of data. Their tools are:

- Programming languages like Python, Java, and Scala. These are the building blocks for developing data applications and services.
- Apache Hadoop and Apache Spark, the heavy machinery for processing huge amounts of data across clusters of computers.
- Cloud services like AWS, Google Cloud, and Azure, the playgrounds where data engineers set up scalable and robust data storage and processing environments.
- Workflow orchestration tools like Apache Airflow, like having a planner that helps automate and coordinate the sequence of data tasks.

### How Skyvia Can Help Both

[Skyvia](https://skyvia.com/data-integration/), a universal, highly [user-friendly](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) cloud-based data integration platform, is a bit of an all-rounder that can lend a hand to both data analysts and data engineers. It bridges these two critical roles, offering tools that simplify how data is managed, moved, and manipulated. Whether you're building complex data pipelines or creating detailed analytical reports, Skyvia can enhance your workflow and let you focus on what matters — drawing valuable insights and building reliable data infrastructures.

The platform makes both data analysts' and data engineers' lives easier by taking care of the repetitive, technical tasks so they can focus on the strategic aspects of their jobs. It's like having a skilled sidekick in your data adventures. Here's how:

- Data Integration. Skyvia can automate the movement of data between [200+](https://skyvia.com/connectors/) sources and destinations, which is a big win for data engineers looking to save time when setting up data pipelines.
- Data Backup. The importance of data security can't be overstated for both roles. Skyvia offers robust backup solutions to ensure your data is safe and sound, like having an insurance policy for data.
- Querying and Reporting. Data analysts love Skyvia's capability to run SQL queries directly on cloud data. No downloads are needed. Plus, its ability to create reports and dashboards makes sharing findings much easier.
- Cloud Data Management. Managing data across different cloud platforms can be tricky for data engineers. Skyvia provides tools that help streamline this process, making it easier to ensure data flows smoothly and securely across systems.
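SQL, described above as the analyst's multitool, is easy to try out with Python's built-in SQLite driver. The `sales` table and its figures here are invented purely for illustration:

```python
import sqlite3

# A throwaway in-memory database; table name and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Widget", "EU", 120.0), ("Widget", "US", 80.0), ("Gadget", "EU", 250.0)],
)

# The kind of question an analyst answers: which product sells best?
rows = conn.execute(
    "SELECT product, SUM(amount) AS total FROM sales "
    "GROUP BY product ORDER BY total DESC"
).fetchall()

for product, total in rows:
    print(product, total)  # Gadget 250.0, then Widget 200.0
```

The same `GROUP BY` / `ORDER BY` pattern carries over unchanged to warehouse engines like BigQuery or Snowflake; only the connection setup differs.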
## Career Path and Progression

Exploring the career paths of data analysts and data engineers is like mapping out a journey in a land full of opportunities. Both roles start with a strong foundation in data but branch out in different directions as professionals gain experience and expertise. Let's look at how it's possible to navigate these paths and where they might lead.

### Data Analyst Career Path and Progression

- Starting Point — most data analysts kick off their careers with a degree in statistics, business, information technology, or a similar field. Early in their careers, they're often tasked with basic data cleaning, querying databases, and generating straightforward reports.
- Mid-Career — as they gain more experience, data analysts start handling more complex analyses, learn advanced statistical techniques, and become proficient with tools like Tableau or Power BI. They may specialize in business intelligence, marketing analytics, or financial analysis.
- Senior Positions — further down the road, seasoned analysts can become senior data analysts, lead a team of analysts, or even manage analytics departments. They are expected to provide strategic insights that influence high-level business decisions.
- Transition Opportunities — data analysts might deepen their expertise by moving into more specialized roles like data scientist, focusing more on predictive modeling and machine learning. Or they could transition into data consultancy roles, advising businesses on how to leverage data effectively.

### Data Engineer Career Path and Progression

- Starting Point — data engineers typically begin with a strong computer science or engineering background. Early career responsibilities include setting up data pipelines, managing databases, and ensuring data quality.
- Mid-Career — with a few years under their belts, data engineers often start tackling more complex projects involving big data technologies like Hadoop or Spark. They might also start working with cloud services such as AWS or Azure to manage scalable data storage solutions.
- Senior Positions — advanced roles include becoming a senior data engineer, overseeing multiple data projects or leading a team of engineers. Some become data architects, designing intricate data systems for large organizations.
- Transition Opportunities — data engineers have the technical foundation to move into software development or system architecture roles. They could also cross over to more business-focused roles like data analyst or data scientist if they pick up additional skills in statistics and analytics.

### Common Growth Opportunities

- Learning and Certifications — both paths benefit greatly from continuous learning. Certification in specific tools (like AWS for engineers or Certified Analytics Professional for analysts) or technologies can help propel a career forward.
- Cross-Field Skills — data analysts and data engineers often learn from each other. Analysts with a good understanding of engineering can design better, more efficient data queries. Engineers with a knack for analysis can ensure the infrastructure supports complex data exploration needs.
- Management and Strategic Roles — moving into management can be a significant growth opportunity for both. Leading teams, strategizing data implementations, or heading entire departments are common progression routes for those interested in leadership.

## Future Trends Affecting Data Analysts and Data Engineers

Diving into possible trends for data analysts and engineers is like peeking into a crystal ball to see what cool tech developments are on the horizon.
The future for both roles looks bright but challenging, with plenty of new technologies to master and opportunities to seize. The table below collects the future trends for the roles and skills needed in data analysis and engineering.

| Trend | Impact on Data Analysts | Impact on Data Engineers |
| --- | --- | --- |
| Artificial Intelligence and Machine Learning | AI and ML will automate many data analysis tasks, requiring analysts to focus more on interpreting AI-generated insights and strategic decision-making. | AI and ML will increase demand for engineers to implement and maintain AI-powered systems, enhancing data processing capabilities. |
| Increased Demand for Real-Time Data | Analysts will need to use technologies that offer real-time data processing to deliver instant insights. | Engineers will focus on building and optimizing real-time data processing architectures and streaming data systems. |
| Data Literacy | Analysts might increasingly assume roles in educating other teams about data insights and decision-making. | A foundational understanding of data across various departments may lead to engineers playing key roles in data governance and architecture decision-making. |
| Automation and Orchestration Tools | Analysts will benefit from automated data cleaning and preparation tools, allowing more focus on analysis rather than data management. | Engineers will need to adopt new tools that automate data pipeline creation and maintenance, focusing on efficiency and scalability. |
| Ethics and Data Privacy | Analysts must become adept at navigating data privacy laws and ethical concerns in data usage to ensure compliance in analysis. | Engineers will implement and maintain secure data systems that comply with data protection regulations. |
| Enhanced Visualization Tools | Analysts will gain from advanced visualization tools that make conveying complex data insights simpler and more effective. | While not directly impacted, better visualization tools can aid in debugging and optimizing data flows and structures. |
## Conclusion

Data has always been important, but today there are many mechanisms for collecting and processing it in large volumes. As data continues to grow in importance across all aspects of modern business, from improving user experience to optimizing operations and beyond, having the right data pros in the right roles is not just nice to have — it's essential. It's all about putting data analysts and engineers where they can do their best so the business performs better.

Choosing the right role isn't just about following the data; it's about following your passion and playing to your strengths. Whether you're shaping the future with your analytical insights as a data analyst or building robust data systems as a data engineer, both paths offer exciting opportunities to grow and significantly impact any data-driven organization. So, dive into the role that aligns with your skills and career aspirations, and get ready to make a big splash in the world of data.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Data Automation Tools and Strategies for Business Efficiency

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) · May 31, 2024

Robots, production lines, and driverless airport shuttles are common examples of automated systems. They appeared at the crossroads of technological development and the drive for cost optimization, as the number of tasks increased and more human resources were needed. The same goes for data management: operating enormous datasets has become laborious and time-consuming, requiring much human power. Data automation tools address this issue by optimizing business workflows, accelerating processes, and eliminating manual intervention. They can create an assembly line that turns raw input data into ready-to-use output information.
This article helps you understand data automation, discover its benefits, and explore its contemporary challenges. You'll also find a selection of data automation tools and strategies to arrange your workflows properly.

Table of Contents
- Understanding Data Automation
- Benefits of Data Automation
- Key Data Automation Strategies
- Top Data Automation Tools
- Common Challenges of Data Automation Implementation
- Conclusion

## Understanding Data Automation

Data is generated either by humans (user posts on social media, customer records in CRM, etc.) or machines (IoT sensors, computer systems, etc.). Such raw data should be organized, prepared, and analyzed to bring value to organizations. Data automation tools can do all that. The most commonly used techniques for automating workflows are:

- **Collection.** Data automation tools gather data from different sources using the following techniques:
  - Scheduling. The data is collected at an indicated interval, usually from one minute to one month.
  - Triggers. Data collection starts once a certain condition is fulfilled or an event occurs. For example, once a record gets updated in a CRM system, the new data is loaded into a data warehouse.
  - Streaming. This method is used for [data ingestion](https://skyvia.com/learn/what-is-data-ingestion) from websites, social media platforms, and stock markets in real time.

  Once the initial setup is done, data collection runs automatically without human intervention.
- **Transformation.** Data cleansing, duplicate removal, validation, and transformations improve data accuracy and quality.
- **Loading.** Cleansed data sets are automatically sent to a data warehouse, data lake, spreadsheet, BI tool, or other chosen destination.
- **Analysis.** Data warehouses and BI tools often have embedded AI, [data mining](https://skyvia.com/blog/data-mining-tools/), and machine learning algorithms for in-depth data analysis. Most BI tools also have powerful visualization and reporting capabilities for data exploration.
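The collect → transform → load sequence described above is easy to picture as three small functions chained into a pipeline. This is a toy sketch of the concept, not any particular tool's API; every name and field in it is a stand-in:

```python
# A toy collect -> transform -> load pipeline; all names are illustrative.

def collect():
    # In practice: pull records from an API, CRM, or event stream,
    # on a schedule, on a trigger, or as a continuous stream.
    return [{"email": "A@Example.COM ", "amount": "10"},
            {"email": "", "amount": "5"}]          # an invalid record

def transform(records):
    # Cleansing and validation: normalize emails, drop invalid rows, fix types.
    return [{"email": r["email"].strip().lower(), "amount": float(r["amount"])}
            for r in records if r["email"].strip()]

def load(records, destination):
    # In practice: write to a warehouse, lake, spreadsheet, or BI tool.
    destination.extend(records)

warehouse = []  # stand-in for the real destination
load(transform(collect()), warehouse)
print(warehouse)  # [{'email': 'a@example.com', 'amount': 10.0}]
```

A real automation tool adds scheduling, retries, and monitoring around exactly this shape of flow.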
## Benefits of Data Automation

According to a [Gartner report](https://www.gartner.com/en/newsroom/press-releases/2022-08-22-gartner-survey-reveals-80-percent-of-executives-think-automation-can-be-applied-to-any-business-decision), 80% of executives think automation can be applied to any business decision. Data automation tools reduce manual work on data-related operations and help people focus on creative tasks. Here are some other notable benefits of data automation:

- **Brushed-up data.** Data automation tools provide mechanisms for cleansing and transforming data, making it well-organized for storage and analysis.
- **Cost reduction.** Technological solutions are much cheaper and faster than human work. Implementing data automation tools can save companies plenty of resources.
- **Productivity increase.** Since machines handle data automation operations, humans can concentrate on high-level tasks.
- **Faster analytics.** Analysts obtain high-quality datasets quickly, as data automation tools process and prepare data at high speed. As a result, it takes less time to create reports, apply the needed data mining algorithms, and derive insights.

To sum up, data automation speeds up data processing, which saves organizations a lot of time and money. Employees can then concentrate on non-monotonous tasks and analytics.

## Key Data Automation Strategies

Like any process within an organization, data automation needs a deliberate plan. Here are some tips for building your data automation strategy:

- **Data prioritization.** Perform an audit of workflows across different departments. Based on the collected information, decide which manual tasks must be automated first.
- **Data pipeline prototype creation.** Gather details on the business process and decide how it could be automated. Create a prototype model for a data pipeline representing a step-by-step view of the process.
- **Selecting a data automation tool.**
Check the data automation tools and their features in the next section . Based on this overview, decide which one suits your needs best. Operation execution . Use the selected data automation tool to design workflow automation scenarios. Specify the source from where the data needs to be taken, the data collection method, preprocessing operations (filtering, cleansing, transformation, etc.), and the destination tool. Monitoring . Check the progress of the designed pipeline\u2019s data automation execution. Inspect the logs in case any error occurs and make the necessary adjustments. Top Data Automation Tools Skyvia [Skyvia](https://skyvia.com/) is a universal data platform offering quick and easy solutions for workflow automation and data integration. It also provides products for cloud data backup, data management with SQL, and creating OData services. Let\u2019s focus on the [Automation](https://skyvia.com/automation/) product as it\u2019s the most suitable for workflow orchestration and automation. It connects your favorite apps and builds complex workflows to execute repetitive tasks. Here are some common workflows that could be automated with Skyvia: The payment transaction data is sent to bookkeeping software once a new purchase is completed on the e-commerce platform. Once a customer record is updated in a CRM, the information gets updated across other systems containing information about the same customer. If a website visitor puts items in a shopping cart but doesn\u2019t make a purchase within the last 24 hours, a reminder message is sent to the user\u2019s email address. Sending an email to customers with an anniversary with the company or birthday on a specific day. Collecting daily purchases from the website and sending them to Google Sheets. The automation flow starts with a trigger, followed by a set of actions performed under certain conditions, and ends with a Stop. 
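This trigger → actions → Stop structure can be sketched in plain Python. It's a minimal illustration of the pattern, not Skyvia's actual components; the abandoned-cart example and all names are hypothetical:

```python
def run_automation(trigger, actions, records):
    # Trigger: the flow fires only when its condition holds.
    if not trigger(records):
        return records
    # Actions: each (condition, action) pair runs over matching records.
    results = []
    for record in records:
        for condition, action in actions:
            if condition(record):
                record = action(record)
        results.append(record)
    # Stop: reaching the end of the block list ends the flow.
    return results

# Hypothetical flow: queue a reminder email for abandoned carts.
cart_trigger = lambda recs: any(r["status"] == "abandoned" for r in recs)
cart_actions = [
    (lambda r: r["status"] == "abandoned",
     lambda r: {**r, "email_queued": True}),
]
carts = [{"id": 1, "status": "abandoned"}, {"id": 2, "status": "paid"}]
processed = run_automation(cart_trigger, cart_actions, carts)
```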
Those are the building blocks you can use to construct your automation:

- A [trigger](https://docs.skyvia.com/automation/building-automation/triggers.html) can be manual, on-schedule, or event-based.
- An [action](https://docs.skyvia.com/automation/building-automation/components.html) is what makes the automation happen, as it executes operations over data from the selected connection. Skyvia supports [180+ connectors](https://docs.skyvia.com/connectors/) (popular tools and services, including cloud apps, databases, and data warehouses), each with its own list of available actions.
- Other [components](https://docs.skyvia.com/automation/building-automation/components.html), including blocks, specify conditions for the automated action execution.
- Stop signifies the end of the automation process.

As you can see, it's rather easy to automate workflows with Skyvia thanks to:

- The drag-and-drop designer.
- Web-based access without the need to install any additional software.
- 180+ connectors for popular sources and services.
- A free version with two active automation processes daily, or paid plans starting at $99/month.

Syncari

[Syncari](https://syncari.com/) is a data automation platform for streamlining business processes. It allows users to build interconnected pipelines for data flow between sources, based on predefined templates or completely from scratch. It also offers functions for creating simple and complex routing logic for the data flow design.

Features

- Data automation with pipeline design and routing.
- Data synchronization in multiple directions to align data among multiple sources.
- Data quality enhancement with cleaning, enrichment, and merge functions.
- AI-powered functionality for data activation, exploration, and understanding.
Limitations

- Limited custom data transformations.
- Lack of robust documentation.
- Steep learning curve.

User reviews

| | Syncari | Skyvia |
| --- | --- | --- |
| G2 Crowd | 4.8 / 5 | 4.8 / 5 |

Zapier

[Zapier](https://zapier.com/) is an online platform designed to connect different apps and services and automate data flow between them. It allows users to create tasks that automate repetitive workflows that take employees a lot of time. All of that requires no coding, so both developers and non-technical specialists can use Zapier.

Features

- Multi-step Zaps create complex workflows consisting of several stages.
- Filters set up conditions that determine when a Zap runs.
- Formatter converts any kind of data into the needed form.

Limitations

- Many connectors are available only at extra cost.
- The free plan is satisfactory only for basic data automation needs.

User reviews

| | Zapier | Skyvia |
| --- | --- | --- |
| G2 Crowd | 4.5 / 5 | 4.8 / 5 |
| TrustRadius | 8.9 / 10 | 9.7 / 10 |
| Capterra | 4.7 / 5 | 4.8 / 5 |

Tray.io

[Tray.io](http://tray.io) is an [Integration Platform as a Service](https://skyvia.com/blog/what-is-ipaas/) (iPaaS) that can connect data from various third-party applications and build customizable workflows. All of that is done with a visual interface and drag-and-drop. Tray.io also ensures strong data security and provides detailed logging.

Features

- Data synchronization across different apps and databases.
- Transformation options for matching data formats and structures across sources.
- Ecosystem activation through the creation of reusable and customizable templates.
- API management that relies on low code to create and maintain microservices.

Limitations

- No detailed instructions explaining how to configure pipelines.
- Debugging can be difficult.

User reviews

| | Tray.io | Skyvia |
| --- | --- | --- |
| G2 Crowd | 4.5 / 5 | 4.8 / 5 |
| TrustRadius | 8.9 / 10 | 9.7 / 10 |
| Capterra | 4.9 / 5 | 4.8 / 5 |

Workato

[Workato](https://www.workato.com/) is an Integration Platform as a Service (iPaaS) that helps automate business workflows.
This app is typically used by enterprises and large companies to streamline processes. It can connect to both cloud and on-premises systems and move data between them.

Features

- Conversational integration with a chatbot lets users build integrations with an attractive UX.
- IT governance ensures high visibility of integration processes and usage patterns across an organization.
- New application support is provided by the configurable REST data connector for adding new data sources to the platform.

Limitations

- The initial implementation is time-consuming.
- Coding is required for complex automation scenarios.

User reviews

| | Workato | Skyvia |
| --- | --- | --- |
| G2 Crowd | 4.7 / 5 | 4.8 / 5 |
| TrustRadius | 8.6 / 10 | 9.7 / 10 |
| Capterra | 4.7 / 5 | 4.8 / 5 |

Common Challenges of Data Automation Implementation

Despite the numerous advantages of data automation, certain limitations are associated with its implementation and management.

- Initial investment costs. Once a company decides to automate data flows, it must invest in updating its infrastructure and toolkit. The cost depends on the expected data load and the choice of a data automation tool. Even with a free version of the data automation software, it's necessary to invest in employee education and training.
- Steep learning curve. It might take some time to explore the tool's functionality, test data workflow automation, and implement it in the production environment.
- Monitoring and intervention. Even though data automation streamlines data-related processes, human monitoring is still necessary. It's crucial for creating or updating data pipelines and checking logs when data integration errors occur.

Conclusion

Data automation aims to transform manual, repetitive tasks into processes with little to no human intervention. It takes the weight off your shoulders, ensuring excellent data quality, cost reduction, and faster analytics.
If you plan to implement data automation within your organization, consider the initial investment cost and employee training. Note that Skyvia is a user-friendly tool that doesn't require much time to get acquainted with its functionality. This service offers ample features, allowing you to build simple or complex automated data flows.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
Here, we also explain what benefits an organization can obtain by implementing data enrichment practices.

Table of Contents
- What Is Data Enrichment?
- Advantages of Data Enrichment
- Techniques for Effective Data Enrichment
- What Are Data Enrichment Services?
- Top Tools for Data Enrichment (Skyvia, Clearbit, ZoomInfo, Datanyze, LeadGenius, Leadspace, FullContact)
- Tips for Choosing the Right Data Enrichment Company
- Conclusion

What Is Data Enrichment?

Data enrichment is the process of adding details to a customer profile to make it exhaustive and multifaceted. This process draws on internal organizational apps as well as third-party datasets. Data enrichment services ingest information from such sources and add it to a general customer profile. There are two principal categories of client information suitable for enrichment:

- Contact details. Ensure that companies can reach their current or potential consumers.
- Behavioral patterns. Enable a personalized approach for each customer.

In general, data enrichment serves customer-facing departments. It aims to improve their daily workflows, conversion rates, brand loyalty, and other important metrics.

Advantages of Data Enrichment

It's clear that marketers and sales professionals are very interested in enriching customer data, but how can entire organizations benefit from it? Here are the solid advantages that data enrichment brings to an organization:

- Enhanced data quality. Missing values are a common problem, but they are not acceptable when it comes to user-related details. Data enrichment complements missing details about a customer.
- Facilitated decision-making. Having all the data at hand is like planning a trip with a map on the table. Managers no longer have blind spots in understanding customers, so they can make wise, balanced decisions.
- Improved customer segmentation and targeting. With detailed client data, it becomes easy to categorize customers by any parameter. Data enrichment tools help consolidate behavioral patterns and show which groups of customers prefer which products, or who buys the cheapest or most expensive services.

Techniques for Effective Data Enrichment

So far, we have mainly concentrated on the theory of data enrichment. Now it's time to review some practical solutions and techniques for improving data quality by enriching it. Usually, the operations on data are either manual or automated. The manual approach might be an option only for small datasets, and even then it doesn't exclude human error. Automated processes rely on data enrichment services and usually involve the following steps:

- Data collection. Information about users comes from different channels and is usually stored in various data locations.
- Data cleaning. Depending on the technology used, this stage might apply before or after integration. It involves removing duplicate records and correcting corrupted data.
- Data integration. This is the core of data enrichment: combining data from organizational apps, databases, and other resources.

Step by step, these stages form [a single source of truth](https://learn.microsoft.com/en-us/azure/databricks/lakehouse/ssot) about customers, which serves the marketing and sales teams.

What Are Data Enrichment Services?

Tools for the automated enhancement of customer data are called data enrichment services. They can be roughly classified into two groups:

- [Data integration](https://skyvia.com/data-integration/) services.
- Data discovery platforms.

[Data integration tools](https://skyvia.com/blog/data-integration-tools/), such as [Skyvia](https://skyvia.com/), process data as described above and consolidate it in a central repository. Collected data includes contact details and behavioral patterns such as time spent on a website, reactions, etc.
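The collect → clean → integrate steps described above can be sketched in plain Python. The record fields and function name here are illustrative assumptions, not a real service's schema:

```python
def enrich_profiles(crm_records, web_events):
    profiles = {}
    # Collection + cleaning: drop records without an identifier,
    # and merge duplicates that share the same email.
    for rec in crm_records:
        email = rec.get("email")
        if not email:
            continue
        profiles.setdefault(email, {}).update(rec)
    # Integration: attach behavioral data from a second source
    # to the matching customer profile.
    for event in web_events:
        email = event.get("email")
        if email in profiles:
            profiles[email].setdefault("actions", []).append(event["action"])
    return profiles
```

Each resulting profile combines contact details with behavioral patterns, approximating the single source of truth described above.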
Data discovery platforms enhance user information in another way: they provide quality datasets for companies to reach new customers or learn more about existing ones in a particular sector. Such services can be used together with data integration platforms.

Top Tools for Data Enrichment

Now it's time to look at particular tools for data enrichment.

Skyvia

[Skyvia](https://skyvia.com/) is a universal SaaS data platform for quickly and easily solving data-related tasks: data integration, ETL and Reverse ETL, workflow automation, building complex data pipelines and simple integrations, CSV import/export, etc. With Skyvia, you can:

- Load data from one source to another with [Import](https://skyvia.com/data-integration/import) integration (ETL and Reverse ETL).
- Transfer data to a data warehouse with the [Replication](https://skyvia.com/data-integration/replication) scenario (ELT approach).
- Export critical data from business apps in CSV format with the [Export](https://skyvia.com/data-integration/export) option.
- Sync several applications with the [Synchronization](https://skyvia.com/data-integration/synchronization) scenario to keep customer data up to date in all systems.

Additionally, Skyvia offers complex integration scenarios, [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and [Data Flow](https://docs.skyvia.com/data-integration/data-flow/), allowing you to:

- Involve several sources for data enrichment.
- Set up pre- and post-integration tasks.
- Perform data integration tasks in a specific order, following certain conditions.

Skyvia is a powerful tool that allows you to:

- Perform any operation with no coding.
- Construct desired data flows in seconds using only the drag-and-drop user interface.
- Connect to [170+ sources](https://skyvia.com/connectors/) (cloud apps, databases, data warehouses) to enrich data.
- Test all the functionality on a free trial.
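As a tiny illustration of what an Export-style step produces, here is a plain-Python sketch that dumps selected fields from app records to CSV text (generic standard-library code, not Skyvia's implementation):

```python
import csv
import io

def export_records_to_csv(records, fields):
    # Write only the chosen fields; silently ignore any extras in the records.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = export_records_to_csv(
    [{"id": 1, "name": "Ann", "internal_note": "hidden"}],
    ["id", "name"],
)
```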
Clearbit

[Clearbit](https://clearbit.com/) is a data enrichment service that mainly assists B2B companies. It easily integrates with popular CRMs, such as HubSpot, to complement existing records with 100+ data points. For instance, it adds location, employee count, annual revenue, and other details about the company of interest. Pricing for the Clearbit service is flexible, though data enrichment is available under the Business plan only.

ZoomInfo

Another popular data discovery platform is [ZoomInfo](https://www.zoominfo.com/), which contains accurate datasets about businesses. Organizations targeting B2B companies as clients can use ZoomInfo for data enrichment purposes. The tool's SalesOS solution helps sales managers obtain more information about their prospects and find new leads, while the MarketingOS solution supplies multiple attributes about companies, providing great opportunities to modernize marketing campaigns.

Datanyze

[Datanyze](https://www.datanyze.com/) was acquired by ZoomInfo several years ago, though it's still offered as a standalone product. Like ZoomInfo, it provides B2B contact data but with a strong focus on sales, while ZoomInfo offers a broader spectrum of options suitable for marketers, organizational teams, and HR. Datanyze is a browser extension that integrates with cloud CRMs and marketing automation platforms. It supplies companies with real-time information that is regularly updated. The solution also aims to introduce sales managers to new prospects in the industry they're targeting.

LeadGenius

[LeadGenius](https://www.leadgenius.com/) is a lead procurement and sales intelligence platform. It helps companies generate, qualify, deliver, and convert leads, which accelerates and increases sales.
LeadGenius is a cloud-based solution; it can be installed as a browser extension and integrated with other SaaS solutions for sales.

Leadspace

Like the other tools, [Leadspace](https://www.leadspace.com/) also provides accurate industry-oriented datasets. However, Leadspace is more than just a data enrichment service, as it offers extra functions for analyzing how the enhanced data performs.

FullContact

[FullContact](https://www.fullcontact.com/) has a series of different products, and one of them, called Enrich, helps businesses nurture records. The tool suits different types of businesses, as it provides plans for enriching the data of individual customers as well as companies.

Tips for Choosing the Right Data Enrichment Company

The abundance of data enrichment tools makes choosing one difficult. To avoid getting lost in the woods, here are some crucial tips for selecting the proper solution:

- Define the strategic goals the business needs to achieve in the next month or year.
- Explore the feature set of each tool of interest and see how it aligns with those goals.
- Decide which audience the company targets: B2B or B2C customers.
- Consider how to obtain the needed data: whether to consolidate internal information about users or find third-party external datasets.
- See how the price of a specific data enrichment product fits your budget.

These are only brief suggestions for selecting the right data enrichment service. Criteria such as supported languages, ease of use, and the level of integration with other products may also be critical factors for a purchase.

Conclusion

Companies participate in a never-ending race to increase revenues through marketing and sales initiatives. However, data quality for new prospects and existing customers is not always high, which impedes reaching the desired numbers. One modern approach to this problem is data enrichment.
This process relies on data enrichment services of two types:

- Data integration (Skyvia).
- Data discovery (Clearbit, ZoomInfo, LeadGenius).

Choosing a data integration tool such as Skyvia can be advantageous for data enrichment. Skyvia brings together internal organizational tools (CRMs, ERP systems, cloud apps) and connects to more than 170 sources to ingest third-party data. Besides that, you can use Skyvia for other data-related tasks!

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
So, mastering data integration and ETL is not just a game-changer; it's the key to unlocking the full potential of data.

Why Data Integration Is Important

Integrated data means less time spent searching for information and more time focusing on what matters. Collaboration becomes smoother and more effective when everyone has access to the same up-to-date data.

Why ETL Is Important

Automating the ETL process saves time and reduces the risk of errors compared to manual data handling. [ETL tools](https://skyvia.com/blog/etl-tools/) can handle large volumes of data, making it easier to scale data operations as businesses grow.

In this article, we'll examine each term and discover how they differ. To do this, we'll compare data integration and ETL processes, find their pivotal points, and consider the most popular use cases for both integration methods.

Table of Contents
- What is ETL?
- What is Data Integration?
- Key Differences between Data Integration and ETL
- Popular Data Integration Tools for 2024
- Use Cases for Data Integration and ETL
- Conclusion

What is ETL?

Imagine data integration as a round dining table with a set of chairs around it: ETL, ELT, Reverse ETL, data replication, synchronization, and so on. ETL is just one chair, but it's the most popular and useful one. Extract, Transform, and Load ([ETL](https://skyvia.com/learn/what-is-elt)) is a process that prepares and moves data from various sources into a centralized data warehouse. This process is the backbone of effective data management. It automates moving and preparing data, making it easier for businesses to get valuable insights and make smarter decisions. For instance, John Smith manages a retail business, and data comes from multiple sources, such as online sales, in-store sales, and customer feedback. ETL helps him extract all this data, clean and standardize it, and load it into a central data warehouse. This way, Mr.
Smith can easily analyze sales trends, customer behavior, and inventory levels all in one place.

What is Data Integration?

[Data integration](https://skyvia.com/learn/what-is-data-integration) describes a process that combines data from various sources (CRMs, databases, spreadsheets, cloud services, etc.) into a unified view. Imagine an online store with customer data in a CRM, sales data in an e-commerce platform, and inventory data in a spreadsheet. Data integration is the first step in collecting all this information into a single dashboard to understand what's going on in the company. Users can then see how trends affect inventory levels and adjust their marketing, sales, and service strategies based on customer behavior.

Key Differences between Data Integration and ETL

As we have seen, ETL is just one part of the data integration process. Understanding the differences between data integration as a whole and ETL is crucial for choosing the right approach for data management needs. Here's a quick overview of what sets them apart.

| Criteria | Data Integration | ETL |
| --- | --- | --- |
| Scope and Application | Broader scope; includes various methods to combine data from different sources into a unified view. | Specific process focused on extracting, transforming, and loading data into a data warehouse. |
| Processes and Methods | Involves data synchronization, replication, federation, and consolidation. | Involves three main steps: extraction of data, transformation of data to meet requirements, and loading of data into a target system. |
| Tools and Technologies | Uses tools like Skyvia, Informatica, Talend, and MuleSoft to connect and merge data from diverse systems, ensuring a unified data view. | Uses tools like Skyvia, Talend, Informatica PowerCenter, Apache NiFi, and Microsoft [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) to handle the extraction, transformation, and loading of data into data warehouses. |
| Output and Data Volume | Outputs can be varied, including integrated applications, dashboards, and unified data views. | Typically outputs structured, cleansed data into a data warehouse, handling large volumes of data. |
| Frequency and Timing | Can be real-time, near real-time, or [batch processing](https://skyvia.com/blog/batch-etl-processing/), depending on the method used. | Generally involves scheduled batch processing, but can also support near real-time data loading. |
| Common Scenarios | Real-Time Integration: useful where immediate data access is required, such as live dashboards or monitoring systems. Data Consolidation: creating a single source of truth by merging data from various systems, helping comprehensive reporting and analysis. Application Integration: ensuring different software applications can communicate and work together seamlessly, enhancing workflow efficiency. | Data Warehousing: preparing and loading data into a centralized data warehouse where it can be analyzed. Data Migration: transferring data from legacy systems to new systems, ensuring data consistency and accuracy. Data Cleaning: transforming raw data into a clean, consistent format that is ready for analysis. |
| Best Practices | Plan and Design: define clear integration goals and design a flexible architecture to accommodate future changes. Use Automation: leverage automated tools to minimize manual intervention, which reduces errors and increases efficiency. Monitor and Manage: regularly monitor data flows and manage integration processes to ensure data accuracy and reliability. | Data Quality: ensure data is cleaned and transformed accurately to maintain high quality. Efficiency: optimize ETL workflows to handle large volumes of data efficiently, avoiding bottlenecks. Documentation: keep detailed documentation of ETL processes to facilitate maintenance, troubleshooting, and future enhancements. |

Popular Data Integration Tools for 2024

Let's walk through the popular [data integration tools](https://skyvia.com/blog/data-integration-tools/) that make data work for your business.

Skyvia

The [Skyvia data integration](https://skyvia.com/data-integration) platform is suitable for beginners and pros alike. According to the latest [G2 rating](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), it's second in the top 20 easiest-to-use ETL tools. The solution offers seamless cloud integration, ETL, ELT, Reverse ETL, migration, sync, the visual building of advanced data pipelines, orchestration, etc. With its intuitive interface, you can easily connect [190+](https://skyvia.com/connectors) data sources like Salesforce, Google Sheets, and databases without coding. Plus, it supports scheduled data sync and automated workflows.

Informatica

[Informatica](https://www.informatica.com/) is a staple in the world of data integration, known for its strong performance and scalability. The platform is ideal for large enterprises. With a wide range of features, including ETL, data quality, and master data management, Informatica helps organizations easily handle complex data integration tasks. However, despite its powerful functionality, the solution might not be user-friendly for non-technical users.

Talend

[Talend](https://talend.com/) is a good choice for businesses of all sizes, especially those that prefer open-source solutions. It offers a comprehensive suite of tools for ETL, data integration, data quality, and more. Talend gives users a drag-and-drop interface to easily design data workflows, along with strong community support.

Microsoft Power BI

[Power BI](https://www.microsoft.com/en-us/power-platform/products/power-bi) is perfect for pulling data from various sources and creating stunning dashboards and reports.
The tool smoothly integrates with other Microsoft products like Azure, Excel, and SQL Server, making it a good choice for companies already using the Microsoft ecosystem. However, the solution may be too complicated for beginners.

Fivetran

[Fivetran](https://www.fivetran.com/) is all about automated data pipelines. It's easy to set up and use and requires minimal maintenance, so the solution is a favorite among data teams that want to focus more on analysis and less on data wrangling. The platform also supports a wide range of connectors and ensures that data is always up to date and ready for work.

Stitch

[Stitch](https://stitchdata.com/) offers a simple, straightforward approach to data integration. It perfectly fits startups and small businesses looking to get their data into a data warehouse quickly and efficiently. Like Fivetran, Stitch is known for its easy setup and usability. It scales with an organization's business needs and can be a good choice for growing companies.

Use Cases for Data Integration and ETL

Every business wants to succeed and lead the race, and data integration tools help companies become and stay leaders. Let's review a few real-life scenarios showing how this works in various business areas.

How the American Health Care Association Improved Its Reporting

The American Health Care Association (AHCA) needed a [solution](https://skyvia.com/case-studies/ahca) to integrate on-premises databases with Dynamics 365. Even after transitioning to a cloud database, they still had to build reports and dashboards on-premises, applying SSRS (SQL Server Reporting Services) for advanced analytics and reporting. For that, the data from Dynamics 365 had to be replicated to SQL Server (which served as a data warehouse). They selected Skyvia to solve this problem. This decision allowed AHCA to avoid changing their current data stack and offered a variety of data connectors to cover possible future requests.
Skyvia's data integration capabilities allowed AHCA to overcome the data streamlining challenge. The team can now configure the system with just a few clicks and automatically capture data updates from source to destination, ensuring that their reports remain up-to-date and accurate.

How NISO Automated Data Flows for Financial Operations

NISO, an outsourced CFO company serving the MCA (merchant cash advance) industry, needed a broad picture of its customers' finances. They used self-made scripts to extract data from MySQL to Excel spreadsheets and, afterward, QuickBooks Online. With the company's growth, this manual approach quickly became a bottleneck, so they searched for a better tool and selected [Skyvia](https://skyvia.com/case-studies/niso), which let them connect these services directly in the cloud instead of developing their own APIs. The company also relies heavily on SQL Server and MySQL, and Skyvia proved a good fit for integrating these databases with its other products. With Skyvia's integration features, NISO gained broader access to customer information and can scale more confidently.

How Cirrus Insight Enhanced its Salesforce Data Integration

Cirrus Insight, a sales enablement platform, needed to automate data integration between systems like Salesforce, QuickBooks, and Stripe. They chose Skyvia to facilitate data migration between Salesforce accounts and to synchronize Stripe data with Salesforce. For the [Salesforce to Salesforce integration](https://skyvia.com/blog/salesforce-to-salesforce-integration/), Skyvia migrated data from one instance to another, so that all operations run in one instance while all customer data remains available in the other. Implementation costs were 20 times lower than is typical for such an integration.
For the Stripe integration scenario, data was transferred into Salesforce automatically. With Skyvia, they handle automated data updates and integration, keeping customer records synchronized between these systems. This [automation saved time and significantly reduced implementation costs](https://skyvia.com/case-studies/cirrus).

Conclusion

Data integration is an essential process that combines data from multiple sources into a unified view using methods like data virtualization, replication, federation, CDC, API integration, and streaming beyond just ETL. Data integration is flexible and perfect for merging data from multiple applications or creating comprehensive dashboards. ETL is ideal for preparing data for in-depth analysis and reporting. By understanding these capabilities, users can choose the right approach for their needs. Whether a company is looking for the flexibility of data integration or the structured process of ETL, both are crucial for making data work for it. The best way to cover data integration in general and ETL in particular is to select a universal solution like Skyvia that can do both and even more.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support.
With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Hybrid Cloud Data Integration — Strategies and Tools By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), April 12, 2024

Hybrid cloud solutions are gaining significant popularity in today's business landscape. They are convenient and provide businesses with a robust framework to overcome traditional data integration challenges such as data silos, cost management, security troubles, and scalability and flexibility concerns.
Compared to traditional on-premise applications, you don't need to be a wizard to predict the winner: according to the [RedHat report](https://www.redhat.com/en/topics/cloud/open-hybrid-cloud-approach), 63% of IT leaders already have a hybrid cloud infrastructure, and 54% plan to have one soon. Statista's [Hybrid Cloud Market Size Worldwide 2021-2027](https://www.statista.com/statistics/1232355/hybrid-cloud-market-size/) report predicts that the global hybrid cloud market will reach USD 262 billion in 2027, up from USD 85 billion in 2021.

This article explains hybrid cloud apps' capabilities and how such platforms improve business data management by offering vital solutions for each challenge.

Table of Contents Understanding Hybrid Cloud Data Integration Hybrid vs. Conventional Systems Principles for Effective Hybrid Cloud Integration Steps for Implementing Hybrid Cloud Integration Best Practices in Hybrid Cloud Integration How Skyvia Contributes to Hybrid Cloud Integration Conclusion

Understanding Hybrid Cloud Data Integration

Hybrid cloud data integration is a strategic approach that consolidates data from different sources, such as SaaS apps, on-premise databases, and private and public cloud services, into one secure and accessible framework. Thus, businesses may optimize their data management and usage by leveraging the strengths of each platform. Let's review the key components of such integration, its benefits, and how it addresses modern data challenges.

Key Components

On-Premises Data Centers. A company owns and operates these physical servers and storage. Critical or sensitive data often remains here for security and compliance reasons.

Private Clouds. A cloud environment dedicated solely to one company, offering greater control and security than public clouds. Businesses often use such services for sensitive operations requiring strict data controls.

Public Clouds.
Third-party providers offer scalable and flexible cloud services over the Internet. Businesses use public clouds for their cost-effectiveness and vast computing resources.

Integration Tools and Platforms. Software and services (API management tools, [data integration software](https://skyvia.com/blog/data-integration-tools/), and hybrid cloud management platforms) that facilitate data movement, transformation, and management across these environments.

Benefits

Flexibility and Scalability. Businesses can dynamically allocate resources where they are needed most, scaling up or down as demand changes.

Cost Efficiency. Companies can optimize their costs by using public clouds for non-sensitive, scalable tasks and reserving private clouds for critical operations.

Enhanced Security. Sensitive data can be kept on-premises or in a private cloud while still taking advantage of the computational power of public clouds for other tasks.

Compliance and Data Sovereignty. Companies can ensure that their data handling practices comply with regional laws and industry regulations by strategically placing data where it's legally safe.

Improved Data Management. Hybrid cloud data integration provides a unified view of data across the company, improving insights, operational efficiency, and decision-making.

How It Works

Data Synchronization. Keeps data updated across systems, ensuring consistency and accuracy.

Data Transformation. Converts data into the required formats for different systems and processes.

API Management. Facilitates secure and efficient data exchange between cloud environments and applications.

Orchestration. Automates workflows and data processes across the hybrid cloud environment.

Addressing Data Challenges

Breaking Down Silos. Integrates data from various sources, providing a holistic business view.

Handling Big Data. Efficiently manages large volumes of data, using the public cloud's scalability for processing and analytics.
Real-Time Processing. Supports immediate data movement and analysis, enabling timely insights and actions.

Security and Compliance. Meets strict data protection requirements by controlling where and how data is stored and processed.

Hybrid vs. Conventional Systems

Hybrid cloud solutions are about flexibility, cost efficiency, and scalability, making them the best choice for businesses seeking agility and growth in a secure, compliant manner. At the same time, traditional IT systems offer control and high security levels but may lack the scalability and cost-effectiveness required by modern businesses. The table below compares both approaches' abilities, advantages, and pitfalls.

| Aspect | Hybrid Cloud Systems | Conventional IT Systems |
| --- | --- | --- |
| Abilities | Integrates private, public, and on-premises resources. Offers dynamic scalability and flexible resource allocation. Centralized management for diverse environments. | Managed entirely on-premises with complete control over hardware and software. High customization for organizational needs. |
| Advantages | Flexibility and scalability for fluctuating workloads. Cost efficiency through optimal resource usage. Enhanced security for sensitive data via controlled environments. Supports compliance and data sovereignty. | Full control over the security and configuration of systems. Optimal performance through dedicated resources. Clear visibility and management of the infrastructure. |
| Pitfalls | Complexity in managing across environments. Integration challenges requiring advanced orchestration. Potential for variable costs due to mismanagement. | Limited scalability requiring physical hardware purchases and installation. Higher upfront and operational costs. Inflexibility to quickly adapt to changing needs. Responsibility for all maintenance, updates, and security. |
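The synchronization and transformation components described above can be illustrated with a minimal sketch. This is not Skyvia's implementation: it uses two in-memory SQLite databases standing in for an on-premises source and a cloud target, and the table and column names (`customers`, `updated_at`) are invented for the example.

```python
import sqlite3

def sync_customers(source: sqlite3.Connection,
                   target: sqlite3.Connection,
                   watermark: str) -> str:
    """Copy rows changed since `watermark` from source to target (upsert),
    returning the new watermark. A toy stand-in for hybrid-cloud sync."""
    rows = source.execute(
        "SELECT id, name, email, updated_at FROM customers "
        "WHERE updated_at > ? ORDER BY updated_at", (watermark,)
    ).fetchall()
    for id_, name, email, updated_at in rows:
        # Transformation step: normalize email casing before loading.
        target.execute(
            "INSERT INTO customers (id, name, email, updated_at) "
            "VALUES (?, ?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET name=excluded.name, "
            "email=excluded.email, updated_at=excluded.updated_at",
            (id_, name, email.lower(), updated_at),
        )
    target.commit()
    return rows[-1][3] if rows else watermark

# Demo: in-memory databases standing in for on-prem and cloud stores.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
ddl = ("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, "
       "email TEXT, updated_at TEXT)")
src.execute(ddl)
dst.execute(ddl)
src.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?)",
    [(1, "Ada", "ADA@example.com", "2024-04-01"),
     (2, "Sam", "sam@example.com", "2024-04-03")],
)
watermark = sync_customers(src, dst, "2024-04-02")
print(watermark)   # only rows newer than the old watermark are copied
```

Real sync tools follow the same watermark idea at scale: only rows changed since the last run cross the network, and a light transformation (here, normalizing email casing) happens on the way.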
Principles for Effective Hybrid Cloud Integration

Businesses need to navigate the complexities of hybrid cloud integration, ensuring they can take full advantage of cloud computing while mitigating potential risks and challenges. Let's go through the key principles for successful integration.

Strategic Planning and Alignment. A clear understanding of your business objectives is the first step in data integration. Define your IT strategy and explain how hybrid cloud integration supports your business goals. Ensure that the integration aligns with the business strategies and operational requirements.

Security and Compliance First. Design your hybrid cloud integration with a security-first approach, according to the compliance requirements of your industry and the data protection laws of the jurisdictions where you operate.

Seamless Data Integration and Mobility. Implement technologies and practices that enable seamless data movement and integration across cloud environments and on-premises data centers.

Scalability and Flexibility. Your hybrid cloud architecture must be scalable and flexible, allowing for easy resource adjustment in response to changing organizational needs.

Consistent Management and Operations. Use unified management tools and practices across all cloud and on-premises solutions to simplify operations and reduce complexity.

Workload and Data Placement Optimization. Analyze workloads and data to determine their optimal placement based on performance requirements, costs, security, and compliance needs.

Steps for Implementing Hybrid Cloud Integration

Companies need a thoughtful, strategic, and detailed approach to succeed in cloud deployment.
The [Red Hat 2021 Global Tech Outlook report](https://www.redhat.com/en/global-tech-outlook-report/2021) illustrates this (source: https://www.redhat.com/en/resources/2021-global-tech-outlook-detail). The steps below help businesses implement a hybrid cloud integration strategy, leveraging the combined benefits of cloud and on-premises resources to support their primary objectives.

Assessment and Planning

Conduct a thorough inventory of your applications, data, and workloads. Identify which will remain on-premises, which will migrate to the cloud, and which require new development. Define your business's main aims (e.g., cost savings, scalability, compliance, improved agility) and how the hybrid cloud will support them.

Designing the Hybrid Cloud Architecture

Decide on the private and public cloud components, data storage solutions, and networking architecture. Consider aspects like:

Data flows, including security, synchronization, and latency requirements.
Network topologies, like VPNs, direct connections, or software-defined networking (SDN).

Define IAM policies and controls to manage user access across hybrid environments. Ensure the design meets security standards and regulatory compliance.

Selecting Vendors and Tools

Evaluate public cloud providers based on their services, compatibility with your existing systems, security, cost, and whether they offer appropriate tools, APIs, and connectors. Select integration, management, and security systems that support hybrid environments.

Security and Compliance Framework

Implement identity and access management, data encryption mechanisms like SSL/TLS for data in transit, and network security protocols. Ensure compliance with relevant regulations and standards for your industry. Remember that healthcare, finance, and other sectors have different regulatory requirements.
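As a small illustration of the "SSL/TLS for data in transit" point, here is how a client might enforce an encryption policy with Python's standard `ssl` module. The hostname in the commented snippet is hypothetical, and the policy values reflect common guidance rather than a compliance recommendation.

```python
import ssl

# A minimal client-side TLS policy for encrypting data in transit.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions
context.check_hostname = True                     # certificate must match the host
context.verify_mode = ssl.CERT_REQUIRED           # reject unverified peers

# A connection to an integration endpoint (hostname is hypothetical)
# would then be wrapped like this:
#
#   import socket
#   with socket.create_connection(("data-gw.example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="data-gw.example.com") as tls:
#           tls.sendall(payload)

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)
```

The same idea applies regardless of language: connections that carry business data between on-premises and cloud environments should refuse unencrypted or unverified peers by default.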
Data Integration and Management Strategy

Define data integration processes, including ETL operations, APIs for real-time data exchange, and data synchronization methods. Implement data governance and quality measures.

Implementation and Migration

Start with non-critical workloads to test the environment. Use automated tools for migration where possible. Ensure thorough testing for performance, security, and functionality. Such testing has to cover:

Performance testing to review how your hybrid cloud environment works under varying workloads.
Resource scaling to verify that resources scale automatically.
Data backup and recovery testing to ensure data integrity.

Operationalization and Monitoring

Implement monitoring tools for performance, cost, and security. Establish a DevOps culture for ongoing deployment and management. Regularly review and adjust resources to optimize costs and performance.

Best Practices in Hybrid Cloud Integration

Statista's [Worldwide Enterprise Cloud Strategy 2017-2023](https://www.statista.com/statistics/817296/worldwide-enterprise-cloud-strategy/) report says that 72% of enterprise respondents have already deployed a hybrid cloud in their business. If you're ready to join them, adopt the best practices below.

Clear Strategy

This step is often ignored, but it's critical. You must understand which workloads and data will move to the cloud and which will stay on-premises, based on performance, security, and cost considerations. Review your IT assets and business processes. Map out a phased migration plan that aligns with your business priorities and risk tolerance.

Strong Backup Policy

Having backups of all the data on both sides of the hybrid cloud or multi-cloud environment is vital, so you won't need to worry about recoverability in case of breaches or data loss. Keeping backup storage separate from the original data source is also a good idea.
This avoids a single point of failure, so losing duplicate data isn't a problem. Note: You may use Skyvia's universal data integration platform for such tasks. It offers [automated cloud data backup solutions](https://skyvia.com/backup/), ensuring that data from cloud applications can be securely backed up and restored. This approach helps in hybrid cloud strategies where data protection and recovery are must-haves. Skyvia's backup features contribute to a robust disaster recovery plan, avoiding data loss across cloud and on-premises systems.

Security and Compliance

According to Statista's Drivers for Considering Cloud-Based Security Solutions Worldwide 2023, [56% of businesses selected cloud-based security solutions](https://www.statista.com/statistics/1319187/global-drivers-for-cloud-security-solutions/) for better scalability, followed by faster deployment; 38% chose better visibility into user activity and system behavior. To stay secure and compliant with industry requirements:

Implement robust security measures that span both your cloud and on-premises environments.
Use IAM (identity and access management) to control user access.
Ensure compliance with relevant regulations by understanding your industry's data residency and protection requirements.
Remember to encrypt data to protect it during storage and transfers between clouds, users, or devices.

Unified Management Approach

Managing a hybrid cloud environment involves multiple platforms and vendors. A unified management toolset can simplify monitoring, management, and optimization tasks across environments. Deploy CMPs (cloud management platforms) that support both on-premises and cloud resources.
Look for tools that provide a single pane of glass for resource deployment, monitoring, and policy enforcement.

Automation and DevOps Practices

Manual processes are slow and error-prone. Automation and DevOps practices help enhance efficiency, reliability, and speed in software development and infrastructure management. Automate cloud deployments, configurations, and testing using infrastructure as code (IaC) tools like Terraform or AWS CloudFormation. Incorporate continuous integration and deployment (CI/CD) pipelines to streamline development workflows.

Network Performance and Connectivity Optimization

Network issues impact the performance of hybrid cloud applications and services. Connectivity between on-premises and cloud environments is vital for user experience and application performance. Implement connectivity solutions like AWS Direct Connect or Azure ExpressRoute. Optimize your WAN (Wide Area Network) and internet access points for cloud traffic.

Regular Cost Reviews

Without careful management, cloud costs can spiral. Efficient cloud financial management requires understanding how resources are used and what that usage costs. Select cloud cost management tools to monitor usage and expenses. Implement tagging for resources to improve cost allocation and reporting. Regularly review services and usage to identify optimization opportunities.

How Skyvia Contributes to Hybrid Cloud Integration

[Skyvia](https://skyvia.com/) is a cloud-based data platform offering comprehensive tools that simplify data management, integration, and connectivity between various data sources and cloud services. [Its user-friendly UI](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) allows for integrating data across diverse cloud-based and on-premises data sources without extensive coding knowledge.
The solution supports data synchronization, allowing businesses to merge data from various sources, like CRM, ERP, and databases, into the cloud to enhance data accessibility and decision-making. This approach speeds up deployment and reduces reliance on specialized resources. The table below shows how [Skyvia data integration](https://skyvia.com/data-integration/) contributes to hybrid cloud integration across different aspects.

| Aspect | Feature | Contribution |
| --- | --- | --- |
| Centralized Data Management | With Skyvia's centralized data management capabilities, users can query, transform, and manage their data across different systems using a single interface. | This unified management approach is essential for hybrid cloud environments, reducing complexity and improving efficiency in managing data workflows and analytics. |
| Connectivity and API Management | Skyvia offers [180+](https://skyvia.com/connectors/) built-in connectors to various cloud and on-premises applications, plus API management capabilities. | By facilitating easy connection to and from different data sources, Skyvia enhances the seamless data flow essential for effective hybrid cloud integration. |
| Enhanced Data Security | Skyvia ensures that data integration and management operations are secure, complying with standard data protection and privacy regulations, including HIPAA, GDPR, PCI DSS, ISO 27001, and SOC 2 (by Azure). | This aspect is crucial for businesses concerned about data security and compliance in their hybrid cloud environments, especially when handling sensitive or regulated data. |
| Scalable Integration Solutions | The platform scales with your business needs, supporting everything from simple data replication tasks to complex integration scenarios, like bi-directional data sync, data flow, and control flow. | As businesses evolve and their data integration needs become more complex, Skyvia's scalable solutions ensure they can still manage their hybrid cloud environments effectively without performance degradation. |

Let's see real-life examples of [data backup](https://skyvia.com/case-studies/advisera) and [integration automation](https://skyvia.com/case-studies/healthcare-company) with Skyvia.

Advisera's Clients' Data Backup Enhanced and Secured

Background

[Advisera](https://advisera.com/) needed to enhance the backup of clients' data stored in its CRM system, eliminate the human factor, and completely automate all the processes. According to business requirements, they had to select a third-party data backup tool that the IT team could manage and easily integrate with the company's other cloud platforms. The requirement was to securely connect and transfer data from the CRM system to remote storage.

Solution

Skyvia's seamless data integration features allowed Advisera to set up automated backup processes for the CRM system, storing data backups in remote company-managed storage. The platform enabled:

A secure connection between two endpoints.
Defining types of exports.
Setting tasks with exactly specified data.
Scheduling the frequency of backups.

Backups can be easily monitored in real time for active runs, and historically from the list of logs, validating the data transfer for each backup task.

Result

With Skyvia, Advisera backs up data for their SaaS products (the ISO 27001 automation solution Conformio, the AI-powered knowledge base Experta, AI-Powered Toolkits, and Company Training Account) securely to remote storage on autopilot.

Automated Data Integration for Healthcare Company

Background

The healthcare company had to orchestrate data from diverse sources (backend system, Facebook data, CRM data, Excel files, etc.) and bridge the gap between these data hubs.
They also needed to improve their reporting capabilities and overall data analytics using a BI tool (Tableau). Another challenge was reliance on manual Excel entries, which complicated matters and resulted in data duplication and errors.

Solution

They compared solutions and selected Skyvia because it was more cost-effective for the same data volumes. The platform's flexibility allowed them to retain their current data warehouse technology and infrastructure (Amazon Redshift). With all data sources connected, the company built the business data analytics it needed, such as dashboards and reports on email and CRM campaigns, to review the effectiveness of these campaigns.

Result

Skyvia implementation allowed the company to:

Automate data integration processes.
Reduce the need for manual entries.
Increase data accuracy.
Save time and costs.

Conclusion

The choice between hybrid and conventional systems depends on each company's needs, strategic goals, and operational considerations. Cloud-based tools are about innovation, while on-premise systems often allow unique custom configuration abilities. A hybrid cloud strategy enables businesses to use the best of both worlds, ensuring simplicity, cost-effectiveness, security, and compliance.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Data Integration Tools: TOP 10 List of Best Software By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), September 20, 2022

The growing need for data integration tools comes with the exponential growth of the data used by companies in every industry. Why so? These massive volumes of data create both possibilities to thrive and problems to solve. Statista predicts total global data volume to reach 161 zettabytes (that's 161 trillion gigabytes!) by 2025.
[Statista Research Department, May 23, 2022](https://www.statista.com/statistics/871513/worldwide-data-created/)

On average, businesses use [110 different services](https://www.statista.com/statistics/1233538/average-number-saas-apps-yearly/), since each tool is better for a specific business task. Unfortunately, because of this, getting a unified view of business data becomes an enormous challenge. Though many services offer data integration, they all have features suitable only for a particular business. As a result, choosing the right tool becomes mission impossible.

In this article, we've gathered the top data integration software vendors and specified the cases for which they are most suitable. We've also included explanations of the data integration definition and of what a data integration tool is, for those of you who are at the very beginning of the integration journey.

Table of contents Data Integration Meaning What are Data Integration Tools? Main Types of Data Integration Software On-premise integration software Cloud-based data integration tools Open-source data integration tools Proprietary software How to choose data integration tools? TOP 10 Best Data Integration Tools How Skyvia Can Solve Your Data Integration Needs? Key takeaways

Data Integration Meaning

Data integration stands for aggregating all data (regardless of structure, type, and format) within disparate sources into a single, unified view. Simply put, the meaning of data integration is in transforming small, miscellaneous segments into one consistent system. As businesses keep operating within various systems, producing huge amounts of data, the need for integrated data is undeniable. By merging and standardizing data, different teams across the organization receive a 360-degree customer view and produce actionable insights fast, becoming data-driven not only in words.
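To make the "single, unified view" idea concrete, here is a toy sketch that merges customer records from two sources into one view keyed by a standardized email. All record fields and values are invented for illustration; real integration tools layer mapping, cleansing, and conflict-resolution rules on top of this basic idea.

```python
# Two small record sets standing in for data pulled from separate systems
# (a CRM and a billing app); names and values are made up for the example.
crm_rows = [
    {"email": "ada@example.com", "name": "Ada Lovelace", "segment": "enterprise"},
    {"email": "sam@example.com", "name": "Sam Carter", "segment": "smb"},
]
billing_rows = [
    {"email": "ADA@example.com", "plan": "annual", "mrr": 1200},
]

def unify(crm, billing):
    """Merge records from both sources into one view keyed by email.
    The standardization step lower-cases emails so records match."""
    merged = {}
    for row in crm:
        key = row["email"].lower()
        merged[key] = {**row, "email": key}
    for row in billing:
        key = row["email"].lower()
        merged.setdefault(key, {"email": key}).update(
            {k: v for k, v in row.items() if k != "email"}
        )
    return merged

view = unify(crm_rows, billing_rows)
print(len(view))                       # one unified record per customer
print(view["ada@example.com"])         # CRM and billing fields side by side
```

Without the standardization step, `ADA@example.com` and `ada@example.com` would land in the view as two different customers, which is exactly the kind of inconsistency integration tooling exists to prevent.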
Note: You can find more information about [data integration, its meaning, main types, and examples](https://skyvia.com/blog/what-is-data-integration/) in our detailed article.

What are Data Integration Tools?

The definition of data integration tools is a logical continuation of the previous one. These tools perform data extraction, transformation, and transfer, including mapping, cleansing, etc. Integration solutions provide businesses with a single source of truth, delivering quality data to all teams at once. Obviously, creating such a defined pathway can become very technical and tricky; however, the advantages of implementing data integration tools are substantial:

Elimination of data silos, as they usually become a barrier to digital transformation.
Deriving value from data and getting actionable insights fast.
Scaling business by applying analytics based on quality data.

Main Types of Data Integration Software

Since there are no two identical businesses (even ones operating in the same industry), data integration tools vary in their functions. There are many tools to solve different business needs, each with its specifics. Below, we show how these tools can be grouped by certain types.

On-premise integration software

As the term suggests, these tools work from servers installed in a local (on-premise) network rather than from external ones. The goal of using such services is for the data not to leave the company's infrastructure, usually according to specific security policies. However, this approach makes the internal IT department configure, support, and maintain the operation of such software. As a result, on-premise integration tools require a lot of resources and time to support their smooth operation.

Cloud-based data integration tools

Unlike on-premise software, cloud-based data integration tools operate from external servers outside the organization's network.
Their primary function is to integrate data from separate sources into cloud-based storage. It’s more convenient for businesses and far more cost-effective than installing and maintaining their own IT systems. It provides organizations with modern cloud agility, eliminating the need for deployment or manual upgrades, and delivers global access to data anytime, for any department. Open-source data integration tools A distinctive feature of this type is the open source code, which allows any company to change the software to suit its requirements. On the one hand, it’s great for a business that doesn’t want to buy a pig in a poke. On the other hand, you should consider both the costs of changing and supporting the product and the high expertise required from company employees. Proprietary software Proprietary software is the most expensive on the market, as it’s designed to meet specific business requirements that aren’t everyday use cases. The main clients of such tools are large enterprises with vast amounts of data, extensive data sources, and complex ecosystems requiring non-standard approaches. How to choose data integration tools? The wider the choice, the better, right? Not quite: it can quickly become a disaster, as making the right choice is pretty challenging. In real-world situations, integration problems are usually complex and unique. Your tool should meet requirements that depend on the business size (whether it’s an SMB or an enterprise), the complexity of business processes, the technical knowledge required of its users, and so on. Assessing all possible scenarios is tough, so we’ve put together a short checklist of factors to keep in mind when prioritizing your requirements for a data integration toolkit. Business use cases. First, the chosen integration tool should meet the functional needs of the business at its current stage.
The use cases required by SMBs and enterprise-level companies differ, as do the complexity of implementation and the costs of the data integration solutions. Security. The tool must provide sufficient data protection (security certificates, etc.) so that customer data won’t be compromised. Scalability. Companies and their business goals constantly change, as does the software employees use (new tools in, outdated ones out). Accordingly, the integration tool you choose must be flexible enough to cope with changes in the quantity and application of customer data and data sources. TOP 10 Best Data Integration Tools Note: The list of data integration tools is based on G2 Crowd and Gartner ratings. Skyvia Skyvia Data Integration is a complete cloud-based ETL, ELT, and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) solution designed to consolidate data from all of your services and applications (cloud and on-premise), making sure it’s always accurate and up to date. It’s easy to use, integrates seamlessly with many business apps, and scales to meet any load. It also requires no maintenance costs or software updates, as it’s browser-based. Skyvia also offers the more powerful Data Flow and Control Flow tools for building and managing complex data pipelines with many sources. By applying various integration scenarios, any business can get fully automated and trouble-free data processes. Details: [Official website](https://skyvia.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/skyvia/reviews#) Key features: Business size — from small to large businesses: simple scenarios for SMBs and complex ones for enterprises. Deployment — no installation needed, as it’s cloud-based and browser-based.
Technical knowledge required — provides a no-code wizard with a user-friendly interface, meaning that business users can configure data flows visually without extensive training. Alert notifications & error detection — enables failure alerts and detailed error logs. Template availability — offers various scenarios with pre-made connectors for cloud-based, on-premise, or hybrid ecosystems. Price — offers a free trial and five different pricing plans, starting from $15 per month. SnapLogic The Intelligent Integration platform by SnapLogic is a visual [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) tool that allows self-service integration for its clients. SnapLogic supports hybrid integrations and works with cloud-based and on-premise data sources. The service delivers an intuitive user experience with the help of the Iris AI integration assistant for easy integrations and data management. Details: [Official website](https://www.snaplogic.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/snaplogic-intelligent-integration-platform-iip/reviews) Key features: Business size — offers integrations for enterprise-level organizations. Deployment — the service is HTML5-based and delivered as a multi-tenant cloud service. Technical knowledge required — delivers an easy-to-use interface and offers low-code or even no-code environments. Alert notifications & error detection — identifies data errors throughout the integration workflow and provides log reports. Template availability — supplies 600+ pre-built data connectors. Price — offers a free trial; for more pricing information, you need to contact a manager. TIBCO Cloud Integration The cloud integration platform by TIBCO is an enterprise solution that is part of a larger product ecosystem. It was also named a Leader in Enterprise iPaaS in the Gartner 2021 report.
The service provides plenty of features and additional tools within one product suite. TIBCO integration software secures data by applying an embedded or optional on-premise API gateway and offers role-based permissions and access for better data management. Details: [Official website](https://www.tibco.com/products/cloud-integration) [Reviews and product details on G2 Crowd](https://www.g2.com/products/tibco-cloud-integration-including-businessworks-and-scribe/reviews) Key features: Business size — works with small and midsize businesses as well as large organizations. Deployment — is a cloud-based service with a web-based interface. Technical knowledge required — offers a no-code guided experience for connecting hybrid environments: on-premise and cloud-based data sources. Alert notifications & error detection — supplies a built-in error detection mechanism. Template availability — delivers pre-built integration solutions to over 100 IaaS and PaaS environments. Price — offers four plans: 30-day free trial, Basic (from $400), Premium, and Hybrid. Celigo The Celigo Integration Platform is flexible, cloud-based software, a typical iPaaS. The service supports multiple connections (SaaS, B2B, apps, etc.), allowing business users to create integration data flows. It also grants data access to authorized users only, ensuring role-based control over product functions. Details: [Official website](https://www.celigo.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/celigo/reviews) Key features: Business size — works with enterprise businesses. Deployment — is cloud-based software. Technical knowledge required — provides a guided software interface within a low-code Developer Workspace. Alert notifications & error detection — uses machine learning capabilities for error detection and data management within integration processes.
Template availability — offers pre-made templates of the most popular cloud connections. Price — offers a free trial; its Basic plan starts from $600. SQL Server Integration Services (SSIS) [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) is an integration solution and a part of Microsoft SQL Server. This tool is one of the oldest on the market, offering multiple functionalities for miscellaneous data workflows. It supports native data connection with Microsoft Azure, delivers plenty of recovery options for users, and provides access management and permission control to secure customer data. The latest version, SQL Server 2022, has already been introduced in public preview. Due to its difficulty of use, pricing policy, and enterprise-oriented features, it may not be a good choice for small and medium companies. Details: [Official website](https://www.microsoft.com/en-us/sql-server/sql-server-2019) [Reviews and product details on G2 Crowd](https://www.g2.com/products/microsoft-sql-server/reviews) Key features: Business size — works with enterprise-level organizations. Deployment — is an on-premise solution that can be deployed through the Azure cloud environment or an SQL Server instance with Windows authentication. Technical knowledge required — has a learning curve, softened by lots of documentation support for users. Alert notifications & error detection — uses a Send Mail task to notify an administrator of a failure. Template availability — built-in connection managers are installed while setting up Integration Services. Price — has four pricing editions, from a three-month subscription of $599 to $3,390. Informatica Informatica offers its clients a whole product suite with expanded functionality for miscellaneous data-related processes like migration, cleansing, validation, management, etc.
It includes tools such as Intelligent Data Management Cloud (IDMC) and Cloud Data Integration for Cloud ETL and ELT. Together, these solutions secured a Leader position in the Enterprise iPaaS category of the Gartner 2021 report. Details: [Official website](https://www.informatica.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/informatica-powercenter/reviews) Key features: Business size — suits medium-sized and large corporations. Deployment — is a cloud-based solution. Technical knowledge required — delivers a drag-and-drop AI-powered interface, making it easy for users to work with and maintain organizational data. Alert notifications & error detection — sends alert messages or emails in case of any failures. Template availability — supports more than 300 pre-made integrations. Price — there’s a free trial and a pre-paid subscription starting from $2,000 per month. Talend Talend Open Studio is open-source integration software offering customers data preparation, integration, management, etc. Like many integration solutions, it works with both on-premise and cloud-based systems. The Talend platform also appeared in the Gartner Magic Quadrant 2021 as a Leader in the Data Integration category. It has minimal installation requirements, and the Talend resource center illustrates product features with tutorials, webinars, and the podcast Truth Be Known. Details: [Official website](https://ua.talend.com/products/talend-open-studio/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/talend-data-integration/reviews) Key features: Business size — is suitable for businesses of any size, from small to large. Deployment — is a cloud-based solution with web-based interfaces. Technical knowledge required — the user interface isn’t friendly for a non-technical audience.
Alert notifications & error detection — sends email notifications when certain types of events occur in the applications. Template availability — comes with several out-of-the-box connectors. Price — offers a free trial and four pricing plans, but for more detailed information, you have to contact sales representatives. Boomi One of the leaders in the Enterprise iPaaS category, according to [Gartner](https://www.gartner.com/reviews/market/enterprise-integration-platform-as-a-service/vendor/dell-boomi), Boomi is an [iPaaS solution](https://skyvia.com/blog/best-ipaas-solutions/) that offers the typical features of a data integration platform and efficiently handles hybrid environments. It allows managing data (cloud-based or on-premise) in a single unified place and offers free on-demand training, certification, and many additional resources (knowledge base, tutorials, etc.). Details: [Official website](https://boomi.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/boomi/reviews) Key features: Business size — suits the needs of businesses of all sizes. Deployment — is a cloud-native platform. Technical knowledge required — is low-code and provides a visual, drag-and-drop interface. Alert notifications & error detection — includes email alerts for different events. Template availability — delivers templates, process libraries, and pre-configured patterns for miscellaneous application integrations. Price — the first 90 days are free; after that, the pricing depends on what you use. Jitterbit This API-based platform supports data integration between on-premise, cloud-based, and SaaS services and offers on-demand custom integrations, along with AI-powered speech recognition and real-time language translation.
Jitterbit makes data transformation easier for business users by providing an automapping function and separate virtual environments for testing integration steps without altering other data processes. It also provides many additional resources for its clients, like webinars, solution sheets, and even the Digital Transformation Pioneers podcast. Details: [Official website](https://www.jitterbit.com/) [Reviews and product details on G2 Crowd](https://www.g2.com/products/jitterbit/reviews) Key features: Business size — works with businesses of any size, especially enterprises. Deployment — is a cloud-based service. Technical knowledge required — provides low-code integration and a visual interface. Alert notifications & error detection — enables email notifications for the success or failure of an operation or on the calling of a script. Template availability — applies various pre-built templates. Price — offers a trial period and three pricing plans: Standard, Professional, and Enterprise. However, it requires an annual contract for any plan. MuleSoft Anypoint Platform Anypoint Platform is a flexible integration system that addresses business data connection and transformation needs. It was named a Leader in both Enterprise iPaaS and full-lifecycle API management in Gartner’s 2021 report. This hybrid solution works with cloud and on-premise systems, connecting them into a unified view by applying REST APIs. It uses its own functional language, DataWeave, and offers IoT device integration and template-based workflows. The platform delivers real-time and batch data integration and fully managed infrastructure. It also protects sensitive customer data, establishing compliance with ISO 27001, SOC 2, PCI DSS, and GDPR.
Details: [Official website](https://www.mulesoft.com/platform/enterprise-integration) [Reviews and product details on G2 Crowd](https://www.g2.com/products/mulesoft-anypoint-platform/reviews) Key features: Business size — suits enterprise-level companies. Deployment — as an ESB, it handles on-premises integration, and its CloudHub handles cloud-based integration. Technical knowledge required — supports its clients with quick start guides, tutorials, certification, training, and webinars. Alert notifications & error detection — delivers alert notifications and sends emails based on application notifications. Template availability — uses pre-built APIs, connectors, templates, accelerators, and other integration assets. Price — offers a free trial and three pricing plans: Gold, Platinum, and Titanium. How Skyvia Can Solve Your Data Integration Needs? As we mentioned above, there are three crucial points to consider while choosing the perfect tool for your business. Skyvia Data Integration meets all three, as it’s a flexible tool offering solutions for both simple and complex use cases. Each scenario offers visual design and requires no coding knowledge. Business use cases By applying the Skyvia platform with its vast capabilities, customers receive a powerful tool that covers all their data-related needs from a single code-free interface: ETL, ELT, Reverse ETL, bidirectional data sync, workflow automation, etc. Simply put, data kept in disparate sources can be synchronized automatically with no coding. And with the [versatile](https://skyvia.com/pricing/) prices, every company can find a plan suitable for its needs and budget. Security Since data security is of utmost importance, Skyvia uses industry-leading security technologies to keep customers’ data safe.
Among the [security practices adopted by Skyvia](https://skyvia.com/security) is hosting on the Microsoft Azure cloud, one of the most secure platforms for enterprise use, with its West US data centers. Moreover, Skyvia complies with the European Union’s General Data Protection Regulation (GDPR) and HIPAA requirements for Protected Health Information (PHI) and is PCI DSS compliant. Scalability No matter its size, your company can benefit from Skyvia products in many different scenarios: managing, analyzing, and making sense of your business data. It’s also important to choose software that is flexible enough to accommodate an increasing amount of data. With powerful data pipeline designers like [Data Flow and Control Flow by Skyvia](https://skyvia.com/data-integration/), you can create custom integration logic and configure advanced integration scenarios involving different integration packages. Key takeaways Data integration tools are a must-have for any data-driven company, as making decisions without considering all the data is a precarious path. However, making the right choice can be pretty challenging — there are too many different tools on the market. Now, with a list of the most highly rated and popular integration solutions, you can start choosing by watching demos, trying out the services, etc. We recommend starting with a free Skyvia account, as it includes many useful functions and no hidden costs.
By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/) Data Integration Trends in 2025 By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) February 28, 2024 The first data
integration mentions emerged right after the invention of floppy disks. People used floppy disks as an intermediary tool for copying information from one system to another, but that took lots of time and resources. Things got better in the 1990s, when the first [extract-transform-load (ETL) tools](https://skyvia.com/blog/etl-tools/) appeared. They were much more convenient than floppy disks, though they required a very complex architecture and infrastructure stack. The rise of cloud computing and the internet has been continuously transforming the field of data integration since 2009. Data warehousing, the adoption of AI and ML, and even self-service platforms shape the data integration sphere today. Find these and other popular data integration trends in this article. Table of Contents Exploring Data Integration Critical Role of Data Integration for Business Emerging Data Integration Trends Conclusion Exploring Data Integration Believe it or not, organizations use around [130 apps](https://www.statista.com/statistics/1233538/average-number-saas-apps-yearly/#:~:text=In%202022%2C%20organizations%20worldwide%20were,by%20companies%20has%20constantly%20increased.) on average in their daily operations. Imagine the amount of data generated every day in those SaaS platforms and desktop applications. However, unsystematized bulk data isn’t very informative, so it’s better to organize it well. A great way to do that is by combining data from various sources with [data integration](https://skyvia.com/learn/what-is-data-integration) mechanisms. Let’s look at some popular definitions of data integration by today’s tech giants.
\u201cData integration is the process of bringing together data from different sources to gain a unified and more valuable view of it so that your business can make faster and better decisions.\u201d Google \u201cData integration is the process of discovering, moving, and combining data from multiple sources to drive insights and power machine learning and advanced analytics.\u201d Google \u201cData integration refers to the process of combining and harmonizing data from multiple sources into a unified, coherent format that can be put to use for various analytical, operational and decision-making purposes.\u201d IBM \u201cData integration is the process for combining data from several disparate sources to provide users with a single unified view.\u201d Microsoft To combine and harmonize data from multiple sources into a unified format, data integration techniques evolve. Below you\u2019ll find the most common approaches going in line with data integration trends. Traditional ETL relies on three fundamental stages: 1) extraction, 2) transformation, and 3) loading of data. It pulls out data from selected sources, applies cleansing to match the target data type, and loads it to the destination system. [ELT](https://skyvia.com/learn/what-is-elt) is a newer and more flexible alternative to ETL, with the same stages, though loading precedes transformation. This approach appeared as a response to big data. Its aim is to load high-volume unstructured datasets to the target system. [Data warehousing](https://skyvia.com/blog/data-warehouse-best-practices/) uses either ELT or ETL to consolidate data in a data warehouse, usually for business intelligence purposes. Data virtualization, unlike the methods mentioned above, doesn\u2019t imply physical movement and copying of data. Instead, it creates a virtual view across multiple data sources. It serves as an access point for applications, queries, reporting tools, etc. 
Change data capture (CDC) identifies and tracks changes in data warehouses and databases. It allows companies to achieve data integrity and consistency across multiple systems. These concepts and approaches are widely used in [data integration tools](https://skyvia.com/blog/data-integration-tools/) such as [Skyvia](https://skyvia.com/), a universal SaaS data platform designed for a wide set of data-related tasks. One of its products, [Data Integration](https://skyvia.com/data-integration/), has everything needed to perform seamless migration and processing of data across multiple systems. Skyvia’s Data Integration product embeds the following tools: [Import](https://skyvia.com/data-integration/import) is the ETL-based tool for migrating data between cloud applications, databases, and data warehouses. [Export](https://skyvia.com/data-integration/export) extracts data from cloud apps into CSV files that can be saved on a computer or in cloud storage. [Replication](https://skyvia.com/data-integration/replication) is the ELT-based tool for copying data from cloud apps into data warehouses and databases. It uses the CDC approach to capture data differences when making incremental updates on schedule. [Synchronization](https://skyvia.com/data-integration/synchronization) coordinates data in different sources to make sure it’s aligned. It also uses CDC to identify which data was changed in one source and make the appropriate changes in another. [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) allows users to build more complex data pipelines, including several sources, and apply multistage data transformations like lookup and expression. [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) creates logic for task execution, performs pre- and post-integration activities, and configures automatic error processing logic.
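The CDC idea behind incremental replication can be sketched with a simple timestamp watermark. This is a toy illustration with invented field names; real tools track changes via database logs, timestamps, or row hashes:

```python
# Toy sketch of change data capture (CDC) for incremental replication,
# using a per-row "updated_at" watermark. Field names are hypothetical.

source = [
    {"id": 1, "name": "Ann",  "updated_at": "2024-01-10"},
    {"id": 2, "name": "Bob",  "updated_at": "2024-02-05"},
    {"id": 3, "name": "Cleo", "updated_at": "2024-02-20"},
]

def capture_changes(rows, last_sync):
    """Return only the rows changed since the previous replication run.
    ISO date strings compare correctly as plain strings."""
    return [r for r in rows if r["updated_at"] > last_sync]

def apply_changes(target, changes):
    """Upsert captured rows into the replica, keyed by id."""
    for row in changes:
        target[row["id"]] = row
    return target

replica = {}
# Initial run: everything is newer than the starting watermark (3 rows copied).
replica = apply_changes(replica, capture_changes(source, "2024-01-01"))
# Incremental run: only rows changed after the new watermark (1 row copied).
replica = apply_changes(replica, capture_changes(source, "2024-02-10"))

print(len(replica), capture_changes(source, "2024-02-10"))
```

The payoff is the second run: instead of recopying the whole table, only the one changed row crosses the wire, which is what makes scheduled incremental updates cheap.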
Data Flow and Control Flow offer advanced data integration features available under [paid plans](https://skyvia.com/pricing/). Other integration scenarios are available for free, so you can start experimenting with your data now! Critical Role of Data Integration for Business Data integration positively impacts the overall efficiency and productivity of any business. It helps both startups and large corporations derive insights from consolidated data and make smart decisions. The most tangible advantages associated with data integration: Enhanced data quality. [Data integration tools](https://skyvia.com/blog/data-integration-tools/) standardize data, making it easy to understand and ready to use for business intelligence (BI). Data-driven decision-making. Cleansed and standardized data becomes a solid foundation for machine learning, robotics, and other modern technologies that accompany BI and help businesses make informed decisions. Resource saving. The initial investment in data integration affects the total cost of ownership (TCO), but the results are just around the corner: you’ll get immediate insights into which actions to take, which products to improve, and how to make your business more profitable and improve ROI. Data security. Modern data integration tools implement encrypted channels for data transfer between sources. Let’s examine [Cirrus’s case study](https://skyvia.com/case-studies/cirrus), which shows how the company uses data integration daily. They needed to bring data from Stripe, QuickBooks, and SQL Server into Salesforce customer records. With the help of Skyvia, they managed to set everything up without IT specialists’ involvement. As a result, Cirrus saves a lot of time and money, keeps data clear and correct, and makes informed decisions.
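A consolidation flow like the Cirrus scenario can be sketched at a high level. The service names come from the case study, but every record shape, field, and value below is invented for illustration; a real integration would query each system’s API instead of hard-coded dictionaries:

```python
# Hypothetical sketch of a Cirrus-style flow: enrich CRM customer records
# with billing data from other systems. All fields and values are invented.

stripe_charges = {"cust_42": 250.0}    # total payments, keyed by customer id
quickbooks_invoices = {"cust_42": 2}   # open invoices per customer

crm = [{"customer_id": "cust_42", "name": "Acme"}]

def enrich(crm_records):
    """Attach billing fields to each CRM record (a reverse-ETL-style step)."""
    for rec in crm_records:
        cid = rec["customer_id"]
        rec["total_paid"] = stripe_charges.get(cid, 0.0)
        rec["open_invoices"] = quickbooks_invoices.get(cid, 0)
    return crm_records

print(enrich(crm)[0])
# {'customer_id': 'cust_42', 'name': 'Acme', 'total_paid': 250.0, 'open_invoices': 2}
```

The design choice worth noting: pushing consolidated data back into the system where people already work (the CRM) is what lets non-IT teams benefit without touching the source databases.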
Emerging Data Integration Trends No matter whether a company operates in the ‘red ocean’ with harsh competition or in the ‘blue ocean’ as a monopolist in its niche, it certainly has a lot of data to manage. Data integration can be compared to a tree trunk that helps businesses cope with data; technologies are its branches, and the trends are its fruit. Below, you’ll find emerging data integration trends, from the cloud to self-service platforms. Depending on which fruit you expect, select the trends that will help cultivate the crop. Cloud and Data Integration: Emerging Patterns and Trends Amazon Web Services, Google Cloud, Azure Cloud, IBM Cloud, and other similar solutions have completely changed how companies build and organize their infrastructures. More and more businesses decide to move their operations to the cloud to take weight off their shoulders and stop configuring on-premises architecture. Some enterprises still use so-called hybrid infrastructure, combining cloud and on-premise elements for security reasons. Cloud technologies grant scalability, flexibility, and cost-effectiveness, which is critical to modern businesses. They also lay the foundation for all other [data integration](https://skyvia.com/data-integration/) trends, providing the needed architecture and features. The Rising Wave of Real-Time Data Integration The slogan “Citius, Altius, Fortius” no longer refers only to the Olympic Games but also to the real business world. Companies strive to be leaders and do everything they can to get there. They perform real-time monitoring of KPIs, performance, market trends, and other critical aspects that may impact business growth and success. Real-time data integration usually requires complex infrastructure and architecture solutions. The most popular trends in real-time architectural frameworks are: Event-driven architecture is based on events that trigger certain actions in the system.
It\u2019s rather effective in working with data from heterogeneous sources and of different types. Microservices architecture presumes that each process is an individual unit running in a separate container. It provides more agility in working with various data types and integration requirements. Stream processing frameworks like Apache Kafka are designed to handle data streams in real-time. Cloud-native architecture supports applications deployed in the cloud and provides scalability, resilience, and elasticity for them. Such applications as Skyvia run in the cloud environment and thus handle any data loads. Real-time integration might be of particular interest to companies working with IoT (Internet of Things), patient surveillance, the financial stock market, and digital marketing. AI and Machine Learning in Data Integration Processes Data integration also has its challenges and uncertainties. Low-quality data or wrong data format are among them and become especially tangible when mapping source fields to the destination fields on integration setup. Data integration tools aren\u2019t always capable of detecting low-quality data or mismatched formats, but machine learning can help. It usually comes embedded into data integration tools for detecting and signalizing data discrepancies. To make things clearer, let\u2019s look at the example of when marketers register phone numbers in the 000-000-0000 format in HubSpot. Meanwhile, salespeople register phone numbers in the 000 000 0000 format in Salesforce. When bringing data from both systems together, an integration error will likely occur. The role of AI and ML is to detect and predict possible inconsistencies and apply the necessary measures to map data between sources correctly. The Surge in Popularity of Embedded iPaaS Platforms The term [iPaaS](https://skyvia.com/blog/what-is-ipaas/) is an acronym that stands for integration Platform-as-a-Service. 
It offers all the necessary services to seamlessly integrate applications and platforms in the cloud. An embedded iPaaS platform is usually an in-built solution inside another software product, enabling users to access embedded applications directly inside the software. Such embedded iPaaS systems particularly benefit SaaS companies: they provide easy-to-use app integrations that can be adopted inside a SaaS application, which also reduces development time. Self-Service Data Integration on the Rise Floppy disks are in the past, while self-service data integration platforms shape the future. They represent an innovative approach to data integration and maintenance. Self-service data integration platforms are particularly helpful for non-technical business users. Owing to the no-code and zero-maintenance approach, users with no programming experience can build their own data pipelines. Self-service tools greatly speed up the work of marketing and operations departments, which can perform any data integration task without any help from IT. Enhanced Commitment to Safeguarding Data Data security is another concern associated with data integration, maintenance, and processing. Modern data integration tools implement secure mechanisms for safe data transfer, encrypted using the HTTPS protocol and SSL certificates. They also ensure role-based access control (RBAC), which provides each user with a set of permissions. There are also audit trails that log user actions to detect who accesses data and what actions are performed on it. Such monitoring procedures detect anomalies and potential security breaches in real time. Scheduled Data Processing Given the rising popularity of real-time data, data integration tools implement appropriate scenarios for dealing with it. They ensure proper transfer and processing of big data in real time.
For instance, Skyvia can perform [data transfer in batches](https://skyvia.com/blog/batch-etl-processing/) every minute, which is close to a real-time pace. Just click Schedule when setting up the integration scenario and select the frequency and timing for data integration. Note that such near-real-time integration is available under the Professional pricing plan. The free version of Skyvia allows only one scheduled data transfer a day, which is suitable for daily reports and data updates. See the other [Skyvia pricing plans](https://skyvia.com/pricing/) to understand where to start.

Blockchain Integration and Beyond

Cryptocurrency isn’t the only sphere embracing blockchain technology. Data integration also relies on it to enhance data security. Cryptographic mechanisms, hash functions, and the chain structure keep data blocks highly protected. Moreover, the blocks in a blockchain are stored across various locations in the network; this eliminates a single point of failure and enhances data availability.

Conclusion

We’ve presented the most relevant current trends in the data integration market:

- real-time streams
- ML and AI for data mapping
- self-service and [iPaaS platforms](https://skyvia.com/blog/best-ipaas-solutions/)
- cloud services
- blockchain technologies, and more

The good news is that you don’t have to take many separate actions to implement each of these trends. Just try out Skyvia, which is in line with most modern data integration trends. It offers near-real-time data transfer, a self-service approach with no coding knowledge needed, smart data mapping, scheduled data transfer, and data safeguarding.
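To make the phone-number mapping example from the AI/ML section concrete, here is a minimal sketch of format normalization in plain Python. It illustrates the idea only; the function name, sample records, and cleaning rule are hypothetical and do not reflect Skyvia’s actual mapping logic.

```python
import re

def normalize_phone(raw: str) -> str:
    """Reduce a phone number to its bare digits, then render it in one
    canonical format so records from different systems can be matched."""
    digits = re.sub(r"\D", "", raw)  # strip dashes, spaces, parentheses
    if len(digits) != 10:
        raise ValueError(f"unexpected phone format: {raw!r}")
    return f"{digits[0:3]}-{digits[3:6]}-{digits[6:10]}"

# Hypothetical records: HubSpot stores 000-000-0000, Salesforce 000 000 0000.
hubspot_record = {"name": "Ada", "phone": "415-555-0199"}
salesforce_record = {"name": "Ada", "phone": "415 555 0199"}

# After normalization, both records map to the same canonical value.
assert normalize_phone(hubspot_record["phone"]) == normalize_phone(salesforce_record["phone"])
```

In a real integration tool, this kind of canonicalization runs inside the mapping step, so the destination only ever sees one format.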
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

Understanding the Differences between Data Mart and Data Warehouse

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), September 19, 2024
In the digital age, data isn’t just a byproduct of business activities. It’s a core asset that drives decision-making, innovation, and growth. Companies of all sizes use data to gain insights, optimize operations, and stay competitive. But don’t take our word for it; let’s review the statistics. According to a study by [McKinsey](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-data-driven-enterprise-of-2025), companies that embrace data-driven decision-making are 23 times more likely to acquire customers, six times more likely to retain them, and 19 times more likely to be profitable. The amount of data generated worldwide is staggering. [IDC](https://www.idc.com/getdoc.jsp?containerId=US52076424) predicts that by 2025, the global datasphere will grow to 175 zettabytes, up from 33 zettabytes in 2018. This explosion of data means that businesses need effective ways to manage, store, and analyze it. Businesses that effectively use big data analytics can see significant financial benefits. [Forrester](https://www.forrester.com/report/data-driven-business-role-connections-tool/RES180699) found that data-driven companies grow at an average of over 30% annually and were expected to take $1.8 trillion annually from their less data-driven peers by 2021. Impressive, isn’t it? Now, let’s explore two essential tools for managing and using these volumes of data: Data Marts and Data Warehouses. A data warehouse is a large, centralized repository that stores data from various sources across an organization. It handles vast amounts of data and supports complex queries and analysis. Data warehouses are typically used for enterprise-wide data consolidation and reporting.
A data mart is a smaller, more focused version of a data warehouse that serves the specific needs of a particular department or business unit, such as marketing, sales, or finance. Data marts typically contain a subset of data from the organization’s data warehouse, tailored to the needs of that specific group. This article will help you grasp the difference between these two terms, consider each repository’s core features and main benefits, and describe common user scenarios for both.

Table of contents

- What is a Data Warehouse
- What is a Data Mart
- Key Differences
- When to Use DWH or Data Mart
- Role of Data Integration Tools
- Conclusion

What is a Data Warehouse

Now that we know what a data warehouse is, let’s see how it works. Imagine it as the brain of a company’s data operations, where all the different pieces of data from various parts of the business come together to provide a unified view. For instance, Mrs. Potter owns a growing chain of retail stores called “Potter’s Goods.” Mrs. Potter’s business is booming, and she’s got data pouring in from all directions:

- Sales data from the CRM systems.
- Customer data from the loyalty program.
- Inventory data from the warehouses.
- Marketing data from the online campaigns and social media.

All this data is scattered across different systems, making it hard for Mrs. Potter to get a complete view of her business. To tackle this, Mrs. Potter implements a data warehouse (DWH), using its core features to reap significant benefits while addressing everyday challenges.

Core Features of the Data Warehouse

Centralized Storage. The data warehouse combines sales figures, customer information, inventory levels, and marketing campaign results in one centralized location. Data Integration. The data warehouse stores data integrated from multiple sources and keeps it consistent and accessible for analysis.
So, sales data from different stores, even ones using various POS systems, is now harmonized. Historical Data Management. The data warehouse stores historical data, which allows change tracking and trend analysis and provides LTV, ROI, channel attribution, etc. You can find the necessary data in the DWH when comparing this quarter’s sales to the same quarter last year or analyzing long-term customer behavior. Optimized for Complex Queries. DWHs provide appropriate data for complex queries that generate detailed reports and analytics.

Challenges and How the Data Warehouse Solves Them

The table below shows the typical challenges and how DWHs may solve them.

| Criterion | Problem | Solution |
| --- | --- | --- |
| Data Silos | Before implementing the data warehouse, Mrs. Potter struggled with data being trapped in different systems, making it difficult to get a holistic view of her business. | By integrating data from all the systems into one centralized repository, the data warehouse eliminates silos, giving Mrs. Potter a unified view of her operations. |
| Data Quality Issues | With data coming from various sources, inconsistencies in formatting and accuracy were typical, leading to unreliable reports. | The DWH standardizes and cleanses data during the integration process, ensuring that all data is consistent and accurate and improving the quality of its insights. |
| Complex Reporting Needs | Mrs. Potter needed to perform complex analyses, such as identifying seasonal trends or customer preferences, which was difficult with her previous systems. | The data warehouse’s ability to handle complex queries and large datasets enables Mrs. Potter to generate detailed, insightful reports that inform her business strategies. |

Benefits

Enhanced Decision-Making. With all the data integrated and centralized, Mrs. Potter can easily generate reports that provide actionable insights, helping her make smarter decisions about inventory, marketing, and sales strategies. Time Efficiency.
Instead of spending hours gathering and reconciling data from various sources, Mrs. Potter can now access all the information she needs from the data warehouse, saving time and reducing errors. Scalability. As Mrs. Potter’s business grows, the data warehouse scales with it, accommodating the increasing volumes of data. She can add new stores and systems without worrying about data management complexity.

What is a Data Mart

Unlike a data warehouse, a data mart focuses on a subset of data to make it easier and faster to access and analyze. For example, Mrs. Potter implements a data mart specifically for her inventory management team. She’s noticed that her inventory managers often struggle to get the precise data they need from the broader company systems, which are geared more toward sales and customer analytics.

Core Features of the Data Mart

Focused Scope. Data marts handle specific data relevant to a business area, like sales, marketing, or inventory management. Tailored Data Structure. The data is organized and structured to meet the specific needs of the department it serves, allowing for faster and more efficient queries. Ease of Access. Because a data mart contains a smaller, more focused dataset, it’s quicker and easier to retrieve and analyze data.

Challenges and How the Data Mart Solves Them

The table below shows the common challenges and how data marts solve them.

| Criterion | Problem | Solution |
| --- | --- | --- |
| Overloaded Systems | Mrs. Potter’s previous system tried to serve all departments with a single, centralized database, leading to slow performance and difficulty accessing relevant data. | The data mart lightens the load by focusing only on inventory data, improving system performance and ensuring that the inventory team has quick access to the needed information. |
| Lack of Specialized Reports | The generalized reports from the company’s main system didn’t provide the detailed insights Mrs. Potter’s inventory team needed to manage stock effectively. | With a data mart, the team can generate specialized reports tailored to their specific needs, like stock turnover rates or supplier performance metrics. |
| Complex Data Management | Managing a large, centralized system was complex and time-consuming, often leading to delays in data retrieval and reporting. | The data mart simplifies data management by focusing only on the data relevant to inventory, making it easier to maintain and use. |

Benefits

Improved Efficiency. With a data mart, users like Mrs. Potter’s team no longer have to sift through irrelevant data. This streamlined access leads to faster decision-making and more efficient operations. Targeted Insights. The data mart provides users with the necessary information without the noise of unrelated data. It helps the team spot trends, manage stock levels effectively, and reduce waste. Cost-Effective Solution. Compared to a full-scale data warehouse, a data mart is more cost-effective for meeting the specific needs of a business like Mrs. Potter’s. It provides the benefits of a larger system on a smaller, more manageable scale.

Key Differences

While data warehouses and data marts both play vital roles in data management, they address different organizational scopes and needs. The table below highlights the differences to help companies choose the right approach depending on their requirements and resources.

| Criteria | Data Warehouse (DWH) | Data Mart |
| --- | --- | --- |
| Complex Query and Analytics Support | Optimized for running complex queries and large-scale analytics across the entire organization. | Supports simpler queries and analytics specific to the department’s data needs. |
| Historical Data Storage | Stores large volumes of historical data for long-term analysis and reporting. | Typically focuses on current and relevant data for the department, with limited historical data. |
| Cost | Higher cost due to extensive infrastructure, storage, and maintenance requirements. | Lower cost; more budget-friendly for specific departmental needs. |
| Decision-Making | Enables comprehensive, organization-wide decision-making with insights from all departments. | Speeds up decision-making within departments by providing quick access to relevant data. |

When to Use DWH or Data Mart

Let’s return to Mrs. Potter and her growing chain of stores. As her business expands, she’s collecting data from sales transactions, customer loyalty programs, inventory systems, and even her online marketing campaigns. At some point, Mrs. Potter realizes she needs a comprehensive view of her business to stay ahead of the competition and make informed decisions.

When to Use a Data Warehouse

Large-Scale Insights. Mrs. Potter wants to consolidate data from all stores and online platforms into one centralized location to get a complete picture of the business’s overall performance, rather than just looking at individual stores or departments. Complex Analytics. This business owner is interested in identifying long-term trends (based on historical data), such as how sales have grown year over year, which customer segments are most profitable, and which products are best-sellers across all her stores. The data warehouse makes it possible to run complex queries over large datasets.

When a Data Mart is the Perfect Solution

Focused Insights. The users only need data related to customer purchases, marketing campaign results, and customer segmentation. A data mart allows them to focus on just the information they need without being overwhelmed by irrelevant data. Quick Access to Specific Data. Mrs. Potter’s marketing team needs to make quick decisions about upcoming campaigns, so they need fast access to relevant data. A data mart provides this by streamlining the data they work with.
Cost-Effective Solution. Setting up a data warehouse for the marketing department alone would be overkill. A data mart offers a more affordable and easier-to-maintain solution for their specific needs.

Role of Data Integration Tools

[Data integration tools](https://skyvia.com/blog/data-integration-tools/) make users’ lives easier when working with data warehouses and data marts. For example, Mrs. Potter runs a business with tons of data from sales, marketing, inventory, and customer feedback. She relies on a data warehouse or data mart to make sense of all this data and use it effectively. But all that data needs to be gathered, cleaned, transformed, and loaded into these systems, and that’s where data integration tools like [Skyvia](https://skyvia.com/data-integration), [Talend](https://talend.com/), [Fivetran](https://www.fivetran.com/), etc., come into play.

Bridging the Gap Between Systems

Such integration tools act as the bridge between all data sources and the DWH or data mart. Whether Mrs. Potter pulls data from cloud applications like Salesforce or databases like MySQL, these tools handle the heavy lifting. They extract the data, transform it to fit the needed format, and load it seamlessly into the data warehouse or data mart. So, Mrs. Potter doesn’t have to worry about manually moving data around or dealing with inconsistencies. Tools like Skyvia automate the whole process.

Ensuring Data Accuracy and Consistency

One of the biggest challenges in data management is ensuring that data is accurate and consistent. If Mrs. Potter’s sales data from her online store doesn’t match the data from her physical stores, it can lead to incorrect analysis and poor decision-making. Data integration tools help standardize and clean the data before it reaches the DWH or data mart. So, when Mrs. Potter’s team runs reports or analyzes trends, they’re working with reliable data that gives a true picture of the business.
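As a minimal illustration of that standardize-and-clean step, here is a sketch in plain Python. The field names, sample records, and cleaning rules are invented for the example; this is not the actual API of Skyvia, Talend, or Fivetran.

```python
# Hypothetical records pulled from an online store and a POS system.
online = [{"sku": " ab-100 ", "amount": "19.99"}]
stores = [{"sku": "AB-100", "amount": 19.99}]

def standardize(record: dict) -> dict:
    """Normalize the fields both sources must agree on before loading:
    trim and uppercase the SKU, and coerce the amount to a rounded float."""
    return {
        "sku": record["sku"].strip().upper(),
        "amount": round(float(record["amount"]), 2),
    }

cleaned = [standardize(r) for r in online + stores]

# Deduplicate on the normalized key so both feeds yield a single row.
unique = list({(r["sku"], r["amount"]): r for r in cleaned}.values())
assert unique == [{"sku": "AB-100", "amount": 19.99}]
```

The point is that normalization happens before the load, so the warehouse never sees two conflicting versions of the same sale.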
Supporting Collaboration Across Teams

Multiple departments, such as sales, marketing, finance, and operations, often use data warehouses within a company. Data integration tools support this collaboration by ensuring that data from all sources is integrated smoothly into the warehouse, so everyone in the organization works from the same data set, leading to more consistent and aligned decision-making. For instance, if Mrs. Potter’s marketing team uses a data mart focused on customer data, Skyvia can integrate data from various marketing tools, CRM systems, and customer feedback forms, ensuring the marketing data mart is up-to-date and accurate. Meanwhile, the finance team can rely on the data warehouse, where Skyvia aggregates financial data from various systems to provide a complete financial overview.

Flexibility and Scalability

As Mrs. Potter’s business grows, so does the data she needs to manage. Data integration tools like Skyvia offer the flexibility to scale with the company, integrating new data sources or expanding the capacity of existing pipelines as needed. So, whether she’s opening new stores or launching new marketing campaigns, the data integration processes remain efficient and effective.

Conclusion

To summarize, a data warehouse is the go-to solution for businesses that need a broad, comprehensive view of their entire operation; it is capable of handling complex analytics and large datasets. On the other hand, a data mart is ideal for more focused, department-specific needs, offering quick access to targeted data without the complexity and cost of a full data warehouse. Whether you’re like Mrs. Potter, managing a growing business, or handling specialized departmental tasks, choosing the right approach will help you make data-driven decisions that propel your business forward.
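The core relationship described above, that a data mart is a focused subset of the warehouse, can be sketched with Python’s built-in sqlite3 module. The schema, table names, and sample rows are invented for illustration only.

```python
import sqlite3

# A toy warehouse with one wide sales table (illustrative schema).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE warehouse_sales (
        store TEXT, product TEXT, amount REAL, channel TEXT
    )
""")
con.executemany(
    "INSERT INTO warehouse_sales VALUES (?, ?, ?, ?)",
    [
        ("Downtown", "mug", 12.0, "retail"),
        ("Online",   "mug", 14.0, "web"),
        ("Downtown", "pot", 30.0, "retail"),
    ],
)

# A "marketing data mart": a focused view over the warehouse that
# exposes only the web channel the marketing team cares about.
con.execute("""
    CREATE VIEW marketing_mart AS
    SELECT product, amount FROM warehouse_sales WHERE channel = 'web'
""")

rows = con.execute("SELECT product, amount FROM marketing_mart").fetchall()
assert rows == [("mug", 14.0)]
```

In practice a data mart is usually materialized in its own storage rather than defined as a view, but the subset-of-the-warehouse idea is the same.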
Data Mesh vs Data Lake: Comprehensive Comparison

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), September 30, 2024
Data mesh and data lake are not just buzzwords but modern approaches to data management. Both emerged at the crossroads of unimaginable data spikes and the development of new technologies. Many businesses have already understood that data is a crucial asset, but its potential is revealed only when proper management techniques are applied. Both data mesh and data lake address large amounts of data, each in its own way. Discover the key similarities and differences between data mesh and data lake in this article. Decide which approach is a win-win for your business, or perhaps you’ll want to implement both.

Table of Contents

- What Is a Data Mesh?
- What Is a Data Lake?
- Comparing Data Mesh and Data Lake
- Choosing the Right Approach for Your Business
- Role of Skyvia in Dealing with Data Mesh and Data Lake
- Conclusion

What Is a Data Mesh?

Definition and Principles

The term ‘data mesh’ appeared only around five years ago to describe a decentralized approach to data management. It’s a new way to organize data within large organizations and enterprises. Data mesh relies on a distributed architecture where each domain or team, in simple words, generates its own datasets. Each domain defines its storage mechanisms, access control rules, and processing pipelines. To better understand the concept of data mesh, it’s necessary to describe the four pillars it’s based on:

- Data ownership is given to the teams that create the information.
- Data is perceived and treated as a product.
- A self-service data platform gives users the ability to access, share, and use information.
- Federated computational governance implies a cross-domain agreement defining data governance standards at the mesh level.
At the same time, each domain has autonomy and freedom of choice in selecting the methods to achieve those data governance standards. The main objective of data mesh is to shift from centralized data management to a decentralized model. It also aims to give access to data to the users who need it without the intervention of a central data team. For instance, if a Financial Director wants an overview of daily incoming and outgoing transactions, with a data mesh there’s no longer a need to go through a centralized data team and wait for the requested data. Instead, it’s possible to access the needed data independently and much more quickly.

Benefits of Data Mesh

The value of data has been constantly increasing across all industries; the important thing is to use and process it efficiently. Data mesh provides such a possibility and brings a bunch of other benefits to businesses:

- Data accessibility. Data mesh ensures that everyone who needs specific data gets access to it.
- Boosted workflow efficiency. Given that each employee has the requested data, it’s possible to complete tasks within the given time frames.
- Bottlenecks cut out. Employees no longer need to go to centralized data teams and wait their turn to get the requested data. Instead, they can access the needed data on their own.
- Storage diversity. Data mesh allows you to store data at any location suitable for your organization or data team, enabling seamless data scaling and mobility.

What Is a Data Lake?

Definition and Principles

A data lake is a repository for data storage that can be used to [aggregate all the organizational data](https://skyvia.com/blog/data-aggregation-tool/). It can handle large amounts of structured, unstructured, and semi-structured data in its original format, including video, audio, and documents.
Data lakes have gained popularity because their capabilities go beyond traditional databases and data warehouses. They usually serve as a base for machine learning, [data mining](https://skyvia.com/blog/data-mining-tools/), and advanced analytics. Let’s look at the characteristic features of the data lake that make it a popular choice for data management:

- It allows users to store different data types and convert them to other formats.
- It guarantees secure access to data at any time.
- It can be used as a base for data analytics to reveal insights and trends from data.

Benefits of Data Lake

A data lake is a complex structure that is useful for different kinds of organizations. Whether you’re a small business that operates large amounts of data or a large enterprise with terabytes of data generated daily, you might consider using a data lake. Here’s why it could be the right choice for your business:

- Scalability. Data lakes can absorb spikes in data generation thanks to their ability to handle large data volumes and scale quickly.
- Cost-effectiveness. Most cloud providers offer affordable rates for storing large amounts of information.
- Flexibility. Data lakes support a variety of formats, so data can be stored as it is.
- Centralization. Very often, data lakes serve as central repositories that can be accessed by all authorized users.

Comparing Data Mesh and Data Lake

Our mission is to help you decide between a data mesh and a data lake, so a head-to-head comparison of the two approaches is inevitable. While data mesh is more of an organizational and philosophical approach, a data lake is an architectural paradigm. The first stage of the comparison is to explore the challenges of each.

Data Mesh Challenges

Despite its revolutionary nature and promising future for organizational data management, data mesh has several significant drawbacks: Difficulty in implementation.
As this concept is relatively new in the IT industry, it’s not so easy to find a specialist who knows how to implement it right. Moreover, testing and adopting data mesh principles across all organizational departments might take plenty of time. Data duplication. Since each team is responsible for its own dataset, there might be similar datasets across different departments. This creates difficulties for data analytics teams in deriving meaningful insights and accurate organization-wide reports.

Data Lake Challenges

Even though data lakes are suitable for many use cases across organizations, they can cause some problems. Here are some typical ones: Difficulties in data analysis. Since data lakes allow users to store a myriad of data types, it might be difficult to develop the standardized schema required for analysis. Data quality deterioration. Poor control of the loaded data can lead to duplication and other issues, which decrease overall data quality. Poor data management. Even though data lakes are great storage options, they become more useful when combined with data management techniques. Connectivity. A data lake should integrate with other organizational tools and data platforms. For that, organizations need to implement [reliable ETL solutions](https://skyvia.com/blog/top-no-code-etl-tools/) that can connect to the needed data sources. Bottlenecks. With centralized data management, teams might experience long waiting times to get the needed data from the IT team.

Data Mesh vs Data Lake Comparison

It’s safe to say that data mesh and data lake have something in common; however, these concepts are still somewhat different. So, let’s compare the two approaches to data management.

| Criterion | Data Mesh | Data Lake |
| --- | --- | --- |
| Data ownership | Grants decentralized data ownership, where each team is responsible for creating and managing its dataset. | Offers centralized data ownership, where all authorized users can access the storage repository. |
| Architecture | Relies on distributed data systems and can use heterogeneous data sources. | Depends upon the architecture of the cloud provider. |
| Data vision | Data is seen as a product. | Data is seen as an asset. |
| Data governance | Federated computational governance. | Top-down governance. |
| Scalability | Highly scalable. | Highly scalable. |
| Data analytics | Complicates data analytics. | Can cause difficulties in data analysis. |
| Data integration | Each team is responsible for integrating data from its sources of interest. | Data integration is performed by the IT team. |

Choosing the Right Approach for Your Business

After comparing data mesh and data lake in detail, you might already have an idea of which one you prefer. However, these two concepts can also be used together; for instance, a data mesh can use a data lake as a data source in one of its domains. To help you understand which concept aligns with your business needs, here are the key implementation steps for each approach.

Data Mesh Implementation

Step 1: Define domains. Each data domain represents a specific set of business functions and processes within an organization. For instance, one domain may contain all sales-related information, another all the information about product inventory, and so on. Each domain should also have an owner responsible for data management and maintenance.

Step 2: Define infrastructure for each domain. Set up the systems and tools that make up the domain infrastructure. This is necessary for proper data collection, storage, and processing. Each domain owner needs access to these resources to manage data effectively without relying on data engineers.

Step 3: Establish communication and collaboration channels. It’s crucial to enable communication between domain owners, so it makes sense to conduct regular meetings.

Step 4: Monitor performance.
It’s necessary to monitor data mesh domains to understand whether they perform as expected.

Data Lake Implementation

Step 1: Set up the infrastructure. Select the cloud service provider for your data lake. Amazon, Azure, and Snowflake are popular choices for building a data lake.

Step 2: Install supplementary tools. Consider adding tools for big data processing or integration to your data lake infrastructure.

Step 3: Create metadata schemas. This is necessary for describing and understanding the data stored within a data lake. Such procedures help categorize data to enable efficient querying and analysis.

Step 4: Populate your data lake. Ingest data from various sources, either with batch data integration or real-time data streaming.

Step 5: Monitor data ingestion. See how your data pipelines perform their job to identify malfunctions and detect defects promptly.

Step 6: Set up role-based access control. Make sure that only authorized users have access to the data lake. Otherwise, the lack of security and control may lead to data integrity violations or breaches.

Role of Skyvia in Dealing with Data Mesh and Data Lake

Given that both data lakes and data mesh domains usually deal with big data, there’s a need for a solution to [integrate](https://skyvia.com/blog/data-integration-trends/) and [replicate data](https://skyvia.com/blog/top-data-replication-tools/) into your corporate data systems. That could be an ETL tool or data integration platform, such as Skyvia.

Skyvia Overview

[Skyvia](https://skyvia.com/) is a universal cloud platform that is suitable for a wide range of data-related tasks:

- data integration
- SaaS backup
- data query
- workflow automation
- OData and SQL endpoint creation

Skyvia is a no-code solution where you can build [ETL, ELT, and Reverse ETL pipelines](https://skyvia.com/learn/what-is-data-pipeline) to populate your data mesh components.
Skyvia also allows you to easily implement your [data integration strategy](https://skyvia.com/learn/data-integration-strategy) to facilitate data management, both on centralized and decentralized levels. It provides a range of other tangible benefits to users:

- Friendly user interface with drag-and-drop functionality.
- Web access to the platform via browser.
- A wide range of data integration scenarios.
- Powerful scheduling capabilities with up to 1-minute intervals.
- Availability for any type and size of business.

Integration Capabilities

Skyvia supports [190+ connectors](https://skyvia.com/connectors), including cloud apps, databases, storage services, and data warehouses. Whether you choose the data mesh or data lake approach, Skyvia can send data there using the available integration scenarios. [Data Integration](https://skyvia.com/data-integration) contains tools for implementing both simple and complex integration scenarios:

- [Import](https://skyvia.com/data-integration/import) implements ETL and Reverse ETL scenarios between two data sources in a visual interface with zero code. You can apply transformations to the source data copy to match the destination data structure.
- [Export](https://skyvia.com/data-integration/export) extracts data from cloud applications into CSV files and saves them on a computer or in online storage.
- [Synchronization](https://skyvia.com/data-integration/synchronization) syncs data bidirectionally between two different apps.
- [Replication](https://skyvia.com/data-integration/replication) copies raw data from the source, sends it to the destination, and keeps it up to date. This can be an excellent option for populating your data lakes.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) is a compound data integration scenario where you can build data pipelines with drag-and-drop functionality.
It involves multiple data sources, more complex logic, and compound data transformations.

[Control Flow](https://docs.skyvia.com/data-integration/control-flow/index.html) is suitable for organizing data integration tasks in a specific order. It allows you to perform preliminary and post-integration actions and even set up automatic error-processing logic for your integration.

Additional Benefits of Using Skyvia

One of the drawbacks of a data lake is the diversity of data formats, which makes it challenging to prepare data for analysis. Skyvia can help you overcome this by transforming structured data and converting it to a unified format. It can also work with the metadata of unstructured data.

The notable drawback of data mesh is possible data duplication across domains. While this is not a problem on the domain level, it creates obstacles for organization-wide analytics. Skyvia can help overcome this challenge by checking for duplicates during integration and gathering only unique data for further analysis.

Conclusion

Data mesh could be the right choice if you want to grant autonomy to different business departments. In this case, data management is decentralized, and each domain is responsible for creating and operating its data. A data lake suits the centralized approach to data management, which fits businesses dealing with sensitive data that needs to be governed and provisioned at the organization-wide level.

Whichever approach seems more attractive to you, you'll need to populate your systems with data. For that, use the Skyvia platform to integrate data into your data lake or move data to your data mesh domains. Skyvia offers a free plan to start with, where you can try out all of its functionality.
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Top 10 Data Migration Tools for 2025: Popular Options Compared

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), April 17, 2025
Moving information from one system to another might seem like a straightforward IT task, but it's actually one of the trickiest and most important jobs for any business today. Whether you're migrating to the cloud, changing your CRM, or cleaning up old legacy systems, choosing the right solution can save countless hours and prevent costly mistakes.

With so many data migration tools available (cloud-based, on-premise, and open-source), it's tough to figure out which one genuinely fits your needs in 2025. That's why we put together this guide: it walks you through 10 of the most popular systems, highlighting what they do best, where they fall short, and what types of projects they're made for.

Table of contents

- What Is Data Migration?
- What are Database Migration Tools?
- Different Types of Database Migration Tools
- 4 Best Cloud-based Migration Tools: Skyvia, Fivetran, Xplenty, AWS Data Pipeline
- Top 3 On-premise Migration Tools: Informatica PowerCenter, Talend Data Fabric, Oracle Data Integrator (ODI)
- Top 3 Open Source Migration Tools: Apache NiFi, Airbyte, Apache Airflow
- How to Select the Right Data Migration Software?
- Data Migration Best Practices
- Conclusion

What Is Data Migration?

Data migration is the process of moving information from one platform, format, or storage location to another. It's common during:

- Cloud migration.
- Software upgrades.
- System replacement.
- Consolidation of multiple databases into a single source of truth.

It sounds simple, but it involves reformatting, validating, cleaning, and securing data, especially when moving between platforms that don't speak the same "language." Done right, migration helps streamline operations, reduce redundancy, and improve data accuracy. Done wrong, it leads to broken systems, lost data, and a lot of trouble.
That\u2019s why having the right tools and a well-thought-out plan matters more than ever. What are Database Migration Tools? There are software solutions that move insights between different databases or platforms quickly , safely , and with minimal disruption . Whether upgrading to a newer system, switching to a different database engine, or moving from on-prem to the cloud, they take the heavy lifting off the plate. These systems handle many tasks, from schema conversion and data transfer to validation and synchronization. Many also provide users with automation , scheduling , and monitoring features to avoid information loss, downtime, or compatibility issues. Depending on the project, you might need something lightweight, simple, enterprise-grade, and deeply customizable. That\u2019s where understanding the different types of tools matters. Different Types of Database Migration Tools Not all migration software is built in the same way. Depending on each business\u2019s setup and goal, companies choose a solution that matches their infrastructure and resources. Cloud-based systems are ideal for teams that want a quick setup , automated workflows , and scalability without worrying about hardware or updates. On-premise software is perfect for businesses with strict security requirements , complex legacy systems , or limited internet access . Open-source ones are often free to use but may require more technical expertise to set up and maintain. Let\u2019s walk through the top tools in each category to help you find the best fit. 4 Best Cloud-based Migration Tools Such migration software is a go-to choice for modern teams that require: Flexibility. Speed. Minimal setup. Since these tools run in the cloud, users don\u2019t have to worry about installation, updates, or infrastructure; they just connect data sources and let the tool handle the rest. They\u2019re handy for SaaS-heavy environments, distributed teams, and businesses scaling quickly. 
Most offer automation, scheduling, and real-time syncing, making them great for ongoing data workflows, not just one-time transfers. Next, we'll look at four of the most popular cloud-based migration tools in 2025: Skyvia, Fivetran, Xplenty, and AWS Data Pipeline.

Skyvia

[Skyvia](https://skyvia.com/data-integration) is a no-code cloud data integration platform that supports:

- [ETL](https://skyvia.com/learn/what-is-etl).
- [ELT](https://skyvia.com/learn/what-is-elt).
- [Reverse ETL](https://skyvia.com/learn/reverse-etl-tools).
- Replication.
- Synchronization.
- Data Flow and Control Flow.

It provides over [200](https://docs.skyvia.com/connectors/) connectors, including MySQL, Salesforce, HubSpot, Google Sheets, and many more. Skyvia's clean interface and drag-and-drop logic make it easy for non-technical users while offering advanced features for more complex pipelines. It's a good fit for businesses that want fast results without relying on developers. Plus, its cloud-based infrastructure means users can manage everything from a browser.

The tool offers a flexible range of [pricing](https://skyvia.com/pricing) plans, starting with a free tier for light use, followed by paid plans beginning at $79/month. Higher-tier options are available for businesses needing more rows, connectors, or advanced scheduling.

Fivetran

Fivetran is a fully managed ELT tool built for speed and reliability. It connects with 500+ sources and automatically syncs data into the destination warehouse or database. Once set up, it handles schema changes and pipeline maintenance with little human input. Fivetran is a good choice for data teams that want hands-off syncing and high-volume reliability, but it comes at a premium price, especially at scale.

Fivetran's pricing is based on monthly active rows (MAR), meaning the number of new or updated records. Plans start around $300/month, but the cost scales quickly with usage. A free plan is available for light use (up to 500K MAR/month).
Xplenty

Xplenty, now part of the Integrate.io platform, is a user-friendly ETL tool that helps businesses move data between cloud sources, databases, and SaaS apps. It balances simplicity and customization, with a visual interface, low-code options, and built-in monitoring. It's a solid choice for teams that want powerful workflows without the infrastructure headache.

The platform offers custom pricing based on data volume and connector usage, with most packages starting around $1,200/month. A free trial is available.

AWS Data Pipeline

AWS Data Pipeline is Amazon's native tool for moving and transforming data within the AWS ecosystem. It integrates well with services like S3, Redshift, RDS, and EMR, and supports complex job scheduling. While it's extremely powerful, it's better suited for teams already familiar with AWS architecture and comfortable managing infrastructure-as-code.

AWS Data Pipeline charges based on pipeline activity. Pricing starts at $1/month per pipeline, but costs vary depending on frequency, region, and AWS resource usage.

Pros of Cloud-based Migration Tools

- Quick to deploy. No installation or local setup is required.
- Scalable and flexible. Easily handles growing data needs across multiple cloud platforms.
- Automation-friendly. Most offer built-in scheduling, monitoring, and error handling.
- Accessible from anywhere. Great for distributed teams and remote operations.
- User-friendly interfaces. Many support no-code/low-code workflows for business users.
- Reduced infrastructure maintenance. The vendor manages uptime, updates, and scalability.

Cons of Cloud-based Migration Tools

- Can get expensive at scale. Especially for tools that charge by data volume or usage.
- Limited control. Less customization compared to self-hosted or code-first solutions.
- Internet-dependent. Large transfers rely on strong, stable connectivity.
- Vendor lock-in risk. Migrating away from a cloud-based solution later can be tricky.
- Less suited for on-premise-only environments. Not ideal for companies with strict data residency or security regulations.

Top 3 On-premise Migration Tools

While cloud-based tools are popular for their flexibility, many businesses, especially large enterprises or those in regulated industries, still prefer on-premise migration tools. These platforms are installed and managed locally, offering more control over:

- Data security.
- Performance.
- Compliance.

They're particularly useful when dealing with sensitive legacy systems or strict internal policies that prevent data from moving through the cloud. Though they require more setup and IT involvement, their robustness and customizability make them a solid choice for complex or high-stakes migrations. Three of the most widely used on-premise data migration tools in 2025 are Informatica PowerCenter, Talend Data Fabric, and Oracle Data Integrator.

Informatica PowerCenter

Informatica PowerCenter is a powerful, enterprise-grade data integration platform known for its stability, scalability, and broad connector support. It handles large, mission-critical migration projects that require high availability and strong governance. PowerCenter is packed with tools for:

- Transformation.
- Profiling.
- Workflow design.
- Performance monitoring.

Pricing is custom, based on enterprise requirements and deployment size; it generally starts in the five-figure range annually.

Talend Data Fabric

Talend Data Fabric provides multiple solutions as part of its data integration services. It connects to most data sources and manages data clusters both on-site and in the cloud. The system offers features like data cleaning, data governance, and data integration, and supports over 1,000 connectors for processing data.

Pricing starts at $1,170/month, depending on the edition and deployment type. An enterprise plan is available for large-scale deployments.
Oracle Data Integrator (ODI)

Oracle Data Integrator is Oracle's data integration platform, covering a broad range of integration tasks. It connects with multiple sources to perform:

- Data synchronization.
- Transformation.
- Quality management.
- Data loading across heterogeneous systems.

Being an Oracle product, it also integrates easily with other Oracle products, such as Oracle GoldenGate for real-time data replication and Oracle Database services. Oracle provides documentation and an extensive knowledge base for users getting started with the platform. However, setup and customization often require skilled developers or database administrators.

There is no free version of ODI. Oracle Data Integrator Enterprise Edition is typically priced at around $23,000–30,000 per processor license, plus additional support and maintenance fees. Precise pricing depends on your Oracle licensing agreement.

Pros of On-premise Migration Tools

- Full data control. Everything stays behind your firewall, which is ideal for highly sensitive environments.
- Better compliance. Suits industries with strict regulatory or data residency requirements.
- Strong performance. Optimized for high-throughput, enterprise-grade workloads.
- Legacy system compatibility. Designed to work with older infrastructure or customized internal systems.
- Robust monitoring and auditing. Advanced tracking, logging, and error-handling features built in.

Cons of On-premise Migration Tools

- Expensive to implement. High upfront licensing, hardware, and maintenance costs.
- Slower to deploy. Installation and setup can be complex and time-consuming.
- Requires IT resources. Ongoing support, upgrades, and system maintenance depend on in-house teams.
- Less flexible. Scaling up or adapting to new tools often involves more manual work.
- Not cloud-native. May lack seamless integration with cloud apps or hybrid environments without customization.
Top 3 Open Source Migration Tools

Open-source migration software is driven by a community of developers who build and improve the tools together, and it appeals to teams that prefer:

- Complete control.
- Transparency.
- The ability to customize workflows without vendor lock-in.

The source code of these tools is typically available in a public repository (for example, on GitHub), and some commercial solutions build their products on top of it. Open-source tools let you migrate data between different systems at no cost, and users can modify or contribute to the code. They suit tech-savvy teams that can read the source and implement changes when required. Let's look at three popular open-source database and data migration tools: Apache NiFi, Airbyte, and Apache Airflow.

Apache NiFi

This robust data flow management tool supports real-time and batch data movement between systems. It features a drag-and-drop interface, data provenance tracking, and fine-grained control over flow behavior. The user interface represents each flow as a directed graph that is easy to understand, modify, and monitor, and flows can be changed at runtime, which makes the tool highly configurable. NiFi is a good fit for teams that want to automate complex data workflows.

It's free and open-source. Paid support is available through Cloudera.

Airbyte

This fast-growing open-source ELT tool makes data integration accessible and community-driven. It offers over 300 connectors and allows users to build and deploy new ones easily. Airbyte is highly modular, supports cloud or local deployment, and integrates well with modern data stacks.

The tool comes in an open-source version and a commercial cloud version (Airbyte Cloud) that starts at around $2.50/credit, depending on usage.

Apache Airflow

This open-source orchestration tool is widely used for managing complex data pipelines.
It\u2019s not a data migration tool per se, but it\u2019s often used to schedule and monitor ETL/ELT jobs. Developers love its Python-based Directed Acyclic Graphs (DAG) structure, flexibility, and integrations with cloud and on-prem tools. It can also connect with the popular data sources that allow the data to be migrated between different databases. The solution is free and open-source. Managed services are available via providers like Astronomer and Google Cloud Composer. Pros of Open Source Migration Tools Free to use. No license fees, making them ideal for startups and budget-conscious teams. Highly customizable. Developers can tweak the code, create custom connectors, and build tailored workflows. Community-driven. Backed by active open-source communities offering plugins, updates, and shared solutions. Flexible deployment. Can be hosted on-premise, in the cloud, or in hybrid environments. Ideal for tech-savvy teams. Complete control over pipeline behavior, scheduling, and orchestration. Cons of Open Source Migration Tools Requires technical skills. Not beginner-friendly; setup and maintenance often demand developer involvement. Limited user interface. Most tools prioritize flexibility over visual, drag-and-drop features. There is no official support. Users are on their own for debugging or scaling unless using a managed service. Integration gaps. May lack pre-built connectors or require additional tools to cover full ETL needs. More time-consuming to configure. Expect longer implementation cycles compared to commercial platforms. How to Select the Right Data Migration Software? Now, each business needs a smooth, efficient, and secure data transfer. However, choosing the appropriate software might be a challenge. With so many options available, weighing users\u2019 specific needs, business size, and long-term goals is essential. 
Organizations have to look at factors like performance, integration capabilities, and security to make sure the tool fits their technical requirements and budget. The right migration software should meet your current needs and scale as the business grows. To help you navigate the options, let's break down some key aspects to consider.

Performance at Scale

When migrating large amounts of data, performance becomes critical. Whether you're moving petabytes of data or syncing real-time updates across systems, the tool you choose must handle high-volume transfers without issues. Look for software that offers [batch processing](https://skyvia.com/blog/batch-etl-processing/), parallel data transfer, and load balancing to optimize performance. It's also essential to consider how the tool will perform as data volumes grow with the business. Tools with cloud-native architecture tend to scale better and provide elastic performance adjustments as needed.

Integration Flexibility

How well does the software connect to your existing systems and other third-party solutions? The more cloud apps, on-prem databases, or CRM platforms your migration tool can integrate with, the better. Tools supporting multiple connectors, APIs, or custom integrations give users more options and flexibility. If data is spread across multiple systems (e.g., Salesforce, MySQL, OneDrive), you need a solution that can easily pull data from various sources and push it to your target system.

Ease of Use

Data migration can get complicated, but it doesn't have to be. Usability is key, especially for non-technical users. Look for software with drag-and-drop interfaces, no-code or low-code options, and pre-built templates that make setup and configuration easy. The best data migration software will streamline the process, not add to the complexity.
Whether it's a small team or an enterprise-scale migration, ease of use can significantly reduce deployment time and help ensure adoption across the team.

Security and Governance

When working with sensitive information, security and governance are must-haves. Ensure that the migration process doesn't expose data to risks. Opt for tools that offer encryption, secure connections, and compliance with regulations like GDPR, HIPAA, and CCPA. The migration platform should also have strong audit logs, access controls, and data masking features to control who can view and edit the information. Always choose a tool that prioritizes security to avoid costly data breaches or compliance violations.

Data Migration Best Practices

To ensure a successful data migration, follow these best practices throughout the process:

- Plan Thoroughly. Start with a detailed migration plan. Identify potential constraints and lay out a clear diagram of the data sources and destinations.
- Address Data Issues Early. Before migrating, examine the data for quality issues such as duplicates, missing values, or outdated records. Cleanse or transform the data as needed to avoid carrying errors over to the new system.
- Test with a Replica. Always test the migration on a replica of the target system before full implementation to identify potential issues and fine-tune the approach without risking the actual data.
- Backup Data. Always create a complete backup before migrating. A backup ensures you can recover quickly if something goes wrong.
- Verify Storage Capacity. Ensure the target system has sufficient storage space for all the migrated data. Double-check that the storage size aligns with the volume of data you're moving.
- Break Down Data into Chunks.
If there are network constraints, consider breaking the data into smaller chunks to avoid performance bottlenecks and facilitate a smoother migration.
- Post-Migration Review. After the migration is complete, reassess the data to ensure nothing was lost and everything was transferred accurately. Run integrity checks to confirm all records were moved as expected.

Conclusion

The right data migration tool ensures a smooth, efficient transition from one system to another. Whether you're working with cloud-based, on-premise, or open-source platforms, each type offers unique advantages depending on your business needs and technical setup. Understanding requirements like performance, integration flexibility, and security helps you select the best tool for the job. Following best practices, such as thorough planning, data quality checks, and testing, will further ensure the success of your migration project.

F.A.Q. for Top Migration Tools

What are cloud data migration tools, and why are they important?

Cloud data migration tools help businesses transfer data from on-premise systems or other cloud platforms to cloud-based environments. These tools simplify and automate the migration process, ensuring data integrity, security, and minimal downtime.

How do I choose the right data migration software for my business?

When choosing data migration software, consider factors like ease of use, scalability, performance at scale, and compatibility with your existing systems. Cloud-based tools may be ideal for scalability and ease of deployment, while on-premise solutions might be better suited for highly secure or legacy environments.

What is the role of data transfer tools in data migration?

Data transfer tools facilitate the actual movement of data between systems, whether between databases, cloud storage, or other applications. They ensure data is transferred efficiently, accurately, and securely, often with minimal manual intervention.
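The "break down data into chunks" best practice above can be sketched as a simple generator. This is an illustrative sketch only; the `chunked` helper and batch size are hypothetical, not part of any particular migration tool:

```python
# Yield fixed-size batches so a large transfer can be sent piecewise,
# keeping memory use bounded and allowing each batch to be retried.
def chunked(rows, size):
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:          # emit the final partial batch
        yield batch

batches = list(chunked(range(10), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch can then be uploaded and retried independently, which is what makes chunking resilient to the network constraints mentioned above.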
Can database migration tools handle large datasets effectively?

Yes, modern database migration tools handle large datasets effectively. They often include features such as parallel processing, incremental data migration, and automated error handling to ensure the data is moved without disruption, even for large or complex databases.

How do cloud data migration tools compare to traditional database migration tools?

Cloud data migration tools are typically more flexible and scalable. Unlike traditional database migration tools, which are often designed for on-premise or legacy systems, cloud tools can handle distributed environments and integrate seamlessly with modern cloud applications.

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Data Mining: Definition, Benefits, and Best of Data Mining Tools

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), May 26, 2024

Most people think the Gold Rush is over, but it has only changed its appearance, transforming into the Information Rush. Like gold miners panning for nuggets, modern data specialists use data mining techniques to find hidden gems in bulks of data. Small businesses usually stand aside from data mining, convinced that this technique is only for 'big fish.' Contrary to this belief, data mining suits any company with a CRM, an e-commerce website, or social media platforms. This article provides a comprehensive overview of data mining and explains its correlation with data integration and business intelligence.
You\u2019ll also find a list of open-source and commercial data mining tools suitable for any business. Table of Contents What is Data Mining? Importance and Benefits of Data Mining for Various Industries The Role of Data Integration in Data Mining Best Tools for Data Mining The Role of Data Mining in BI Challenges of Data Mining Conclusion What is Data Mining? The concept of data mining emerged in the early 1930s, long before the first digital data spikes, but the term was stabilized only in the 1990s. Due to the growing amount of data, the relevance and economic impact of data mining has only been strengthening since then. So, let\u2019s have a look at the official definition: \u201cData mining is the process of discovering patterns from large datasets using specific methods relying on statistics, artificial intelligence, machine learning, and [DBMS](https://www.mongodb.com/resources/basics/database-management-system) technology. Data mining in the commercial context helps organizations to transform raw data into useful information.\u201d Data Mining Process Steps Getting tangible results from data mining is like climbing the stairs toward the hilltop. You\u2019ll need to put some effort to get meaningful information from datasets. The data mining process encompasses the following stages: Data collection . At this point, it\u2019s necessary to gather all the data of interest and organize it properly. [Data warehouses](https://skyvia.com/blog/best-data-warehouse-tools/) and data lakes are popular for consolidating data. Data observation. At this stage, decide which data should be processed further by removing unnecessary columns and applying filters. Data preparation. The quality of data is deterministic for the outcomes of data mining. So, it\u2019s necessary to transform data where appropriate, detect any outliers, elaborate on records with missing values, and remove duplicates. Choosing a model. 
Select a data mining algorithm suited to the existing problem and objective.
- Evaluation. Assess the performance and effectiveness of the chosen model using a validation or cross-validation set.

Key Techniques in Data Mining

When climbing a hill, you need some means of getting there: hiking shoes, a car, a bike, or even a helicopter. The choice depends on the mountain’s height, the availability of stairs, and so on. Data mining likewise offers a range of techniques for reaching the hilltop. They can be grouped into supervised and unsupervised learning algorithms; the most popular and widely used are listed below.

Supervised Learning

- Decision trees: This algorithm builds a tree-shaped predictive model. It categorizes data by certain attributes and helps draw conclusions from observations.
- Regression: A group of algorithms used to predict numeric values based on the relationship between input and target variables. It is useful for estimating profit, sales, mortgage rates, etc.
- Perceptron: A linear binary classifier that decides whether an input belongs to a specific class.
- Naïve Bayes: Based on Bayes’ theorem from probability theory and statistics, Naïve Bayes is particularly effective for text classification and spam filtering.
- Neural networks: Models inspired by the structure of the human nervous system, widely used in the banking sector to detect fraud in time and in the healthcare industry for disease diagnosis.

Unsupervised Learning

- Clustering methods don’t rely on predefined classes but group data instances together based on their similarities. The most popular clustering algorithms are K-Means, agglomerative clustering, DBSCAN, and DENCLUE.
- Association rules discover data relationships and insightful patterns, notably in the e-commerce industry.
For example, given a set of commercial transactions, association rules can predict the occurrence of an item A based on the occurrences of other items (B, C, D, etc.) in the same transaction or purchase.

| Transaction ID | Items |
| --- | --- |
| 1 | Bread, Milk |
| 2 | Beer, Bread, Diaper, Eggs |
| 3 | Beer, Coke, Diaper, Milk |
| 4 | Beer, Bread, Diaper, Milk |
| 5 | Bread, Coke, Diaper, Milk |

Rule examples:

- {Diaper, Milk} -> {Beer}
- {Beer, Milk} -> {Diaper}
- {Beer, Diaper} -> {Milk}
- {Beer} -> {Diaper, Milk}
- {Diaper} -> {Beer, Milk}
- {Milk} -> {Beer, Diaper}

Importance and Benefits of Data Mining for Various Industries

Many small businesses are convinced that data mining practices are available only to enterprises. In reality, organizations of any size and operational sphere can benefit from data mining to discover patterns in their existing datasets. Let’s explore the sectors where data mining has become the best friend of decision-makers, with real-world examples showing how both small and big companies use it.

Retail

By analyzing thousands of orders, it’s possible to discover customers’ preferences and purchasing habits. What’s more, data mining algorithms allow for market basket analysis to find relationships among products often purchased together, which helps show the right advertisements on an e-commerce platform. Amazon is the most famous company in the retail industry; it has [tons of data processed with data mining tools](https://www.ijser.org/researchpaper/Data-Mining-by-Amazon.pdf) to craft promotional strategies and enhance customer experience.

Finance

Data mining in the financial sector helps detect fraudulent transactions and predict mortgage rates. Banks have also come to rely on data mining to determine the risk profile of a person applying for a loan.

Healthcare

Data mining algorithms take raw medical data and records and derive patterns from them.
This positively impacts diagnosis accuracy, treatment efficiency, and clinical decision-making. One example of data mining’s advantages for small businesses in the healthcare industry is the [Z5 Inventory case](https://www.knime.com/success-story/how-z5-inventory-prevents-medical-supply-waste-using-knime). This software development company has helped dozens of healthcare institutions in the US improve their physical inventory management. Owing to data mining algorithms combined with Z5 Inventory’s solution, it became possible to optimize supply chains, reduce healthcare waste, and minimize costs.

Media

Businesses in the media industry often rely on clustering methods to group similar users and identify trends based on location and time. This information helps them better understand their audiences, explore current market trends, and monitor competitors. A great example of data mining outcomes for medium-sized businesses is the [Allente case](https://www.knime.com/success-story/how-allente-built-recommendation-engine-deliver-personalized-content). This Scandinavian television provider has managed to build a content recommendation engine for users and predict the likelihood of customer churn.

Energy

[General Electric Vernova](https://www.ge.com/digital/blog/case-industrial-big-data) collects machine-generated data from sensors on gas turbines and jet engines. Processing these large datasets with data mining tools enables the company to improve working processes and strengthen reliability.

Logistics

Classification algorithms applied to logistics datasets can reveal the best supply chain partners. They also help discover and compare possible routes between points A and B to improve transportation efficiency. Meanwhile, neural networks are effective for demand forecasting and inventory optimization.
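To make the association-rule technique above concrete, here is a minimal sketch in plain Python that computes the support and confidence of the rule {Diaper, Milk} -> {Beer} over the sample transactions from the market-basket table shown earlier. It is purely illustrative; a real miner such as Apriori would also prune the candidate itemset space:

```python
# Sample transactions from the market-basket table above.
transactions = [
    {"Bread", "Milk"},
    {"Beer", "Bread", "Diaper", "Eggs"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Bread", "Coke", "Diaper", "Milk"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the rule antecedent -> consequent holds when it applies."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

# Evaluate the rule {Diaper, Milk} -> {Beer}:
print(support({"Diaper", "Milk", "Beer"}))                 # 0.4
print(round(confidence({"Diaper", "Milk"}, {"Beer"}), 2))  # 0.67
```

A rule is usually kept only if both metrics clear user-chosen thresholds; here, Diaper and Milk co-occur in three of five transactions, and Beer appears in two of those three.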
The Role of Data Integration in Data Mining

[Data integration](https://skyvia.com/learn/what-is-data-integration) is “the process of combining and harmonizing data from multiple sources into a unified, coherent format that can be put to use for various analytical, operational, and decision-making purposes” (IBM). The first step is to collect data produced by:

- Humans: User-generated data from social media platforms (text, photos, videos), emails, documents, clickstream data, etc. It’s usually unstructured or semi-structured and needs preprocessing.
- Organizations: Commercial transactions, banking records, e-commerce records, medical records, etc. make up organizational data.
- Machines: Data coming from sensors (traffic, weather, scientific, etc.) and computer systems (logs). It’s well-structured and thus suitable for computer processing.

Raw data isn’t always suitable for analysis in its original form, so it needs to be preprocessed beforehand. [Data integration tools](https://skyvia.com/blog/data-integration-tools/), such as Skyvia, provide transformation functions that allow users to prepare data properly. The final step of data integration is to move data to a data warehouse (commonly used by data mining specialists) or another destination of interest; a data warehouse is also a place for consolidating data from various sources.

[Skyvia](https://skyvia.com/) is a universal SaaS platform capable of handling various data-related tasks. It offers a [range of solutions](https://skyvia.com/solutions/) for data integration, backup, automation, querying, and connectivity. The [Data Integration](https://skyvia.com/data-integration/) product was designed to transfer data between different cloud apps, databases, and data warehouses. It also provides multiple data transformation, cleansing, and mapping capabilities.
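As a rough illustration of the preprocessing step described above, the following plain-Python sketch drops duplicates, records with missing values, and crude outliers from a batch of raw records. The record fields and the outlier threshold are invented for the example; real pipelines would delegate this to a dedicated tool:

```python
# Illustrative raw batch; field names and values are made up for the example.
raw_records = [
    {"id": 1, "customer": "Acme", "order_total": 120.0},
    {"id": 2, "customer": "Beta", "order_total": None},      # missing value
    {"id": 1, "customer": "Acme", "order_total": 120.0},     # duplicate id
    {"id": 3, "customer": "Gamma", "order_total": 9_999.0},  # crude outlier
    {"id": 4, "customer": "Delta", "order_total": 80.0},
]

def prepare(records, outlier_threshold=1_000.0):
    """Drop duplicate ids, incomplete records, and values above a threshold."""
    seen, cleaned = set(), []
    for rec in records:
        if rec["id"] in seen:                        # duplicate: skip
            continue
        if rec["order_total"] is None:               # incomplete: skip
            continue
        if rec["order_total"] > outlier_threshold:   # outlier: skip
            continue
        seen.add(rec["id"])
        cleaned.append(rec)
    return cleaned

print([r["id"] for r in prepare(raw_records)])  # [1, 4]
```

Only two of the five raw records survive, which is exactly the point: feeding mining algorithms a smaller, cleaner dataset beats feeding them a noisy one.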
To supply data mining algorithms with proper datasets, Skyvia offers the following solutions:

- [Import](https://skyvia.com/data-integration/import) is a wizard-based tool for no-code integration of two data sources. It builds refined [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning) that ingest data from the source and send it to the selected destination, applying filtering, transformation, and mapping along the way. The Import tool is suitable for getting data ready before mining.
- The [Replication](https://skyvia.com/data-integration/replication) tool allows users to create [ELT pipelines](https://skyvia.com/learn/what-is-elt) that ingest data from the selected source and move it to the destination in its original form. This practice is common nowadays, as it ensures faster data loading: transformation is skipped during integration and applied later on the destination side. [See ETL and ELT differences](https://skyvia.com/blog/elt-vs-etl/).
- The [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) tool is a visual builder of ETL pipelines with two or more sources. It offers complex data transformations for data preparation and many advanced features.

Data Integration Use in Data Mining

Now, let’s examine how data integration with Skyvia assists data mining.

- Unified data view. [Data integration tools](https://skyvia.com/blog/data-integration-tools/) gather data from cloud applications, on-premises services, databases, and DWHs and move it to a centralized location. Consolidated data gives data scientists, data engineers, and business analysts a unified view.
- Improved data quality. [ETL pipeline designer tools](https://skyvia.com/blog/10-best-data-pipeline-tools/) provide filtering, transformation, and cleansing functions that enhance the overall quality of data. As a result, data mining algorithms applied to refined datasets yield correct results.
- Automation.
Modern data integration services and platforms allow users to load and update data on schedule. This helps automate processes, speed up data movement, and eliminate manual work.

Best Tools for Data Mining

Now it’s time to switch to the bread and butter of the data mining process: data mining tools. Below is a concise overview of the best software, with key features, advantages, and drawbacks.

| Tool | Features | Advantages | Drawbacks |
| --- | --- | --- | --- |
| [RapidMiner](https://github.com/rapidminer) | Data preparation; deep learning; text mining; predictive analytics | Extensive community support; highly customizable | Steeper learning curve for beginners |
| [Weka](https://www.weka.io/) | Data preparation; data visualization; clustering; classification; regression | User-friendly UI; supports small and medium datasets | Limited scalability for large datasets |
| [Orange Data Mining](https://orangedatamining.com/) | Data visualization; component-based data mining | — | Limited number of advanced features |
| [Scikit-learn](https://scikit-learn.org/stable/) | Classification; regression; clustering; statistical modeling; data preprocessing | Built on Python; excellent documentation | Requires programming knowledge |
| [IBM SPSS Modeler](https://www.ibm.com/it-it/products/spss-modeler) | Advanced statistical modeling; text analytics | No coding; excellent support | Expensive; steep learning curve |
| [SAS Data Mining](https://www.sas.com/en_us/software/enterprise-miner.html) | Data preprocessing; machine learning; text mining | Comprehensive solutions; excellent support | Expensive; difficult to use |
| [MATLAB](https://www.mathworks.com/products/matlab.html) | Mathematical modeling; simulation; algorithm development | Extensive toolbox; strong community support | Requires programming skills |
| [H2O.ai](http://h2o.ai) | Automated machine learning; deep learning | Scalable; supports multiple languages | Requires programming skills |
| [DataRobot](https://www.datarobot.com/) | Automated machine learning; enterprise AI | No coding required; excellent scalability | Expensive |

The Role of Data Mining in BI

It’s easy to get perplexed by the variety of operations that can be performed on data, so data mining is often mistaken for business intelligence (BI) or business analytics. In fact, they have much in common, but there are still more differences than similarities between data mining and BI. Let’s start with how Forrester Research defines business intelligence: “BI is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision making.”

Now, let’s examine the pyramid that contains all the stages of the business intelligence process. It will help you understand the role of data mining in BI and the correlation between these two notions.

- Operational applications. We have already reviewed the first step of the pyramid: data collection from various tools and its consolidation within a data warehouse using [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) tools.
- Reporting and OLAP. OLAP analysis allows users to interactively navigate through the DWH using a number of operations: roll-up, drill-down, pivoting, slice-and-dice, drill-across, and drill-through. That way, all the necessary information is extracted for processing with the selected algorithm.
- Data mining. The most commonly used data mining algorithms, with examples, are provided above.
- What-if analysis. A data-intensive simulation with the goal of inspecting the behavior of a complex system under certain hypotheses.
For example, if marketers want to know how their promotional campaign would perform, they can build a simulation model that expresses the relationships between the business variables determining the campaign’s impact on product sales, then run it against historical sales data to obtain a reliable forecast of future sales.
- Decisions. Based on the information obtained, business leaders arrive at the top of the BI pyramid to make decisions.

Now it’s obvious that data mining can’t be used interchangeably with BI, because it is only one part of it.

Challenges of Data Mining

Despite its numerous benefits, there are certain difficulties data mining specialists encounter. Here are some of the most common:

- Data heterogeneity. Each online service and app has its preferred format for storing data, so merging data from different platforms can be challenging.
- Noisy and incomplete data. When working with datasets, there’s a need to detect outliers, find records with missing values, etc. This is crucial because duplicates and other anomalies can significantly distort the final pattern discovery.
- Background knowledge. Data mining relies on complex algorithms that require a solid foundation in statistics and programming.
- Complexity of operations. Specialists need to pick the right algorithm for a specific case and run cross-validation procedures to determine its effectiveness. Moreover, much practice is required to interpret data mining outcomes correctly.
- Ethical and legal considerations. Unauthorized individuals might access sensitive information in datasets and expose it; insider threats from internal stakeholders may also lead to data leaks. All this puts data security and privacy at risk, so data mining practitioners usually apply data anonymization.
Moreover, organizations need to adhere to privacy regulations such as GDPR, CCPA, and HIPAA, which impose rules on data collection, utilization, and sharing.

Conclusion

Data mining is gaining momentum because of the insights it uncovers for companies. It suits businesses of any size across various sectors of the economy; the algorithms will work well as long as your company generates considerable data volumes and is widely represented on social media. Data mining is often perceived as complex and demanding, but modern tools simplify everything: data mining tools that apply supervised and unsupervised algorithms to datasets, as well as data integration tools that lay a strong foundation for data mining. So, try Skyvia for data integration today to ensure the best data mining results in the future.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/) Data Pipeline Architecture: Key Stages and Best Practices By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) December 26, 2024

Every day, lots of data is generated across all departments of an organization. To extract value from it, you need to ensure it’s correct and standardized before further processing. Implementing a data pipeline architecture can help with that. Just as a conductor coordinates an orchestra to produce pleasing sound, a data pipeline architecture manages data to reveal its full potential. Solutions for creating and operating data pipelines collect raw data from multiple sources and prepare it for analysis. This article covers the principal components of a typical data pipeline architecture.
It also provides best practices for constructing efficient and resilient solutions. You will also see how Skyvia can improve data orchestration without coding.

Table of Contents: Key Components of Data Pipeline Architecture · Design Patterns for Data Pipeline Architecture · How Skyvia Integrates with Data Pipeline Architecture · Data Quality and Monitoring in Pipelines · Conclusion · FAQ

Key Components of Data Pipeline Architecture

Data pipeline architecture is an umbrella term for data movement processes between different systems. It also describes the structure and components of a dataflow, which is tailored to each specific business case. The majority of pipelines are linear and simple, but the expansion of digital services may complicate their structure. In any case, a typical data pipeline architecture includes a standard set of modules:

- Data sources
- Ingestion methods
- Transformation options
- Storage services
- Delivery and consumption systems

1. Data Sources

The toolset may vary greatly from one company to another. Therefore, a pipeline architecture needs to accommodate this variety of sources and be able to ingest data from:

- SaaS applications
- Relational and NoSQL databases
- Data warehouses
- Web services APIs
- Flat files
- IoT systems
- Other online and offline services

2. Data Ingestion

[Data ingestion](https://skyvia.com/learn/what-is-data-ingestion) is the extraction of raw data from the chosen sources into a pipeline using either:

- Batch mode, with data collection over predefined intervals.
- Real-time streaming, with immediate data capture.

The choice of ingestion method depends on the speed, volume, and diversity of the data to be collected. The retrieval mode also affects pipeline complexity, latency, and infrastructure costs.

3. Data Transformation

The [data transformation](https://skyvia.com/learn/what-is-data-transformation) stage comes next after ingestion.
Typical transformations include:

- Data cleansing
- Duplicate removal
- Outlier detection
- Normalization
- Standardization

At this stage, it’s necessary to prepare data for further analytical goals by ensuring its accuracy and quality. The data transformation step also lays the foundation for the subsequent pipeline stages. [ETL tools](https://skyvia.com/blog/etl-tools/), such as [Skyvia](https://skyvia.com/), are a popular choice for data transformations.

4. Data Storage

After transformation, data travels to the destination system to be stored and processed. Relational databases, NoSQL databases, data warehouses, and data lakes are among the most popular solutions for accumulating data. Choose a storage repository considering:

- Data types
- Retrieval speed requirements
- Expected scalability

Whichever storage system you choose must provide data for reporting, analytics, and other purposes defined by the company. For instance, data lakes hold vast amounts of raw data at low cost but aren’t well adapted to analytical queries. Data warehouses also store considerable amounts of data but are more convenient for querying, because they hold only structured and semi-structured data, while data lakes may also contain unstructured data.

5. Data Delivery

Data delivery is the final stage of a data pipeline. At this point, data travels to BI tools, analytics platforms, machine learning engines, etc., so that end users (analysts, executives, and others) can consume and utilize it.

Design Patterns for Data Pipeline Architecture

Data pipeline creation is a multi-step process with many nuances to consider. Here are several valuable pieces of advice for building scalable and resilient data pipelines.

1. Select your data sources. Explore data types, formats, and other essential characteristics in the systems where data is generated. This is necessary for a proper ingestion workflow design.

2. Validate data quality. Check the data quality at the entry point by detecting missing values, duplicates, anomalies, and other issues. Timely discovery of quality problems prevents inaccuracies at later pipeline stages.

3. Secure your data. Implement regular security checks to protect sensitive data across the pipeline. Data encryption, role-based access control, backup, and data governance are the most effective methods for ensuring data integrity and security.

4. Design for scalability. Build your pipelines with changing loads in mind to ensure they can handle growing amounts of data. Adopt distributed processing frameworks and additional computational capacity for that.

5. Monitor and log. Include mechanisms for observing data flow throughout the pipeline that can detect errors, perform audits, and execute regular health checks. Proper monitoring helps avoid downtime and trace data movement through the pipeline.

6. Opt for modularity. As business requirements evolve, data pipelines may also need to change. It therefore makes sense to design modular pipelines where components can be added, altered, or removed over time, adding flexibility. A modular structure lets you adapt an existing pipeline instead of creating one from scratch.

7. Choose a suitable processing mode. Decide whether batch, real-time, or hybrid processing suits your business use case, considering the volume and velocity of the data entering the pipeline.

8. Plan for disaster recovery. Craft a disaster recovery plan that includes distributed storage solutions and regular backups. This helps minimize downtime and ensures quick pipeline recovery in case of system failures.

9. Test regularly. Execute a series of tests on a regular basis at various pipeline stages. With this approach, you can detect issues within your data pipelines promptly, before they affect the integration flow.
Testing procedures also help explore the pipeline’s behavior under fluctuating data loads.

How Skyvia Integrates with Data Pipeline Architecture

Nowadays, ELT and [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning) are the most common types of dataflows. They effectively handle large volumes of information through collection, processing, and delivery to target systems. [Skyvia](https://skyvia.com/) is a universal cloud platform designed for a wide range of data-related tasks, including data pipeline building. It supports ETL, ELT, and Reverse ETL pipelines. Note that ETL, ELT, and Reverse ETL pipelines aren’t the same as a data pipeline in general; usually, an ETL pipeline represents several stages of a complete data pipeline. Feel free to check the [key differences between ETL and data pipelines](https://skyvia.com/blog/data-pipeline-vs-etl-pipeline/).

Skyvia offers several solutions for building different types of pipelines:

- [Import](https://skyvia.com/data-integration/import) is a wizard-based tool that allows you to construct ETL and Reverse ETL pipelines, apply data transformations, and map source and destination data structures without coding.
- [Replication](https://skyvia.com/data-integration/replication) is a wizard-based tool that allows you to build ELT pipelines to copy data from source apps to a database or a data warehouse without coding.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/index.html) is a visual pipeline builder that allows you to design more complex data pipeline diagrams with compound transformations. With this tool, it’s possible to integrate multiple data sources into a pipeline and implement multistage data transformations.

Benefits of building data pipelines with Skyvia:

- Pricing plans suitable for any company, regardless of its size and industry.
- An intuitive interface for creating pipelines of various difficulty levels without coding.
- Powerful transformation functions.
- As a cloud-native solution, Skyvia scales easily to match current data needs.
- Support for 200+ sources, including SaaS apps, databases, and data warehouses.

Data Quality and Monitoring in Pipelines

Monitoring and logging features help businesses track data integration outcomes. These mechanisms also detect data quality issues at any point in the pipeline architecture. All of this ensures data accuracy, which is essential for correct analysis of business performance and credible predictions for the future. Let’s look at monitoring in data pipelines and explore Skyvia’s mechanisms for ensuring data quality at each stage of the pipeline:

- Data sources. Monitoring takes place on the source side, where the system surfaces issues with raw data and suggests actions to improve data quality.
- Collection. When ingesting data from a source, Skyvia shows a warning in case of a connection issue or data retrieval problem.
- Processing. During transformation and mapping, Skyvia shows warning and error messages if something isn’t set up correctly. For instance, data structure inconsistencies between the source and target system, or incompatibility of the chosen data format with the field type, will trigger a warning.
- Storage. Once the integration is executed, Skyvia reports its outcome in the Monitor tab. In case of failure, check the Log for a detailed description of the error. You can also set up email notifications for data integration issues.
- Consumption. At this stage, warnings and notifications appear in the BI tools, reporting solutions, and other software where the data is used for a specific purpose.

Conclusion

Data pipeline architecture is a compound notion that involves dozens of tools and processes. It needs mechanisms for robust data security, quality checks, and smooth data flow.
A pipeline should also be modular, providing scalability and flexibility in data movement. Skyvia fits at the heart of a data pipeline, since it offers both [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) approaches for working with data. It also ensures strong data security, powerful transformation options, and a variety of no-code connectors to multiple data sources.

FAQ for Data Pipeline Architecture

What should you keep in mind when designing a data pipeline? There are several principal things to consider before and during data pipeline design. First, decide which components and processes to include in the pipeline. Another important aspect is fluctuating data volumes and the pipeline’s ability to handle them. Finally, plan regular quality checks and security measures to ensure data integrity.

What is the difference between an ETL pipeline and a data pipeline? A data pipeline is a general term for the flow of data from a source system to a destination, with processing operations in between. An ETL pipeline is one specific way to organize a data pipeline. Feel free to explore the other [key differences between ETL and data pipelines](https://skyvia.com/blog/data-pipeline-vs-etl-pipeline/).

[Data Integration](https://skyvia.com/blog/category/data-integration/) Differences between the Data Pipeline and the ETL Pipeline By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) September 27, 2024

Today’s data-driven environment is all about data integration: combining data from CRMs, databases, spreadsheets, cloud services, etc., into a unified view. It’s like assembling puzzle pieces to see the whole picture, enabling better decision-making and more efficient operations. Data pipelines and ETL pipelines are essential parts of effective data integration, helping to put these puzzles together and understand what’s going on.
They automate the flow and transformation of data, ensuring that users always have reliable, up-to-date information. However, "data pipeline" is a broader term that also covers the "ETL pipeline." Let's consider both terms in detail to determine the difference and choose the method that best suits your business processes.

**Table of contents**

- What is a Data Pipeline?
- What is an ETL Pipeline?
- Key Differences
- Choosing between Data Pipelines and ETL Pipelines
- Integrating ETL in Data Pipelines
- Tools and Technologies
- Conclusion

### What is a Data Pipeline?

Imagine a big kitchen where the chef needs ingredients from different storage areas (the fridge, the pantry, the spice rack) brought to the prep station to start cooking. Instead of running around grabbing things one by one, a system brings everything right to the cooking place. That's how a data pipeline works, but with data instead of food.

In other words, a [data pipeline](https://skyvia.com/blog/what-is-data-pipeline/) is a general term for the series of processes that move data from one system to another, usually from where it's created or stored to where it will be used or analyzed. It's the "conveyor belt" in the kitchen analogy. These processes involve data extraction, transformation, loading, cleaning, aggregating, and more. The pipeline ensures data flows smoothly and efficiently through its stages, collecting, processing, and sometimes storing data before it reaches its final destination, whether in real time or in batches.

Here is how a data pipeline works:

1. **Data Collection.** The pipeline starts by gathering data from various sources, such as databases, applications, sensors, or social media feeds.
2. **Data Movement.**
Once the data is collected, the pipeline moves it from the source to a destination, such as a database, a data warehouse, or a big data platform.
3. **Data Processing (Optional).** Sometimes the data needs a bit of prep before it's ready to be used, like chopping vegetables before cooking. This step can involve cleaning the data, transforming it into a usable format, or combining it with other data.
4. **Data Storage or Output.** Finally, the data arrives at its destination, ready to be used. It might be stored in a database, analyzed in a report, or fed into a machine learning model for predictions.

Say a company runs an online store. Its pipeline might start by extracting customer data, order details, and inventory levels from its CRM, payment processors, and inventory management software. The next step is to transform this data for consistency (for example, converting different date formats to a standard one) and finally load it into a data warehouse. From there, the company can run reports, analyze sales trends, or personalize marketing campaigns based on this unified data.

### What is an ETL Pipeline?

An [ETL pipeline](https://skyvia.com/learn/etl-pipeline-meaning) is a specific type of data pipeline that extracts data from various sources, transforms it into a usable format (cleaning, aggregating, or converting it as needed), and loads it into a target system such as a data warehouse. It automates the heavy lifting of data processing, ensuring that the information users need is accurate, consistent, and ready for analysis.

Imagine a company that receives financial data from branches worldwide, all in different formats and currencies. The ETL pipeline:

1. Extracts this data.
2. Transforms it by converting the amounts into a standard currency.
3. Formats it uniformly.
4. Loads it into a central database.
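A minimal Python sketch of this extract-transform-load flow might look as follows. The exchange rates, record fields, and function names here are illustrative assumptions for the sketch, not part of any specific tool:

```python
# Illustrative exchange rates to USD (assumed values for this sketch).
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}

def extract():
    """Extract: raw records as each branch reports them."""
    return [
        {"branch": "Berlin", "amount": 1000, "currency": "EUR", "day": "2024-09-01"},
        {"branch": "London", "amount": 500, "currency": "GBP", "day": "01/09/2024"},
    ]

def transform(records):
    """Transform: convert amounts to USD and normalize date formats."""
    out = []
    for r in records:
        day = r["day"]
        if "/" in day:  # assume DD/MM/YYYY for this sketch
            d, m, y = day.split("/")
            day = f"{y}-{m}-{d}"
        out.append({
            "branch": r["branch"],
            "amount_usd": round(r["amount"] * RATES_TO_USD[r["currency"]], 2),
            "day": day,
        })
    return out

def load(records, warehouse):
    """Load: append the cleaned rows to the central store."""
    warehouse.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```

Real pipelines would read from APIs or databases and write to a warehouse, but the three-stage shape stays the same.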
These steps allow the company's financial team to generate reports that accurately reflect global operations, without manually cleaning and merging data from various sources.

### Key Differences

While data pipelines and ETL pipelines are both essential for managing and moving data, they serve different purposes and operate differently. The table below summarizes the key differences:

| Factor to Consider | Data Pipeline | ETL Pipeline |
| --- | --- | --- |
| Purpose and scope | Moves data from one place to another, handling various data types. | Extracts, transforms, and loads data into a target system. |
| Process flow | Flexible sequence of steps focused on moving data. | Structured process: Extract, Transform, Load. |
| Data handling | Real-time or [batch processing](https://skyvia.com/blog/batch-etl-processing/), focused on data flow. | Primarily batch processing, with some real-time capabilities. |
| Transformation capabilities | Minimal or optional; any transformations are usually lightweight, such as basic cleaning or formatting adjustments. | Extensive transformation, including cleaning and formatting. |
| Latency and real-time processing | Optimized for low-latency, real-time data flow. | Traditionally batch processing with higher latency, but evolving towards real time. |
| Flexibility and scalability | Highly flexible and scalable across different environments. | Less flexible due to its structured steps, but effective for specific data workflows. |
| Use cases | Real-time data streaming, log aggregation, data transfer. | Data preparation for analysis, data warehouse population, data quality assurance. |
### Choosing between Data Pipelines and ETL Pipelines

As discussed above, data pipelines and ETL pipelines are powerful tools that cater to different requirements depending on the nature of the data processes. Here's a quick guide to the factors to weigh.

**Data processing requirements**
- ETL pipelines are a good fit for batch processing, where data needs to be transformed and cleaned before being loaded into a target system such as a data warehouse.
- Data pipelines are best for real-time or near-real-time processing, where immediate data flow is crucial.

**Complexity and maintenance**
- Data pipelines are generally simpler and easier to maintain, especially for straightforward flows with minimal transformation needs.
- ETL pipelines are more complex because of the transformation logic involved, and they require regular maintenance to keep transformations in step with evolving data requirements.

**Cost and resource availability**
- Data pipelines are typically cheaper and less resource-intensive, especially for straightforward flows, and can often be managed with minimal infrastructure.
- ETL pipelines can cost more because they need robust infrastructure to handle complex transformations and large data volumes, and they often require specialized resources to manage and maintain.

**End-user needs**
- Data pipelines shine when end users need real-time or near-real-time data for dashboards, monitoring systems, or customer-facing applications.
- ETL pipelines are the better choice when end users need clean, structured data for reporting, analytics, and decision-making, usually at regular intervals.

### Integrating ETL in Data Pipelines

Real-world data landscapes mix many different tools and services, so most businesses end up with a hybrid approach.
This approach combines a data pipeline's flexibility with ETL's transformation capabilities to get the best of both worlds: like owning a powerful Jeep and actually taking it out for real adventures instead of letting it sit in the garage.

A hybrid approach runs ETL processes inside a data pipeline to handle the specific steps where data needs to be cleaned, transformed, or enriched before moving on to its final destination. This setup works well when you need the real-time flow of a pipeline but also have to guarantee that the data arrives in the correct format or structure.

**How it works**

1. **Extract.** Data is pulled from various sources, such as databases, APIs, or cloud storage.
2. **Transform.** Instead of performing all transformations at once, you can run them at different stages of the pipeline: for example, clean and standardize the data early on, then enrich it with additional information later.
3. **Load.** The data is loaded into its final destination, such as a data warehouse, a real-time analytics tool, or a machine learning model.

**Real-world examples**

**TitanHQ: data analytics pipeline automation for a 360-degree customer view**
- Scenario: TitanHQ uses [Skyvia](https://skyvia.com/case-studies/titanhq) to automate its data analytics pipeline and build a 360-degree customer view: extracting data from multiple sources, transforming it to align with reporting needs, and loading it into a data warehouse for analysis.
- Benefit: TitanHQ makes data-driven decisions faster and more efficiently, with a consistent, up-to-date view of customer data.

**Cirrus Insight: Salesforce-QuickBooks integration**
- Scenario: Cirrus Insight integrated Salesforce with QuickBooks using [Skyvia](https://www.youtube.com/watch?v=qf-6nYllrIU) to enhance productivity and reduce costs.
- Benefit: Improved operational efficiency, streamlined financial reporting, and significant cost savings.
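The staged-transformation idea described under "How it works" can be sketched in a few lines of Python. The stage names, record fields, and lookup data are hypothetical, just to show light cleaning early in the pipeline and enrichment later:

```python
def clean(record):
    """Early stage: lightweight cleaning close to the source."""
    return {**record, "email": record["email"].strip().lower()}

def enrich(record, segments):
    """Later stage: join in extra business context before the final load."""
    return {**record, "segment": segments.get(record["email"], "unknown")}

def pipeline(records, segments):
    # Transformations run at different stages rather than all at once.
    for r in records:
        yield enrich(clean(r), segments)

raw = [{"email": "  Ana@Example.com ", "order": 42}]
segments = {"ana@example.com": "returning"}
print(list(pipeline(raw, segments)))
```

Because each stage is a plain function, stages can be reordered, skipped, or replaced independently, which is what makes the hybrid setup flexible.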
### Tools and Technologies

[ETL](https://skyvia.com/blog/etl-tools/) and [data pipeline](https://skyvia.com/blog/10-best-data-pipeline-tools/) tools are often called the backbone of modern data processing. They help businesses move and transform their data to make it usable for analytics, reporting, and more. Each tool has its strengths, depending on whether you're after flexibility, integration with existing platforms, or real-time processing. Choosing the right one deserves a thorough review:

1. Start with tool comparison listings on popular blogs; for example, check these [no-code ETL tools](https://skyvia.com/blog/top-no-code-etl-tools/).
2. Compare each [tool's capabilities](https://skyvia.com/etl-tools-comparison) against your specific needs, budget, and the complexity of your data workflows.
3. Check the tools' ratings on [G2 Crowd](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), [TrustRadius](https://www.trustradius.com/products/skyvia/reviews#overview), and [Capterra](https://www.capterra.com/p/146167/Skyvia/reviews/), and read user reviews describing each tool's pros and cons.
4. Use freemium plans and demos to try a tool out, whatever your company size: start-ups, small businesses, and even enterprises can use free plans and demo versions to get a feel for how it works.
5. Review [video materials](https://skyvia.com/learn) and [white papers](https://skyvia.com/whitepapers) to investigate the tools' capabilities. Universal [data integration tools](https://skyvia.com/blog/data-integration-tools/) like [Skyvia](https://skyvia.com/), [Apache NiFi](https://nifi.apache.org/), [Talend](https://talend.com/), etc., provide a comprehensive pool of information.
### Conclusion

Choosing between a data pipeline and an ETL pipeline depends on your specific data processing needs, the complexity of the data transformations, the budget, and the requirements of end users. A data pipeline is the way to go if you need real-time data flow with minimal transformation. However, an ETL pipeline will serve you better if the data requires extensive cleaning and structuring before it can be used. Understanding these differences helps you make the right choice for your project.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
---

## Best Data Preparation Tools in 2025

By [Aveek Das](https://skyvia.com/blog/author/aveekd/), June 27, 2024

In the world of Big Data, organizations strive to get the maximum possible benefit from the data they generate. Without a clean and structured way to process this data, it is harder to gain insights from it, which obstructs informed decision-making. Raw data needs to be cleaned, organized, and transformed into a structure that suits the business. Data preparation tools aid in this process by providing specialized services that help developers and analysts design stable workflows and automate the cleaning and transformation of raw data, helping the organization make smarter, better-informed decisions.
In this article, let's look at what data preparation tools are, the key features to consider when choosing one, and some of the best tools for data preparation in 2025.

**Table of Contents**

- What Are Data Preparation Tools?
- Key Features to Consider
- Best Data Preparation Tools: Alteryx, Tableau Prep, Fivetran, IBM Data Refinery, Microsoft Power Query, OpenRefine
- How does Skyvia Data Integration help?
- Conclusion

### What Are Data Preparation Tools?

As the name suggests, data preparation tools help users turn raw, uncleaned data into a format useful for analysis. Often, raw data from multiple sources needs to be ingested and enriched with business attributes before it can yield meaningful insights. Although it's possible to write custom applications to prepare your data, that comes with development cost and a longer time to market. Data preparation tools are ready-made, mostly low-code solutions with easy-to-understand user interfaces that simplify preparing and cleaning data. They're designed not only for technical users but also for business users, empowering them to design and develop their own workflows.

### Key Features to Consider

Whether you already use a data preparation tool or plan to adopt one, it's important to understand its key features before choosing.

**Features and functionality.** Most modern data preparation tools can ingest raw data from multiple sources, both on-premise and cloud-based. Integration with AI-based processing helps automate mundane tasks such as fixing column formats or typos within text. The prepared and transformed data should also be deterministic, so tools must be able to run automated pipelines on a schedule.

**Ease of use.** Data exploration and profiling are performed on the ingested data to better understand and analyze it.
The easier the tool is to use, the more value it provides to business stakeholders. Support for multi-user collaboration and version control also makes a tool a strong candidate.

**Customer support.** Another important factor to weigh is training and support. With an effective customer support plan, it becomes easier for users to overcome the common challenges of setting up the solution or integrating it with existing systems. Support teams can offer access to a comprehensive knowledge base and share best practices for leveraging the tool's full potential. They can also help tailor solutions to the business, tune performance, and design optimized workflows.

### Best Data Preparation Tools

In this section, let's take a deeper look at some of the best data preparation tools on the market in 2025. Most of them share some overlapping features, but a few prominent distinctions make each stand out.

**Alteryx**

[Alteryx](https://www.alteryx.com/) is one of the leading platforms in data analytics. It provides an easy-to-use interface to ingest raw data from multiple sources, then clean and transform it using the Visual Workflow Designer. It suits both technical and business users who want to design simple or complex data transformation pipelines without any programming background. With its drag-and-drop interface, users can visually design pipelines that integrate data from various sources and apply transformations. The final cleaned data can be transferred to an on-premise or cloud solution of choice.
Some key features of Alteryx:
- AI-driven recommendations for data preparation and quality.
- Workflow automation.
- User-friendly graphical user interface.
- Connections to on-premise and cloud-based storage.

**Tableau Prep**

[Tableau Prep](https://www.tableau.com/products/prep) is a self-service data preparation tool from the developers of the popular data visualization tool Tableau. Originally part of the visualization tool itself, it's now offered as a stand-alone solution for data preparation, data wrangling, and integration. With Tableau Prep, users can connect to various data sources, then ingest and clean data before visualizing it in Tableau.

Key features:
- Strong integration with the Tableau suite of products.
- Support for a wide array of data sources and sinks.
- Fast data preparation techniques.

**Fivetran**

[Fivetran](https://www.fivetran.com/) is a popular cloud-based data integration tool with extensive support for data preparation and transformation. It integrates with almost every cloud-based SaaS application and data store, making it a go-to choice for businesses that run on the cloud.

Key features of Fivetran:
- Connectors for all major cloud vendors.
- Real-time data preparation.
- Loading data into Amazon Redshift or Google BigQuery for analysis.

**IBM Data Refinery**

[IBM Data Refinery](https://www.ibm.com/products/data-refinery) is part of the broader IBM Watson Studio suite. With Data Refinery, users can clean, transform, and integrate raw data from sources and begin exploration and profiling. Data Refinery can also enhance datasets by integrating with external APIs, enriching the data with more business context.
Key features of IBM Data Refinery:
- Integration with IBM Watson and other products in the IBM ecosystem.
- Cloud-based data processing for large datasets.
- Built-in data security and governance.
- Support for machine learning and advanced analytics.

**Microsoft Power Query**

Microsoft has an entire ecosystem for data ingestion, preparation, transformation, and self-service business intelligence. Within this plethora of products, [Power Query](https://learn.microsoft.com/en-us/power-query/power-query-what-is-power-query) is Microsoft's data preparation engine that lets users ingest and transform raw data within the Microsoft suite of products. Power Query comes built into popular tools such as Microsoft Excel and Power BI, so there's no need to install it separately.

Key features of Power Query:
- An easy-to-use interface familiar from Microsoft Excel.
- Complex transformations via the M query language.
- Integration with Microsoft and Azure-based solutions.

**OpenRefine**

[OpenRefine](https://openrefine.org/), formerly known as Google Refine, is an open-source tool for cleaning and transforming messy data. It provides a user-friendly interface for exploring data, reconciling inconsistencies, and transforming data into a structured format. With its intelligent processing capabilities, OpenRefine can detect and mitigate minor inconsistencies within source datasets, such as duplicates and typos. The tool offers options to split, merge, or transform datasets using pre-built or custom-designed functions. On top of that, OpenRefine provides a friendly interface for navigating and exploring large datasets, helping users understand the structure of their data.

Key features:
- Faceting and filtering.
- Data reconciliation.
- Extensibility.

### How does Skyvia Data Integration help?
[Skyvia](https://skyvia.com/solutions/) is one of the leading modern data integration providers. With its universal cloud data platform, Skyvia offers a powerful data preparation web interface that is easy to set up and use. It provides many connectors, including SaaS applications, from which data can be extracted. Its services range from data extraction, preparation, and transformation to loading into cloud warehouses like Amazon Redshift or Google BigQuery. With the Free plan, you can start extracting and preparing your data in a format fit for analysis and processing; the paid Basic and Standard plans unlock many more options, and the Business and Enterprise plans cover heavy usage.

Advantages of Skyvia Data Integration:
- Bidirectional data integration to and from [multiple source systems](https://skyvia.com/connectors/) (DWHs, CRM tools, and SaaS applications), run manually or on a schedule.
- Auto-mapping of columns from data sources.
- Automated backup.
- Data integration, [ETL](https://skyvia.com/data-integration/synchronization), ELT, and reverse ETL.

### Conclusion

Data preparation is an important step in data processing and business intelligence. Cleaning and preparing data before analysis yields better insights and thus contributes to data-driven decision-making. Data preparation tools come to the rescue when dealing with raw, uncleaned data from source systems. Skyvia provides multiple solutions for data extraction and preparation that users can greatly benefit from. To learn more about Skyvia, [check out the product page](https://skyvia.com/) or start for free now!
[Aveek Das](https://skyvia.com/blog/author/aveekd/), Senior Data Engineer

---

## Top 10 Data Profiling Tools for Improved Data Quality

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), November 14, 2024

In data management, data profiling means examining data to understand its structure, quality, and content before using it for analysis, reporting, or other data-driven tasks.
Data profiling is all about getting to know the data: understanding its patterns, spotting inconsistencies, and identifying potential issues that might affect its accuracy or usability. In a world where businesses rely on data to make crucial decisions, data profiling might sound like a behind-the-scenes task; still, its impact on improving data quality and keeping data ready to support business needs is huge.

Now, let's review the numbers.

- **Improving data quality.** According to a survey by [Experian Data Quality](https://www.edq.com/resources/data-management-whitepapers/2019-global-data-management-research/), 84% of businesses believe that poor data quality impacts their ability to make accurate decisions. Data profiling identifies errors, inconsistencies, and missing data early in the process, allowing users to address these issues before they cause problems.
- **Boosting data integrity.** Data integrity ensures that data is accurate, consistent, and reliable. A study by [Gartner](https://www.gartner.com/en/data-analytics/topics/data-quality#:~:text=Why%20is%20data%20quality%20important,to%20Gartner%20research%20from%202020.) found that poor data quality costs organizations at least $12.9 million annually, and 40% of all business initiatives fail to achieve their targeted benefits due to poor data quality. Data profiling tools help maintain integrity by detecting anomalies and ensuring data adheres to defined standards and business rules.
- **Enhancing usability.** Clean, well-structured data is easier to use and more valuable. According to [Harvard Business Review](https://hbr.org/2016/09/bad-data-costs-the-u-s-3-trillion-per-year), poor data quality costs the US economy $3.1 trillion per year. Data profiling ensures that data is well organized and usable, allowing teams to extract meaningful insights more efficiently.

This article covers the top 10 data profiling tools and how they help businesses grow and develop.
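To make "getting to know the data" concrete, here is a minimal profiling pass in pure Python over a few hypothetical customer records (the field names and values are invented for illustration). It reports exactly the kinds of issues profiling tools surface: null counts, duplicate keys, and inconsistent value distributions:

```python
from collections import Counter

rows = [
    {"id": 1, "country": "US", "signup": "2024-01-05"},
    {"id": 2, "country": None, "signup": "2024-01-06"},
    {"id": 2, "country": "US", "signup": "2024-01-06"},  # duplicate id
    {"id": 3, "country": "us", "signup": None},           # inconsistent casing
]

def profile(rows):
    """Basic profile: null counts, duplicate keys, and value distributions."""
    nulls = Counter()
    for r in rows:
        for col, val in r.items():
            if val is None:
                nulls[col] += 1
    ids = Counter(r["id"] for r in rows)
    duplicates = [k for k, n in ids.items() if n > 1]
    countries = Counter(r["country"] for r in rows if r["country"])
    return {
        "nulls": dict(nulls),
        "duplicate_ids": duplicates,
        "countries": dict(countries),
    }

print(profile(rows))
```

Dedicated profiling tools automate this across whole databases, but the output is the same in spirit: a summary that flags problems before the data is used downstream.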
**Table of contents**

- Benefits of Using Data Profiling Tools
- Criteria for Selecting Data Profiling Tools
- Top 10 Data Profiling Tools: Dataedo, Atlan, Datamartist, IBM InfoSphere Information Analyzer, Talend Data Preparation, SAS Data Management, Informatica Data Quality, Oracle Enterprise Data Quality, Microsoft Power Query, SAP Information Steward
- Enhancing Data Profiling with Skyvia
- Conclusion

### Benefits of Using Data Profiling Tools

Data profiling tools are the unsung heroes of data management, working behind the scenes to ensure that the data users work with is reliable, accurate, and ready for action. Everyone knows phrases like "data enhancement" and "data-driven decisions," so let's break down the real benefits, supported by real numbers.

**Improved data quality.** According to Experian's [Global Data Management Report](https://www.experian.co.uk/assets/data-quality/experian-global-data-management-report-jan-2019.pdf), 72% of organizations said that data quality issues had affected trust in, and the perception of, their analytics. With data profiling, businesses can significantly reduce these issues, leading to more reliable and trustworthy data.

**Reduced errors.** A study by [Harvard Business Review](https://hbr.org/2017/09/only-3-of-companies-data-meets-basic-quality-standards) found that poor data quality can lead to productivity losses of up to 20%. By using data profiling tools, businesses can minimize these errors and reduce the potential for costly mistakes.

**Enhanced decision-making.** According to [Deloitte](https://www2.deloitte.com/ca/en/pages/consumer-business/articles/using-data-driven-insights-to-enable-better-business-outcomes.html), companies that use data-driven decision-making are 5% more productive and 6% more profitable than their competitors.
Data profiling tools help ensure that the data driving these decisions is accurate and reliable.

**Identifying data anomalies and patterns.** A study reported by [Forbes](https://www.forbes.com/councils/forbesagencycouncil/2020/02/03/how-to-apply-anomaly-detection-and-reap-these-three-benefits/) suggests that companies using advanced data analytics, including anomaly detection, can achieve up to a 73% improvement in business performance. Data profiling tools play a key role here by ensuring anomalies are spotted and addressed early.

### Criteria for Selecting Data Profiling Tools

Picking the right data profiling tool is a bit like choosing the perfect gadget: you want something that fits your needs, makes life easier, and gets the job done without too much hassle. Let's review the essential factors to consider.

- **Ease of use.** A tool that's easy to use, with an intuitive interface and clear instructions, saves users time and frustration. Look for drag-and-drop functionality, visual dashboards, and straightforward workflows that anyone on your team can pick up quickly.
- **Integration capabilities.** Choose a tool that integrates seamlessly with the systems and platforms your company already uses. Whether it's a CRM, data warehouse, or cloud storage, the tool should pull in data from multiple sources and give a complete view of the data landscape.
- **Scalability.** As a business grows, so does its data. You need a tool that can scale with you, handling everything from small datasets to massive data lakes without missing a beat. Look for tools that can expand processing power and storage as your needs evolve, so you're not left scrambling for a new solution down the road.
- **Cost.** Budget matters, but so does value. [When considering cost](https://skyvia.com/blog/etl-cost/), look beyond the price tag and think about the return on investment.
A more expensive tool might offer features that save time and money in the long run, while a cheaper option could end up costing more through inefficiency or missing features. Balance cost against the benefits to make a smart investment.

### Top 10 Data Profiling Tools

Now, let's review the top 10 data profiling tools that meet the criteria above and can help organizations manage their data more effectively.

**Dataedo**

[Dataedo](https://dataedo.com/) lets teams easily create and manage data catalogs, ensuring all metadata is well documented and accessible. Dataedo supports various database platforms, making it versatile across environments. Its robust documentation features help businesses understand their data structures, relationships, and dependencies, leading to better data governance and decision-making.

Key features:
- Metadata management and data documentation.
- Robust data cataloging and data lineage tracking.

Pros:
- Highly intuitive and easy to use.
- Strong focus on metadata documentation.
- Support for multiple database platforms.

Cons:
- Lacks some advanced data profiling features.
- Not ideal for large-scale enterprise data profiling.

Pricing: the basic plan [starts](https://dataedo.com/pricing) at $149 per user/year.

Ratings: [G2](https://www.g2.com/products/dataedo/reviews) 5.0, [Capterra](https://www.capterra.com/p/199871/Dataedo/) 4.7.

**Atlan**

[Atlan](https://atlan.com/) is a modern, collaborative workspace that excels at data discovery and profiling. Teams can explore, profile, and manage their data in a unified platform. Atlan's collaborative features let data teams work together more effectively, ensuring that data assets are well documented and easily accessible across the organization. Its profiling tools help users quickly identify data quality issues and understand data distributions.

Key features:
- Easy data discovery and profiling tools.
Strong data governance and cataloging capabilities. Pros Great integration with various cloud and on-premise data platforms. Encourages collaboration between data teams. Cons Pricing can be high for small businesses. Some users may initially find the interface complex. Pricing Custom pricing based on company size and requirements. Ratings [G2 Rating](https://www.g2.com/products/atlan/reviews) : 4.5 [Capterra Rating](https://www.capterra.com/p/185548/Atlan/) : 4.5 Datamartist [Datamartist](http://www.datamartist.com/) simplifies the process of understanding and transforming complex datasets. The tool allows users to create visual workflows for data preparation so businesses can see the impact of data transformations in real-time. Datamartist helps with ETL processes and enables users to manage and cleanse data more efficiently without writing complex code. Key Features Visual data profiling and transformation capabilities. Simplifies complex ETL processes. Pros Excellent for visualizing data transformations. Drag-and-drop interface for easy data manipulation. Great for small to medium-sized datasets. Cons Limited scalability for large enterprises. Not as feature-rich for advanced analytics. Pricing The basic plan [starts](https://analyticscanvas.com/signup/) at $49 per user/month . Ratings [Capterra Rating](https://www.capterra.com/p/248277/Canvas/) : 4.5 IBM InfoSphere Information Analyzer [IBM InfoSphere Information Analyzer](https://www.ibm.com/products/infosphere-information-analyzer) offers comprehensive data profiling and analysis features, making it a good choice for large enterprises. It supports big data environments and provides detailed insights into data quality through advanced profiling, including column analysis, rule validation, and relationship discovery. The tool helps organizations ensure data accuracy, compliance, and integrity, integrating well with other IBM products for a complete data management solution.
Key Features Comprehensive profiling for large and complex datasets and rule validation. Robust compliance and data governance features. Pros Supports various data sources, including big data platforms. Integrates well with other IBM data management products. Cons Expensive, with complex pricing structures. The platform requires a higher level of technical expertise. Pricing Custom pricing based on organizational needs and data volume. Ratings [G2 Rating](https://www.g2.com/products/ibm-infosphere-information-analyzer/reviews) : 4.2 [Capterra Rating](https://www.capterra.com/p/152188/InfoSphere-MDM-Server-for-PIM/) : 5.0 Talend Data Preparation [Talend Data Preparation](https://talend.com/products/data-preparation/) is part of Talend's open-source suite of data tools, providing data profiling and cleansing functionalities. It allows users to quickly clean, enrich, and standardize data through an intuitive interface. Talend's approach to data profiling is highly flexible, supporting various data sources and formats. The tool's open-source nature makes it accessible to organizations of all sizes, offering a cost-effective solution for improving data quality. Key Features Data cleansing features. Real-time data transformation and profiling. Integration with Talend's broader data management suite. Pros Cost-effective with an open-source version. Highly flexible and customizable. User-friendly interface for data preparation. Cons Some advanced features are only available in the paid version. It is not ideal for very large or highly complex datasets. Pricing The free, open-source version is available. Paid [plans](https://talend.com/pricing/) for Talend Data Fabric are flexible and depend on the user's requirements.
Ratings [G2 Rating](https://www.g2.com/products/talend-data-preparation/reviews) : 4.5 [TrustRadius Rating](https://www.trustradius.com/products/talend-data-integration/reviews) : 7.6 SAS Data Management [SAS Data Management](https://www.sas.com/en_us/solutions/data-management.html) provides a set of data profiling, cleansing, and monitoring tools, integrated with the broader SAS analytics ecosystem. This platform handles large datasets and complex data environments, offering advanced analytics to detect data anomalies and enforce data standards. SAS's data profiling features improve data quality and reliability, so the solution is perfect for organizations requiring deep data insights. Key Features Robust data profiling and cleansing tools. Advanced anomaly detection and data standardization. Pros Excellent for large-scale enterprise data management. Customizable and scalable for different data needs. Integration with SAS's analytics and data management ecosystem. Cons Steep learning curve for new users. Pricing can be prohibitive for smaller organizations. Pricing Custom pricing based on organizational needs. Ratings [G2 Rating](https://www.g2.com/products/sas-sas-data-management/reviews) : 4.1 [Capterra Rating](https://www.capterra.com/p/168302/SAS-Data-management/) : 4.5 Informatica Data Quality [Informatica Data Quality](https://www.informatica.com/products/data-quality.html) is a leading tool for advanced data profiling, offering features like machine learning integration and real-time data processing. It handles complex data environments, providing comprehensive profiling capabilities that help identify and resolve data quality issues quickly. Informatica's robust framework supports continuous data quality monitoring, ensuring data remains accurate and reliable. Key Features Advanced data profiling with machine learning integration. Real-time processing and continuous data monitoring. Automated anomaly detection and data cleansing.
Pros Highly scalable for large datasets. Offers advanced machine learning-based data profiling. Seamless integration with various data sources and platforms. Cons High cost, especially for smaller companies. Requires technical expertise to fully leverage. Pricing Custom pricing based on deployment size and requirements. Ratings [G2 Rating](https://www.g2.com/products/informatica-informatica-data-quality/reviews) : 4.5 [Capterra Rating](https://www.capterra.com/p/170772/Informatica-Data-Quality/) : 4.3 Oracle Enterprise Data Quality [Oracle Enterprise Data Quality](https://www.oracle.com/data-quality/) is particularly suited for environments using Oracle databases. The tool provides detailed data quality assessments, helping organizations maintain high data accuracy and consistency standards. Oracle\u2019s integration with its broader database and application ecosystem ensures seamless data management and profiling processes. Key Features Advanced data profiling and quality monitoring. Comprehensive data quality assessments and metrics. Pros Strong integration with Oracle\u2019s ecosystem. Excellent for ensuring data accuracy and compliance. Scalable for large enterprises with complex data environments. Cons High cost and complex pricing. Pricing Custom pricing based on company needs and usage. Ratings [G2 Rating](https://www.g2.com/products/oracle-data-quality/reviews) : 4.1 [Capterra Rating](https://www.capterra.com/p/170841/Oracle-Enterprise-Data-Quality/) : 4.3 Microsoft Power Query [Microsoft Power Query](https://learn.microsoft.com/en-gb/power-query/power-query-what-is-power-query) is a data connectivity and transformation tool available in Excel and Power BI. It offers robust data profiling tools that help users visualize data quality and distribution. The platform simplifies importing, cleansing, and shaping data, which makes it accessible to both technical and non-technical users. 
Power Query\u2019s integration with Excel and Power BI makes it an excellent choice for organizations already using Microsoft products, providing a streamlined workflow for data management. Key Features The graphical interface allows visualizing data quality, distribution, and trends. ETL data processing capabilities. Pros Perfect for Excel and Power BI users. Easy to use and ideal for smaller datasets. Affordable, with a free tier available. Cons Limited scalability for enterprise-level needs. It is not as feature-rich as dedicated data profiling tools. Pricing A free tier is available with Excel and Power BI. Premium versions vary based on Office and Power BI subscriptions. Ratings [Capterra Rating](https://www.capterra.com/p/228526/Microsoft-Power-Apps/) : 4.5 SAP Information Steward [SAP Information Steward](https://www.sap.com/products/technology-platform/data-profiling-steward.html) is a data profiling and metadata management tool that easily integrates with SAP solutions. It offers businesses robust tools for assessing data quality, processing metadata, and ensuring data governance across the enterprise. SAP Information Steward helps organizations maintain data accuracy and compliance. Such capability is vital for businesses that rely heavily on SAP systems. Key Features Data profiling and metadata management. Comprehensive data governance and compliance tools. Pros Seamless integration with SAP products. Scalable for enterprise data profiling needs. Cons Expensive for small to mid-sized businesses. Pricing Custom pricing based on SAP integration and company size. Ratings [G2 Rating](https://www.g2.com/products/sap-information-steward/reviews) : 4.3 [TrustRadius Rating](https://www.trustradius.com/products/sap-information-steward/reviews) : 9.0 Enhancing Data Profiling with Skyvia Any company\u2019s data ecosystem can\u2019t operate with only one tool; it also needs integration. 
In this case, [Skyvia](https://skyvia.com/) is like the glue that holds everything together. Skyvia integrates robustly with data profiling tools like Talend, Informatica, IBM, and more. It's a universal ETL, ELT, and reverse ETL data integration tool, the all-in-one platform for bi-directional data syncing, importing, exporting, [mapping](https://skyvia.com/blog/best-data-mapping-tools/), replication, and backing up data from various systems. Skyvia's [integration](https://skyvia.com/data-integration) capabilities make it a breeze to pull data from [190+](https://skyvia.com/connectors) sources, send it to a data profiling tool for quality checks, and then push it back into operational systems. It connects with databases, cloud apps, and CRMs, meaning companies can profile their data wherever it lives. The platform is in the top three of the 20 easiest-to-use [ETL tools](https://skyvia.com/blog/etl-tools/) for 2024 by [G2](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), and its [TrustRadius](https://www.trustradius.com/products/skyvia/reviews#overview) score is 9.8 out of 10. Additional Benefits of Using Skyvia Skyvia doesn't stop at just integrating with profiling tools. It brings a lot of additional benefits that make managing data simpler and more efficient. Automated Data Synchronization. Users don't have to worry about manually syncing data between systems. Skyvia automates this process, ensuring their data is always up-to-date across platforms. So, after profiling data in the appropriate tool, Skyvia automatically syncs the cleaned data back into the operational databases or apps. Cloud-to-Cloud Data Integration. Working with multiple cloud platforms? No problem. Skyvia excels at cloud-to-cloud integration, meaning users can easily integrate data between platforms like Salesforce, Google Sheets, and more without breaking a sweat. Control Flow for Advanced Automation.
Skyvia\u2019s Control Flow feature allows users to create complex automation workflows by adding conditional branching and iteration capabilities to their data integration tasks. This means that users can automate data processes that require decision-making, such as syncing specific data based on conditions like value ranges, timestamps, or custom rules. Data Replication. For businesses needing to back up or mirror their data, Skyvia\u2019s Data Replication feature is a game-changer that allows users to [replicate data](https://skyvia.com/blog/top-data-replication-tools/) from cloud applications into databases like MySQL, SQL Server, and PostgreSQL. This replication ensures that businesses always have a secure copy of their data for reporting or disaster recovery, helping them maintain data integrity and continuity. Backup & Restore. Skyvia also offers Backup & Restore capabilities, making it easy to back up data from cloud platforms like Salesforce, Dynamics 365, and QuickBooks Online. Skyvia keeps a secure copy of the data, and in the event of data loss or corruption, it provides simple, flexible restore options to recover the exact data users need. Querying and Analysis. Skyvia\u2019s Query feature lets users run SQL-like queries across their cloud data without needing deep technical knowledge. Whether pulling data from Salesforce or other platforms, Skyvia\u2019s query builder allows you to analyze data directly in the cloud and create reports. This feature provides: Real-time insights and analysis. Making it easier to spot trends. Outliers. Anomalies in the data. These features also enhance data accessibility and reliability no matter where it\u2019s stored, so users always work with high-quality, up-to-date information. Case Studies and Examples Let\u2019s look at how some businesses have successfully used Skyvia alongside data profiling tools to improve their data quality and streamline operations. 
Skyvia Helped a Healthcare Company Automate Its Data Processes A healthcare company faced difficulties in managing data from multiple sources, including CRMs, Excel files, and backend systems. They needed to integrate these disparate data sources and improve reporting capabilities while reducing manual data entry errors. They selected [Skyvia](https://skyvia.com/case-studies/healthcare-company) to streamline data integration processes and connect all data sources seamlessly. This integration enabled automated data synchronization, significantly reducing manual errors and improving the accuracy of their data analytics. As a result, the company saw improved data accuracy, reduced labor costs, and enhanced reporting capabilities, leading to better decision-making and operational efficiency. Horizons (Global Recruitment Services) Elevated Its Data Insights and Visualization with Skyvia Horizons needed to aggregate data from Jira, HubSpot, and Xero into a centralized system for reporting and visualization in Power BI. The complexity of integrating multiple data sources for real-time reporting posed a significant challenge. [Skyvia](https://skyvia.com/case-studies/horizons) provided a no-code integration platform that easily connected these diverse data sources to a data warehouse feeding into Power BI. The pre-built connectors and user-friendly interface allowed Horizons to quickly set up and automate these integrations. As a result, Horizons successfully aggregated data across platforms, creating comprehensive dashboards that provided valuable insights across departments without needing extensive engineering resources. Conclusion Choosing the right data profiling tool is a bit like searching for the perfect gadget: the right one brings everything into focus, transforming how organizations manage data, enhancing its quality, and ultimately leading to better business insights.
Whether companies are cleaning up messy datasets, integrating data from multiple sources, or ensuring compliance, the right profiling tool makes these tasks more effective. Remember, no two businesses are the same, so it's important to evaluate data profiling tools against your own specific "I want this and nothing else." Consider factors like ease of use, integration capabilities, scalability, and cost. Think about how these tools will fit into your existing data processes and how they can help you achieve your business goals. [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Data Loader](https://skyvia.com/blog/category/data-loader/) Best Practices of Data Warehousing in 2025 By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) January 31, 2024 What's a DWH? If you google it or ask ChatGPT, the definition will be something like "…a data warehouse is a centralized repository designed to store integrated data from multiple, often disparate, sources…" Let's dive deeper and review data warehousing best practices and the benefits of using one. Data warehouses emerged in the 1990s as alternatives to traditional operational databases, solving their performance limitations for complex queries and analytics. The evolution of DWHs is impressive – from OLAP (Online Analytical Processing) technologies to Big Data integration (2010s) and on to cloud-based solutions and ML, offering scalability and decreasing costs.
Here are a few reasons why a DWH is such a helpful ally in the modern business world: Data sits in a centralized repository. You can analyze large volumes of historical data. BI and reporting abilities are ready to use. You're equipped to make data-driven decisions. Table of Contents Best Practices in Data Warehousing Advantages of Cloud-Based Data Warehousing Solutions Key Factors to Consider in Data Warehousing The Role of Skyvia Data Integration in Data Warehousing Conclusion Best Practices in Data Warehousing Data warehousing has become extremely popular in daily business analytics; its market is growing fast and is expected to reach over [$30 billion by 2025](https://www.sdcexec.com/warehousing/news/21090469/data-warehousing-market-to-exceed-30-billion-by-2025) . Here are the data warehousing best practices to help companies enhance BI and data analytics. Involving Key Stakeholders and Defining User Roles This is the first step toward a DWH's successful implementation and ongoing management. It ensures that the data warehouse aligns with business objectives and is used effectively throughout the company. Business executives, department heads, IT leaders, data analysts, and end users all interact with the DWH, so it must fit their needs. Communicating with them about their expectations and acting on their feedback is crucial. With RBAC (Role-Based Access Control), you'll provide data security and appropriate user access depending on business usage. For instance, a data analyst's dashboard will differ from a business user's because their goals differ. Implementing Effective Data Governance Strategies The data governance framework helps define the relationships between people, processes, and technologies.
It develops and implements clear policies and standards for data management, establishing roles like data owners, stewards, and custodians to oversee data quality control, security, and compliance, and creating a data-driven culture built on automated governance processes. Choosing the Right Data Warehouse Schema Design A DWH schema is a logical description of the entire database structure used in a data warehouse. It defines the relationships between different data types and impacts how data is stored, organized, and retrieved. A well-designed schema optimizes data retrieval and analysis and ensures that the data warehouse aligns with the business's analytical and reporting needs. The list below shows the schema types and their characteristics: Star Schema : Simple, with a central fact table connected to dimension tables. It fits simple to medium-complexity queries and is easy to use. Snowflake Schema : Dimension tables here are divided into sub-dimension tables. It's suitable for complex queries but is more complicated to maintain. Galaxy Schema (or Fact Constellation) : Consists of multiple fact tables that share many dimension tables. It suits complex businesses with varied reporting needs across different subject areas. Before selecting the schema type, define your current and future business requirements, like possible growth in data volumes and changes in data structures. Consider that more normalized schemas like snowflake maintain higher data integrity but at the cost of query complexity. At the same time, less normalized schemas, like star, can be more redundant, but depending on reporting needs, this may be acceptable. Utilizing ELT (Extract, Load, Transform) Over Traditional ETL Modern DWHs handle massive amounts of data and complex processing tasks.
ELT shifts the transformation workload to the data warehouse, providing users with advantages like scalability, cost-saving, speed, efficiency, data transformation flexibility, data integrity maintenance, etc. Let\u2019s see the details: Scalability : A common scenario for ELT is big data projects. Such scenarios are more scalable for handling large data volumes and eliminate the need for a separate transformation layer, which can become a bottleneck in traditional ETL. Cost saving : By leveraging the computational power of cloud data warehouses, ELT reduces the need for additional processing and transformation infrastructure. Speed and efficiency : In ELT, data is loaded into the warehouse immediately after extraction, making it available faster. Transformation happens after the data is already in the warehouse, reducing the time to insight. At the same time, modern data warehouses perform parallel processing, allowing for more efficient and faster data transformations within the warehouse. Data transformation flexibility : ELT provides ad-hoc transformation abilities. Since the raw data is already in the warehouse, you may transform it multiple times and in different ways to suit various analytical needs. Data integrity maintenance: Storing raw data in the warehouse before transformation helps maintain data integrity and provides a complete historical record. Emphasizing Iterative Testing and Continuous Improvement Develop small, well-defined task(s) in a short cycle, test it, get feedback from key stakeholders, and then improve it in a new iteration without any pain for end users. Emphasizing iterative testing and continuous improvement in a data warehousing environment ensures that the system remains effective, efficient, and aligned with business objectives. It optimizes performance, reduces risks, and fosters a culture of innovation and responsiveness to change. 
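The load-then-transform flow described above is easy to see in miniature: land the raw extract in the warehouse first, then reshape it in place with SQL. A minimal sketch using Python's built-in sqlite3 module as a stand-in warehouse (the table and column names are illustrative, not from any specific product):

```python
import sqlite3

# Stand-in "warehouse": in a real ELT pipeline this would be BigQuery, Snowflake, etc.
wh = sqlite3.connect(":memory:")

# Extract: raw records arrive from a source system.
raw_orders = [("2024-01-05", "EU", 120.0),
              ("2024-01-06", "US", 80.0),
              ("2024-01-07", "EU", 45.5)]

# Load: land the raw data in the warehouse as-is, with no upfront transformation.
wh.execute("CREATE TABLE raw_orders (order_date TEXT, region TEXT, amount REAL)")
wh.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: reshape inside the warehouse with SQL, after the data is already there.
wh.execute("""
    CREATE TABLE orders_by_region AS
    SELECT region, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM raw_orders
    GROUP BY region
""")

for row in wh.execute("SELECT * FROM orders_by_region ORDER BY region"):
    print(row)
```

Because the raw table stays in place, the final `CREATE TABLE ... AS SELECT` step can be re-run with different logic at any time, which is exactly the transformation flexibility described above.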
Advantages of Cloud-Based Data Warehousing Solutions Cloud-based DWHs are scalable, flexible, accessible from anywhere, easy to use, adaptable, and reconfigurable as business needs change. Let's expand the list of their advantages. Dynamic Scaling : DWHs can scale up or down on demand, ensuring that you have the necessary resources during peak times and are not paying for unused capacity. Access and Data Sharing : Cloud-based solutions are accessible from anywhere with an internet connection, allowing data sharing with different departments and external partners. High Performance and Analytics Optimization : These platforms offer high-performance computing resources that efficiently handle large volumes of data and complex queries. They're often optimized for analytics and big data processing, providing faster insights. Strong Security Measures and Compliance : They provide strong measures like encryption, identity management, and network security, and comply with various regulations and standards, including HIPAA, GDPR, ISO 27001, etc. Advanced Analytics and Integration : Many cloud data warehouses have built-in analytics and business intelligence tools, allowing easy integration with other cloud-based services and applications. Pay-as-You-Go Pricing : With cloud DWHs, you typically pay only for the resources you use, which can be more cost-effective than the upfront and maintenance costs of an on-premises warehouse. Key Factors to Consider in Data Warehousing The table below displays the key factors businesses have to consider while creating and working with DWHs.

| Factor | Considerations |
| --- | --- |
| Aligning with Business Requirements and Goals | Ensure the data warehouse aligns with overall business objectives and strategies. Discover the specific data needs of different departments and how they will use the DWH. |
| Cost Considerations and Budget Management | Consider the costs associated with the data warehouse's implementation, maintenance, and scaling. Evaluate the potential ROI (return on investment). |
| Evaluating Technical Capabilities and Technological Suitability | Choose the database technology and BI tools that best fit your data warehousing needs. Decide between cloud-based or on-premise solutions based on cost, scalability, and maintenance needs. |
| Ensuring Accessibility, Speed, and Efficiency | Make the data warehouse accessible to non-technical users, and provide training on how to utilize it effectively. Optimize for fast query performance to support business intelligence and analytics. Use real-time data processing solutions, along with ETL processes and seamless data integration, for timely and accurate warehouse updates. |
| Planning for Scalability and Future Growth | Ensure the system can scale up to increasing data volumes and user queries. |

The Role of Skyvia Data Integration in Data Warehousing To "make friends" with a DWH and apps like Salesforce, HubSpot, Mailchimp, etc., you need a fast, simple, and user-friendly solution that can [replicate data](https://skyvia.com/blog/top-data-replication-tools/) from the app into the DWH and update data back into the app in a reverse ETL scenario. Skyvia fits these requirements entirely, and then some. Let's explore Skyvia's [data integration](https://skyvia.com/data-integration/) abilities in data warehousing in detail. [Skyvia](https://skyvia.com/) is a universal, no-code, cloud-based data management solution supporting a wide range of data integration scenarios of any complexity, including ETL, ELT, reverse ETL, data migration, one-way and bidirectional data sync, workflow automation, data sharing via REST API, backups for cloud apps, mapping, recovery, etc.
The app is [simple to use](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), even for non-techs, and it saves you the time and money of staff training and additional coding. The tool connects to [180+](https://skyvia.com/connectors/) data sources, including databases, cloud services, and CRMs like Salesforce, which is crucial for gathering the disparate data needed for comprehensive data analytics. You may use Skyvia for free or choose a flexible [pricing](https://skyvia.com/pricing/) plan as your business needs grow. Skyvia's pricing model is pay-as-you-go, ranging from $15/mo for the Basic plan to $399/mo for the Professional one, while the Enterprise plan offers your business top performance. Conclusion Data warehousing is an essential part of the analytical process for any business that runs a zoo of different services and needs a single source of truth. Choose a good, trusted service that meets your business requirements, and everything will work like clockwork. If the words that describe your company's expectations in the DWH area are 'accessibility,' 'simplicity,' and 'honest price,' try Skyvia in action. [Data Integration](https://skyvia.com/blog/category/data-integration/) Best SSIS Alternatives in 2025: Comparing ETL & Data Integration By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) April 4, 2025 SQL Server Integration Services (SSIS), an honorable member of the Microsoft family of products, has made a name for itself as a capable and reliable ETL tool. However, its close ties with Microsoft take their toll – particularly in terms of compatibility, licensing costs, and platform dependency. These limitations often drive organizations to seek more flexible options. In this article, we'll explore the top SSIS alternatives in 2025 to help you strike a balance between your budget and data integration needs.
Along with other products, our comparison includes Skyvia, a universal data integration platform celebrated for its ease of use and broad compatibility. Table of contents What is ETL? What is SSIS? What is the Difference Between ETL and SSIS? Criteria for Choosing an SSIS Alternative Top SSIS Alternatives & Competitors Skyvia Talend Azure Data Factory AWS Glue Boomi SnapLogic Workato SSIS vs. ETL: Which Is the Future of Data Transformation in 2025? Conclusion What is ETL? [ETL](https://skyvia.com/learn/what-is-etl), or Extract, Transform, Load, is a three-stage data integration process that involves collecting data from different sources, reshaping it into a required format, and loading it into a target destination. The importance of ETL for businesses is enormous, as it drives core activities aimed at achieving data consistency, enhanced analytics, and a holistic view across disparate channels. The critical nature of ETL creates demand for up-to-date tools capable of handling modern challenges, which in turn increases supply: there are numerous ETL solutions available for every need, budget, and skill level. Some of them are specifically optimized for particular platforms, such as [PostgreSQL](https://skyvia.com/blog/top-etl-tools-for-postgresql/) or [Snowflake](https://skyvia.com/blog/snowflake-etl/). So, what kind of tasks are [ETL tools](https://skyvia.com/blog/etl-tools/) capable of? Provide a unified interface for data sources and destinations. Apply transformation logic to the source data based on target data source requirements. Handle multiple data sources and formats in a single workflow. Include error-handling logic to eliminate bad data without any user intervention. Perform data transformations such as sorting, filtering, grouping, aggregation, data cleaning, and duplicate removal. What is SSIS?
[SSIS](https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver15) is an integral part of SQL Server, designed to perform data integration tasks within the Microsoft suite, with a particular focus on ETL. This defines SSIS's core functions:

- Extract data from different sources, such as flat files, XML files, Microsoft Excel workbooks, and raw data files. SSIS also supports retrieving data from relational databases by accessing tables and views or by executing queries.
- Perform various transformations, including business intelligence logic, column value updates, and rowset operations.
- Load the data into one or more destinations, such as databases, flat files, or other supported storage systems.

One of the great features of SSIS is its ability to execute tasks in serial or parallel mode. This is managed through precedence constraints, which define the conditions under which tasks run – for example, whether a previous task ends in success, failure, or completion. Let's say you have a sequence of tasks in your SSIS package:

- Task A (e.g., copy a file)
- Task B (e.g., load data into a database)
- Task C (e.g., send a notification email)
- Task D (e.g., log the outcome)

With precedence constraints, you can configure their execution in the desired order, like this:

- Run Task B only if Task A succeeds.
- Run Task C only if Task A fails.
- Run Task D regardless of whether Task A succeeded or failed (i.e., on completion).

SSIS's built-in capabilities make it a popular tool among database administrators who manage [SQL Server](https://skyvia.com/blog/sql-server-etl-tools/) environments. Indeed, it has everything you need for designing maintenance tasks – it's like having a toolbox built just for your data engine!

- Direct, native integration with SQL Server.
- Ability to schedule and automate routine tasks, and define their execution order.
- Built-in set of predefined maintenance tasks.
- Logging and error-handling mechanisms.
- Ability to extend workflows using custom scripts and integrations.

History of SSIS

SSIS has a long history of improvements in features, reliability, speed, and stability. The chart below illustrates the main enhancements introduced in each SSIS version.

What is an SSIS Package?

An SSIS package is a collection of components for an ETL task. You can think of it as a project file that holds all the logic and elements needed to perform a specific ETL operation. Inside that package, you'll typically find:

- Source and destination connections: SSIS provides built-in connection managers that must be configured for a package to connect to specific data sources.
- Tasks: units of work (e.g., Execute SQL task, File System task, Script task).
- Control Flow: the top-level workflow of the package that defines the sequence and logic of task execution.
- Data Flow: a specialized sub-process within the Control Flow focused on ETL operations.
- Configurations: allow storing package settings externally, e.g., in XML files, environment variables, or SQL Server tables.
- Variables: used to store values dynamically during package execution.
- Parameters: allow the package to accept values from outside (e.g., from a command line or the SSIS catalog).
- Event Handlers: allow defining custom logic that runs in response to certain events during package execution.

The diagram below depicts the high-level structure of an ETL package in SSIS:

Key Features of SSIS

- Source and destination connections. Multiple built-in source and destination connectors enable integration with various data sources, including: flat files (TXT, CSV); Excel workbooks (XLS, XLSX); relational databases (SQL Server, Oracle, MySQL); OLE DB and ODBC; Azure services; FTP servers; SMTP mail servers; WMI (Windows Management Instrumentation); SMO (SQL Server Management Objects).
- SSIS transformation tasks.
SSIS provides a rich [transformation framework](https://learn.microsoft.com/en-us/sql/integration-services/data-flow/transformations/integration-services-transformations?view=sql-server-ver16) that supports data modification at different levels, including:

- Logical and physical transformations.
- Row-level and set-level operations.
- Both declarative, UI-driven logic and custom procedural code.

- Graphical interface and workflow. SSIS is a workflow-oriented tool that defines control and data flow tasks using precedence constraints. It offers an intuitive graphical interface through SQL Server Data Tools to configure tasks easily. Note that while SSIS provides built-in tools to monitor package execution and view logs, they are not centralized by default and require configuration at the package level.
- Package deployment and reuse. The tool supports two deployment models: the legacy package deployment model and the newer project deployment model. Both enable deploying ETL processes across multiple servers and facilitate package reuse.
- Comprehensive documentation and support. As with other Microsoft products, SSIS has extensive documentation that includes official guides, tutorials, and reference pages.

Cons of SSIS

- Limited cross-platform support. You cannot develop SSIS packages on Linux. Even though you can deploy and run them on SQL Server 2019 for Linux, certain features remain unavailable, such as OLE DB, Excel sources, and some UI components.
- Steep learning curve for complex logic. SSIS is relatively easy for basic ETL tasks, like loading CSV files into SQL Server. However, implementing complex business logic or error handling often requires developer-level expertise in .NET scripting and the SSIS expression language.
- Heavy reliance on Microsoft tools. SSIS is tightly integrated into the Microsoft ecosystem.
On one hand, this allows it to extend its functionality through additional tools; on the other, it makes SSIS heavily dependent on them. For example, it requires SQL Server Management Studio to view logs or SQL Server Data Tools to inspect execution logic.
- Cumbersome JSON handling. SSIS has limited native support for JSON, forcing users to parse nested structures manually. This makes the process complex and labor-intensive, especially compared to modern tools with built-in JSON mapping.
- High memory usage under load. SSIS memory consumption can be quite high when running multiple packages in parallel, particularly if each package performs large data transformations.
- Limited versioning and CI/CD support. SSIS doesn't integrate natively with modern version control systems or DevOps pipelines. CI/CD is possible but requires extra configuration with external tools like Azure DevOps or Git.
- Not cloud-native. Although SSIS can run in Azure via the integration runtime in Azure Data Factory, it lacks the agility and scalability typical of cloud-native ETL tools.

What is the Difference Between ETL and SSIS?

Let's make it clear from the start: these concepts are NOT directly comparable, and here is why. ETL is primarily a data integration process. Though in broader contexts it can refer to the techniques, methods, or architectural patterns used to implement that process, it is still best described in those terms. SSIS, on the other hand, is just one of many tools designed to implement ETL. Using the analogy of travel and vehicles, ETL is like traveling from one place to another – a general concept – while SSIS is like a specific car: just one of many possible means to get you there. It would be more appropriate to compare SSIS to other ETL tools available on the market – and that's exactly what we'll do in the next sections.
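To ground the distinction, here is a minimal, tool-agnostic sketch of the three ETL stages in plain Python – extract from a CSV source, transform (trim, normalize, deduplicate), and load into a target database. The sample data, field names, and `contacts` table are invented purely for illustration; any real ETL tool wraps the same cycle in connectors, mapping, and error handling:

```python
import csv
import io
import sqlite3

# Hypothetical raw CSV input with messy casing, whitespace, and a duplicate row.
raw = "id,email\n1, Ann@Example.com \n2,bob@mail.com\n2,bob@mail.com\n"

# Extract: read records from the CSV source.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: normalize values and remove duplicates (typical ETL-tool tasks).
seen, clean = set(), []
for r in rows:
    email = r["email"].strip().lower()
    if email not in seen:
        seen.add(email)
        clean.append((int(r["id"]), email))

# Load: write the result into a target database (in-memory SQLite here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (id INTEGER, email TEXT)")
db.executemany("INSERT INTO contacts VALUES (?, ?)", clean)
print(db.execute("SELECT * FROM contacts ORDER BY id").fetchall())
# [(1, 'ann@example.com'), (2, 'bob@mail.com')]
```

The point of the sketch is the shape, not the code: every tool reviewed below – SSIS included – is a different vehicle for exactly this extract-transform-load journey.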
Criteria for Choosing an SSIS Alternative

We are going to review seven tools – each one a strong contender as an SSIS alternative. Our selection was based on the following criteria:

- Cloud-native or cloud-ready architecture.
- Support for both [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) workflows.
- Wide integration options, especially with cloud-based data sources.
- Straightforward implementation of complex tasks thanks to a user-friendly interface.
- Scalability and performance.
- Built-in monitoring and logging mechanisms.

Top SSIS Alternatives & Competitors

All the chosen tools meet the demands of modern data workloads while making up for some of SSIS's well-known limitations. The table below provides a quick summary, with more detailed descriptions further down.

| Tool | G2 Crowd rating | Pros | Cons | Pricing |
|---|---|---|---|---|
| Skyvia | 4.8 | Ease of use; multifunctionality; broad connectivity with pre-built connectors and API endpoints; advanced data transformation options; automation and scheduling; monitoring | Feature limitations of the free plan; no real-time integration support | Affordable, with a freemium tier available; paid subscriptions start at $79 per month. |
| Talend | 4.3 | User-friendly UI; support for direct Java scripting; JDBC API-based connectivity with databases | Steep learning curve; performance issues with large data volumes; Java dependency | Four subscription plans; pricing details are not publicly available. |
| Azure Data Factory | 4.6 | Cloud-native and fully managed; intuitive UI; event-driven data processing; integrated monitoring and alerting; seamless connectivity with other Azure services | Steep learning curve for users unfamiliar with Azure; dependency on Azure services; cost considerations with heavy workloads | Pay-as-you-go. |
| AWS Glue | 4.3 | Fully managed and serverless; both visual and code-based UI; automation and scheduling; event-driven data processing | Steep learning curve for users unfamiliar with AWS; designed for AWS-native workloads; unpredictable costs | Pay-as-you-go. |
| Boomi | 4.4 | Drag-and-drop interface; extensive connectivity; real-time data processing; event-driven processing | Performance issues with large datasets; limited support for complex integration scenarios; cost considerations | Five subscription plans, starting at $550 per month. |
| SnapLogic | 4.3 | Low-code UI; AI-driven integrations; extensive library of connectors; real-time data processing; automation | High costs; performance issues with large datasets | Fixed-rate pricing model; details are not publicly available. |
| Workato | 4.7 | User-friendly UI; extensive library of connectors; AI and ML capabilities; event-driven integrations; real-time data processing | Steep learning curve; cost considerations | Platform fee plus usage fee; pricing details are not publicly available. |

Skyvia

Skyvia is a no-code cloud data integration platform for versatile data-related tasks, including ETL, ELT, reverse ETL, data migration, and one-way and bidirectional data synchronization. Apart from that, Skyvia offers services like [data backup](https://skyvia.com/backup), [query execution](https://skyvia.com/query) via a visual builder or SQL, and data access through OData interfaces. There are over 200 connectors available, including cloud apps, databases, and data warehouses. Skyvia's suite of [data integration tools](https://skyvia.com/blog/data-integration-tools/) caters to various scenarios:

- [Import](https://skyvia.com/data-integration/import): load data from source connections into cloud applications and databases.
It offers [advanced mapping settings](https://docs.skyvia.com/data-integration/import/) for data transformations, preserves relationships between imported files and objects, supports INSERT, UPSERT, UPDATE, and DELETE operations, and provides detailed logging, scheduling, and error-handling functionality.
- [Export](https://skyvia.com/data-integration/export): extract data from databases or cloud applications to CSV files, including both the primary source and its related objects.
- [Replication](https://skyvia.com/data-integration/replication): create and maintain copies of cloud application data in relational databases. The tool [automatically creates](https://docs.skyvia.com/data-integration/replication/) database tables for cloud data, and support for incremental updates keeps the replicated copy current.
- [Synchronization](https://skyvia.com/data-integration/synchronization): perform bidirectional synchronization between cloud applications and databases.
- Complex ETL/ELT scenarios: design data pipelines that involve multiple data sources and conditional logic. With Skyvia's [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) tools, you can combine data from several sources, load it into multiple destinations, and perform multistage transformations along the way.

Reviews

Skyvia holds a [rating](https://www.g2.com/products/skyvia/reviews) of 4.8 out of 5 on the G2 Crowd platform. Users particularly appreciate its ease of use, simplicity of data operations, competitive pricing, and professional customer support. At the same time, they would like a broader set of connectors and support for real-time integrations.

Best for

- No-code data integration: based on customer feedback, Skyvia ranks among the top 10 most user-friendly ETL tools.
It's ideal for non-technical users who need to build ETL/ELT pipelines without writing code.
- Scheduled and automated data transfers: with built-in scheduling, Skyvia supports recurring import, export, and sync tasks.
- Cloud-to-cloud and cloud-to-database scenarios: Skyvia's cloud-native nature, wide library of connectors, and automatic schema handling make it excel at connecting cloud applications with cloud data warehouses (e.g., Snowflake, BigQuery) and relational databases (e.g., MySQL, PostgreSQL, SQL Server).
- Building complex data pipelines: with the Data Flow and Control Flow tools on board, Skyvia enables intricate integration scenarios involving multistage transformations, conditional logic, and multiple sources and target destinations.

Pros

- Ease of use and no-code data integration.
- Multifunctionality with diverse integration scenarios.
- Affordable pricing.
- Advanced data mapping and transformation.
- Flexible scheduling and automation.
- Scalability and performance.

Cons

- Access to advanced features requires higher subscription plans.
- Streaming data sources are not supported at the moment.

Pricing

Skyvia's [pricing](https://skyvia.com/pricing) for data integration is determined by the number of records processed per month and the scheduling frequency of integration tasks. All plans include features such as advanced data mapping, scheduling, and error handling. The available plans are:

| Plan | Records per month | Scheduled integrations per day | Price |
|---|---|---|---|
| Free | Up to 10,000 | Once per day; up to 2 | $0 |
| Basic | Up to 5M | Once per day; up to 5 | $79 |
| Standard | Up to 5M | Once per hour; up to 50 | $159 |
| Professional | Up to 5M | Once per minute; unlimited | $199 |
| Enterprise | Customizable upon specific requirements | | Tailor-made offer |

Talend

Talend is an ETL data integration tool compatible with both on-premises and cloud sources. It runs on the Java platform and utilizes Java for advanced scripting.
If something can't be done out of the box with prebuilt components, Talend allows writing code directly to supplement the missing functionality. The platform uses metadata to store connection strings for XML, Excel, web services, FTP, etc. In other words, it keeps a record of all the connection details so they don't have to be set up manually every time. In Talend, ETL processes are defined as jobs, while SSIS defines them as packages. The solution offers scheduling, logging, and error-handling services similar to those of an SSIS package.

Reviews

Talend Open Studio's rating on the G2 Crowd platform is 4.3 out of 5. Among its advantages, users often mention a straightforward, easy-to-use interface and the ability to customize workflows with Java scripting. At the same time, they report issues with connecting to cloud-based sources, feature limitations, and a lack of community support for the free version.

Best for

- Data engineers and integration teams: those who can leverage custom Java scripting for building complex data pipelines.
- Enterprises with hybrid or multi-cloud environments: Talend supports both on-premises and cloud systems.
- Teams needing open architecture and customization: Talend's open-source roots and Java foundation enable extending ETL logic with other platforms or services.

Pros

- User-friendly interface with straightforward job design.
- Support for custom logic with direct Java scripting.
- Extensive compatibility with database systems thanks to the standard Java Database Connectivity (JDBC) API.
- Advanced error handling with custom Java scripts.

Cons

- Steep learning curve.
- Performance issues with large data volumes.
- Heavily Java-dependent, which can be a limitation for teams with little Java expertise.
- Commercial licensing costs.

Pricing

After being open source for nearly 20 years, the manufacturer ceased supporting the free version, Talend Open Studio.
As of January 31, 2024, users are encouraged to work with Talend on a commercial basis or [opt for alternatives](https://skyvia.com/blog/talend-alternatives/). According to Talend, a Qlik company, this step enabled them to provide higher service levels for their users in terms of security, compliance, and a feature set for enterprise deployments. There are four subscription plans that vary depending on the number of users, data volume, and specific requirements. That said, pricing is not publicly available and requires contacting Talend for accurate information.

- Starter: rapid data movement from SaaS applications and databases.
- Standard: includes real-time synchronization using change data capture.
- Premium: offers automated data transformation and supports various architectures.
- Enterprise: comprehensive quality, governance, and AI capabilities.

Azure Data Factory

Azure Data Factory (ADF), a browser-based ETL tool from Microsoft, is positioned as a hybrid data integration solution designed to handle enterprise-scale data needs. Like SSIS, it supports aggregations, fuzzy lookups, derived columns, and other visually designed data transformations. ADF mapping data flows run on Azure-managed Spark clusters to transform and process data. The solution supports three types of triggers to initiate pipeline executions:

- Event-based triggers
- Tumbling window triggers
- Scheduled batch triggers

It also integrates with Azure Databricks and Azure Synapse Analytics for more complex analytics workloads.

Reviews

Azure Data Factory rates 4.6 out of 5 on the G2 Crowd platform. Users particularly appreciate its ease of use, versatile data integration capabilities, and cross-connectivity with other Azure services. Common areas for improvement include feature limitations and cost issues.

Best for

- Enterprises needing scalable cloud ETL/ELT: the tool is specifically designed to process large volumes of data across hybrid and multi-cloud environments.
- Organizations already in the Azure ecosystem: ADF's tight integration with other Microsoft cloud services makes adoption nearly seamless.
- Use cases requiring flexible triggering and scheduling: suits everything from hourly data syncs to event-driven processing.

Pros

- Cloud-native and fully managed.
- Intuitive UI with a drag-and-drop pipeline designer.
- Broad connectivity with over 90 built-in connectors.
- Pipeline automation and scheduling with triggers.
- Integrated monitoring and alerts.
- Supports the lift-and-shift of SSIS packages.

Cons

- Steep learning curve for users unfamiliar with Azure.
- Dependency on other Azure services.
- Rapidly increasing costs with high-volume workloads.

Pricing

Azure Data Factory uses a pay-as-you-go pricing model. There are no subscription plans – the cost is based on usage across three main components:

| Component | Billed by |
|---|---|
| Pipeline orchestration | Number of activity runs (per 1,000) |
| Data flow execution | vCore-hours (virtual CPU cores) |
| Data movement (copy activity) | Data Integration Unit (DIU) hours, based on the volume of data moved and the duration of the copy activity |

As with other cloud services, Azure Data Factory pricing is complex, with many variables involved. In addition to activity runs and data volume, costs depend on factors like the integration runtime, cluster size, instance type, and compute resources. Pricing may also vary by region and usage pattern. For accurate cost estimates, it's recommended to consult Azure specialists.

AWS Glue

AWS Glue is a fully managed, serverless [ETL and data integration](https://skyvia.com/blog/data-integration-and-etl/) tool from Amazon Web Services, designed to handle big data and analytics workloads. It offers over 100 data sources to discover and connect to. The tool includes AWS Glue Studio for no/low-code development and supports Python/Scala scripting for advanced users.
Among its prominent features are built-in crawlers that identify data formats and suggest table schemas for storing your data in the Glue Data Catalog.

Reviews

AWS Glue's rating on the G2 Crowd platform is 4.3 out of 5. Users primarily like its cross-platform connectivity with other AWS services and robust automation options. Among the drawbacks, they often mention feature limitations – particularly in terms of transformation components and support for only two programming languages – as well as a somewhat convoluted pricing structure.

Best for

- Companies already using AWS: Glue seamlessly integrates with other services within the AWS ecosystem, such as S3, RDS, Redshift, Lake Formation, and more.
- Big data and analytics workloads: particularly useful for S3-based data lakes, Redshift pipelines, and Athena querying.
- Teams with mixed skill levels: the visual interface supports analysts, while developers can leverage code-based environments for advanced scenarios.

Pros

- Fully managed and serverless.
- Visual and code-based development.
- Powerful Data Catalog that integrates with Athena, Redshift, and other AWS services.
- Automatic schema discovery.
- Supports event-driven ETL workflows.

Cons

- Steep learning curve for users unfamiliar with the AWS environment.
- High costs that can be hard to estimate in advance, especially for dynamic workflows.
- Clear optimization for AWS-native workflows makes it less flexible with external services.

Pricing

AWS Glue employs a pay-as-you-go model with pricing based on the resources consumed during various operations. Key components include:

| Component | Rate | Billing |
|---|---|---|
| Apache Spark and Spark streaming jobs | $0.44 per Data Processing Unit (DPU) per hour | Per second, with a 1-minute minimum for Glue versions 2.0 and later. Each job defaults to 10 DPUs but can be adjusted based on workload requirements. |
| Python shell jobs | $0.44 per DPU-hour | Per second, with a 1-minute minimum. Allocated 0.0625 DPU by default. |
| Crawlers | $0.44 per DPU-hour | Per second, with a 10-minute minimum per crawler run. |
| Storage | Up to 1 million objects per month free; $1.00 per additional 100,000 objects per month | Monthly |
| Requests | Up to 1 million requests per month free; $1.00 per additional million requests | Monthly |
| Interactive sessions | $0.44 per DPU-hour | Per second, with a 1-minute minimum. Requires a minimum of 2 DPUs, with a default of 5 DPUs. |
| Development endpoints | $0.44 per DPU-hour | Per second, with a 10-minute minimum. Requires a minimum of 2 DPUs, with a default of 5 DPUs. |

Boomi

Boomi is an integration platform-as-a-service (iPaaS) designed to connect systems across diverse environments, including hybrid and multi-cloud. Its ease of use, scalability, and extensive connectivity options make it a top choice for organizations with dynamic data needs.

Reviews

Boomi rates 4.4 out of 5 on the G2 Crowd platform. Among its strong points, users typically name connectivity, an intuitive UI, scalability, and simple error handling. However, they also report issues with high-volume integrations, limited support for complex logic, and poor mapping capabilities.

Best for

- Small to medium-sized businesses: companies needing a low-code, easy-to-use integration platform to connect popular cloud applications.
- Hybrid IT environments: Boomi's support for both on-premises and cloud-based systems appeals to companies with mixed integration needs.
- Organizations with diverse integration needs: beyond integration, Boomi offers capabilities like API management, master data management, and data preparation.

Pros

- Ease of use with a drag-and-drop interface.
- Extensive connectivity with over 1,000 supported connections.
- Support for event-driven architecture.
- Real-time data processing.
- Rich feature set.

Cons

- Pricing can be a concern for companies with limited budgets.
- Performance issues with large volumes of data.
- Limited support for complex integration scenarios.
Pricing

Boomi's pricing structure includes five subscription tiers, referred to as integration editions, with a free trial available. The starting price for the Base Edition is $550 per month, rising to $8,000 or more for the Enterprise and Enterprise Plus editions. In addition, pricing is influenced by these factors:

- The number and type of connectors required.
- The support level.
- Specific requirements related to data volume and additional features.

SnapLogic

SnapLogic is an intelligent iPaaS for data operations. Positioned as a pioneer in generative integration, SnapLogic utilizes LLM-powered applications to make integrations accessible to users of all technical levels. It offers a user-friendly, low-code interface with pre-built connectors known as Snaps. What sets Snaps apart from traditional connectors is that they come with embedded logic for interacting intuitively with specific applications or data sources. Snaps require minimal configuration and are maintained and updated by SnapLogic.

Reviews

The platform's rating on G2 Crowd is 4.3 out of 5. Users mention its ease of use, simple pipeline implementation, and API integration capabilities as key strengths. Commonly cited areas for improvement include slow performance when handling large volumes of data, the need for manual debugging, and connectivity issues during long pipeline executions.

Best for

- Businesses prioritizing ease of use: SnapLogic's low-code, intuitive interface and AI-led integrations enable quick deployments without coding expertise.
- Organizations with scalable integration needs: companies that anticipate growth will benefit greatly from SnapLogic's cloud-native scalability.

Pros

- Drag-and-drop UI.
- Code-free, AI-driven integrations.
- Flexible connectivity with over 700 intelligent connectors.
- Real-time replication support.
- Event-driven processing.

Cons

- Performance issues when processing large datasets.
- Relatively high costs compared to alternatives.

Pricing

SnapLogic employs a fixed-rate pricing model, offering Business and Enterprise packages for a predictable monthly fee. Specific pricing details are not publicly available, and users are encouraged to contact SnapLogic directly or consult an authorized partner.

Workato

Workato is an AI-driven iPaaS with a strong focus on data integration and business workflow automation. Unlike the dedicated tools described above, Workato is a unified platform where data integration is just one – albeit a leading – aspect of its broader functionality. Overall, the platform delivers AI-powered solutions for common departmental processes across IT, marketing, sales, HR, finance, and customer support.

Reviews

The platform's rating on G2 Crowd is 4.7 out of 5. Users particularly appreciate its rich functionality, fast delivery of integration tasks, and responsive customer support. Among the disadvantages, they mention the high cost and a somewhat unclear pricing structure.

Best for

- Businesses seeking no-code platforms: teams with limited coding expertise will benefit from Workato's user-friendly, simple-to-use interface.
- Companies prioritizing AI-driven automation: the platform incorporates AI features to predict and suggest automation opportunities within the context of your business.
- Enterprises requiring a unified integration solution: Workato provides a single source of truth, enabling teams to unify their experience across departments.

Pros

- User-friendly interface.
- Wide integration options with over 1,000 pre-built connectors.
- AI and ML capabilities.
- Real-time data processing.
- Support for event-driven architecture.

Cons

- Pricing transparency.
- Cost considerations for small businesses.
- Steep learning curve.

Pricing

Workato's pricing structure revolves around two main components: the workspace (a platform plan fee) and tasks (a usage fee).
Although specific pricing details are not publicly disclosed, reports indicate that subscription fees for Workato typically range from $15,000 to $50,000 per year.

SSIS vs. ETL: Which Is the Future of Data Transformation in 2025?

In 2025, SSIS remains a reputable part of the ETL landscape, with solid functionality that is still sufficient for traditional batch-oriented workflows. However, ETL is evolving fast beyond the model for which SSIS was built. In today's dynamic data environments, it is more about agility, cloud, and real-time capability than the classic "extract, transform, load" cycle. With that said, the future of data transformation is aligned with the demands of modern data workflows, including:

- Real-time and event-driven architectures.
- Cloud-first design.
- AI-driven automation and pipeline optimization.
- Compatibility with DevOps environments: integration with CI/CD pipelines and infrastructure-as-code.

Conclusion

In this article, we've taken an in-depth look at seven strong SSIS alternatives – each with its own strengths and designed to meet different integration needs. In this constellation, Skyvia shines as a feature-rich yet affordable ETL solution capable of handling various data-related tasks. Trusted by thousands of data-driven organizations, Skyvia remains a top choice for data integrations of any complexity.

FAQ for SSIS Alternatives

Is SSIS still relevant in 2025?

Yes, SSIS remains relevant for on-premises and hybrid environments, but its limitations in cloud-native and real-time use cases make modern alternatives more attractive for evolving data needs.

What is the best open-source alternative to SSIS?

Talend Open Studio has long been a popular open-source alternative. However, since its free version is no longer officially supported, other options like Apache NiFi or Pentaho may be considered.

Which SSIS alternative is best for cloud data integration?
Azure Data Factory, AWS Glue, and Skyvia stand out for cloud data integration, offering native support for cloud apps, scalable architecture, and built-in connectors for major platforms.

Can SSIS alternatives handle real-time data integration?

Yes, many modern alternatives like SnapLogic, Workato, and Boomi support real-time or event-driven data processing, far beyond SSIS's batch-oriented design.

How difficult is it to migrate from SSIS to an alternative?

Migration complexity depends on your packages. Simple workflows may port easily, but complex SSIS logic often requires a redesign in the target tool. Many vendors offer migration tools or services.

[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/)

With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.
Dropbox and Salesforce Integration
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | March 21, 2025
[Data Loader](https://skyvia.com/blog/category/data-loader/)

Storage limits, high costs, scattered files, and manual file transfers often slow down document management in Salesforce. A Dropbox-Salesforce integration eliminates these bottlenecks by enabling seamless file access, automatic synchronization, and improved team collaboration.

Document handling is critical for sales, customer support, and legal teams. Contracts, invoices, support tickets, and marketing assets need to be easily accessible, securely stored, and always up to date. Without integration, businesses face:

- Time-consuming manual file uploads to multiple platforms.
- Disorganized document storage, leading to lost or outdated files.
- Limited Salesforce storage space and rising costs for additional storage.
- Collaboration inefficiencies, with teams struggling to access the latest files.

With Dropbox-Salesforce integration, businesses can:

- Automatically sync Salesforce attachments to Dropbox to free up CRM storage.
- Access Dropbox files directly from Salesforce records, reducing the need to switch platforms.
- Enable real-time document collaboration, ensuring teams work with the latest versions.
- Automate document management, reducing errors and manual effort.

This guide covers two popular approaches to integrating Salesforce and Dropbox: the native application and Skyvia. We'll show both in detail to help companies decide which method best fits their business case.

Table of contents

- What is Salesforce?
- What is Dropbox?
- Why Do You Need Dropbox Salesforce Integration?
- How to Connect Salesforce and Dropbox: Two Effective Methods
- Method 1: How to Use Dropbox for Salesforce Application
- Method 2: Dropbox and Salesforce Integration Using Skyvia
- Scenario 1. Importing Product Data from Dropbox to Salesforce
- Scenario 2. Exporting Salesforce Data to Dropbox
- How to Automate Dropbox and Salesforce Integration
- Benefits of Connecting Dropbox and Salesforce
- Summary

What is Salesforce?

[Salesforce](https://skyvia.com/connectors/salesforce) is the undisputed leader in cloud-based CRM, and for good reason. It's a full-blown ecosystem that helps businesses:

- Track leads.
- Automate sales workflows.
- Manage customer interactions.
- Optimize marketing campaigns.

All these activities are possible from a single platform. It's extremely customizable: whether you're tweaking fields, building custom objects, or deploying Apex code and Lightning components, Salesforce adapts to your needs. And it's not just about storing data. Users can generate reports, visualize trends, and predict customer behavior with built-in analytics and AI-powered insights. Plus, it plays well with third-party apps like Dropbox, so integration is smooth.
Whether you're a sales rep, a marketer, or a hardcore developer, Salesforce gives you the tools to work smarter, not harder.

What is Dropbox?

[Dropbox](https://skyvia.com/connectors/dropbox) is a top cloud storage and file-sharing platform trusted by millions worldwide. It's a secure, easy-to-use solution that helps individuals and businesses:

- Store and access files from anywhere with an internet connection.
- Sync files across devices automatically.
- Back up important data to prevent loss.
- Collaborate on shared documents in real time.

With it, you don't have to worry about losing files: the platform keeps everything backed up and in sync across all devices. Its team-friendly tools make collaboration a breeze, allowing multiple users to:

- Work on shared projects.
- Track recent activity.
- Manage file versions effortlessly.

And security? Top-notch. With customizable sharing permissions and file recovery for up to 30 days, your data stays protected. Dropbox also integrates seamlessly with business tools like Salesforce, making it an essential part of any modern workflow. Whether you're working solo or as part of a team, Dropbox helps you stay organized, productive, and connected.

Why Do You Need Dropbox Salesforce Integration?

The integration allows direct access, management, and sharing of Dropbox content from Salesforce. Users no longer have to switch between Salesforce and Dropbox to access or upload pictures, files, or documents. Instead, they can quickly retrieve the correct information from Dropbox while working in Salesforce. For example:

- Sales Teams. Easily attach Dropbox files (contracts, proposals, product catalogs) to Salesforce opportunities and share them directly from the opportunity record with team members, so sales reps always have the latest materials without searching for files manually.
- Customer Support Teams.
Access case-related files (logs, screenshots, user reports) from Dropbox directly in Salesforce support case records, allowing agents to resolve issues faster by having all necessary documentation in one place.
- Legal and Compliance Departments. Automatically store signed agreements, compliance documents, and audit reports in Dropbox while keeping them linked to relevant Salesforce records. This provides secure storage, regulatory compliance, and easy document retrieval during audits.
- Project Management and Operations. Centralize project files, vendor contracts, and work orders in Dropbox while linking them to Salesforce records to streamline task tracking, project collaboration, and operational workflows, so all stakeholders have real-time access to the latest documents.

How to Connect Salesforce and Dropbox: Two Effective Methods

You may integrate these two platforms with the [Dropbox for Salesforce Application](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N30000000prqLEAQ) (native to the Salesforce AppExchange). However, it has some limitations, such as:

- Slow upload times.
- Limited tech support.
- Poor basic functionality.

Another approach is available with [Skyvia](https://www.youtube.com/watch?v=41JIlV1bP9c), which lets users integrate CSV data from Dropbox with Salesforce quickly. Salesforce administrators can easily set up a mapping between Dropbox and Salesforce and schedule automatic data loading. With [Skyvia's](https://skyvia.com/data-integration/integrate-salesforce-dropbox) Dropbox to Salesforce integration approach, users can enrich data and simplify business processes. Let's walk through both methods to select the best fit for each business story.

Method 1: How to Use Dropbox for Salesforce Application

The Dropbox for Salesforce Application provides a basic yet effective integration for managing files between Salesforce and Dropbox.
This solution allows users to:

- Sync files between Salesforce and Dropbox, ensuring that attachments and documents are always available on both platforms.
- Access Dropbox files directly from Salesforce records, reducing the need to switch between applications.
- Easily share and manage documents, improving collaboration for sales, customer support, and project teams.

Who is it best for?

- Sales teams that want to attach contracts, proposals, and customer documents to Salesforce records.
- Customer support teams that need to link case-related files directly to support tickets.
- Legal and compliance teams that need secure access to signed agreements and policy documents in Salesforce.

Now, let's walk through the step-by-step process of setting up the Dropbox for Salesforce Application.

Install the Dropbox Package for Salesforce

1. Find the Dropbox for Salesforce Application on the [Salesforce AppExchange](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N30000000prqLEAQ): log in to your Salesforce account, open the link, and click the Get It Now button.
2. A page opens asking where you want to install the package. Installing in the Production environment makes the app available to all end users; installing in a Sandbox lets you first test the package on a copy of the production org. Choose the option that best suits your needs, then click the Confirm and Install button.
3. You are redirected to the Salesforce login page to continue the installation.
4. Click the Install button to complete the installation process.

Set Up the Salesforce to Dropbox Connection

After installing the Dropbox for Salesforce Application, you need to configure the settings.
1. Log in to your Salesforce account and click the [Salesforce App Launcher](https://help.salesforce.com/s/articleView?id=xcloud.identity_app_launcher.htm).
2. Search for Dropbox in the Salesforce App Launcher, then click the Dropbox icon.
3. On this screen, click Connect to Dropbox and then click Continue.
4. Log in to Dropbox; remember, you must have Dropbox for Business to proceed with this step.
5. Dropbox for Salesforce requests access to the files and folders in your Dropbox. Click Allow.
6. Before configuring Dropbox for Salesforce, you must create a Remote Site Setting in the org. Click the Create Remote Site Setting button to do it.
7. Configure the page layout by clicking the Update Page Layouts button. As a result, all Account, Contact, Case, and Lead objects will now contain the Dropbox component on their page layouts.

Share Dropbox Folders with Salesforce

To complete the installation of the Dropbox for Salesforce Application, confirm that the folder is shared with Salesforce:

1. Navigate to an Account, Contact, Case, Opportunity, or Lead on the Salesforce dashboard.
2. Access the Dropbox component and click Add Files.
3. Upload a test file.

Note: If you have installed the Dropbox for Salesforce Application correctly, the test file appears in your Dropbox account.

Pros

- Users can open, view, and manage Dropbox files directly from Salesforce records, reducing the need to switch between platforms.
- Unlike custom-built integrations, this application requires minimal setup and is accessible to non-technical users.
- Since it is a native AppExchange solution, it provides a straightforward way for small businesses or teams to integrate Salesforce and Dropbox without additional costs for custom development.
- Centralized document storage helps users organize Salesforce-related files in Dropbox, ensuring a single location for document management instead of files scattered across multiple platforms.

Cons

- Users note that the application is buggy and clunky, with slow upload times.
- Multiple users complain about the [lack of tech support](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N30000000prqLEAQ) when looking for solutions to bugs and error messages.
- All Dropbox files must be saved under a new Salesforce Documents folder rather than linking a record to a specific folder already in Dropbox.
- The application only works with Leads, Accounts, Contacts, Opportunities, and Cases.
- The Dropbox for Salesforce Application is only useful for very basic integration tasks.

Method 2: Dropbox and Salesforce Integration Using Skyvia

[Skyvia](https://skyvia.com/) is a comprehensive no-code SaaS platform that connects to [200+](https://skyvia.com/connectors) data sources and solves diverse data-related tasks: data integration (ETL, ELT, reverse ETL), workflow management and automation, cloud data backup, sync, and API endpoint management. With Skyvia, you can:

- Upload CSV file data to any available Salesforce object, extract Salesforce data to a CSV file, and place this file in any folder in Dropbox.
- Decide which Salesforce object to integrate and which scenario to implement. Skyvia makes it fast and easy.
- Avoid additional installations. All you need is a web browser.
- Try any paid Skyvia [subscription](https://skyvia.com/pricing) with a two-week trial. You don't need a premium plan to use this method.

Who is it best for?

- Enterprises and growing businesses that require customizable and automated data integration rather than just basic file linking.
- Sales and marketing teams that need to sync product catalogs, customer data, and marketing assets between Salesforce and Dropbox efficiently.
- Customer service teams managing large volumes of case-related documents, ensuring files are automatically stored and accessible when needed.
- Finance and compliance departments that require secure data transfers, scheduled backups, and audit-ready documentation stored in Dropbox.
- IT teams and data administrators who need a no-code solution to automate and control data movement between Salesforce and Dropbox without manual intervention.

Below, we show two examples of data integration scenarios using Skyvia.

Note: Before implementing the use cases, you must be [registered](https://app.skyvia.com/register) in Skyvia. Then, create the connections needed for each scenario: log in to Skyvia and connect [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Dropbox](https://docs.skyvia.com/connectors/file-storages/dropbox_connections.html).

Scenario 1. Importing Product Data from Dropbox to Salesforce

Example: A retail company regularly updates its product catalog and needs to sync product lists stored in Dropbox with Salesforce. By importing CSV files into the Salesforce Product2 table, the sales team always has the latest product details, including pricing, stock availability, and descriptions, ensuring accurate sales and inventory management.

Create the Import Data Integration

1. In Skyvia, click + Create NEW and select Import.
2. Click CSV from storage service and choose the Dropbox connection as a Source.
3. Select the Salesforce connection as a Target and optionally enable other available package options.
4. Click Add New to create the package task.
5. On the Source Definition tab, set the CSV mode. In our case, we select the Single File mode. Select the needed CSV file in the CSV Path box and optionally adjust the [CSV options](https://docs.skyvia.com/common-platform-features/working-with-csv.html).
6. On the Target Definition tab, select the target Salesforce object and click the needed operation.
We select the Insert operation. On the Mapping Definition tab, assign the column [mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/); columns with matching names are mapped automatically, and we leave the default mapping as it is. You can optionally set up an automatic package launch on a [schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html). Save the package and run it. You can check the [package results](https://docs.skyvia.com/data-integration/package-run-history.html) on the Monitor and Log tabs.

Scenario 2. Exporting Salesforce Data to Dropbox

Example: A marketing agency manages customer interactions and campaign performance in Salesforce. To generate monthly reports and share them with external partners, the agency exports Salesforce data (such as leads, campaign results, and customer engagement stats) into a CSV file stored in Dropbox. This enables seamless collaboration, easy data access, and external analysis with BI tools.

Create the Export Data Integration

1. In Skyvia, click + Create NEW and select Export.
2. Select Salesforce from the dropdown list as a Source.
3. Click CSV to storage service and choose Dropbox as a Target. Specify the folder to place the file in and adjust the package options if needed. We leave the default values for this scenario.
4. Click Add New to create the package task.
5. Select the Editor Mode. In our case, we use the Simple one. The [Advanced Editor Mode](https://docs.skyvia.com/data-integration/export/how-to-create-export-task.html#advanced-task-editor-mode) is more convenient when you need to run a complicated custom command or a report.
6. Select the source table to export data from. We use Product2 in the example. You can uncheck unnecessary fields and apply filters if needed.
7. On the Output Columns tab, you can rename columns and adjust their order.
8. Save the task and the package, then run the package.
9. Check the package results on the Monitor and Log tabs.
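Skyvia does the name-based column mapping for you in the UI, but the idea behind it is easy to see in code. Below is a small standalone Python sketch of the same concept (this is an illustration of name matching, not Skyvia's implementation; the `PRODUCT2_FIELDS` set is a hypothetical subset of Salesforce Product2 fields chosen for the example):

```python
import csv
import io

# An illustrative subset of fields on the Salesforce Product2 object
# (hypothetical selection for this example, not an exhaustive list).
PRODUCT2_FIELDS = {"Name", "ProductCode", "Description", "IsActive"}

def auto_map(headers, target_fields):
    """Map CSV headers to target fields with matching names (case-insensitive)."""
    lookup = {field.lower(): field for field in target_fields}
    return {h: lookup[h.lower()] for h in headers if h.lower() in lookup}

def rows_to_records(csv_text, target_fields):
    """Turn CSV rows into insert-ready records, keeping only mapped columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    mapping = auto_map(reader.fieldnames, target_fields)
    return [{mapping[h]: row[h] for h in mapping} for row in reader]

csv_text = "name,productcode,warehouse\nWidget,W-1,Berlin\nGadget,G-7,Oslo\n"
records = rows_to_records(csv_text, PRODUCT2_FIELDS)
# The 'warehouse' column has no matching Product2 field, so it is dropped.
print(records)
```

A column like `warehouse`, which has no counterpart in the target object, simply stays unmapped; in Skyvia you would map or skip such columns explicitly on the Mapping Definition tab.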
Pros

- Unlike the native application, Skyvia supports integration of [Salesforce](https://skyvia.com/connectors/salesforce) and [Dropbox](https://skyvia.com/connectors/dropbox) in both directions: Dropbox files can serve either as a source or as a target.
- Users can automate data import and export tasks, ensuring up-to-date records without manual intervention, which reduces errors and saves time for sales, marketing, and operations teams.
- Unlike basic file syncing, Skyvia lets users map, filter, and transform data before transferring it, so only relevant and structured information moves between Salesforce and Dropbox.
- The native Dropbox integration supports only specific objects, but Skyvia allows integration with any Salesforce object, giving businesses full control over their data workflows.
- The platform enables multi-source data integration: users can connect Dropbox and Salesforce while also integrating platforms like Google Drive, AWS, or HubSpot, making it a scalable solution for growing businesses.

Cons

- Skyvia requires initial setup and mapping to define data flows. However, it remains a no-code solution.
- While Skyvia offers a free plan, advanced features like automation, scheduling, and larger data volumes require a paid subscription, making it less budget-friendly for very small teams.
- Since Skyvia is not built directly into Salesforce, users must access Skyvia's web platform to configure and monitor their integrations.

How to Automate Dropbox and Salesforce Integration

Automating the integration between Dropbox and Salesforce eliminates manual file transfers, ensuring seamless data synchronization and improved workflow efficiency. With automation, businesses can streamline document management, reduce errors, and enhance collaboration across teams.
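At its core, this kind of automation means detecting a change on one side and reacting on the other. As a minimal illustration of the detection half, here is a standalone Python sketch that compares file metadata against the time of the last check (the entry shape and file names are hypothetical; a real integration would fetch this metadata via the Dropbox API, which tracks changes with cursors and webhooks rather than timestamps):

```python
from datetime import datetime, timezone

def new_proposals(entries, last_check):
    """Return names of .pdf proposal files modified after the last check, sorted."""
    return sorted(
        entry["name"]
        for entry in entries
        if entry["name"].lower().endswith(".pdf") and entry["modified"] > last_check
    )

# Hypothetical file metadata, shaped loosely like what a storage API returns.
entries = [
    {"name": "proposal_acme.pdf",   "modified": datetime(2025, 3, 20, tzinfo=timezone.utc)},
    {"name": "notes.txt",           "modified": datetime(2025, 3, 21, tzinfo=timezone.utc)},
    {"name": "proposal_globex.pdf", "modified": datetime(2025, 3, 22, tzinfo=timezone.utc)},
]
last_check = datetime(2025, 3, 21, tzinfo=timezone.utc)
print(new_proposals(entries, last_check))  # files newer than the last check
```

In a full workflow, the detected file names would then drive the Salesforce-side action, such as linking the new proposal to the matching opportunity record. Treat this purely as a sketch of the logic, not production code.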
Ways to Automate the Integration

- No-code integration platforms like Skyvia, Zapier, and Workato allow businesses to automatically upload Salesforce attachments to a designated Dropbox folder, sync Dropbox files with Salesforce records so teams always have the latest documents, and trigger automated workflows based on file changes, such as notifying a sales team when a new contract is uploaded.
- Salesforce Flow and Process Builder can trigger Dropbox actions based on record updates. Example: when a new opportunity is closed, automatically generate a folder in Dropbox and store related documents there.
- The Dropbox API enables custom workflow automation. Example: a custom script can detect when a new proposal is uploaded to Dropbox and update the associated Salesforce opportunity record.

Use Cases and Real-Life Examples

Customer Contracts Management
When a contract is signed in Salesforce, it is automatically moved to a secured Dropbox folder for legal storage, and the sales team gets a notification that the contract has been archived successfully.

Sales and Marketing Collateral Sync
Marketing teams can upload the latest sales materials to Dropbox, and Salesforce automatically updates related records with the latest versions. This ensures sales reps always have access to the most up-to-date presentations and brochures.

Support Ticket Attachments Storage
Customer support cases in Salesforce often include attachments such as logs, screenshots, and documents. Automating the transfer of these files to Dropbox reduces Salesforce storage costs while keeping all case files easily accessible.

Benefits of Connecting Dropbox and Salesforce

Integrating Dropbox with Salesforce takes document management, collaboration, and efficiency to the next level, all while cutting down on storage costs. Let's summarize the key benefits and real-world examples of how businesses can maximize this integration.
Lower Salesforce Storage Costs
Salesforce storage isn't [cheap](https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000A7vICSAZ?), especially when it comes to file attachments. Instead of piling up large files in Salesforce (and paying extra for storage), businesses can offload them to Dropbox while keeping everything accessible.
Example: A real estate agency moves high-resolution property images and legal documents from Salesforce to Dropbox. They save on storage costs while keeping quick access to essential files.

Access Documents Without Leaving Salesforce
No more switching between platforms. With Dropbox integrated into Salesforce, teams can view, edit, and manage files without leaving their CRM.
Example: A sales team working on a deal can instantly access brochures, contracts, and proposals stored in Dropbox, right from Salesforce, speeding up the sales process.

Automate Document Management
Manually uploading and organizing files? A waste of time. With automation, documents are stored, sorted, and linked to the right records without extra effort.
Example: When a new contract is signed in Salesforce, it's automatically sent to a Dropbox folder labeled with the customer's name, ensuring everything stays organized.

Boost Team Collaboration
Dropbox's real-time file sharing combined with Salesforce workflows makes teamwork effortless. Teams can work together on files without endless email threads.
Example: A customer support team uploads case-related logs and screenshots to Dropbox. Engineers and support reps can collaborate instantly, resolving issues faster.

Secure and Compliant Storage
Security matters, and Dropbox delivers advanced security features, version control, and compliance certifications to keep data protected.
Example: A hospital handling patient records meets HIPAA compliance by securely storing files in Dropbox while linking them to patient records in Salesforce.
Automate Workflows and Save Time
Automating Dropbox-Salesforce workflows eliminates repetitive tasks, saves time, and reduces errors.
Example: A company automates moving signed agreements from Salesforce to Dropbox and instantly notifies the legal team, ensuring timely processing.

Support Remote Work and Mobility
Since Dropbox is cloud-based, employees can access files from anywhere. With Salesforce integration, documents are always linked to the correct records.
Example: A field service technician uploads photos and reports from a mobile device to Dropbox, giving the back-office team real-time access to the latest updates in Salesforce.

With this integration, businesses can cut costs, streamline workflows, and improve collaboration while keeping files secure and easily accessible.

Summary

Integrating Dropbox with Salesforce:

- Streamlines document management, collaboration, and workflow automation.
- Allows access and management of files directly from Salesforce while uploads sync automatically to Dropbox.
- Decreases Salesforce storage costs, improves efficiency, and eliminates manual file handling, enabling teams to collaborate on shared documents like contracts and support cases in real time.

Dropbox's strong security and compliance features make it a trusted storage solution. For integration, businesses can choose the Dropbox for Salesforce App (with some limitations) or Skyvia, a no-code tool offering flexible, automated data sync. Either way, combining these platforms helps businesses work smarter and faster.
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)
Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

Dynamics 365 vs NetSuite: How to Migrate Data
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | October 13, 2023
[Data Integration](https://skyvia.com/blog/category/data-integration/)
The need for enterprise resource planning (ERP) systems emerged in the early 1960s in the manufacturing industry during a period of economic boom. The first ERP systems as we know them today appeared only in the 1990s, when information technology made them possible. Interestingly, the manufacturing industry is still a top consumer of such applications, though companies from many other sectors use them as well. Around [95% of businesses](https://www.netsuite.com/portal/resource/articles/erp/erp-statistics.shtml) using ERP software report significant benefits, especially in cost savings and performance metrics.

Microsoft Dynamics and NetSuite are among the most popular ERP solutions because they have long been on the market and offer a wide range of features. By 2022, Microsoft owned more than [31% of the ERP software market share](https://softwareconnect.com/erp/erp-market/), while NetSuite held only around 5% but is growing fast.

In this article, we compare Microsoft Dynamics vs. NetSuite for enterprise resource planning, evaluate their CRM solutions for businesses, and explain how to migrate data from one application to the other with Skyvia.

Table of Contents

- What is Microsoft Dynamics 365?
- What is Oracle NetSuite?
- Microsoft Dynamics 365 vs.
Oracle NetSuite
- Discover the Advantages of Skyvia for Data Integration and Beyond
- Transitioning from Dynamics 365 to NetSuite
- Migrating CRM Data from NetSuite to Dynamics 365
- Conclusion

What is Microsoft Dynamics 365?

[Microsoft Dynamics 365](https://dynamics.microsoft.com/en-us/) consists of various modules serving sales, customer service, marketing, supply chain, HR, and finance. This solution suits large enterprises as well as small and medium-sized businesses. Owing to its flexible subscription-based model, companies pay only for the modules they use. Let's look at the distinctive features of Microsoft Dynamics and how they benefit businesses.

Features:

- Dynamics 365 integrates tightly with other Microsoft products.
- Powerful sales forecasting based on historical data.
- Strong data protection due to compliance with modern security protocols and configurable user access permissions.
- Great compatibility with data analytics and business intelligence tools.

Benefits:

- Customers are at the center of almost every business, so personalized customer service is a great advantage. Dynamics provides this opportunity, as it contains all the information about user interactions, purchasing habits, and contact details.
- Data privacy is another major concern. Hosted on the Azure cloud, which complies with GDPR and similar regulations, Dynamics grants exceptional user data protection.
- Businesses also appreciate the platform's high degree of customizability: it can be deployed completely in the cloud or on-premises with only the needed modules.
- Built-in analytics tools such as Cortana Intelligence and Power BI provide predictive analysis and reporting, respectively.

What is Oracle NetSuite?

[NetSuite](https://www.netsuite.com/) is another ERP solution, currently owned and managed by Oracle.
It's particularly suitable for startups, fast-growing businesses, family-owned companies, and medium enterprises. NetSuite also has a CRM system for better customer management and experience.

Microsoft Dynamics 365 vs. Oracle NetSuite

Having a general idea of each tool, it's now possible to compare Microsoft Dynamics 365 vs. NetSuite. This can be done by juxtaposing features along with the weak and strong points of each system. Based on this analysis, businesses can choose the platform that best suits their needs.

General Overview

Let's start by reviewing the fundamental aspects of each tool:

| Parameter | Microsoft Dynamics 365 | NetSuite |
| --- | --- | --- |
| Modularity | Major Dynamics ERP and CRM modules cover finance, sales, order management, supply chain management, marketing, human resources, field services, and project management. | NetSuite contains modules for finance, procurement, manufacturing, inventory management, order management, supply chain management, CRM, project management, human resources, e-commerce, and marketing. |
| Pricing | Has solutions for companies of any size with a clear pricing policy. | Pricing information is provided by a sales manager. |
| Integration | Integrates with other Microsoft products natively. | Doesn't integrate with Oracle products natively. |
| Industry fit | Widespread in the IT and Services sectors. | Widespread in the IT, Retail, Wholesale, Healthcare, and Manufacturing sectors. |

From the table above, it's clear that the selection of modules is more or less the same in each application. However, these tools have gained popularity in companies operating in different economic sectors. The pricing policy of Microsoft Dynamics 365 appears more transparent, which is definitely a plus for potential customers. NetSuite is rather a standalone tool, while Dynamics integrates easily with other Microsoft solutions.
### Functionality

Now let's highlight each product's core modules and their functions.

| Parameter | Microsoft Dynamics 365 | NetSuite |
| --- | --- | --- |
| ERP | Business Central, Project Operations, Supply Chain Management, and Finance modules are the principal components for effective enterprise resource planning. | There are 13 modules in NetSuite's ERP system. They ensure control over finance, procurement, inventory and order management, and other business-critical operations. |
| CRM | Sales and Customer Service are the modules for handling and nurturing customer relationships. | A NetSuite module called CRM handles marketing and sales automation and customer service. |
| HRM | A dedicated Human Resources module lets businesses organize their HR processes. | One of the ERP's modules is dedicated to HRM processes: employee performance management, payroll, time-tracking, etc. |
| Marketing | The Customer Insights module is responsible for enhancing marketing activity. | The Marketing Automation module included in NetSuite ERP streamlines marketing processes. |
| Commerce | Commerce and Fraud Detection modules manage online and offline stores and detect potentially hazardous customers. | The e-commerce part of the ERP system takes care of online store management. |

While Dynamics 365 can be composed of various modules depending on the company's preferences, NetSuite ships its ERP system with all the necessary components present. The freedom to compose the system the way the business wants makes Dynamics a more customizable solution than NetSuite.

### Advantages and Disadvantages

Now let's explore the advantages and disadvantages of each solution based on reviews from credible software feedback platforms such as [G2](https://www.g2.com/) and [TrustRadius](https://www.trustradius.com/).

| | Microsoft Dynamics 365 | NetSuite |
| --- | --- | --- |
| Pros | – Embedded machine learning and BI mechanisms. – User-friendly and easy to set up initially. – Excellent customer support. | – Easy-to-customize dashboards and menus. – Strong accounting and finance module features. |
| Cons | – Big databases may lead to slow reporting. – Customization is difficult to do in-house. – Lack of features for developing CRM strategies to engage customers. | – Not a very user-friendly or modern UI. – Requires training before starting because it's complex software. |

### Key Points

Based on the strengths and weaknesses together with the overview of modules and their functionality in both tools, let's sum up everything discussed above:

- **Functionality.** NetSuite is unbeatable when it comes to finance and accounting. Microsoft Dynamics is powerful in sales management but lacks some essential features for building strategies aimed at improving customer engagement.
- **User interface.** The UI of Microsoft Dynamics is friendlier and easier to comprehend than NetSuite's.
- **Implementation.** Microsoft Dynamics is more customizable than NetSuite, as it allows selecting only the needed modules. At the same time, such customization usually requires resellers, because in-house teams tend to run into obstacles when deploying Dynamics 365. Overall, NetSuite seems easier to implement.
- **Pricing.** Microsoft Dynamics 365 has a clear pricing model, while NetSuite prices are revealed only in conversation with sales managers.

## Discover the Advantages of Skyvia for Data Integration and Beyond

Companies looking to migrate data from Microsoft Dynamics to NetSuite, or vice versa, should consider a data integration tool for that. Whatever the reason for the data transfer, consider [Skyvia](https://skyvia.com/), a cloud-based data integration platform that connects Microsoft Dynamics with NetSuite and many other apps.
Owing to the tool's user-friendly, intuitive interface, businesses can easily set up and manage integration scenarios. Moreover, Skyvia provides a monitoring bar for checking the progress of the data transfer. Skyvia offers several components for bringing data integration scenarios to life:

- **Import**: Quickly loads data from Dynamics to the NetSuite destination or vice versa. It also includes settings for field mapping, filtering, and validation rules.
- **Export**: Exports data from the selected source system to a target destination in CSV format, with filtering criteria.
- **Data Flow**: Creates complex data transformations and manipulations between your source and target systems, including joining, filtering, aggregating, and pivoting data.
- **Control Flow**: Creates complex control structures and workflows for your integration tasks, including conditional statements, loops, error handling, and more.
- **Synchronization**: Keeps Microsoft Dynamics and NetSuite synchronized by comparing and updating data between them.
- **Replication**: Creates a data copy from the selected source app in a database or data warehouse.

## Transitioning from Dynamics 365 to NetSuite

As mentioned above, NetSuite has a strong finance management module, so companies might want to keep track of all financial transactions and operations right there, making NetSuite the single source of truth. To create a process for importing Dynamics data to NetSuite, do the following:

1. Go to **New** > **Import**.
2. Choose **Data Source** as your source type.
3. Choose the Microsoft Dynamics connection as the **Source** and the NetSuite connection as the **Target**.
4. Add a task to the integration by clicking **Add New**.
5. Select **Account** as the object to import, set preferred filters, and click **Next Step**.
6. Select the corresponding NetSuite object to import data to, along with the action to execute on this data, and click **Next Step**.
**NOTE:** At this point, select the UPSERT operation so that the system imports non-existing records and updates existing ones without creating any duplicates.

7. Map the required fields and save the task.
8. Click **Create** to create an Import integration.
9. Click **Schedule** to set up the task execution timing parameters.
10. Click **Run** to execute the task.

Once the import task is completed, open the NetSuite application to check the newly imported data.

## Migrating CRM Data from NetSuite to Dynamics 365

Given Microsoft Dynamics' sales-increase potential noted above, it's worth loading all sales-related and CRM data there. To create a process for migrating NetSuite data to Microsoft Dynamics, do the following:

1. Go to **New** > **Import**.
2. Choose **Data Source** as your source type.
3. Choose the NetSuite connection as the **Source** and the Microsoft Dynamics connection as the **Target**.
4. Add a task to the integration by clicking **Add New**.
5. Select an object to import, set preferred filters, and click **Next Step**.
6. Select the corresponding Dynamics object to import data to and the action to execute on this data, and click **Next Step**.

**NOTE:** At this point, select the UPSERT operation so that the system imports non-existing records and updates existing ones without creating any duplicates.

7. Map the required fields and save the task.
8. Repeat steps 4-7 to create new tasks for importing the Contact, Product, and Task objects.
9. Click **Create** to create an Import integration.
10. Click **Schedule** to set up the task execution timing parameters.
11. Click **Run** to execute the task.

If you need to keep data up-to-date in both CRM modules, create a synchronization scenario:

**NOTE:** This scenario is applicable only if one system is empty, because the system copies all data during the first sync run, which risks creating duplicate records.

1. Click **+NEW** in the top menu.
2. In the **Integration** column, click **Synchronization**.
3. Under **Source**, select Microsoft Dynamics.
4. Under **Target**, select NetSuite.
5. Add a task to the integration by clicking **Add New**.
6. Select an object to synchronize from the **Source** drop-down list and the corresponding data field on the target side. Click **Next Step**.
7. Define data mapping settings in the **Column Definition** tab for both directions.
8. Click **Create** to create the procedure.
9. Click **Schedule** to set up the task execution timing parameters.
10. Click **Run** to execute the task.

Once the data transfer is completed, open Dynamics 365 to check the newly imported CRM-related data. Having holistic customer data at hand helps create new marketing campaigns and nurture current ones.

## Conclusion

Given that 19 out of 20 companies use an ERP system daily, most critical business information is stored there. Microsoft Dynamics and NetSuite are among the most popular ERP solutions, and both include CRM functions. Businesses might need to migrate data from one system to another to:

- Switch to another ERP tool
- Reinforce the features of a certain ERP tool
- Keep information updated in both ERP systems

Manual data transfer is not effective, so the data integration platform Skyvia comes in handy. It allows businesses to easily transfer data from one SaaS application or database to another in minutes. What's more, it has an extended feature set and offers good pricing deals for businesses of various sizes.
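The UPSERT behavior called out in the notes above (insert new records, update existing ones, never create duplicates) can be sketched in a few lines of Python. This is an illustrative model only: the record shape and the `external_id` key are assumptions for the example, not Skyvia's actual internals.

```python
def upsert(target, incoming):
    """Merge incoming records into target, keyed by external ID.

    Existing records are updated in place; unseen IDs are inserted.
    No record is ever duplicated, which is why UPSERT is the safe
    choice when both systems may already hold overlapping data.
    """
    for record in incoming:
        key = record["external_id"]
        if key in target:
            target[key].update(record)   # update the existing record
        else:
            target[key] = dict(record)   # insert a new record
    return target

# The NetSuite side already holds one account; Dynamics sends two.
netsuite = {"A-1": {"external_id": "A-1", "name": "Acme"}}
dynamics = [
    {"external_id": "A-1", "name": "Acme Corp"},   # updates A-1
    {"external_id": "A-2", "name": "Globex"},      # inserts A-2
]
result = upsert(netsuite, dynamics)
```

With a plain INSERT, the "Acme" record would have been duplicated instead of updated, which is exactly the situation the notes warn about.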
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Dynamics CRM with SQL Server Integration: How to Sync Data By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) March 26, 2025
CRM systems are a staple of modern enterprises, and Microsoft Dynamics 365 CRM is one of the most popular systems for managing customer records today. Since client data is valuable to most departments, it usually needs to be available even to employees who don't interact with customers directly. How can that be implemented? For instance, customer details can be transferred to SQL Server, one of the most widely used and highly rated databases. This article presents five core methods for integrating Microsoft Dynamics CRM and SQL Server and compares them.

Table of Contents

- What Is Dynamics CRM?
- What Is SQL Server?
- Dynamics CRM and SQL Server Integration Options
  - SQL Server Integration Services (SSIS)
  - ODBC (Open Database Connectivity) Driver
  - Power Automate (Flow)
  - Third-Party Integration Tools
  - Dynamics 365 API
- Benefits of Dynamics CRM and SQL Server Connection
- Conclusion

## What Is Dynamics CRM?

[Dynamics CRM](https://dynamics.microsoft.com/en-gb/what-is-dynamics365/), also known as Dynamics 365 CRM, is a sophisticated compound solution that comprises diverse modules for customer relationship management. The tool is often used by enterprise-level organizations to manage client-related interactions and align them with sales, marketing, and project automation activities.

## What Is SQL Server?

[Microsoft SQL Server](https://www.microsoft.com/en-us/sql-server/sql-server-downloads) is a relational database management system (RDBMS).
Like other RDBMSs, the solution uses Structured Query Language (SQL) for smooth interaction with relational databases. Since SQL Server has been on the market for 20+ years, there are multiple editions and versions of the system.

## Dynamics CRM and SQL Server Integration Options

In this article, we discuss five principal methods to connect Dynamics CRM and SQL Server:

- SQL Server Integration Services (SSIS)
- ODBC (Open Database Connectivity) Driver
- Power Automate (Flow)
- Third-Party Integration Tools: Skyvia
- Dynamics 365 API

The table below briefly introduces each approach and its principal characteristics. The methods differ in difficulty and in the instruments used to implement the integration.

| Evaluation criteria | Direct API | [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) | ODBC Driver | Power Automate | Skyvia |
| --- | --- | --- | --- | --- | --- |
| Installation and setup | Involves developers and IT specialists for configuration and setup. | Requires basic programming and technical knowledge. | Requires some time to install and configure the driver, though programming knowledge isn't needed. | Can be installed as a desktop app or accessed via a web browser as a SaaS app. | Can be used by both technical and non-technical audiences with no coding experience. |
| Key benefits | – High level of customization – Cross-platform and multi-language support | – Bi-directional integration – GUI and no coding | – GUI with no coding – Advanced security, reporting, and analysis – Integration monitoring | – Optimized business workflows – Human error elimination – Pre-defined templates – Web access | – Ample list of integration options – 200+ connectors – GUI and no coding |
| Limitations | – Deep technical knowledge required – Translation to SQL needed | – Oriented toward the Microsoft ecosystem – Limitations associated with on-premises systems | – One-way integration only – SQL knowledge required | – Limits on API calls and items for connectors – Slow performance – Limited information on run history | – No phone support – Daily data refreshes on the freemium plan |
| Cost | Depends on development and implementation costs. | Free. | Starts from $129. | Free, but premium connectors cost extra. | Paid plans start [from $79](https://skyvia.com/pricing); there's also a freemium plan. |

## SQL Server Integration Services (SSIS)

When thinking of how Dynamics CRM connects to SQL Server, native integration comes to mind first: both tools are produced by Microsoft. [SQL Server Integration Services (SSIS)](https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) is one of the Microsoft tools that can serve for connecting Dynamics CRM and SQL Server. With SSIS, you can design ETL pipelines for data transfer and transformation.

**Best for**

Dynamics CRM integration with SQL Server via SSIS suits companies with multiple Microsoft tools and appliances in their data infrastructures.

**Prerequisites**

1. Download Microsoft Visual Studio from the Microsoft Store. Make sure the Data storage and processing toolset option is selected during installation.
2. In Visual Studio, install the SQL Server Integration Services Projects extension.

**Step-by-step instructions for integration using SSIS**

1. Open Visual Studio and click **Create a New Project**.
2. Select the **Integration Services Project** type and click **Next**.
3. Configure the project settings and click **Create**.
4. Select the **Data Flow** task.
5. Click **Source Assistant** and select **SQL Server**. Apply other configuration settings if needed.
6. From the left panel of the project, under **Sources**, select SQL Server and drag it to the central task area.
7. In the **Solution Explorer** panel on the right of the main project window, right-click **Connection Manager** and select **New Connection**.
8. Select **Dynamics CRM** and click **Add**. Click **Test Connection**.
9. From the left panel of the project, under **Destinations**, select Microsoft CRM and drag it to the central task area.
10. Connect the source and destination blocks. If data needs transforming, select transformation components from the **SSIS Toolbox** panel and place them between the source and destination.
11. In the **Solution Explorer** panel, find the task, right-click it, and select **Execute Package**.

A similar procedure can be done in reverse, with Dynamics CRM as the source and SQL Server as the destination.

**Pros**

- Integration can be executed in both directions: from SQL Server to Dynamics CRM and vice versa.
- A user-friendly graphical interface that doesn't require any coding.

**Cons**

- Suitable only for tools within the Microsoft ecosystem.
- Limitations when integrating systems deployed on-premises.

## ODBC (Open Database Connectivity) Driver

Another way to connect the two systems is the Devart [ODBC Driver for Dynamics 365](https://www.devart.com/odbc/dynamics/). It allows creating a linked server in SQL Server Management Studio, which provides permission to execute SQL commands and read data from Dynamics 365. In this case, only one-directional data transfer is possible.
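Besides SSMS, a configured ODBC DSN can also be queried directly from application code. Here is a minimal Python sketch using the third-party `pyodbc` package; the DSN name and the `Account` table are illustrative assumptions matching a typical Devart driver setup, not fixed identifiers.

```python
# DSN name as configured in the Devart driver setup; "Account" below is an
# illustrative Dynamics 365 entity -- both are assumptions, not fixed names.
DSN = "Devart ODBC Driver for Dynamics 365"

def build_connection_string(dsn):
    """ODBC connection string referencing a preconfigured DSN."""
    return f"DSN={dsn}"

def fetch_accounts(dsn=DSN):
    """Read Dynamics 365 data over the ODBC driver (one-way, read-only)."""
    import pyodbc  # third-party: pip install pyodbc
    with pyodbc.connect(build_connection_string(dsn)) as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT Name FROM Account")
        return cursor.fetchall()

conn_str = build_connection_string(DSN)
```

As with the linked-server approach, this path only reads from Dynamics 365; writing data back would require one of the other methods.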
**Best for**

This method suits IT specialists who interact with databases daily and are proficient in SQL.

**Prerequisites**

[Install](https://docs.devart.com/odbc/dynamics/installation-clouds.htm) and [configure](https://docs.devart.com/odbc/dynamics/driver_configuration_and_conne.htm) the Devart ODBC driver for Dynamics.

**Step-by-step instructions for integration with the ODBC driver**

To create a linked server for Dynamics 365 in SQL Server Management Studio, proceed with the following steps:

1. Launch SSMS and select the required SQL Server instance.
2. In **Object Explorer**, expand the **Server Objects** section. Then, right-click **Linked Server** and select **New Linked Server**.
3. Configure the newly created linked server:
   - In the **Linked server** field, type the name of the server.
   - Select **Other data source** from the **Server type** drop-down list.
   - Select **Microsoft OLE DB Provider for ODBC drivers** from the **Provider** drop-down list.
   - In the **Data source** field, type the DSN name (e.g., Devart ODBC Driver for Dynamics 365). Alternatively, you can type the ODBC driver connection string in the **Provider** field.

The linked server appears under **Linked Servers** in **Object Explorer**. Executing distributed queries and accessing Dynamics 365 databases via SQL Server is now possible.

**Pros**

- An intuitive GUI for managing and interacting with SQL Server databases and objects.
- Advanced security, reporting, and analysis services.
- Integration monitoring.

**Cons**

- Only one-way integration: data can be extracted from Dynamics CRM to SQL Server.
- Extensive knowledge of SQL is required.
- Available only on Windows.

## Power Automate (Flow)

[Power Automate](https://www.microsoft.com/en-us/power-platform/products/power-automate) is another new-generation Microsoft product. It extensively uses AI to optimize business processes and automate repetitive tasks. The tool primarily works with cloud-based sources.
However, it also supports desktop-based applications and systems, such as SQL Server.

**Best for**

Power Automate mostly offers connectors for cloud-based tools, so it suits companies that heavily rely on SaaS applications within their data infrastructures. The solution also supports a number of on-premises tools, including SQL Server.

**Prerequisites**

- Create an account on [the official Power Automate platform](https://www.microsoft.com/en-us/power-platform/products/power-automate) and check the list of supported browsers if you plan to use the tool only for cloud flows.
- If you need to create a desktop flow, [download Power Automate](https://learn.microsoft.com/en-us/power-automate/desktop-flows/install) from the Microsoft Store, or download the .msi file and install from it.

**Step-by-step guide**

In this example, we show how to create a cloud flow using the web-based version of Power Automate. To start the SQL Server to Dynamics CRM integration with this tool, follow these steps:

1. On the [Power Automate (Flow) main page](https://make.powerautomate.com/), log in with your Microsoft credentials.
2. Click **Create** on the left panel and select the type of flow that interests you. In our case, we use the **Scheduled cloud flow** option.
3. Specify the scheduling options for automated data extraction. You can skip this step for now and configure it later. Click **Create**.
4. Click **+** and select the desired Dynamics 365 Business Central action from the list.
5. Establish a connection with your Dynamics 365 Business Central account.
6. Click **+** and select the desired action for SQL Server.
7. Establish a connection with your SQL Server database.
8. Click **Save** to save the flow.
9. Go to the list of your flows and start the one that corresponds to the SQL Server to Dynamics CRM integration.

**Pros**

- Automation of routine processes and repetitive manual tasks.
- Elimination of occasional human errors.
- Pre-defined templates for data flows.
**Cons**

- Even though Power Automate has many connectors, most of them expose only a small number of retrievable items.
- A limited number of API calls for most connectors.
- Slow performance, especially for large flows.
- Limited information on integration run history.

## Third-Party Integration Tools

To overcome the limitations of the approaches above and reduce the difficulty, let's look at a third-party [data integration solution](https://skyvia.com/blog/data-integration-tools/). [Skyvia](https://skyvia.com/) is a cloud-based platform that offers several ways to [integrate Dynamics and SQL Server](https://skyvia.com/data-integration/integrate-dynamics-crm-sql-server) data.

**Best for**

Skyvia is a reliable integration platform for any kind of business: a multifaceted solution applicable to various use cases. For SQL Server to Dynamics CRM integration, Skyvia offers several tools for smooth data exchange between the systems. In this section, we mainly focus on:

- The [Import](https://skyvia.com/data-integration/import) component (an ETL solution)
- The [Replication](https://skyvia.com/data-integration/replication) component (an ELT solution)
- [Synchronization](https://skyvia.com/data-integration/synchronization) (a bi-directional sync solution)

The platform is also suitable for complex data flows involving several sources and complex transformations via the [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) component.

**Prerequisites**

Before going through the integration scenarios, take some preparatory actions:

1. Open [Skyvia](https://app.skyvia.com/) and log in, or sign up to create an account.
2. Connect to Microsoft Dynamics from Skyvia – [see detailed step-by-step instructions](https://docs.skyvia.com/connectors/cloud-sources/dynamics_connections.html).
3. Connect to SQL Server from Skyvia – [see detailed step-by-step instructions](https://docs.skyvia.com/connectors/databases/sqlserver_connections.html).

**NOTE:** Select the Agent connection mode for SQL Server to establish a secure channel. Make sure to download [SQL Server Agent](https://learn.microsoft.com/it-it/sql/ssms/agent/sql-server-agent?view=sql-server-ver16), install it, and run it as Administrator beforehand.

### Replicate Data (ELT)

Skyvia's [Replication](https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html) scenario helps users copy data from one source to another. Here, we explore replicating data from Microsoft Dynamics 365 to SQL Server. Note that there's no need to create tables in SQL Server, as Skyvia does that automatically. However, the replication scenario doesn't allow any data transformation.

1. Click **+NEW** in the top menu and select **Replication** in the **Integration** column.
2. Select the Dynamics connection as the source and SQL Server as the target.
3. Select the objects to replicate.
4. Select **Incremental Updates** under **Options**. With this option, Skyvia copies all data only on the first run of the replication procedure; on subsequent runs, the system detects which Dynamics data was updated and applies only those changes to the SQL Server tables.
5. To run the task on a regular basis, click **Schedule** and define the parameters for automatic execution.
6. Click **Create**.

### Import Data (ETL)

Skyvia's Import is a flexible ETL tool for data migration between different platforms. It permits loading data from CSV files, cloud apps, or relational databases to other cloud apps or relational databases. In contrast to Replication, the Import integration offers data transformation and mapping capabilities along with other [advanced features](https://docs.skyvia.com/data-integration/import/advanced-features/) for working with data.
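The Incremental Updates behavior described for Replication above amounts to a timestamp filter: copy everything on the first run, then only rows modified since the last successful run. The sketch below models that idea in Python; the `modified_on` field name is an illustrative assumption, not Skyvia's actual change-tracking mechanism.

```python
from datetime import datetime

def select_for_replication(rows, last_run):
    """First run (last_run is None): copy everything.
    Subsequent runs: only rows modified after the last run."""
    if last_run is None:
        return list(rows)
    return [r for r in rows if r["modified_on"] > last_run]

rows = [
    {"id": 1, "modified_on": datetime(2024, 1, 10)},
    {"id": 2, "modified_on": datetime(2024, 3, 5)},
]
first_run = select_for_replication(rows, None)               # full copy
delta = select_for_replication(rows, datetime(2024, 2, 1))   # changes only
```

This is why incremental replication stays cheap as the source grows: after the initial load, each run transfers only the delta.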
With Skyvia, you can easily configure data migration between Microsoft Dynamics CRM and SQL Server in either direction. Here, we review loading data from the RDBMS to the CRM. This lets you complement the client data with financial or contact details, for instance, which greatly simplifies the work of sales teams.

1. Click **+Create New** in the top menu and select **Import** in the **Integration** column.
2. Under the **Source type** section, select **Database or cloud app** and find SQL Server in the **Connection** drop-down list.
3. Under the **Target** section, select Microsoft Dynamics CRM from the **Connection** drop-down list.
4. Click **Add new** in the **Tasks** section on the right.
5. In the task editor, define filters, select the data import operation (INSERT, UPSERT, UPDATE, or DELETE), and define mapping types (Column, Expression, Lookup, Constant, Relation, External ID, etc.).
6. After all configuration settings are ready, click **Save**.
7. Click **Create**.
8. To run the task on a regular basis, click **Schedule** and define the parameters for automatic execution.

Note that it's also possible to transfer data in the opposite direction. Moreover, after transferring data to SQL Server with Skyvia, users can load the prepared data into BI and AI tools.

### Synchronize Data

Skyvia's [Synchronization component](https://docs.skyvia.com/data-integration/synchronization/) syncs data between cloud applications (Microsoft Dynamics 365) and relational databases (SQL Server) in both directions. It can synchronize data even of a different structure, preserving all data relations and providing powerful mapping configuration. When the synchronization task runs for the first time, the system doesn't check whether the records in the data sources are identical but simply copies data from one side to the other. This may cause duplicate records in the destination system, so it's crucial to ensure it has no records yet.
During subsequent sync operations, if a record is modified or deleted in one source, the system modifies or deletes the data mapped to this record in the other source, and vice versa.

1. Click **+NEW** in the top menu and select **Synchronization** in the **Integration** column.
2. Select the Dynamics connection as the source and SQL Server as the target.
3. Click **Add new** in the **Tasks** section on the right.
4. In the task editor, select an object from the **Source** drop-down list and the corresponding data field on the target side.
5. Define data mapping settings in the **Column Definition** tab for both directions and save the task.
6. Click **Schedule** to define the parameters for automatic execution.
7. Click **Create**.
8. To start syncing right away, click **Run** and go to the **Monitor** tab to check the progress.

**Pros**

- Web-based access with no extra downloads or desktop installations required.
- Support for [200+ data sources](https://skyvia.com/connectors), including SaaS apps, flat files, databases, storage systems, and data warehouses.
- An ample set of integration scenarios and data processing capabilities.
- A freemium plan with all basic features included.

**Cons**

- No phone support is available.
- The maximum data update frequency on the freemium plan is once a day.

## Dynamics 365 API

Microsoft provides the [API (V2.0)](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/api-reference/v2.0/) for connecting Dynamics 365 with other applications and services. This is typically done with so-called Connect apps that establish point-to-point connections between services. Such applications rely on a standard REST API for data exchange.

**Best for**

Integration via API suits seasoned developers with multi-year coding experience and a solid technical background. Such professionals are capable of creating complex solutions to connect Dynamics CRM and SQL Server.
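A Connect app ultimately boils down to REST calls against Business Central API v2.0 endpoints. The sketch below composes such an endpoint URL in Python; the tenant and environment values are placeholders, authentication is omitted, and the exact endpoint layout should be verified against Microsoft's API reference.

```python
def bc_api_url(tenant_id, environment, resource="companies"):
    """Compose a Business Central API v2.0 endpoint URL.

    tenant_id and environment are placeholders for your own values.
    Authentication (an OAuth 2.0 bearer token) is omitted for brevity.
    """
    base = "https://api.businesscentral.dynamics.com/v2.0"
    return f"{base}/{tenant_id}/{environment}/api/v2.0/{resource}"

url = bc_api_url("my-tenant-id", "production")
# A real Connect app would then issue an authenticated request, e.g.:
#   requests.get(url, headers={"Authorization": f"Bearer {token}"})
# and load the returned JSON into SQL Server.
```

The additional translation step mentioned in the cons below is visible here: the JSON these endpoints return still has to be mapped into SQL Server tables by your own code.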
Even though this approach is demanding, it offers unlimited customization, so it suits companies that need specific settings in their CRM system.

**Prerequisites**

Before starting integration with this method, [enable the APIs for Business Central](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/api-reference/v2.0/enabling-apis-for-dynamics-nav). If you already have such APIs enabled, use the latest version. If not, check how to perform the [transition from API v1.0 to API v2.0](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/api-reference/v2.0/transition-to-api-v2.0).

**Step-by-step guide**

One possible solution for connecting Microsoft Dynamics 365 CRM with SQL Server is to create a Connect app that uses the standard REST API for data exchange. A Connect app establishes a point-to-point connection between Dynamics 365 Business Central and a third-party solution (SQL Server, in this case). Check the detailed [Microsoft guidelines](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-develop-connect-apps) on how to create a Connect app and authenticate against API endpoints.

**Pros**

- High level of customization.
- Cross-platform and multi-language support.

**Cons**

- Deep technical knowledge is required.
- An additional step is needed to translate API calls to SQL.

## Benefits of Dynamics CRM and SQL Server Connection

Businesses should consider [Dynamics 365 & SQL Server integration](https://skyvia.com/data-integration/integrate-dynamics-crm-sql-server) for its tangible and intangible benefits. Some of the most significant ones are listed below.

- **Data visibility.** SQL Server often underpins enterprise ERP systems, so companies can take data from there to complement Dynamics CRM. Dynamics then contains all the data needed by customer-facing departments, which enhances overall data visibility.
- **Enhanced collaboration.** Having integration tools in place makes collaboration between departments more fruitful.
- **Informed decision-making.** Loading data from one source to another and keeping it up-to-date provides a strong basis for analysis. Since both SQL Server and Dynamics connect easily to another Microsoft product, Power BI, analytics becomes easier and decisions better informed.

## Conclusion

In this article, we reviewed five fundamental approaches to connecting SQL Server to Microsoft Dynamics CRM. Some offer bidirectional data transfer, while others can only execute unidirectional integration. The most complex to implement is the API-based connection: it requires deep technical expertise but offers the highest level of customization. Integration with SSIS also requires technical knowledge and SQL skills. The ODBC driver, Power Automate, and Skyvia rely heavily on the user interface, which makes them convenient and fast to use.

Each way of connecting SQL Server to Dynamics CRM fits a particular set of use cases. Among them, however, [Skyvia](https://skyvia.com/) provides a universal approach to integration: it allows businesses to choose both simple and complex scenarios for data exchange between a database and a CRM. Don't hesitate to try Skyvia today!

## FAQ for Dynamics CRM with SQL Server

**What are the main benefits of integrating Dynamics 365 and SQL Server?**

The core advantages of this integration are:

- Improved data visibility and availability
- Enhanced collaboration
- Informed decision-making

**Can I integrate Dynamics 365 (online) with an on-premises SQL Server database?**

Yes, connecting web-based Dynamics CRM to an on-premises SQL Server is possible. You can do it with Skyvia by configuring pre-built connectors to each of these tools and selecting an integration scenario.
Is performing a one-time data migration from Dynamics CRM to SQL Server possible?

Yes. You can copy data from Dynamics CRM to SQL Server as a single operation. It's also possible to set up a data flow from SQL Server to Dynamics CRM and execute a one-time data migration.

Which integration method is best for real-time data synchronization between Dynamics 365 and SQL Server?

Consider using the Skyvia Synchronization tool for that purpose, but make sure that one of the systems is empty before the synchronization starts. Otherwise, there might be duplicate records and inconsistencies in both systems.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
ELT vs ETL: Main Differences

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), April 29, 2022

ELT vs ETL: the difference between the acronyms is so minute it could pass for a typo. And yet, both ETL and ELT processes are important in today's data processing. So, if you're looking for their stark differences, you're in the right place. Maybe you've heard that ETL is much more mature, while ELT is the new kid on the block. So, is it ETL or ELT? If you're confused, you're not alone. That's why this article discusses the differences, the pros and cons, and the use cases of these two [data pipelines](https://skyvia.com/blog/what-is-data-pipeline/).

Table of contents

- What Is ETL?
- What Is ELT?
- Key Differences Between ETL and ELT Processing
- ETL vs ELT Comparison Table
- Advantages and Disadvantages of ETL
- When Is It Better to Use ETL Instead of ELT?
- When Is It Better to Use ELT over ETL?
- Summary
- Cloud ETL and ELT Solutions

Let's start the comparison by introducing each of them.

What is ETL (Extract, Transform, Load)?

So, what is ETL? ETL stands for Extract, Transform, Load. Let's extend the ETL meaning further. It is a data pipeline that copies data from various data sources. This copy is then transformed by cleansing, summarizing, filtering, and more. Finally, the transformed data is loaded into a target database.

There are a few ways ETL can behave, too. It can run in batches, updating chunks of data on a regular schedule. Batching can be a full load or an [incremental load](https://blog.devart.com/incremental-load-in-ssis.html). ETL can also be streamed: streaming ETL, or real-time ETL, is a way to copy source data to a target when even a minute of processing is too long. And last is [Reverse ETL](https://skyvia.com/blog/what-is-reverse-etl), where the source and target are reversed. Instead of the [data warehouse](https://skyvia.com/blog/sql-server-data-warehouse-the-easy-and-practical-guide) being the target, it becomes the source, and the insights coming from it are formatted and pushed to target third-party apps. Figure 1 below illustrates that point.

ETL has been in use [since the '70s](https://en.wikipedia.org/wiki/Extract,_transform,_load) for data warehousing. So, it's already a traditional method or pattern in processing data. It's also mature to the point that various [ETL tools](https://skyvia.com/blog/etl-tools) exist, and a lot of people with data pipeline skills know it.

ETL Use Cases

What is ETL used for? Here are some of the common ETL use cases:

Combine data from different sources.
One scenario is using the same legacy system from two remote locations; ETL can merge the data of the two locations. Another scenario is combining the data of two merging companies into one.

Integrating one system with another. From experience, I made an [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) ETL pipeline to integrate two systems. The first was a petty cash system done in SharePoint, and the other was SAP Financials used by accountants. Both were on-premise. Every day, petty cash transactions were read and transformed until they reached the accounting books.

Data import/export. One example is when accounting staff want to extract or export invoice data to a CSV file. Or read the data from a biometrics machine, format it, and load it into a SQL Server database.

In a later section, you will learn more about when to use ETL.

What is ELT (Extract, Load, Transform)?

So, what is ELT? ELT stands for Extract, Load, Transform. Let's extend the ELT meaning further. An ELT data pipeline works by copying the data source to the destination; then the destination's computing power handles the transformations.

While ELT history shows it has only recently gained popularity, [the concept is not new](https://www.ibm.com/cloud/learn/elt#toc-the-past-a-e4XzkfVv). With the wider adoption of the cloud and data lakes, [ELT adoption](https://www.ibm.com/cloud/learn/elt#toc-what-is-el-2ZSy_nBX) also accelerated. That makes sense because of these factors: growing data size, cheaper cloud storage, and faster internet. When we talk about ELT, we generally mean cloud-driven ELT; well-known ELT tools harness the power of the cloud. ELT is the answer to ever-increasing data sizes for which gigabytes are too small. To make this work, autoscaling cloud infrastructure and near-infinite storage are necessary. So, this scenario only makes sense in the cloud.

ELT Use Cases

So, why use ELT?
Here are some ELT use cases:

- Performing ML algorithms on thousands of security-camera images stored in a data lake.
- [Data mining](https://skyvia.com/blog/data-mining-tools/) NoSQL data for consumer behavior, product sentiment, or purchasing patterns.
- Collecting and analyzing a huge volume of meteorological data from weather stations.

Note that some ELT use cases can also be done using ETL. One example is [data warehousing](https://en.wikipedia.org/wiki/Data_warehouse). And note that both ELT and ETL can also work with structured data; the difference is in the approach used. In a later section, you will learn more about when to use ELT.

Key Differences Between ETL and ELT Processes

Let's explain the ELT vs ETL key differences further. The main difference between ETL and ELT processes lies in where the transformation happens, and in this respect the difference between ELT and ETL workflows is like night and day. This is shown in the figure below, which illustrates the ETL meaning we covered earlier. ELT, meanwhile, delays the transformation until everything is loaded to the destination. This difference affects the pipeline's maintainability, data security, and compliance. Because of ETL's approach, errors during transformation stop the loading to the destination; this doesn't happen in ELT, as shown in the figure below. Moreover, fixing bugs in an ETL pipeline requires restarting the whole pipeline so that the transformed data reaches the destination.

Another key difference between ELT and ETL lies in the data each can process. Notice in Figure 1 that ELT sources can also be unstructured, like images and videos, while ETL only allows structured data. The size of the data is a differentiating factor, too: ELT can handle big data, whereas ETL performs badly with it. Finally, another key difference is where the pipeline lives. ETL pipelines can be either on-premise or in the cloud, while ELT pipelines are mostly cloud-based.
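The transformation-placement difference can be sketched in a few lines of Python, with the stdlib sqlite3 module standing in for the destination warehouse (an assumption purely for illustration; table and field names are made up):

```python
import sqlite3

# Two rows of "dirty" source data: padded names, amounts stored as text.
raw = [{"name": " Ada ", "amount": "10"}, {"name": "Bob", "amount": "5"}]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (name TEXT, amount TEXT)")
conn.execute("CREATE TABLE etl_sales (name TEXT, amount INTEGER)")
conn.execute("CREATE TABLE elt_sales (name TEXT, amount INTEGER)")

# ETL: transform in the pipeline *before* anything reaches the destination.
clean = [(r["name"].strip(), int(r["amount"])) for r in raw]
conn.executemany("INSERT INTO etl_sales VALUES (?, ?)", clean)

# ELT: land the raw data first, then let the destination engine transform it.
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [(r["name"], r["amount"]) for r in raw])
conn.execute("""INSERT INTO elt_sales
                SELECT trim(name), CAST(amount AS INTEGER) FROM raw_sales""")

# Both routes arrive at the same clean result; only *where* the work runs differs.
assert (conn.execute("SELECT * FROM etl_sales").fetchall()
        == conn.execute("SELECT * FROM elt_sales").fetchall())
```

Note how an error in the ETL list comprehension would stop anything from loading, while in the ELT branch the raw rows are already safely landed before the SQL transformation runs.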
ELT vs ETL Comparison Table

Below is the ETL and ELT comparison table.

| | ETL | ELT |
| --- | --- | --- |
| Transformation | In the pipeline, before loading | In the destination, after loading |
| Supported data | Structured only | Structured, semi-structured, unstructured |
| Typical volume | A few terabytes or less | Big data, petabytes or more |
| Pipeline location | On-premise or cloud | Mostly cloud |

Advantages and Disadvantages of ETL

From here, let's examine the ETL and ELT pros and cons, starting with ETL.

ETL Advantages

- Data in ETL destinations is cleaner: duplicates and [orphaned records](https://database.guide/what-is-an-orphaned-record/) are removed before reaching the destination.
- Compliance with GDPR, HIPAA, and other regulations is easier with ETL.
- This method of data integration and processing is mature.
- A data warehouse or data lake fed by ETL requires less storage.

ETL Disadvantages

- ETL is less flexible: new data requirements not covered in the data warehouse require another ETL pipeline.
- Because it's less flexible, new requirements can be slower to implement than with ELT.
- ETL pipeline maintenance overhead is also higher.
- The need to transform before loading slows the loading process, and errors prevent loading, too.

When Is It Better to Use ETL Instead of ELT?

Though ETL has its drawbacks, there's a place for it in your data integration efforts. So, when can you opt for ETL as the better option? The following are common cases where ETL should be used:

Batch processing is enough for your requirements, and data volume is modest. Do you have 0-10 transactions per day? If you sell products like sports cars, luxury real estate, or expensive yachts, you can opt for a [batch ETL](https://skyvia.com/blog/batch-etl-processing/); you won't be selling a million items per week anyway.

You need to integrate two or more legacy systems on-premise. Does your organization use fingerprint readers, and do you import records from them for attendance and payroll purposes? Is your Human Resource and Payroll system a legacy, on-premise system? Then, ETL is the obvious choice.
You are integrating one or more Software-as-a-Service cloud solutions for further analysis. If the volume from each of the cloud solutions is modest, a cloud ETL tool is your best buddy.

Pros and Cons of ELT

Now, let's check out the ELT camp. Here are the ELT pros and cons.

Pros

- Faster time to value. If you have several analytic reports in mind and want them realized quickly, ELT is your best bet. Why? See the next item.
- ELT is more flexible. You can extract and load your raw data and do transformations on demand. This speeds up the development iterations of your analytical reports. Plus, you get flexible data types, including unstructured and semi-structured data.
- ELT processes massive data growth with faster results.
- ELT requires minimal maintenance.

Cons

- The destination is messy, since transformation occurs last. Data reaches the destination as is, with duplicates and orphaned records.
- Risk of low or non-compliance with data protection and privacy laws.
- Storing raw data in the destination requires more cloud storage.
- ELT is a newer approach, so the number of tools and experts in this field is still growing.

When Is It Better to Use ELT over ETL?

So, when to use ELT? ELT should be used when:

- You need to process enormous amounts of data, petabytes or more, and store it fast in cloud storage.
- You foresee massive data growth, and ETL's speed is no longer tolerable.
- You're eyeing a cloud ELT tool, and your organization is cloud-ready.
- Your stakeholders are in a hurry for analytics. ELT can be faster to deliver than traditional ETL pipelines, and it's also flexible enough to adapt to their changing minds.
- You need to do machine learning on an enormous load of unstructured data.

Summary

Have you decided yet on ELT vs ETL? Both ETL and ELT are important in today's data-driven organizations. You may want to use both, depending on the need. Remember: ELT is for faster loading and on-demand transformation.
It deals mostly with big data, whether structured, unstructured, or semi-structured, in the cloud. ETL is for a few terabytes or less of structured data, processed in batches or in real time. ETL is also for on-premise, legacy data.

Cloud ETL and ELT Solutions

There are a number of cloud tools supporting ELT or ETL scenarios. Here are some examples:

ETL Tools

- XPlenty: a cloud ETL tool for easy-to-use data integration.
- Informatica PowerCenter: an enterprise-grade data integration platform.
- Stitch: a cloud-based ETL platform that can be used to integrate with different data sources.
- Dataddo: a no-code, cloud-based ETL platform for flexible data integration.

ELT Tools

- Fivetran: a cloud ELT tool for building reliable data pipelines.
- Airbyte: an ELT tool with open-source and cloud versions.
- Blendo: a well-known ELT tool for centralizing different data sets into one location.
- Matillion: an ELT tool to load data into cloud data warehouses.

Skyvia

Skyvia is a powerful data platform that offers both ELT and [ETL tools](https://skyvia.com/blog/etl-tools/). For ELT scenarios, it offers the [Replication tool](https://skyvia.com/data-integration/replication), which copies cloud data to cloud and on-premise databases and data warehouses with little to no configuration effort. You only need to create connections to the corresponding data sources, select what data to replicate, and then schedule replication for automatic execution. Everything can be done in under 5 minutes.

For ETL use cases, Skyvia offers multiple tools. [Import](https://skyvia.com/data-integration/import) loads data from one source to one destination while using powerful mapping settings for transformations. These mapping settings include lookups and expressions with a number of supported mathematical, string, and datetime functions.
For more complex ETL scenarios, including multistage transformations or extracting data from various sources and loading it into multiple destinations, you can use Data Flow. It is a designer-based tool where you create your data pipeline on a diagram by adding and connecting different components.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/): Software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
Ultimate Guide to Salesforce Data Quality and Data Cleansing

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), March 6, 2024

Data is like the foundation of a building, so it needs to be solid; otherwise, the house might get damaged in a tornado or earthquake. The same goes for businesses: unstable or unreliable data can leave them dizzy and unsteady in harsh market conditions. It's no secret that customer data is of particular value to any business. It's usually registered and kept in CRM systems, and Salesforce is the most popular one.
No wonder Salesforce data quality impacts customers' experiences with a company and the refinement of its marketing strategy. This article focuses on Salesforce data quality standards and explains how to achieve them. It also describes Salesforce data cleansing as one of the most powerful methods for ensuring quality Salesforce data.

Table of Contents

- Critical Role of Data Quality and Cleansing in Salesforce
- Identifying and Addressing Data Quality Challenges in Salesforce
- Practical Steps for Data Cleansing in Salesforce
- Skyvia's Role in Enhancing Salesforce Data Quality and Cleansing
- Maintaining Ongoing Data Quality and Integrity
- Conclusion

Critical Role of Data Quality and Cleansing in Salesforce

Whether referring to Salesforce data or other applications' data, [it's the ISO 8000 international standard](https://www.iso.org/obp/ui/en/#iso:std:iso:8000:-1:ed-1:v1:en) that determines its quality. It contains a series of documents covering all aspects of working with data, from its creation to delivery. ISO 8000 lists the following characteristics that determine data quality:

- Availability
- Accuracy
- Completeness
- Consistency
- Credibility
- Flexibility
- Plausibility
- Relevance
- Timeliness
- Uniqueness

When it comes particularly to Salesforce, these standards imply that unique, accurate, and credible data should be available to the relevant Salesforce users. Quality Salesforce data must also be complete, meaning that each customer record has all fields filled. Also, the information about customers in Salesforce should be consistent with customer data in other first-party tools.

Benefits of high-quality Salesforce data:

Trend identification. Having complete information at their disposal helps businesses identify recent market trends.

Personalized customer experience. Collecting data about customers' habits and interactions helps companies understand their preferences.
Based on that information, businesses can offer discounts or personalized deals to their customers.

Enhanced customer care. When there's enough information about a consumer, it's easier to avoid miscommunication and misunderstandings when an inquiry is submitted.

Fraud detection. Sufficient details about consumers are key to the timely detection of suspicious transactions and fraudulent actions.

To ensure the best data quality, use cleansing and the other techniques discussed in this article.

Identifying and Addressing Data Quality Challenges in Salesforce

Identifying data-related issues is the first step on the way to data quality enhancement. The problems usually surface when one tries to contact a prospect by dialing a phone number that doesn't exist or comes across several records with different details about the same customer. Such cases represent low-quality data, which is a side effect of:

- Duplicate records
- Incomplete records
- Outdated information
- Data entry errors
- Lack of consistency

Going back to the ISO 8000 standards, duplicate records, data entry errors, and incomplete records contradict uniqueness, accuracy, and completeness, respectively. The issues mentioned above can certainly be prevented or avoided. To address them properly, design and implement Salesforce data maintenance rules, such as:

- Create a clear and concise data maintenance plan outlining the parameters that determine quality data.
- List data entry rules by specifying mandatory fields that must be filled, preventing users from entering incomplete data. Data entry standards help avoid structural errors, such as typos, wrong capitalization, inconsistent naming conventions, and other inconsistencies.
- Create data validation rules by specifying the exact format for each data field to prevent Salesforce users from entering inaccurate data.
Inform all responsible personnel working with Salesforce about the standards, rules, and procedures related to data quality in Salesforce. Perform regular data cleansing, duplicate removal, and consistency checks.

Practical Steps for Data Cleansing in Salesforce

There's no magic pill or one-click button to transform Salesforce data into a high-quality dataset. However, there's a set of procedures that can take you to that point.

What Is Data Cleansing in Salesforce?

[Data cleansing](https://help.salesforce.com/s/articleView?id=sf.c360_a_cleansing_data.htm&type=5), also known as data scrubbing, is the process of modifying or removing Salesforce data that doesn't correspond to the ISO 8000 quality standard. Simply put, inaccurate, incomplete, irrelevant, or duplicate data must be deleted or corrected during cleansing. Follow these step-by-step guidelines for data cleansing in Salesforce:

- Refresh outdated data by checking for stale information via [CRM data enrichment](https://skyvia.com/blog/best-crm-data-enrichment-tools/).
- Remove duplicates. Note that by default, Salesforce has duplicate rules activated for business accounts, contacts, and leads. Go to Setup in your Salesforce account, type Duplicate Rules in the search box, and ensure the Standard Lead Duplicate Rule is activated.
- Use the right tools to check the data's health and perform the necessary cleanup.

Importance of Salesforce Data Cleaning

Cleaning Salesforce data is essential for improving its quality. Companies with quality datasets at hand gain benefits such as:

- Keeping up with trends
- Improved customer experience
- Supporting analysts with quality data
- Boosted productivity
- Refined decision-making

Skyvia's Role in Enhancing Salesforce Data Quality and Cleansing

Manual Salesforce data cleansing isn't very exciting, to be honest. Luckily, there are third-party tools designed to automate various data-related processes.
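Duplicate removal and validation rules of the kind described above can also be scripted against an exported record set. A minimal sketch in plain Python; the field names and the email rule here are illustrative assumptions, not Salesforce's own validation logic:

```python
import re

contacts = [
    {"Name": "Ann Lee", "Email": "ann@acme.com", "Phone": "+1-202-555-0101"},
    {"Name": "Ann Lee", "Email": "ANN@acme.com", "Phone": "+1-202-555-0101"},  # duplicate
    {"Name": "Bo Chan", "Email": "bo@acme", "Phone": "12345"},                 # bad email
]

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # illustrative format rule

def cleanse(records):
    """Drop records failing the validation rule; deduplicate on normalized email."""
    seen, clean, rejected = set(), [], []
    for r in records:
        email = r["Email"].strip().lower()   # normalize before comparing
        if not EMAIL_RE.match(email):
            rejected.append(r)               # fails the data validation rule
        elif email not in seen:
            seen.add(email)
            clean.append({**r, "Email": email})
        # else: duplicate record, dropped
    return clean, rejected

clean, rejected = cleanse(contacts)
```

Normalizing before comparing matters: without the `.strip().lower()` step, `ann@acme.com` and `ANN@acme.com` would slip past the duplicate check.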
[Skyvia](https://skyvia.com/) is one of those tools: a universal SaaS platform designed for data integration, backup, automation, querying, and management. The [Skyvia Data Integration](https://skyvia.com/data-integration/) product can help you ensure high-quality Salesforce data by:

- Addressing missing values
- Checking for outdated data and updating the existing records
- Avoiding duplicate records
- Providing access to up-to-date data

The Data Integration tools will help you to:

- address missing values by complementing existing Salesforce records with details from other sources;
- merge data from several CRM or ERP systems and other business apps;
- apply data filtering;
- prevent duplicate records;
- perform advanced transformations;
- map data fields.

In this example, we present a case where Salesforce records are completed with data from another source (HubSpot) using the Import tool. The first thing you need to do is [log into Skyvia](https://app.skyvia.com/) and click +New. Then, select the tool (Import, Export, Replication, etc.) needed to implement the required data integration scenario; in our example, it's the Import tool. Then, select a CSV file or another [cloud app or database](https://docs.skyvia.com/connectors/) as the source from which the data will be imported. Ensure the [Salesforce connector](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) is also set up, and select Salesforce as the target.

The next step is defining data transfer rules to Salesforce by clicking Add task. At this stage, it's also possible not only to address the missing values but to refresh the current ones. Under Source Definition, select the HubSpot object related to the data to be updated in Salesforce. In our case, it's Contacts, as contact information needs to be refreshed in Salesforce based on the HubSpot register.
Define the filtering parameters and conditions for the exact data that interests you; in our case, we take all contacts with both email and phone number present. Then, under Target Definition, select the destination object into which the data will be integrated. Select the UPSERT operation to ensure the existing records are updated and new ones are added to Salesforce; this DML operation prevents duplicate creation. At the next stage, map the fields between [HubSpot and Salesforce](https://skyvia.com/blog/hubspot-salesforce-integration/) to import the data correctly. [See details on mapping settings in Skyvia](https://docs.skyvia.com/data-integration/common-package-features/mapping/).

Finally, explore the Schedule option for continuously supplying Salesforce with fresh data; just set the exact time for the integration and save the settings. Note that data can be imported not only from other cloud apps but also from databases, data warehouses, and offline CSV files. For the latter, please check the dedicated video. The free plan allows only one scheduled integration per day; if you need more frequent updates, Skyvia provides paid plans with more features and options. To initiate the data loading into Salesforce, click Create and then Run.

Real-Life Use Cases

Let's have a look at several practical examples of how Skyvia addresses missing values, helps keep information up to date, and improves the overall data quality in Salesforce. Learn how to merge two Salesforce accounts, and how to ensure complete and accurate data in Salesforce by integrating it with other tools or importing data from external sources. The [Exclaimer case study](https://skyvia.com/case-studies/exclaimer) explains how they integrated their billing and subscription systems into Salesforce to obtain holistic customer profiles. As a result, they achieved productivity boosts in sales and marketing.
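The UPSERT semantics described above can be sketched as a keyed merge: records matched on a key are updated in place, unseen keys are inserted, and nothing duplicates. The field names below are illustrative, not tied to any real schema:

```python
def upsert(target, incoming, key):
    """Merge incoming rows into target by key: update matches, insert the rest."""
    index = {row[key]: dict(row) for row in target}
    for row in incoming:
        # Update-or-insert: incoming values win; unknown keys create new records.
        index[row[key]] = {**index.get(row[key], {}), **row}
    return list(index.values())

salesforce = [{"Email": "ann@acme.com", "Phone": None}]
hubspot = [
    {"Email": "ann@acme.com", "Phone": "+1-202-555-0101"},  # updates existing record
    {"Email": "bo@acme.com", "Phone": "+1-202-555-0102"},   # inserted as new record
]

merged = upsert(salesforce, hubspot, key="Email")  # two records, no duplicates
```

This is why the choice of matching key matters: with an unreliable key, an "update" silently becomes a duplicate "insert" instead.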
Using Skyvia not only ensures high-quality Salesforce data; it has some other advantages:

- The no-code concept is suitable for any user.
- Zero maintenance, as Skyvia is a cloud solution that requires no on-premises software or hardware deployments.
- The pricing model lets you select the package that meets your business needs, from the free version to the enterprise edition.
- Reporting: Skyvia's Import, Export, and Data Flow tools can run ready-made reports from Salesforce and use their outputs in integrations.

Here are similar articles about different integration scenarios that might interest you:

- [Salesforce Oracle integration](https://skyvia.com/blog/oracle-and-salesforce-integration/)
- [Salesforce BigQuery integration](https://skyvia.com/blog/salesforce-and-bigquery-connection/)
- [Salesforce Power BI integration](https://skyvia.com/blog/salesforce-powerbi-integration/)
- [Salesforce NetSuite integration](https://skyvia.com/blog/netsuite-salesforce-integration/)
- [Salesforce Marketo integration](https://skyvia.com/blog/marketo-salesforce-integration/)
- [Salesforce Stripe integration](https://skyvia.com/blog/stripe-salesforce-integration/)

Maintaining Ongoing Data Quality and Integrity

Note that data quality assurance isn't a one-time operation; it requires a set of regular maintenance checks, practices, and cleanups. Here are some effective practices that introduce a culture of Salesforce data awareness, so pin them to your daily and monthly agendas:

- Implement regular data audits and cleanups, allowing users to identify and address duplicate, inaccurate, or incomplete data.
- Assign responsibility for data management and quality assurance to specific Salesforce users.
- Conduct regular training and education sessions to acquaint Salesforce users with quality standards and maintenance procedures.
Conclusion

Salesforce is one of the pillars of most businesses, so its data quality determines the company's overall success. Missing values, duplicates, and incorrect information are obstacles to excellent customer experience, refined analytics, and enhanced decision-making. Addressing all those data issues isn't simple. Salesforce has embedded mechanisms for preventing duplicate records, and professional tools like Skyvia can help you improve the quality of existing data and keep Salesforce data in a good state.
ETL Architecture Best Practices, Challenges and Solutions

By [Sahil Wadhwa](https://skyvia.com/blog/author/sahilw/), January 18, 2022

Table of contents

- What Is ETL Architecture?
- 12 Best Practices for ETL Architecture
- Batch ETL or Streaming ETL
- Challenges When Building ETL Architecture
- Solution for Better ETL
- Conclusion

What Is ETL Architecture?

ETL architecture is a 'master plan' defining the way your ETL processes will be implemented from beginning to end. It describes how data will move from the source to the target, along with the list of transformations that will be executed on that data. ETL architecture revolves around three fundamental steps:

Extract.
The initial step is extraction, during which the data is explicitly identified and drawn out from several internal and external sources. These sources may include database tables, pipes, files, spreadsheets, and relational and non-relational databases.

Transform. The next step is transformation. Once extracted, the data must be converted into a suitable format and physically transported to the target data warehouse. Transformation may include cleansing, joining, and sorting the data.

Load. The final step loads the transformed data into the target destination, typically a data warehouse or a database. Once loaded, the data can be queried and used for analytics and business intelligence.

12 Best Practices for ETL Architecture

1. Understand Your Organizational Requirements

It is essential to get a clear understanding of the business requirements for ETL data processing. The source is the primary stage for interacting with the data that is available and must be extracted. Organizations evaluate data using business intelligence tools, which can leverage a wide range of data sources and types.

Data Sources/Targets Analysis. Analyze how the source data is produced, decide in what format it needs to be stored, and evaluate the transactional data.

Usage and Latency. Also consider how the information will be loaded and how it will be consumed at the target. Suppose a BI team requires data for reporting; you need to know how frequently that data is fetched. If the extraction frequency is high but the volume of extracted data is low, a [traditional RDBMS](https://www.devart.com/what-is-rdbms/) may be sufficient and cost-effective.
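To make the three steps above concrete, here is a minimal sketch of an extract-transform-load run in Python. The CSV source, SQLite target, and field names are illustrative stand-ins, not a recommendation for any particular stack:

```python
import csv
import sqlite3

def extract(csv_path):
    """Extract: read raw records from a source (here, a CSV file)."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: cleanse and reshape records into the target format."""
    out = []
    for r in rows:
        out.append({
            "name": r["name"].strip().title(),       # cleansing
            "amount": round(float(r["amount"]), 2),  # type conversion
        })
    return out

def load(rows, conn):
    """Load: write transformed records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales (name, amount) VALUES (:name, :amount)", rows
    )
    conn.commit()

# A run is just the three stages chained together:
# load(transform(extract("sales.csv")), sqlite3.connect("warehouse.db"))
```

In a real pipeline, each stage would additionally handle errors, log its progress, and work against production sources and warehouses rather than local files.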
If both the frequency of retrieving data and the volume retrieved are high, a standard RDBMS can become a bottleneck for your BI team. In such a situation, cloud data warehouses like Snowflake or big data platforms built on Druid, Impala, Hive, HBase, etc., can help. There are many other considerations, including the tools already available in-house, SQL compatibility (especially for client-side tools), management overhead, and support for different data sources.

2. Audit Your Data Sources

Data auditing means assessing data quality and utilization for a specific purpose. It also means watching key metrics beyond quantity to draw conclusions about the properties of the processed data. In short, a data audit relies on a registry of data assets, which has a limited size, so make sure your data sources are analyzed by organizational field and prioritized accordingly.

3. Determine Your Approach to Data Extraction

The main objective of the extraction step is to retrieve all the specified data from the source with ease. Design the extraction process carefully to avoid adverse effects on the source system in terms of response time, performance of the ETL process, and locking.

4. Steps to Perform Extraction

Push Notification: It is always best if the source system can notify you about modified records and provide the details of the changes.

Incremental/Full Extract: A delta load is recommended when the source system offers no push notifications but can identify recently updated records and provide an extract of them. Identified changes must be propagated down the ETL pipeline to maintain data correctness.
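The incremental (delta) approach just described is often implemented with a watermark: the pipeline remembers the highest modification timestamp it has seen and extracts only newer rows. A minimal sketch, with an invented table and column names:

```python
import sqlite3

def extract_delta(conn, last_watermark):
    """Extract only rows modified since the last successful run."""
    rows = conn.execute(
        "SELECT id, payload, modified_at FROM source_table "
        "WHERE modified_at > ? ORDER BY modified_at",
        (last_watermark,),
    ).fetchall()
    # Advance the watermark so the next run skips the rows just seen.
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark
```

The watermark must be persisted between runs (in a metadata table or state store) so that each scheduled run picks up exactly where the previous one left off.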
When a system cannot identify recently changed data, we are bound to extract the full data set from the source. Note that a full extract requires keeping a replica of the last extract in the same format so that changes can be identified by comparison. Also keep the extraction frequency in mind when choosing between full and delta loads.

5. Build Your Cleansing Machinery

A good data cleansing approach must satisfy several requirements:

- Resolve data inconsistencies when loading data from one or multiple sources, and remove all major data errors.
- Correct mismatched data and keep the column order intact.
- Use normalized data, or convert data to third normal form, for ease of access.
- Enrich data by merging in additional information if required (for example, enriching asset details by combining data from Purchasing, Sales, and Marketing databases).
- Use declarative cleansing functions so that other data sources can reuse them.

6. ETL Logs Are Important

Documenting logs and error logs is one of the most important steps in designing an ETL pipeline. Logs not only help the developer review and correct errors after a job failure but also collect data that can be analyzed for future improvements. Maintaining ETL logs is as important as developing the pipeline's source code.

7. Resolving Data Errors

These days, collected historical data is key to improving the business. Data is one of the most important assets in any industry: it lets an organization track profit and loss and narrows the search for problems down to the product or service level. Hence, collecting valuable and correct data is of utmost importance.
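Tying the cleansing machinery of practice 5 together with error handling, a reusable cleansing function can auto-correct mechanical issues and log anything it cannot fix safely for manual review. The field names and rules below are invented for illustration:

```python
def cleanse(record, errors):
    """Auto-correct common issues; log anything that needs manual review."""
    fixed = dict(record)
    # Auto-corrective measures for mechanical, safely fixable problems.
    fixed["email"] = fixed.get("email", "").strip().lower()
    fixed["country"] = {"us": "US", "usa": "US", "u.s.": "US"}.get(
        fixed.get("country", "").strip().lower(),
        fixed.get("country", "").strip(),
    )
    # Validation check: issues we cannot safely fix go to the error log.
    if "@" not in fixed["email"]:
        errors.append({"record": record, "issue": "invalid email"})
        return None
    return fixed
```

Because the function is self-contained and declarative about its rules, other pipelines loading from different sources can reuse it unchanged, as practice 5 recommends.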
One way to handle data issues is to put auto-corrective measures in place for common problems so that manual intervention is minimal. Data validation checks should also be enabled for errors that persist.

8. Recovery Point

Set up recovery points at regular intervals in case of job failures. These checkpoints let the developer or user rerun a task from a saved state instead of restarting from scratch, which saves a lot of time and helps maintain data integrity. Regular checkpoints also narrow the search for errors.

9. Build Code Modules

Modularizing the tasks in the ETL process helps the developer trace an issue back to its source. It ensures good code readability and lets support engineers look for errors in a confined block of code known as a module. Modules are small code blocks combined to implement business requirements. Modularity also reduces code duplication in future tasks and makes unit testing much easier.

10. Staging Area

The data preparation area in ETL is a dedicated environment where only the necessary raw data is landed from any number of sources, such as XML, flat files, relational databases, or SOAP calls. This area, usually called the staging area, forms the building block of any ETL process. All data sanity checks, cleaning, and correction happen in the staging environment. Only developers can access this environment, which maintains data integrity and prevents user confusion.

11. Error Alert Task

Error alerts during the ETL process play an important role in keeping ETL jobs in check, since alerts enable timely resolution of errors. Checks for unauthorized access or firewall breaches should be included when designing this security and monitoring alert task.

12.
Enhancing the ETL Solution

Optimization in ETL generally means identifying the bottlenecks that hurt process performance. Bottlenecks can occur at the source, target, or transformation level and can be identified from task run logs. After identifying the bottleneck type, you can apply techniques at the database, mapping, or code level. These general practices ensure that the ETL process finishes quickly and smoothly. Where bottlenecks are not the cause of long run times, parallel processing can be leveraged if applicable.

Batch ETL or Streaming ETL

Schedule-based extraction of data is generally preferred in production environments. Data is extracted using the best-suited ETL tool and stored in relational or non-relational databases. This process of storing raw data into tables in scheduled runs is referred to as the batch process.

Example: A manufacturer produces a daily operational report for a production line; the report runs in a batch window and is delivered to managers in the morning.

The streaming ETL process, by contrast, is used where applications require real-time data to process daily transactions. Internet of Things (IoT) analytics, instantaneous payment processing, and fraud detection are examples of applications that depend on streaming ETL.

Example: A customer gets an instant message from the bank about a withdrawal or transfer; real-time data processing is necessary in such cases.

Challenges When Building ETL Architecture

Various challenges can arise during data extraction in the ETL process, many of them rooted in the architectural design of the extraction system.

Data Latency

When the company needs to make quick decisions based on data, the extraction process must run at higher frequencies.
The tradeoff is between stale data at low extraction frequency and the high computational resources required at high frequency.

Data Volume

System design is affected by the volume of the extracted data; a design tuned for low-volume data does not scale well. Parallel extraction is recommended for large extractions where possible, though parallel designs are complex and harder to maintain.

Source Limitations

Source limitations and restrictions should be taken into account while extracting data. For example, many APIs restrict extraction to a certain number of requests or records within a given time frame. Developers need to work around these restrictions to ensure system stability.

Validation Checks

Data must be validated during extraction or transformation, and missing or corrupted data must be handled during validation.

Orchestration or Scheduler

Data extraction scripts need to be orchestrated to run at particular times of day based on data volume, latency, source limitations, and data quality. This becomes complex with a mixed model of architectural design choices, which teams often adopt to accommodate different business uses of data.

Disparate Sources

Data management and overhead issues are common when dealing with multiple data sources. Each additional source increases the data management surface as demands for orchestration, monitoring, and error fixing grow.

Data Security and Privacy

ETL pipelines often contain confidential, sensitive, and personally identifying information. Regulations such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) strictly govern how industries can manage and handle customer data.

Solution for Better ETL (Skyvia)

Skyvia is a cloud data platform consisting of several integrated products that solve various types of data-related tasks.
Below are some of Skyvia's features that make it a strong solution for ETL problems.

Integration

[Skyvia](https://skyvia.com/data-integration/) is packed with specialized tools offering no-code data integration between cloud apps and databases, including CSV import/export. You can export and import data, copy cloud app data to a relational database and keep it up to date, perform bi-directional synchronization between cloud apps and/or databases, apply multiple data transformations, and transfer modified data to multiple targets.

Backup

Both automatic and manual backups are offered. Users can back up data from supported cloud applications automatically on a schedule or manually at any time. Backed-up data can later be viewed, exported, or restored directly to a data source from the Skyvia web interface.

Query

Skyvia lets you build visual queries in the browser, making it easy to query databases and manage cloud data. You can apply aggregation and expression functions to the fields and columns added to queries, and configure filters within them as well. Skyvia Query supports SQL SELECT, INSERT, UPDATE, and DELETE statements for cloud sources. For relational databases, Skyvia can execute all statements supported by the database, including DDL statements.

Connect

Skyvia Connect links different data sources to multiple data-related applications (BI, mobile, office, etc.) via a unified interface. You can quickly create web API endpoints without writing a line of code to provide access to your data from anywhere, with no need to worry about API server hosting, deployment, or administration. Users can create two kinds of endpoints in Skyvia: OData endpoints and SQL endpoints.
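As a sketch of what consuming such an endpoint looks like on the client side, the helper below builds a standard OData query URL with `$select`, `$filter`, and `$top` options. The endpoint address and entity name are hypothetical, and this illustrates generic OData query syntax rather than any Skyvia-specific API:

```python
from urllib.parse import quote

# Characters OData query expressions commonly leave unescaped.
SAFE_CHARS = "(),'"

def odata_query_url(endpoint, entity, **options):
    """Build an OData query URL (e.g. $select, $filter, $top) for an entity set."""
    parts = [
        f"${name}={quote(str(value), safe=SAFE_CHARS)}"
        for name, value in options.items()
    ]
    return f"{endpoint.rstrip('/')}/{entity}" + (
        "?" + "&".join(parts) if parts else ""
    )
```

The resulting URL can then be fetched with any HTTP client, typically with an authentication header attached.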
Platform Connectivity

Skyvia supports a large number of cloud applications, databases, file storage services, and cloud data warehouses, and the list of supported apps is constantly growing. An unlimited number of connections can be created and used, and Skyvia supports API GET/POST requests for a variety of cloud applications in a uniform way. On-premise data can also be accessed easily: no firewall reconfiguration or port forwarding is required to connect to local data sources via the secure Agent application.

REST Connector

Skyvia can be extended through a REST connector to reach various data sources that expose a REST API. You only need to contact Skyvia support regarding your data source.

Efficient Workspace Organization

Skyvia manages your queries, backups, endpoints, integrations, and other objects in an interface designed to let you organize your work in the most productive and convenient way.

Conclusion

In this article, we discussed what ETL is, ETL architecture best practices, the challenges met when building ETL pipelines, and solutions for a better ETL process. Skyvia is one of the solutions that might work for you. [Sign up](https://app.skyvia.com/) to get started for free and explore the solutions that fit your use cases.
[Sahil Wadhwa](https://skyvia.com/blog/author/sahilw/), Cloud Engineer

[Data Integration](https://skyvia.com/blog/category/data-integration/)

How Much Does ETL Tool Cost in 2025?
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), May 17, 2024

[ETL tools](https://skyvia.com/blog/etl-tools/) are integral to [modern data infrastructures](https://skyvia.com/blog/how-to-modernize-data-infrastructure/) in SMBs and enterprises alike. However, many companies find it challenging to allocate a budget for these solutions.
The main difficulty in estimating spending on ETL services lies in unpredictable data volumes and complex pricing models. This article breaks down popular ETL pricing models to help you solve the ETL cost puzzle. You'll also discover which factors affect the overall cost and find prices for popular data integration services.

Table of Contents: How Does ETL Pricing Work? | Most Popular Pricing Models Among ETL Solutions | Factors Influencing the Cost of ETL Tools | Leading ETL Tools Cost | ETL Tools Price Comparison | Pricing of ETL Solutions: What to Expect? | Summary

How Does ETL Pricing Work?

When you go to the market, you buy a bottle of milk for a fixed price that varies only slightly between vendors. Simple, right? Things get more complicated for technical devices, such as an espresso machine, where the price depends on the item's technical characteristics, color, and additional features. Buying ETL software has much in common with buying an espresso machine, but the pricing is even more dynamic. Modern ETL tools are usually cloud-based, so they inherit the pricing concepts of the major cloud providers: the features, the intensity of usage in data volume, and the number of people using the tool all determine the price.

Most Popular Pricing Models Among ETL Solutions

Usage-Based or Pay-As-You-Go Model

With the pay-as-you-go model, the total cost of ETL services is revealed at the end of the billing period, much like electricity or water bills arriving at the end of the month. Pay-as-you-go is flexible and gives businesses effectively unlimited resources, but it puts budget planning at risk because the data volumes to process are unpredictable.

Subscription-Based Pricing

This is probably the most popular pricing model, since it's predictable for both producers and consumers.
It's like a Netflix subscription, where the monthly payment depends on the number of users, video quality, and supported devices. The subscription-based model is also known as the tiered subscription model: each tier includes the features and options available at the corresponding price.

User-Based Pricing

This model is fundamental for apps like Salesforce or Zendesk, where the number of active users is the decisive factor in the overall price. Some ETL tools also rely on user-based pricing, but usually in conjunction with a pay-as-you-go or subscription model.

Flat-Rate Model

This approach is as old as the IT industry, but it's still used for ETL and other cloud-based software. The flat-rate model sets a single price for the product, regardless of the number of features, users, or other characteristics.

Freemium Model

The freemium model is usually the first tier of a subscription model. It aims to acquaint consumers with the software and convert them into paying customers. As a rule, freemium plans include only basic features and set limits on data volume, number of users, connectors, etc. Note that a freemium model is NOT the same as a free trial. Although both are free for users, the freemium tier isn't limited in time, while a free trial usually lasts up to 30 days. Moreover, a free trial includes all the features of the requested plan; if an ETL provider offers four subscription tiers, there will be four different free trials.

Factors Influencing the Cost of ETL Tools

Usage-based and subscription-based models are the most common for ETL software. Now, let's explore the factors influencing the cost of ETL tools.

Data Volume

Data volume is like Zeus, the god of gods on Olympus: it's the most powerful factor shaping the total ETL cost. The more data rows flow through your pipelines, the more you'll have to pay.
With the pay-as-you-go model, the price depends on the number of data rows processed during the billing cycle. In the subscription-based model, each plan specifies the maximum number of data rows processed at the corresponding price.

Connectors

ETL tools generally include pre-built connectors for popular apps, databases, and data warehouses. In the tiered subscription model, connector availability varies by plan. Some ETL solutions also let users create custom connectors for an added price, which facilitates integration with legacy systems, niche apps, newly developed tools, etc.

Data Update Frequency

Most ETL tools use batch processing to collect and update data on a schedule, with update intervals usually ranging from one month down to one minute. The more frequent the updates, the higher the cost of the ETL tool. Some ETL platforms offer real-time data processing, which is much more complex than the batch approach: it relies on real-time transport protocols, large data volumes, and complex infrastructure. Consequently, ETL tools with real-time processing may cost an arm and a leg.

Functionality

The ETL concept isn't limited to extracting data from one source and loading it into another with transformations in between. Modern tools offer [complex ETL pipeline design](https://skyvia.com/learn/etl-pipeline-meaning) with multiple sources and destinations, compound data mapping, transformations, and other advanced features. As a rule, the larger the feature set, the higher the cost.

Support

Most ETL solutions include basic support. Some providers also offer premium or platinum support as an add-on, which usually implies 24/7 availability, onboarding sessions, and other exclusive services that help businesses implement, maintain, and use the tool effectively.
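Taken together, these factors can be folded into a toy tier-selection model. The tier names, row limits, and prices below are invented for illustration and do not reflect any vendor's actual price list:

```python
# Purely illustrative tiers: (name, monthly row limit, monthly base price in USD).
TIERS = [
    ("Free", 10_000, 0),
    ("Basic", 5_000_000, 79),
    ("Standard", 50_000_000, 199),
    ("Professional", 200_000_000, 499),
]

def pick_tier(rows_per_month):
    """Return the cheapest tier whose row limit covers the given monthly volume."""
    for name, limit, price in TIERS:
        if rows_per_month <= limit:
            return name, price
    raise ValueError("Volume exceeds all tiers; a custom enterprise quote applies.")
```

Real pricing layers the other factors discussed above (users, connectors, update frequency, support level) on top of such a volume-based baseline.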
Leading ETL Tools Cost

To better understand the pricing of ETL solutions, let's examine the most popular tools in the industry.

Skyvia

[Skyvia](https://skyvia.com/) is a universal cloud data platform designed for a wide range of data-related tasks. It contains five principal products: Data Integration, Query, Connect, Automation, and Backup. Each relies on the tiered subscription model with monthly or yearly billing. Let's examine the pricing model of Skyvia's Data Integration product, which offers five plans. The price of each depends on the following factors:

- Records per month (data volume)
- Scheduling frequency
- Number of scheduled integrations
- Available integration scenarios
- Available data mapping and transformation options

Below are the four standard pricing plans (Free, Basic, Standard, Professional) and the options available in each. The fifth plan, Enterprise, offers a tailor-made solution with advanced security features and premium support; for details, contact sales@skyvia.com.

- Records per month: 10k max (Free), 5M-200M (Basic), 500k-200M (Standard), 5M-200M (Professional).
- Scheduling frequency: once a day, once an hour, or once a minute, depending on the plan.
- Scheduled integrations: 2 (Free), 5 (Basic), 50 (Standard), unlimited (Professional).
- [Import](https://skyvia.com/data-integration/import): available with all DML operations, binary file import, and [target lookup mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html). Higher plans add [source lookup mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html), [expression mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/expression-mapping.html), [CSV file mask](https://docs.skyvia.com/data-integration/import/how-to-guides/how-to-import-CSV-files-via-file-masks.html), [relation mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/relation-mapping.html), [data splitting](https://docs.skyvia.com/data-integration/import/advanced-features/data-splitting.html#what-is-data-splitting), and [returning](https://docs.skyvia.com/data-integration/import/advanced-features/returning.html).
- [Export](https://skyvia.com/data-integration/export): available; an advanced source mode is additionally available on higher plans.
- [Replication](https://skyvia.com/data-integration/replication): available.
- [Synchronization](https://skyvia.com/data-integration/synchronization): includes target lookup mapping as an additional feature; higher plans add source lookup mapping, expression mapping, relation mapping, and data splitting.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/): not available on the lower plans; available on the higher ones.
- [Pricing](https://skyvia.com/pricing/): $0 (Free); starts from $79/month (Basic); starts from $79/month (Standard); starts from $199/month (Professional).

Fivetran

[Fivetran](https://www.fivetran.com/) is another popular tool for designing and running ETL pipelines, also known as a modern data movement platform. Its pricing model is very similar to Skyvia's, but Fivetran adds more variables to the cost equation: apart from data volume and available features, it considers users and connectors. The Starter, Standard, and Enterprise plans have no exact data volume limitations.
The cost depends on the number of monthly active rows (MAR) and the spend rate. The spend rate is a flexible parameter reflecting the correlation between the total number of rows, the MARs you use, and the pricing plan you are on. Fivetran's official website has a pricing calculator that accounts for all these parameters. The plans are Free, Starter, Standard, and Enterprise:

- Monthly active rows (MAR): up to 500,000 on the Free plan; on paid plans, Total Monthly Cost = MARs x Spend Rate.
- Scheduling frequency: once an hour, every 15 minutes, or once a minute, depending on the plan.
- Users: max 10 or unlimited, depending on the plan.
- Connectors: 500+ managed connectors (Starter); Starter plan connectors plus database connectors (Standard); Standard plan connectors plus enterprise database connectors (Enterprise).
- Extra features: automatic schema migrations and integration for dbt Core (Starter); access to Fivetran's REST API (Standard); advanced data governance, security, and resilience options with priority support (Enterprise).

There's also an extended enterprise plan called Business Critical with custom pricing. It includes advanced security features such as private networking options, access to Fivetran HVR, and customer-managed encryption keys. The Fivetran pricing model is rather complicated, as it considers multiple factors; the price for each case is calculated individually, so there are no approximate numbers for Fivetran usage.

Airbyte

[Airbyte](https://airbyte.com/product/extract-load) offers several solutions, including an extract-load service. It's an open-source platform with 1,000+ contributors for moving data between diverse systems. Let's shed light on Airbyte's pricing options.
Airbyte's plans are Open Source, Cloud, Team, and Enterprise:

- Hosting: the Open Source, Team, and Enterprise variants are self-hosted; the Cloud plan is hosted on Airbyte cloud.
- Connectors: 300+ sources and destinations plus a low-code connector builder; higher plans add custom connectors and enterprise connectors.
- Support: Slack community channel (Open Source); email alerts and support (Cloud); onboarding, email, and premier support (Team); onboarding, email, and enterprise support (Enterprise).
- Sync frequency: once an hour, with custom intervals on higher plans.
- Security standard compliance: SOC 2, GDPR, ISO, and HIPAA on paid plans.
- Extra security features: role-based access control (RBAC) and SSO/SAML/SCIM provisioning on the top plans.

The open-source variant is ideal for tech-savvy professionals ready to deploy Airbyte on their private clouds. In this case, organizations are responsible for all security-related procedures that guarantee data safety during transport. Prices for the other plans are discussed with Airbyte's sales team. Paid packages add support, alerting, extra security features, and custom sync intervals on top of the open-source base.

Hevo

[Hevo](https://hevodata.com/) provides a SaaS solution for automating data pipelines. Its pricing model relies on tiered subscriptions similar to those of Skyvia and Fivetran. Its plans are Free, Starter, and Professional:

- Data volume in events: up to 1M (Free), 5-50M (Starter), 20-100M (Professional).
- Users: up to 5, up to 10, and unlimited, respectively.
- Scheduling frequency: once an hour on Free and Starter; real-time on Professional.
- Connectors: around 70 on the free tier, 150+ on paid plans.
- dbt integration: available on paid plans.
- Security: SOC 2 Type II on all plans; paid plans add SSL encryption and SSH tunnels.
- Support: email support; paid plans add live chat.
- Price: $0 (Free), starts from $239/month (Starter), starts from $679/month (Professional).

Organizations craving more complex ETL solutions might consider the Business Critical plan. It supports multiple workspaces, offers advanced security features, and grants other custom features on demand.
Stitch

[Stitch](https://www.stitchdata.com/) is an ETL service that moves data between different sources, including databases and cloud applications. Unlike the tools above, it has no free plan, but it offers a trial for each plan. Stitch's pricing model is also based on the tiered subscription approach: data volume is the core factor determining the payment, while sources and destinations, users, and support options also affect the overall cost. The plans are Standard, Advanced, and Premium:

- Data volume in rows per month: 5-300M (Standard), 100M (Advanced), 1 billion (Premium).
- Users: up to 5 on Standard; unlimited on higher plans.
- Destinations: 1, 3, and 5, respectively.
- Sources: 10 on Standard; unlimited, including enterprise sources, on higher plans.
- Log retention: 7 days on Standard; 60 days on higher plans.
- Support: chat and portal (Standard), Gold package (Advanced), Premium package (Premium).
- Price: starts from $100/month (Standard); starts from $1,250/month billed annually (Advanced); starts from $2,500/month billed annually (Premium).

ETL Tools Price Comparison

Now it's time to recall the factors influencing ETL cost. We previously identified five core components shaping the overall price of ETL solutions. Data volume is the main factor by default, so it isn't included in the table. Since all of the tools above use [batch processing](https://skyvia.com/blog/batch-etl-processing/) on all predefined tiers, that factor isn't listed either. Meanwhile, users, connectors, and support do impact ETL costs. Explore the table below to see how they shape the pricing.

| | Free pricing tier | Starting price | Users | Connectors | Support |
| --- | --- | --- | --- | --- | --- |
| Skyvia | Yes | $79 | No | No | No |
| Fivetran | Yes | Contact sales | Yes | Yes | Yes |
| Airbyte | Yes | Contact sales | No | Yes | Yes |
| Hevo | Yes | $279 | Yes | Yes | Yes |
| Stitch | No | $100 | Yes | Yes | Yes |

Compared to other ETL tools, Skyvia's price doesn't depend on users, connectors, or support options.
Even users on the basic plan have all the pre-built connectors at their disposal and can contact the Skyvia team through all support channels. Nor does the size of your data team or organization matter; everything depends on the data volume you need to process.

Pricing of ETL Solutions: What to Expect?

When considering the future of ETL pricing, it's worth exploring the [data integration trends](https://skyvia.com/blog/data-integration-trends/) first. Understanding these tendencies makes it possible to forecast where ETL costs are heading.

Switching from ETL to ELT. The trend toward [ELT (extract-load-transform)](https://skyvia.com/learn/what-is-elt) has prevailed for the last several years. It speeds up data transfer because transformations are performed on the target side. Moreover, ELT supports unstructured data, which is now ubiquitous. Note that Skyvia already supports the design of both [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) pipelines. The price of ELT solutions will most likely depend on data volume and scheduling frequency.

Real-time processing. Even though [batch processing](https://skyvia.com/learn/what-is-batch-processing) is still widely used, real-time data is gaining momentum. Monitoring systems, fraud detection mechanisms, transport systems, and more rely on real-time processing. Tools supporting real-time data processing will likely remain costly.

Growing number of apps. As [big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/) and new SaaS solutions emerge, ETL and ELT tools will have to support the most popular of them. We've already seen that connectors affect the cost of ETL tools, so solutions with many supported connectors, or the ability to add new ones, will likely command premium prices.

Data security. The price of the tools explored in this article grows with added security features.
As the importance of online data security is obvious, companies will likely invest in protective mechanisms.

Summary

ETL tools are important for businesses that value data-driven insights, so allocating a budget for such services is crucial. Most ETL services are based on the tiered subscription model, where the price of each tier depends on users, connectors, support options, real-time processing, mapping features, etc. Some ETL tools also offer free trials for different plans, which is great for those who want to test everything before investing in data integration infrastructure. Skyvia is an ETL tool offering a freemium tier as well as a free trial for each paid plan, so feel free to test it now!

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

Power BI ETL: A Comprehensive Guide

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/), February 7, 2025

Business intelligence (BI) services like Power BI often struggle with challenges such as messy data formats, gaps in information, and tedious manual processes. Setting up effective ETL processes helps you turn chaotic data into clear, transparent insights for smarter decision-making. That's where ETL in Power BI comes to the rescue. In this article, we'll explain what ETL is in Power BI, why it is not an ETL tool, and how to perform extract, transform, and load tasks using the built-in Power BI tools. We'll also define the main ETL challenges for Power BI and how dedicated [ETL tools](https://skyvia.com/blog/etl-tools/) may help.

Table of contents

- What is Power BI?
- What is ETL?
- Why Do You Need ETL in Power BI?
- Is Power BI an ETL tool?
- How to perform ETL in Power BI
  - Using Power Query for ETL
  - Using Power BI Dataflows for ETL
  - Creating Dataflows In Power BI Service
  - Create Dataflow Gen2 in Microsoft Fabric
  - Using Microsoft Fabric Data Pipelines for ETL
- ETL Challenges in Power BI
- Skyvia ETL Tool for Power BI
- Conclusion

What is Power BI?

"Power BI is a collection of software services, apps, and connectors that combine to turn unrelated data sources into coherent, visually immersive, and interactive insights." – [Microsoft Docs](https://docs.microsoft.com/en-us/power-bi/fundamentals/power-bi-overview)

It's a diverse [business intelligence](https://www.microsoft.com/en-us/power-platform/products/power-bi/topics/business-intelligence/what-is-business-intelligence) platform designed by Microsoft. It helps organizations analyze and visualize raw data and make decisions. It transforms raw data into meaningful insights through the following features:

- Interactive visualizations
- Advanced analytics
- Real-time dashboards
- Data transformation

You can use the Power BI desktop version, the cloud Power BI Service, or Microsoft Fabric.

What is ETL?

ETL is an acronym that stands for Extract, Transform, and Load. It is the process of obtaining data from one or more sources, transforming it, and loading it into a consolidated repository, such as a [data warehouse](https://skyvia.com/learn/what-is-data-data-warehouse), [data lake](https://skyvia.com/learn/data-lake-vs-data-warehouse), or other system, in a [data pipeline](https://skyvia.com/blog/what-is-data-pipeline) that integrates different sources. Check the ETL process scheme on the diagram below.

There is a variety of [ETL-dedicated tools](https://skyvia.com/blog/etl-tools/) that fully support extract, transform, and load processes: Skyvia, Informatica, Talend, and others. All of them have common features: they support multiple sources and multiple destinations.
These tools allow you to move your data from any supported source to any supported destination, performing advanced transformations.

Why Do You Need ETL in Power BI?

Adding the ETL process to the Power BI environment is highly useful for producing clean datasets that are a valuable foundation for analytics and reporting. ETL helps transform raw data to ensure consistency and accuracy and load it into Power BI for visualization. For instance, by using ETL to standardize data formats and eliminate inconsistencies, businesses can generate more accurate reports, leading to better decision-making. Without ETL, organizations might struggle with inconsistent data formats and manual data preparation, which can hinder the effectiveness of their analytics efforts.

Is Power BI an ETL tool?

It may seem that standalone Power BI can also be called an ETL tool. However, that's not quite so. Power BI is primarily a business intelligence platform whose tools comprise some ETL capabilities. It is designed to help users with analysis, reporting, and visualization: it allows you to create interactive reports and build dashboards. Power BI includes [Power Query](https://learn.microsoft.com/en-us/power-query/power-query-what-is-power-query) and [Power BI dataflows](https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-introduction-self-service), which can solve some ETL-like tasks. Let's figure out how they perform ETL in Power BI.

How to perform ETL in Power BI

You can use several tools in Power BI that may help you with ETL.

Using Power Query for ETL

Let's implement ETL in Power BI using Power Query. It is the built-in data preparation engine with the following capabilities:

- Extract: the ability to connect to different sources and extract data from them.
- Transform: its visual editor enables various data transformations, such as sorting, grouping, column splitting, and more.
- Load: this stage normally means loading the extracted and transformed data into a separate destination, such as a database or data warehouse. In Power Query's case, the data remains inside Power BI or is loaded into Azure Data Lake.

To see this in action, let's use Power Query ETL for an [SQL Server data warehouse](https://skyvia.com/blog/sql-server-data-warehouse-the-easy-and-practical-guide/) in Power BI.

STEP 1: Connect to the Data Source

For this example, we use a SQL Server database as the data source. Specify the connection settings for the source. You may be asked for a user ID and password upon clicking OK.

STEP 2: Select Data

Select the tables you want to include in the report and click Transform Data to start the next step.

STEP 3: Transform Data

Power BI offers various transformations, from simple sorting or grouping to running Python scripts. Below, we demonstrate several simple actions. Let's clean up the source for the report: we select specific columns, sort the data, and split one of the columns.

1. Create a new query by referencing an existing table. Right-click the table from which you want to create a new query and select Reference. Another query with a * (2) suffix will appear.
2. Click Choose Columns and select Choose Columns again to include/exclude the necessary data in the report.
3. Select the column by which you will sort the data and click the sorting button.
4. Let's remove duplicated records. Click Remove Rows and select Remove Duplicates.

All the performed transformations appear in the right pane under Applied Steps. You can select any of them to modify a step.

Using Power Query M Formula Language for ETL

Though Power Query has a user-friendly editor, you may need to perform advanced transformations that the graphic editor does not cover. Under the hood, all the performed transformations are coded in the Power Query M formula language, also known as M.
For example, the following M code represents all the actions done above:

```
let
    Source = Contact,
    #"Removed Other Columns" = Table.SelectColumns(Source, {"Id", "IsDeleted", "AccountId", "LastName", "FirstName", "MiddleName", "Name", "Phone", "MobilePhone", "Email", "Title", "Department", "CreatedDate", "LastModifiedDate"}),
    #"Sorted Rows" = Table.Sort(#"Removed Other Columns", {{"Name", Order.Ascending}}),
    #"Removed Duplicates" = Table.Distinct(#"Sorted Rows", {"Id"})
in
    #"Removed Duplicates"
```

If you want to do advanced transformations using the Power Query engine, you can use the Advanced Editor to access the query script and modify it as you want. When finished, click Close & Apply. As a result, your data will be imported to Power BI Desktop.

Using Power BI Dataflows for ETL

Dataflows are another tool that can perform ETL-like tasks. Power BI Dataflows are collections of entities or tables created and managed in workspaces in the Power BI service, much like tables in a database. When you use Power Query, all the transformations you perform remain within the product where you use it. Dataflows, in contrast, aren't limited to Power BI or Excel: you can use their output in other Microsoft products and services and store it in other storage options, such as Dataverse or Azure Data Lake Storage Gen 2.

Note: Dataflows are available for users with a Pro license and a Premium Per User (PPU) license.

Creating Dataflows In Power BI Service

This method is suitable for users of the standalone Power BI Service, which is now a component of Microsoft Fabric. Creating Dataflows is similar to what we did in Power BI Desktop because both rely on Power Query. But first, log in to your Power BI Service. Open your workspace and click New. Then, select Dataflow.
There are five options for creating a Dataflow:

- New data source
- Linked tables
- Computed tables
- Import/Export
- CDM folder

Let's look at each of them in detail.

Create Dataflows Using a New Data Source

This method allows you to connect to a source and select the needed tables. Click Get Data and select the source. Connect to the source and select the objects to extract. Once you checkmark all the needed tables, click Transform data and perform the necessary transformations.

Create Dataflows Using Linked Tables

The second method implies creating a link to an existing table so it can be reused in different Dataflows. Select the needed table in the tree hierarchy of dataflows and tables, and click Transform data.

Create Dataflows Using Computed Tables

The third method implies referencing a table and performing in-storage calculations that don't involve an external source. The result is a new table called a computed table. To edit a linked table, right-click it and select Enable load. Then, rename the resulting computed table and make any necessary transformations.

Create Dataflows Using Import/Export

This method allows you to export an existing Dataflow to a JSON file. To import the model we just exported, go to another workspace, create a new Dataflow, and choose Import model. Then, browse for the JSON file you exported and click OK.

Create Dataflows Using a CDM Folder

This method allows you to reference a table written by another application in the Common Data Model (CDM) format. You need to provide the complete path to the CDM file stored in Azure Data Lake Storage Gen 2. NOTE: You need permission to access that folder. To create a new Dataflow, click Create and attach.

Power BI Dataflows Next Steps

You can use the Dataflows you created in Power BI Desktop. To select dataflows, sign in to the Power BI Service and choose the tables you need in your report.
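Whichever creation method you choose, a Dataflow ultimately records the same kind of Power Query recipe shown earlier in M: keep selected columns, sort, drop duplicates. For readers who think in code, here is a minimal pandas sketch (Python) of that same transformation chain; the sample records and column set are invented for illustration:

```python
import pandas as pd

# Sample "Contact" records; in the article these come from SQL Server.
contacts = pd.DataFrame([
    {"Id": 2, "Name": "Bob", "Email": "bob@example.com", "Fax": "n/a"},
    {"Id": 1, "Name": "Alice", "Email": "alice@example.com", "Fax": "n/a"},
    {"Id": 2, "Name": "Bob", "Email": "bob@example.com", "Fax": "n/a"},
])

result = (
    contacts[["Id", "Name", "Email"]]  # Table.SelectColumns: keep only needed columns
    .sort_values("Name")               # Table.Sort: order by Name, ascending
    .drop_duplicates(subset="Id")      # Table.Distinct: de-duplicate on the Id key
    .reset_index(drop=True)
)

print(result)
```

The near one-to-one mapping (Table.SelectColumns to column selection, Table.Sort to sort_values, Table.Distinct to drop_duplicates) is why Power Query skills transfer well to other data tooling.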
There's much more to Dataflows as an ETL tool in Power BI than we can discuss in this article. Visit these links for more information:

- [Benefits of using Power BI Dataflows](https://docs.microsoft.com/en-us/power-query/dataflows/overview-dataflows-across-power-platform-dynamics-365#benefits-of-dataflows) and [their use cases](https://docs.microsoft.com/en-us/power-query/dataflows/overview-dataflows-across-power-platform-dynamics-365#use-case-scenarios-for-dataflows).
- [Configuring Incremental Refresh for faster loading](https://docs.microsoft.com/en-us/power-query/dataflows/incremental-refresh).
- [Limitations and considerations in using Power BI Dataflows](https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-features-limitations).
- [Power BI ETL best practices using Dataflows](https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-best-practices).

Create Dataflow Gen2 in Microsoft Fabric

Microsoft Fabric is an all-in-one analytics platform that covers everything from data movement to data science, real-time analytics, and business intelligence. It consolidates several Azure data workloads with Power BI so that users in different roles and teams can collaborate in one environment. To create a Microsoft Fabric dataflow, perform the following steps:

1. Go to your workspace, click New Item, and select Dataflow Gen2.
2. Click Get data or select a source using New source in the ribbon. We select SQL Server for our example.
3. Specify the connection settings. NOTE: Make sure that your database is accessible from the internet.
4. Select the tables to include in the dataflow and click Create.
5. Use the dataflow editor to transform the data into the necessary format and publish the dataflow when you finish.

Using Microsoft Fabric Data Pipelines for ETL

Data pipelines are a newer Microsoft Fabric solution that enables performing ETL for reporting and analysis.
It helps to create pipelines that manage data ingestion and perform transformation tasks. In contrast to Power BI's ETL capabilities, pipelines comprise extract, transform, and load components, enabling data to be loaded into supported databases or data warehouses. Pipelines enable executing tasks against the source, applying transformations, and building the logical flow for these tasks, making Fabric a serious competitor in the ETL tools market.

Note: Though pipelines significantly enhance the Power BI ETL experience, they are designed for business intelligence only. You can't use them as an independent data integration or automation solution. Pipelines also face the same challenges as other Power BI ETL tools.

ETL Challenges in Power BI

The business intelligence tools offer ETL-like capabilities but don't fully cover the entire ETL process. Businesses may face several challenges:

- Automation: though Microsoft Fabric supports a complete ETL cycle, it is tied to analytics and reporting and doesn't function as a standalone automation process.
- Scope: Power BI ETL tools are designed for self-service and lightweight data preparation. They are not suitable for large-scale, enterprise-level ETL processes.
- Sources and Destinations: Microsoft supports only a few destinations. Standalone Power BI services can load data from various sources, but the only destinations are Power BI itself or an Azure Data Lake.
- Performance: poorly optimized queries in Power Query can cause poor performance and excessive memory usage. Import connection mode can speed things up, but it increases dataset size, while DirectQuery connections can suffer from slow query execution, especially with complex joins and aggregations. More details are available in the [Microsoft optimization guide](https://learn.microsoft.com/en-us/power-bi/guidance/power-bi-optimization).
- Complex Transformations: Power BI and Fabric tools are great for basic transformations, but when dealing with complex multi-step ETL processes, advanced data manipulation, and execution logic, they fall short of dedicated ETL tools. Writing complex M code for these scenarios requires special technical skills and involves manual intervention.

You can overcome these challenges with query optimization, proper data modeling, and efficient use of storage modes. However, for the best ETL experience, a dedicated ETL tool like Skyvia is a game-changer. ETL-dedicated tools can solve a wide range of data-related tasks, such as data migration, bulk loads, complex pipelines, etc. They usually support various cloud apps, databases, data warehouses, and storage services, all of which may serve as both sources and targets for ETL. Let's look at a simple example in Skyvia.

Skyvia ETL Tool for Power BI

Skyvia is a cloud platform suitable for multiple data-related operations: data warehousing, ETL and Reverse ETL, workflow automation, etc. You can easily load information into a data warehouse using Skyvia and then analyze it with Power BI. Moreover, Skyvia is code-free, which means that any setup is done in a visual wizard. Below, we show you how to extract data from a cloud source, transform and load it to a data warehouse, and then use it for analysis in Power BI. We use [Skyvia Replication](https://skyvia.com/data-integration/replication), which copies source data and its structure to databases and data warehouses.

STEP 1: Connect to the Source

Log into Skyvia or [create an account](https://app.skyvia.com/register?). Go to +Create New -> Connection and select the source from the list. Specify your credentials.
STEP 2: Load Cloud Data to a Data Warehouse

1. Go to +Create New -> Replication in the top menu.
2. Select the app of your choice as a source.
3. Select a preferred data warehouse as a destination. Skyvia supports Amazon Redshift, Google BigQuery, Azure Synapse Analytics, Snowflake, and other popular [data warehouse platforms](https://skyvia.com/blog/best-data-warehouse-tools/).
4. Select the objects for replication, then configure them on the right. Click Edit to open the task editor for an object. Here, you can select specific fields to replicate, set the target table name, and specify replication settings. Skyvia also allows you to address compliance requirements with hashing: select specific object fields to hash, ensuring the secure transfer of sensitive data.
5. Set filters, if needed, and save the task.
6. Name the replication and save it. You can run the replication manually or on a schedule.

STEP 3: Load Data into Power BI

Power BI offers native integration with major data warehouses, so you can transfer the recently uploaded information from the data warehouse (in this example, Snowflake) to Power BI.

1. In Power BI, select Get Data and type Snowflake in the search bar.
2. Select Snowflake in the list and click Connect.
3. Enter your Snowflake credentials.
4. Once the connection is established, the Navigator window with the available objects on the server appears. Select the needed source objects and click Load to pull them into Power BI Desktop.

As we've just observed, Skyvia handles data warehousing scenarios with ELT.

Skyvia ETL, ELT, Reverse ETL Features

The case we demonstrated is only one of many ways to use Skyvia for ETL. Skyvia offers several products that enable not only ETL but also [ELT](https://skyvia.com/learn/what-is-elt) and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) implementation.
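The replication-then-analyze flow above follows the classic ELT pattern: land raw data in the warehouse first, then transform it there with SQL before a BI tool reads it. Below is a minimal, self-contained Python sketch of that pattern, using SQLite as a stand-in for the warehouse; the contact records are invented for illustration:

```python
import sqlite3

# Extract: raw records pulled from a cloud source (stubbed here).
raw_contacts = [
    (1, "Alice", "alice@example.com"),
    (2, "Bob", "bob@example.com"),
    (2, "Bob", "bob@example.com"),  # duplicate to be cleaned in-warehouse
]

# Load: land the raw data in the warehouse as-is (SQLite stands in here).
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE raw_contacts (id INTEGER, name TEXT, email TEXT)")
wh.executemany("INSERT INTO raw_contacts VALUES (?, ?, ?)", raw_contacts)

# Transform: run SQL inside the warehouse to produce an analysis-ready table,
# which a BI tool like Power BI would then query.
wh.execute("""
    CREATE TABLE contacts AS
    SELECT DISTINCT id, name, email
    FROM raw_contacts
""")

rows = wh.execute("SELECT id, name FROM contacts ORDER BY name").fetchall()
print(rows)
```

In a real setup, the warehouse would be Snowflake or BigQuery, and the in-warehouse transformation step is typically managed by a dedicated tool such as dbt.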
Some of the most popular Skyvia tools are listed below:

- The [Import](https://skyvia.com/data-integration/import) tool is suitable for ETL and Reverse ETL.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) serve complex data transformations and orchestration across compound pipelines. NOTE: Skyvia's Data Integration product also offers export to CSV files and synchronization between sources.
- [Connect](https://skyvia.com/connect) creates OData and SQL endpoints for your data quickly, with no coding and in convenient wizards.
- [Automation](https://skyvia.com/automation) helps automate workflows by performing actions based on events.

Skyvia vs. Other ETL Tools

No doubt, Skyvia is a robust ETL solution that covers many different data integration scenarios. However, it's not unique in its market niche; there are quite a few similar solutions from competitors. Let's have a look at other [popular data integration solutions](https://skyvia.com/blog/data-integration-tools/) and see how they differ from or resemble our platform. The best way to start is a comparison table with several critical parameters.

| | Skyvia | Informatica | Stitch | Hevo |
| --- | --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, data sync, workflow automation | ETL | ETL, ELT | ETL, Reverse ETL, ELT |
| Skill level | No-code and visual wizard | No-code, low-code | No-code, low-code | No-code, low-code |
| Sources | 200+ | 100+ | 140+ | 150+ |
| The ability for customers to add other sources | With REST API | With Connector Toolkit Developer | With Singer, Import API, or Incoming Webhooks | With REST API |
| G2 customer satisfaction rate | 4.8/5 | 4.4/5 | 4.5/5 | 4.3/5 |
| Compliance with safety certificates | HIPAA, GDPR, PCI DSS, ISO 27001 and SOC 2 (by Azure) | SOC 1, SOC 2, SOC 3, HIPAA/HITECH, GDPR, Privacy Shield | SOC 2 Type II, HIPAA BAA, ISO/IEC 27001, GDPR, and CCPA | SOC 2, HIPAA, GDPR, CCPA |
| Pricing | Volume- and feature-based. Has a free plan. | Consumption-based. | Volume- and feature-based. Has a 14-day trial. | Consumption-based pricing based on Hevo events. Full-feature 14-day free trial. Has a free plan. |

Compared to its major competitors, Skyvia supports the largest number of data integration scenarios and sources. It also has [the highest rankings for usability on the G2 review platform](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1). In terms of skill level requirements, all tools are very similar, as each offers no-code and low-code interfaces. Each service also complies with modern security protocols and certificates. As for pricing, Skyvia and Hevo offer [a free plan](https://skyvia.com/pricing/) with limited monthly data volume and features. Pricing plans for all tools rely on a consumption-based model that considers data size and feature set.

Conclusion

ETL plays a significant role in working with Power BI, ensuring data accuracy, consistency, and usability for analytics and reporting. Though we can't call Power BI an ETL tool, it still offers powerful built-in ETL features like Power Query and Dataflows that help perform ETL tasks on analyzed data. Now you know how to perform ETL using Power BI tools and what challenges you may face along the way. At the same time, solutions like Skyvia solve these problems, enabling a seamless, code-free way to integrate, transform, and load data from various sources into Power BI. They provide businesses with robust ETL, ELT, and [Reverse ETL solutions](https://skyvia.com/blog/best-reverse-etl-tools/).
[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)

Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Before Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication. Drawing on this diverse professional background, Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

10 Best ETL Tools for BigQuery in 2025: Features & Pricing

By [Sergey Bykov](https://skyvia.com/blog/author/sergeyb/), November 27, 2024

Summary

- Google BigQuery is a serverless data warehouse that can store large amounts of structured and semi-structured data.
- ETL (Extract, Transform, Load) is essential for preparing data for analysis, ensuring accuracy and consistency.
- The ELT (Extract, Load, Transform) approach allows for faster data processing and is increasingly used with BigQuery.
- When selecting an ETL tool for BigQuery, consider factors such as available connectors, ease of use, scalability, and security.
- Skyvia is highlighted as a top ETL tool for BigQuery due to its user-friendly interface and extensive connector options.
Google BigQuery has become a leading solution for storing and processing big data. Since it doesn't generate data by itself but collects it from other sources, you need a decent ETL tool to extract, transform, and load data into the data warehouse. Without proper ETL solutions, BigQuery may end up processing inaccurate or inconsistent data, leading to unreliable business insights. In this article, we review a number of [ETL tools](https://skyvia.com/blog/etl-tools/) for BigQuery. We also explore the functionality and specifics of each ETL tool and help you select the right one for your particular business case.

Table of Contents

- What Is Google BigQuery?
- What Is ETL?
- What Is ELT?
- Selection criteria for the best BigQuery ETL solution
- Top 10 Google BigQuery ETL Tools: Skyvia, Google Cloud Data Fusion, Dataddo, Hevo Data, Integrate.io, Talend, Fivetran, Stitch, Apache Spark, Keboola
- Conclusion
- FAQ

What Is Google BigQuery?

BigQuery is a serverless data warehouse included in the Google Cloud Platform. It can store large amounts of structured and semi-structured data coming from different sources. This data warehouse is widely used by data scientists and data analysts working in different industries, since it provides a uniform way to work with both [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/). BigQuery also offers robust security and high availability with a 99.99% uptime SLA. Google BigQuery integrates with machine learning and analytics engines right within the Google Cloud Platform. This enables the processing of large data sets, allowing users to make forecasts and get instant insights into business process improvements.

What Is ETL?

[ETL](https://skyvia.com/learn/what-is-etl) is a well-known approach to moving data, which originated in the 1970s.
It involves a three-step process: data extraction from source systems, transformation (cleansing, filtering, organization, etc.), and loading into the destination system. BigQuery with ETL is a win-win mix for preparing data for analysis beforehand. Since the transformation step takes place before the actual data loading into the data warehouse, the data arrives normalized and ready for use. Using ETL processes with BigQuery is a great way to reduce time to insights and spend less time on data organization within the data warehouse.

What Is ELT?

The [ELT approach](https://skyvia.com/learn/what-is-elt) is also commonly used to load data into Google BigQuery in the contemporary data integration landscape, since it can handle large data sets and ensure high loading speed. ELT has the same stages as ETL, though the transformation occurs at a different point in time. With ELT, data is transformed and processed on the data warehouse side, using tools such as dbt, which makes this approach faster and better adapted to current data volumes. Here are some other benefits of ELT over ETL:

- Better scalability
- Support for unstructured data
- More flexibility for data analysis
- Easy detection of data inconsistencies

In this article, we'll use the unified term 'ETL tool' to describe data integration services that support both the [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) approaches to data integration.

Selection criteria for the best BigQuery ETL solution

With dozens of options for BigQuery ETL, it might seem challenging to choose the right tool. Before making the final decision, you need to think of the integration scenarios and data sources involved. Consider these criteria to select the appropriate ETL tool.

- Pool of available connectors. First, decide which services you need to transfer data from into BigQuery. See whether the tools that interest you are on the list of pre-built connectors for simplified data integration.
Explore whether there is an ability to create new connectors.

- Data transformations. Transformation is the core module of the ETL process. Explore the transformation capabilities of a tool and decide how well they match your needs for organizing and preprocessing data.
- Ease of use. To build ETL and ELT pipelines easily and quickly, select a data integration tool that guarantees ease of use. See whether the platform has an intuitive interface and allows you to create data pipelines without coding.
- Scalability. Ensure the tool can handle growing data volumes and complex workflows without compromising performance.
- Security. Data protection is the paramount duty of any digital solution. Explore whether an ETL tool corresponds to modern security standards and protocols and whether it complies with data protection regulations, such as GDPR.
- Support and Documentation. Evaluate the availability of customer support, community resources, and comprehensive documentation for troubleshooting and learning.
- Pricing. Carefully inspect the pricing plans of the ETL tools you are interested in. Decide which ones correspond to your data volume, connectors of interest, and other useful features, and see how the price meets your budget.

Top 10 Google BigQuery ETL Tools

In this section, we provide a comprehensive list of ETL tools for BigQuery. You will also discover key features, limitations, pricing, and common use cases for each of these services. Let's start with the comparison table.
| Platform | G2 Rating | Key features | Pricing |
| --- | --- | --- | --- |
| Skyvia | 4.8 out of 5 | 200+ sources; powerful transformation capabilities; detailed error logs | A free plan is available; price starts at $79/mo |
| Google Cloud Data Fusion | 5.0 out of 5 | 150+ pre-built connectors; limited custom transformations | Three editions available; price starts at $0.35 per instance/hour |
| Dataddo | 4.7 out of 5 | 300+ connectors; zero-code UI; gaps in documentation | A free tier with a limit of 3 data flows; price starts from $99/month |
| Hevo Data | 4.4 out of 5 | Over 150 supported connectors; requires Python knowledge; supports reverse ETL | Free plan with 50 free connectors; three pricing plans available |
| Integrate.io | 4.3 out of 5 | 150+ supported connectors; no-coding visual pipeline designer; on-premises connectors require a $25,000 plan or higher | A free trial period; pricing plans start at $15,000 per year |
| Talend | 4.0 out of 5 | Over 1000 supported connectors; extendable using Python | Pricing isn't publicly available; contact sales |
| Fivetran | 4.2 out of 5 | 300+ connectors supported; custom connector creation available; post-load data transformation via SQL | A trial period is available; several pricing tiers with different features |
| Stitch | 4.4 out of 5 | No-coding integration configuration; extensible via REST API; limited transformation features | A free trial period; pricing plans start at $100 per month for 5 million records |
| Apache Spark | 4.2 out of 5 | Supports different programming languages; requires coding skills; no dedicated technical support | Available for free |
| Keboola | 4.7 out of 5 | Pre-built connectors for data sources; programming required for data transformations | A free tier with a limit; custom pricing |

1. Skyvia

[Skyvia](https://skyvia.com/) is a powerful cloud data platform that solves data integration tasks without coding. It offers tools for different use cases and supports all major cloud applications, databases, and data warehouses. Skyvia offers the following [data integration tools](https://skyvia.com/blog/data-integration-tools/):

- The [Replication](https://skyvia.com/data-integration/replication) tool is a wizard-based ELT solution designed for quickly and simply copying data to BigQuery (or another data warehouse) and keeping it up to date.
- The [Import](https://skyvia.com/data-integration/import) tool is a wizard-based solution for creating ETL and Reverse ETL tasks via a convenient, user-friendly interface.
- [Data Flow](https://www.youtube.com/watch?v=Opc1W9lq-Es&list=PLE66g1kd4iq3OOjHBEEbhEaddZXfX2JB1&pp=iAQB) is a visual data pipeline designer for advanced ETL scenarios. It allows you to move data across multiple data sources and build complex multistage transformations.

Reviews

G2 Rating: [4.8 out of 5](https://www.g2.com/products/skyvia/reviews#survey-response-9097470) (based on 200+ reviews).

Key Features

- Intuitive GUI that makes Skyvia easy to set up and use.
- Connectivity with [200+ sources](https://skyvia.com/connectors).
- A variety of tools for multiple data integration scenarios.
- Support for ETL, Reverse ETL, and ELT pipelines.
- Powerful [transformation capabilities](https://skyvia.com/learn/what-is-data-transformation).
- Detailed error logs.
- Email notifications that signal integration status, exceeded limits, etc.

Limitations

- Integration capabilities are more limited than those of code-based data integration tools.
- Features are limited in the Free plan.

Pricing

Skyvia has several [pricing plan tiers](https://skyvia.com/pricing) for different data volumes and feature sets.
Pricing doesn't depend on the number of users, connectors, or created integrations – these are unlimited in any plan. A free plan is also available, though with limited ETL features and tools.

- The Basic plan starts at $79/month.
- The Standard plan starts at $79/month.
- The Professional plan starts at $199/month.
- The price for the Enterprise plan is custom.

Best Suited for

Skyvia is suitable for companies of all sizes looking for a cloud-based platform for ETL, ELT, and Reverse ETL with minimal coding requirements. Its affordability and ease of use make Skyvia an excellent choice for businesses that need to connect SaaS apps, cloud services, and databases efficiently.

2. Google Cloud Data Fusion

Data Fusion is a Google Cloud product designed for everyday data integrations and transformations. It offers a visual, no-code interface for deploying code-free [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning). The tool can extract data from various cloud and on-premises sources and load it into BigQuery. Data Fusion also includes a number of pre-configured transformations suitable for both batch and real-time processing.

Reviews

G2 Rating: 5.0 out of 5 (based on 2 reviews).

Key Features

- Centralized management of data pipelines.
- A graphic UI that reduces the complexity of deploying ETL pipelines.
- Support for 150+ pre-built connectors and transformations.

Limitations

- The graphical interface is not well suited to building complex data pipelines.
- Limited custom transformations.
- Less control over infrastructure.
- Issues with data inaccuracy.

Pricing

Data Fusion comes in three editions:

- The Developer plan starts at $0.35 per instance/hour.
- The Basic plan starts at $1.80 per instance/hour.
- The Enterprise plan starts at $4.20 per instance/hour.

Best Suited for

Google Cloud Data Fusion is a good choice for companies that require a fully managed and scalable solution for complex data integration.
It performs well at integrating data across hybrid and multi-cloud environments. Google Data Fusion supports advanced use cases such as big data processing and real-time analytics, which makes it a good choice for cloud-native teams handling diverse data workflows.

3. Dataddo

Dataddo is a no-code data integration platform for creating ETL, ELT, and [Reverse ETL pipelines](https://skyvia.com/learn/what-is-reverse-etl). The tool primarily focuses on sending data from cloud-based sources to databases, data warehouses, and BI tools. Dataddo allows users to implement data pipelines for both real-time and [batch processing](https://skyvia.com/blog/batch-etl-processing/). You can also automate the extraction and loading of data by selecting regular update intervals.

Reviews

G2 Rating: 4.7 out of 5 (based on 180+ reviews).

Key Features

- Zero-code UI for building and deploying data pipelines.
- 300+ connectors for both source and destination applications.
- Testing of data models before sending data to BI apps.

Limitations

- Gaps in documentation.
- Sync frequency needs improvement.
- Not suitable for complex data pipelines.

Pricing

Dataddo comes in four editions:

- A free tier with a limit of 3 data flows.
- Data to Dashboards lets users load data into BI and visualization tools. The price starts from $99/month and depends on the number of needed data flows.
- Data Anywhere lets users load data into any available destination. The price starts from $99/month and depends on the number of needed data flows.
- Headless Data Integration for custom integrations.

Best Suited for

Dataddo is best suited for business teams in marketing, sales, and operations that need to connect data from SaaS tools like HubSpot or Salesforce to analytics platforms. Its no-code approach and focus on flexibility make it ideal for non-technical users who aim to create lightweight, analytics-ready pipelines for dashboards and reports with minimal IT involvement.

4. Hevo Data

Hevo Data is an end-to-end data pipeline platform that allows quick loading of data from different sources to a data warehouse. Hevo positions itself as an ELT tool and includes transformation features, with both visually configurable transformation blocks and Python code-based transformations. Hevo Data can also automatically detect the source schema and map it to the destination, and it can handle schema drift, signaling when the data structure changes on the source. All this reduces the manual effort of schema management.

Reviews

G2 Rating: 4.4 out of 5 (based on 200+ reviews).

Key Features

- Over 150 supported connectors.
- Near-real-time data loading to some destinations.
- Ability to schedule data loading to other sources.
- Supports reverse ETL from BigQuery.

Limitations

- Requires Python knowledge for advanced data transformations.
- Personal addresses, such as Gmail or Outlook, are not allowed.
- Only 50+ free connectors.
- Data from BigQuery can only be loaded to cloud data warehouses and relational databases.

Pricing

Hevo Data pricing plans differ in support options and the number of events (new or modified records). Three pricing plans are available:

- Free plan with 50 free connectors, 1,000,000 events, and email support.
- Starter plan with up to 100 million events, all supported connectors, and 24×7 live chat support.
- Custom plan for even higher needs.

Best Suited for

Hevo Data is a good choice for fast-growing companies that need real-time, automated pipelines for synchronizing operational data with cloud warehouses. It's optimized for handling large volumes of structured and semi-structured data, which makes it ideal for companies that scale their analytics infrastructure and aim to quickly implement ELT workflows with minimal setup.

5. Integrate.io

Integrate.io is a no-code data pipeline platform. It supports ETL, reverse ETL, ELT, CDC, API generation, etc.
It allows visually designing data pipelines on a diagram by dragging and connecting components – sources, transformations, and destinations – without coding. At the same time, Integrate.io offers advanced customization options for development. It supports designing both batch and real-time data pipelines, applying the needed transformations on the go.

Reviews

G2 Rating: 4.3 out of 5 (based on 200+ reviews).

Key Features

- 150+ supported connectors.
- No-coding visual pipeline designer.
- ETL, reverse ETL, ELT, CDC, and other tools.

Limitations

- Available connectors are more focused on e-commerce use cases.
- Cannot synchronize data in real time.
- On-premises connectors require a $25,000 plan or higher.

Pricing

Pricing plans start at $15,000 per year. This includes only two connectors and daily scheduling and limits the number of concurrently running packages to three. Additional clusters can be purchased at extra cost for a greater number of concurrently running packages. The next pricing tier starts at $25,000 and includes on-premises connectors, a 99.5% SLA, and other features. There is also a custom Enterprise plan with a negotiable price. Integrate.io also provides a free trial period.

Best Suited for

Integrate.io is a good option for e-commerce and retail businesses that require a low-code platform for data migration and transformation. Its drag-and-drop interface and pre-built connectors to Shopify, Salesforce, and other platforms make it a top choice for teams focused on customer analytics who want better marketing and operational insights.

6. Talend

Talend offers several different products for performing ETL tasks: the open-source Talend Open Studio as well as the paid Talend Data Fabric. Talend Data Fabric includes Talend Studio, Stitch, Talend Big Data, Management Console, API Services, Data Inventory, Pipeline Designer, Data Preparation, and Data Stewardship.
Talend supports over 1000 connectors and offers a Talend Component Kit for adding custom connectors. There are pre-built integration templates and various components to ease data-related processes.

Reviews

G2 Rating: 4.0 out of 5 (based on 65 reviews).

Key Features

- Over 1000 supported connectors.
- An open-source tool is available.
- Extendable using Python.

Limitations

- Talend Data Fabric is costly.
- Pricing isn't publicly available; you need to contact sales.

Pricing

Talend pricing isn't publicly available; you need to contact sales to discuss it. However, Talend offers a free trial, and you can use the open-source Talend Open Studio for free.

Best Suited for

Talend is best suited for mid-to-large enterprises requiring robust integration and governance capabilities. Talend's strengths come from its open-source roots and enterprise-grade features, which make it well suited to managing complex ETL workflows, ensuring data quality, and complying with regulations in industries like healthcare, finance, and manufacturing.

7. Fivetran

Fivetran is an automated data movement platform that moves data between different sources using API interfaces. It allows business users to create and deploy data-driven reports and dashboards based on data from their primary data sources. Fivetran relies on automation to effectively handle schema changes, significantly minimizing manual input. This makes it a popular choice for streamlined data replication using the ELT approach.

Reviews

G2 Rating: 4.2 out of 5 (based on 380+ reviews).

Key Features

- 300+ supported connectors.
- Custom connector creation is available.
- Streaming data loading support.
- Post-load data transformation via SQL.

Limitations

- No data loading to cloud apps.
- No ability to schedule syncs at a fixed time.
- No data transformations before sending data to the warehouse; they are only applied afterward.

Pricing

Fivetran pricing is mostly volume-based and feature-based.
There are several pricing tiers with different features, and for each tier, the monthly quote depends on the number of records loaded per month. The more records you load, the lower the per-record cost for subsequent records. A trial period is available for several pricing plans.

Best Suited for

Fivetran is good for organizations seeking a hands-off approach to ELT pipelines that prioritizes reliability and scalability. It automatically adjusts to schema changes in source systems, making it ideal for teams that want effortless maintenance and continuous data replication to cloud warehouses like Snowflake, BigQuery, or Redshift.

8. Stitch

Stitch is a popular ETL tool that helps you load data from various cloud apps and databases to cloud data warehouses and databases. It was acquired by Talend in 2018 and has since continued to operate as an independent unit. Stitch supports over 130 connectors to cloud apps, databases, and file storage.

Reviews

G2 Rating: 4.4 out of 5 (based on 60+ reviews).

Key Features

- No-coding integration configuration.
- Extensible via REST API.
- Enterprise-grade security and data compliance.
- Integration with the Singer open-source framework.

Limitations

- Limited transformation features.
- It supports ETL into BigQuery but not from BigQuery.

Pricing

Stitch pricing plans start at $100 per month for 5 million records. The number of records per month can be customized. The higher-tier pricing plans offer more features and a larger number of rows. Stitch also offers a free trial period.

Best Suited for

Stitch is suitable for startups and small teams seeking an affordable, straightforward ELT tool to move data from popular sources like MySQL, MongoDB, or Stripe into analytics warehouses. Its simplicity and transparent pricing make it ideal for companies starting their data integration journey or testing out analytics use cases.

9. Apache Spark

Apache Spark is an open-source data transformation engine.
It enables batch data transfer, data analysis with ANSI SQL queries, Exploratory Data Analysis (EDA) on huge data volumes, machine learning, and more. Apache Spark supports different file formats and databases via JDBC and has a number of third-party connectors, including connectors to Google BigQuery, Snowflake, etc.

Reviews

G2 Rating: 4.2 out of 5 (based on 40+ reviews).

Key Features

- Supports both structured and unstructured data.
- Supports different programming languages.

Limitations

- Requires coding skills.
- No dedicated technical support.

Pricing

This is an open-source solution available for free.

Best Suited for

Apache Spark is good for large organizations handling massive datasets and real-time data stream processing. It also supports machine learning and advanced analytics workflows, which makes it indispensable for data engineering teams working on big data applications.

10. Keboola

Keboola is a self-service data management platform suitable for designing ELT pipelines. It also has data transformation and orchestration features for organizing data on the go. Keboola offers powerful automation features for data transfer scheduling and management, allowing businesses to prioritize mission-critical tasks and minimize manual intervention.

Reviews

G2 Rating: 4.7 out of 5 (based on 90+ reviews).

Key Features

- Many pre-built connectors for data sources.
- Centralized management of data pipelines.
- Scalable data pipelines handling large data volumes.
- AI-driven automation of data management tasks.

Limitations

- Programming knowledge is required to perform data transformations.
- Error messages aren't self-explanatory.

Pricing

Keboola offers a free tier with a limit of 120 minutes of computational time for the first month and 60 minutes for subsequent months. Beyond that, it offers custom pricing, which depends on the individual needs of each business.
Best Suited for

Keboola is suitable for companies with data-heavy operations that need an all-in-one solution for extraction, transformation, and orchestration. With strong collaborative tools, built-in data governance, and modular features, Keboola helps teams run complex analytics workflows that span multiple departments or involve diverse data sources.

Conclusion

In today's fast-paced business environment, companies face an overwhelming amount of data that must be harnessed effectively to gain a competitive edge. ETL tools are crucial in solving this challenge by automating and simplifying data integration tasks, reducing manual intervention, and ensuring data quality and consistency. In this article, we delved into the world of ETL tools designed to enhance your experience with Google BigQuery, a popular and powerful analytics platform. Among the many noteworthy BigQuery ETL tools, Skyvia stands out as a superior choice: it offers advanced data integration features, a user-friendly interface, a wide range of available connectors, and a generous free plan for beginners to explore.

FAQ for BigQuery ETL Tools

Why Are BigQuery ETL Tools Necessary?

BigQuery is often selected as a central data repository since it can store and process huge data volumes. Various Google BigQuery ETL tools allow businesses to gather data, preprocess it, and quickly send it to a central repository. Overall, BigQuery ETL tools help prepare data for use in BigQuery thanks to their:

- Simplified process of data collection from many different sources.
- Seamless data transformation to match the BigQuery structure.
- Ability to save lots of time and effort for data engineers and analysts.
- Ability to handle increasing data volumes.

How to Transfer Data from BigQuery?

There are different ways of transferring data from BigQuery, from API integrations to ETL tools.
If you decide to use Skyvia, setting up the data integration scenario takes several minutes: select the data integration tool, set BigQuery as the source, set the required destination, apply transformations to the data if needed, set up scheduling for automated data loading, and run the integration.

What Data Can Be Extracted from BigQuery?

BigQuery supports semi-structured data (XML, JSON, and Avro files) and structured data organized in tables with definite data types. It also supports geospatial, time-series, and machine-learning data. The data that can be extracted from BigQuery also largely depends on the destination app and its supported data types.

What Is the Difference between ETL and ELT?

Both ETL and ELT aim to extract and load data to the destination, though the transformation stage takes place at different points in the process. With ETL, data is transformed before being transferred to the destination. With ELT, data is transformed on the destination side. Here are some other key differences between [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/):

| ELT | ETL |
| --- | --- |
| Mostly used for loading data into a cloud data warehouse or a data lake. | Doesn't impose limitations on the destination – data can be loaded into any supported destination. |
| Works with structured, semi-structured, and unstructured data. | Supports structured data. |
| Transformations on the destination side usually require programming skills. | Transformations are made after data extraction and before loading to the destination. |
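The ETL vs. ELT distinction above can be sketched in a few lines of Python. This is an illustrative stand-in only: it uses an in-memory SQLite database in place of a real warehouse like BigQuery, and the table and function names are hypothetical:

```python
import sqlite3

# Hypothetical raw source data: one clean record, one with a missing name.
rows = [{"name": "Ada", "amount": "100"}, {"name": None, "amount": "250"}]

def etl(rows, conn):
    # ETL: transform BEFORE loading -- only clean, typed rows reach the warehouse.
    clean = [(r["name"], int(r["amount"])) for r in rows if r["name"] is not None]
    conn.execute("CREATE TABLE etl_orders (name TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO etl_orders VALUES (?, ?)", clean)

def elt(rows, conn):
    # ELT: load raw data first, then transform with SQL inside the warehouse.
    conn.execute("CREATE TABLE raw_orders (name TEXT, amount TEXT)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [(r["name"], r["amount"]) for r in rows])
    conn.execute("""CREATE TABLE elt_orders AS
                    SELECT name, CAST(amount AS INTEGER) AS amount
                    FROM raw_orders WHERE name IS NOT NULL""")

conn = sqlite3.connect(":memory:")
etl(rows, conn)
elt(rows, conn)
print(conn.execute("SELECT COUNT(*) FROM etl_orders").fetchone()[0])  # 1 clean row
print(conn.execute("SELECT COUNT(*) FROM raw_orders").fetchone()[0])  # 2 raw rows
```

In the ETL branch, the bad record never reaches the warehouse; in the ELT branch, the raw data lands first and the cleanup happens in SQL on the warehouse side – which is the part that tools like dbt automate.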
[Sergey Bykov](https://skyvia.com/blog/author/sergeyb/) combines years of experience in technical writing with a deep understanding of data integration, cloud platforms, and emerging technologies. Known for making technical subjects approachable, he helps readers navigate complex tools and trends with confidence.

Best 9 ETL Tools for Oracle

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), May 29, 2023

According to the [VMR
report](https://www.verifiedmarketresearch.com/product/data-integration-market/), the Data Integration Market was valued at about USD 10.44 billion in 2021 and is expected to grow at a CAGR of 12.8% from 2023 to 2030, reaching USD 30.88 billion. This growth points to the data orchestration needed to support your services, big data, and historical data. And that's how we end up with [ETL tools](https://skyvia.com/blog/etl-tools/). (Read more about the [ETL vs. ELT difference here](https://skyvia.com/blog/elt-vs-etl/).)

For Oracle, ETL tools offer features and benefits that make them ideal for managing and transforming data specifically within Oracle environments. We're talking about such critical aspects as extracting data from multiple sources, transforming it using complex business rules, and loading it into Oracle databases or data warehouses. Overall, the business pains to be solved include:

- Avoiding errors and discrepancies by reducing the need for hand-coding.
- Spotting duplicates and inconsistencies to ensure data quality with data accuracy testing.
- Meeting compliance and regulatory standards by enabling more precise data analysis.
- Boosting ROI in the long run by saving time, effort, and resources.

This article provides an overview of the best extract, transform, and load (ETL) tools for Oracle on the market in 2025. The list is based on their features, reviews, and popularity among users, focusing on usability and Oracle compatibility:

- Skyvia
- Hevo Data
- AWS Glue
- Oracle Data Integrator
- Informatica
- Apache Airflow
- IBM InfoSphere DataStage
- Talend
- Hadoop
- ETL tools' key considerations
- Conclusion

Best 9 ETL Tools for Oracle

Skyvia

The efficient data integration procedure, easy-to-use interface, and cloud-based nature of [Skyvia](https://skyvia.com/) make it a strong choice for companies seeking ways to handle their data ecosystem.
The platform lets users perform different integration scenarios with the Oracle database, including ETL, Reverse ETL, ELT, two-way data sync, workflow automation, etc.

Key benefits

- A [no-code platform](https://skyvia.com/connectors/oracle) – an ideal choice for companies that don't have dedicated IT teams or data engineers.
- Data pipelines that are simple to set up and easy to manage.
- Pre-built connectors to over [180 data sources](https://skyvia.com/connectors/), making it convenient for users who want to efficiently aggregate and push data to data warehouses, CRMs, and other cloud-based applications.
- Support for various data integration capabilities, including [ETL, ELT, Reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/), and bi-directional data synchronization, guaranteeing that data is accurate and up to date across multiple platforms and making it a versatile tool for data handling.
- Advanced data mapping and transformation capabilities. Users can map fields between sources and destinations and transform data using various functions, expressions, and formulas, ensuring that data is correctly formatted and optimized for its intended use.
- Support for data replication and backup, which keeps information consistent and up to date across multiple systems.

Potential downsides

- The selection of sources is somewhat limited: just 180+.

Pricing

Skyvia offers cost-effective pricing plans, including a free plan. Click [here](https://skyvia.com/pricing/) for more detail.

Hevo Data

[Hevo Data](https://hevodata.com/) is a no-code data pipeline platform that helps businesses integrate data from multiple sources into their warehouse in near real-time.

Key benefits

- Real-time data replication from Oracle and streaming to the target destination, ensuring that data is always up-to-date and available for analysis.
- Support for 150+ ready-to-use integrations across databases, SaaS applications, cloud storage, SDKs, and streaming services, making it a versatile tool for data integration and management.
- Zero-downtime scaling, autonomous data guard, Oracle Database In-Memory, machine learning capabilities for DWH, and analytics technologies.
- A bi-directional data pipeline that allows businesses to move data in and out of Oracle.

Potential downsides

- Pricing (can be pretty steep for businesses with more extensive data integration needs).
- The user interface is less intuitive than in other integration tools.

Pricing

Free 14-day trial with a set of paid plans. Click [here](https://hevodata.com/pricing/pipeline/) for more detail. See our comparison chart: [Skyvia vs Hevo](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia)

AWS Glue

[AWS Glue](https://aws.amazon.com/glue/) is a serverless data integration service that simplifies discovering, preparing, integrating, and modernizing ETL processes.

Key benefits

- Makes it easy to prepare and load data for analytics without any infrastructure management.
- Handles provisioning, configuration, and scaling of the resources required to run ETL jobs on a fully managed, scale-out Apache Spark environment.

Potential downsides

- Limited support for Oracle features.
- Possible performance issues for large datasets.
- Cost considerations for long-term usage (an hourly rate, billed by the second, for crawlers and ETL jobs).

Pricing

Users pay for the services they use. There are no long-term contracts or termination fees. There's also an AWS Free Tier that includes:

- A set of free services.
- A set of short-term free services.
- 12 months of free services.

Click [here](https://aws.amazon.com/glue/pricing/) for more detail.
See our comparison chart: [Skyvia vs AWS Glue](https://skyvia.com/etl-tools-comparison/aws-glue-alternative-skyvia)

Oracle Data Integrator

[Oracle Data Integrator](https://www.oracle.com/middleware/technologies/data-integrator.html) (ODI) seamlessly integrates with Oracle Database as the native ETL solution.

Key benefits

- ODI provides a unified solution for efficiently transforming large volumes of data and processing events in real time. This automation saves time and reduces the risk of human error, ensuring that data is accurately integrated and transformed.
- Real-time monitoring and error-handling capabilities make it easy to identify and resolve issues quickly for efficient data management.

Potential downsides

- Compatibility problems with legacy systems can result in additional costs and complexity during implementation.
- Integrating data from multiple sources can increase the risk of data security breaches.
- Complexity and high expenses (while the tool provides a fast method to bulk-load data into the Oracle reporting database and access data transformation capabilities, it can also require significant investment in time, resources, and expertise to implement effectively).

Pricing

Flexible; depends on the resources used and time spent. There's a calculator to help estimate the services' prices. Click [here](https://www.oracle.com/integration/pricing/) for more detail.

Informatica

[Informatica PowerCenter](https://www.informatica.com/) lets you execute transformation tasks using an Autonomous Database, which improves data performance by utilizing Spark clusters and auto-tuning.

Key benefits

- Advanced pushdown optimization enables cloud data integration and ELT, improving productivity and optimizing the performance of integration tasks.
- Clean, governed data for Oracle, with the ability to ingest, integrate (ETL/ELT), and catalog it.
- Enhanced data security and governance help reduce data breach risk and simplify regulatory compliance.

Potential downsides

- Compatibility issues with other tools and systems.
- The high cost of implementation and maintenance.
- The tool's complexity.

Pricing

It offers a free trial; the monthly subscription is flexible and depends on business needs. Click [here](https://www.informatica.com/products/cloud-integration/pricing.html) for more detail. See our comparison chart: [Skyvia vs Informatica](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia)

Apache Airflow

[Apache Airflow](https://airflow.apache.org/) is an open-source, Python-based workflow orchestrator that can handle large volumes of data and be easily scaled up or down based on the company's needs.

Key benefits

- A customizable and extensible platform for building data pipelines that enables data scientists to share and reuse features across different models, ensuring consistency and accuracy in feature engineering.
- Multiple open-source tools for checking data quality from a DAG, improving quality and consistency in Oracle data pipelines.
- A single platform for developing, deploying, monitoring, and managing data pipelines; an intuitive user interface; plug-and-play possibilities.
- Apache Airflow is simple to combine with other tools and technologies, making it a flexible and scalable solution for Oracle data processing.

Potential downsides

- The complexity of configuration and setup required to use Airflow with Oracle, combined with a lack of appropriate documentation.
- Resource intensiveness and scalability concerns (Airflow requires significant resources to run, which can lead to performance issues when working with large datasets; at the same time, scaling Airflow for larger workloads can be difficult and may require significant investment in hardware and infrastructure).
- Limited support for Oracle-specific features and functionality: Airflow may not support all of the database's unique features and capabilities, which can limit its usefulness for businesses that rely heavily on Oracle-specific functionality.

Pricing
The solution is free and open source.

IBM InfoSphere DataStage

[IBM InfoSphere DataStage](https://www.ibm.com/products/datastage) is an ETL tool that lets users design, develop, and execute data-moving and data-transforming jobs. It connects to Oracle databases, making it an excellent option for companies whose data management requirements depend mainly on Oracle.

Key benefits
- A user-friendly UI with a range of connectivity options.
- An ability to work with various patterns, including extract, transform, and load (ETL) and extract, load, and transform (ELT).
- A high scalability level.

Potential downsides
- Compatibility issues with Oracle, such as:
  - Compatibility issues with older versions of Oracle databases and data sources when upgrading to a new version of Oracle (users may need to set the Oracle database compatibility level to 18.1.0 or lower to continue using the IIDR Replication Engine for Oracle).
  - Migrating from the Oracle OCI stage to the Oracle Connector stage using the Migration Tool may cause DataStage to freeze.
- Optimizing the performance of IBM InfoSphere DataStage jobs can be complicated and requires additional knowledge and resources.
- Additional expenses for maintenance and licensing.
- Extra resources, such as Docker containers, may be required to connect to different data sources, including Hive, Oracle, and SQL Server.

Pricing
The trial version is free to use. Users can also select a paid plan with a flexible set of options. Click [here](https://www.ibm.com/products/datastage/pricing) for more detail.
Talend

[Talend Open Studio](https://www.talend.com/products/talend-open-studio/) is an open-source solution created to streamline data integration and management for users of all skill levels. It can manage both job configuration and target execution, which makes this tool an excellent option for orchestrating your entire ETL pipeline.

Key benefits
- Feature monitoring and advanced scheduling provide seamless real-time data integration into dashboards.
- Easy management of various ETL tasks and self-service data preparation capabilities simplify things.
- Integration with [1,000 components and connectors](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia).
- Can normalize numerous dissimilar data sets collected from various sources before successfully migrating the information into multiple target systems.
- The functionality shortens integration development times from weeks or months down to just days or hours while increasing efficacy and reducing expenses.
- The service includes graphical profiles of the data for simplified comprehension and analysis.

Potential downsides
- The solution is technically complicated.
- Challenging to debug, with possible issues around custom coding.
- Limited support for legacy and non-standard systems.
- Processing and data integration may take longer for massive datasets.

Pricing
The solution is free and open source. See our comparison chart: [Skyvia vs Talend.](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)

Hadoop

[Hadoop](https://hadoop.apache.org/)'s storage approach uses a distributed file system that maps data anywhere on a cluster. Hadoop ETL's ability to retain raw data helps analyze where the business stands now and the path to its future.

Key benefits
- This tool accelerates Exadata, which supports operating systems, CPU memory, and hard drives and manages many databases, including DWH and online transaction processing (OLTP).
- A platform that combines many Oracle cloud services, allowing for seamless data mobility and uniform governance.
- Data cleansing, enrichment, and transformation are carried out inside the DWH thanks to the ELT data pipeline, which saves time and resources.

Potential downsides
- Compatibility problems between various Oracle database versions can hurt: they may cause installation delays and additional expenses to resolve, since Oracle databases and SQL Server applications operate differently in some areas.
- Costs and effort spent on user education might increase due to the complexity of deployment and maintenance.
- Potential security risks and issues, such as database risks or incomplete or inaccurate data, may lead to inaccurate judgments and consequences.

Pricing
The solution is free and open source.

ETL tools' key considerations

Compatibility
Integrating current data sources and warehouses with new data destinations should be easy. It's best for businesses to choose an ETL solution that can manage a variety of data sources and destinations, both structured and unstructured.

Data replication
Companies can simplify operations and increase productivity by using flexible scheduling options for data replication that can be tailored to meet specific business demands.

Scalability and performance
Real-time data processing is possible with scalable and fast setups that serve contemporary digital applications. Still, these platforms must retain optimal levels of performance and speed while managing massive amounts of data.

Ease of use and customization
ETL solutions should have an intuitive user interface and capabilities that allow users to personalize and automate their data integration processes. They should be powerful yet easy for non-technical users.

Keep it in balance
It's best to balance simplicity of use, adaptability, and compatibility with the team's existing skill set.
So, carefully consider your company's data integration requirements and priorities before selecting the optimal ETL technology.

Conclusion

Nowadays, various ETL tools exist in the market: cloud-based, open-source, and enterprise solutions. Each can be a good or even perfect fit, depending on what is vital for your business. So, selecting the ideal ETL tool requires careful consideration of several factors, including:
- Flexibility and scalability: evaluate each tool based on how much data needs processing across your integrated platforms.
- Accurate and timely delivery of data depends on the service level agreement (SLA) that an ETL tool offers.
- For an ETL tool to suit the requirements of your organization, it must possess certain vital features, like scheduling data replication at varied intervals, working with disparate file formats (including unstructured ones), and carrying out automated workflows without requiring human direction.

One should look into acquiring a top-tier ETL tool for Oracle to improve the choices made by key stakeholders at every level of an enterprise, based on all-inclusive facts gathered from vast data sets. So, Skyvia is the perfect software to use in such cases. Let's see why:
- The transparency of the UI makes it easy for all levels of technical ability.
- The software's ability to handle complex enterprise-level requirements related to seamless data transition, through robust integration and backup options, makes it the perfect choice.
- Eliminating expensive hardware and software installations is one of the primary benefits that a cloud-based solution offers in terms of cost reduction and increased flexibility.
- The pricing model offers scalability, allowing you to respond effectively to changing conditions.
Skyvia\u2019s browser is SQL tool based, and its Query provides easy access to data from databases and cloud applications, allowing one to retrieve and analyze it quickly and efficiently. See Skyvia in action \u2014 [schedule a demo](https://skyvia.com/schedule-demo) . Share [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Fetl-tools-for-oracle%2F) [Twitter](https://twitter.com/intent/tweet?text=Best+9+ETL+Tools+for+Oracle&url=https%3A%2F%2Fblog.skyvia.com%2Fetl-tools-for-oracle%2F&via=Skyvia+Blog) [Linkedin](https://www.linkedin.com/shareArticle?mini=true&url=https://skyvia.com/blog/etl-tools-for-oracle/&title=Best+9+ETL+Tools+for+Oracle) [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends. Continue Reading [Data Integration](https://skyvia.com/blog/category/data-integration/) [How to Export CSV from HubSpot (Contacts, Deals, Reports & More)](https://skyvia.com/blog/hubspot-export-csv-guide/) [Data Integration](https://skyvia.com/blog/category/data-integration/) [Snowflake to Salesforce Integration: Step-by-Step Solutions](https://skyvia.com/blog/snowflake-to-salesforce-integration/)" }, { "url": "https://skyvia.com/blog/etl-tools-for-tableau/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Top 5 ETL Tools for Tableau By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) [0](https://skyvia.com/blog/etl-tools-for-tableau/#respond) 3120 August 9, 2023 [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Fetl-tools-for-tableau%2F) 
We think faster and analyze better when we see what we do. That's especially important for business, because a wrong decision based on incorrect data can lead to huge losses and significantly reduce the chances of breaking into the market leaders. So, we need ETL and BI tools for gathering and visualizing data and, thus, speeding up decision-making. Like Tableau, for instance. Not familiar with it yet? Let's see what it can do and review the set of [ETL tools](https://skyvia.com/blog/etl-tools/) working with it.

Note: If you'd like to get more familiar with ETL tools on the market and their usability, go [here](https://skyvia.com/etl-tools-comparison/) to compare and find the perfect fit. You may also [review](https://skyvia.com/blog/etl-tools/) the top list of ETL tools in 2025.

Table of Contents: Tableau Prep, Skyvia, Hevo Data, Integrate.io, Talend, Top 4 consideration criteria for top 5 ETL tools, Conclusion

Tableau Prep

[Tableau Prep](https://www.tableau.com/products/prep) is one of the best visual tools in the data integration and processing market. But let's go backstage a bit. Tableau helps optimize data quality by cleaning, reshaping, and combining it. Why do we need that? To get the optimal format for further analysis, of course. So, in the end, we can breathe: the data is correct, accurate, and reliable. To talk about the ETL tools used here, let's look at Tableau Prep Builder. So what can it do?
- Data extraction from various sources.
- Data transformation into a common usable format.
- Data loading to Tableau for analysis and visualization.
The tool is pretty good at helping streamline the data preparation workflow and enhance the overall effectiveness of Tableau visualizations.

Pros
- An intuitive and easy-to-use UI.
- A wide range of data source options, like databases, spreadsheets, cloud services, and more.
- Reduces manual work.
- Centralized data management.

Cons
- Data size and performance limitations.
- Limited customization options.

Pricing
Plans start from [$70 per user per month](https://www.tableau.com/pricing/teams-orgs), but the model depends on your needs.

So, what about the competitors in this data marathon? Let's run with a few of them and find the winner. What about Skyvia as the first runner?

Skyvia

If we talk about [Skyvia](https://skyvia.com/), the first word that describes this app is convenience. Skyvia is a cloud-based, no-code SaaS solution, providing 180+ connectors and supporting ELT, ETL, and reverse ETL scenarios. Skyvia provides several data scenarios for Tableau:
- [Data replication](https://skyvia.com/data-integration/replication) to the data warehouse of your choice, which can then be connected natively to Tableau.
- Data connectivity via [OData](https://skyvia.com/connect/odata-endpoint) (read more on [how to connect Tableau to an OData data source](https://help.tableau.com/current/pro/desktop/en-us/examples_odata.htm) in the documentation).

The platform is the perfect choice for users who prefer simple but high-quality approaches and user-friendly apps.

Pros
- The platform is cloud-based, so you just have to open it in your browser to start.
- The solution is no-code and easy to use.
- You may select from a wide range (180+) of sources, like cloud sources, databases, and warehouses.
- Strong data safety and security capabilities.
- Convenient pricing, including a free plan.

Cons
- User docs with video sections would be helpful.

Pricing
The range of [pricing plans](https://skyvia.com/pricing/) is flexible enough to suit any business size.
You can also start Skyvia right now for free.

Hevo Data

Another ETL, ELT, and [reverse ETL platform](https://skyvia.com/blog/best-reverse-etl-tools/) is [Hevo Data](https://hevodata.com/). It's also no-code, cloud-based, and user-friendly, but the number of connectors is smaller: 150+ compared to Skyvia's 180+. Two products are offered to users here:
- Hevo Pipeline (ETL, ELT).
- Hevo Activate (reverse ETL).

Pros
- The solution's simplicity (no additional training required).
- No regular task maintenance is needed.
- Qualified technical support that works 24/7.
- Near real-time data synchronization is available.

Cons
- You have to be familiar with Python to transform the data.
- Complex data transformation abilities are limited.
- Lack of features for data quality management.

Pricing
The Hevo [pricing](https://hevodata.com/pricing/pipeline/) model starts with a free plan and includes Starter ($239 per user per month) and Business (a custom model based on your needs and budget).

Integrate.io

The next pretty strong web-based ETL, ELT, and reverse ETL platform is [Integrate.io](https://www.integrate.io/). It's focused on CDC and API management and provides users with low-code and no-code data pipelines. The intuitive drag-and-drop interface lets you review your data in graphical diagrams: you just drag the appropriate items, like sources and destinations, and connect them. Like Hevo Data, the platform provides about 150+ connectors, which is still fewer than Skyvia offers.

Pros
- A user-friendly tool with a pretty good visual interface.
- The platform offers a wide choice of sources for integration, including SaaS and cloud apps.

Cons
- Real-time data synchronization isn't available.
- The price is quite high, despite a two-week free trial.
- If a job fails, it might be challenging to debug it.

Pricing
The [pricing](https://www.integrate.io/pricing/) plans are quite expensive.
For instance, ETL/Reverse ETL begins at $15,000/year, and ELT/CDC starts at $199/month. But you may start with a 14-day free trial.

Talend

If your business is seeking a data integration platform that provides both cloud and on-premise integration solutions, you may look at [Talend](https://ua.talend.com/). There is a choice between Talend Open Studio (open source) and Talend Data Fabric (the low-code solution unifying data integration, quality, and management). The UI might be a bit complicated for non-technical users, but drag-and-drop, pre-built templates, and about 1,000 different sources and components are provided. You can also add custom connectors with Talend Component Kit.

Pros
- The impressive number of connectors (1,000)!
- An ability to use the open-source solution.
- A browser-based GUI to design the pipeline.

Cons
- A somewhat complicated pricing model.
- You have to use Python to extend your abilities here.

Pricing
The pricing model is subscription-based, and you need to contact sales to select a suitable subscription. They also offer a 14-day free trial for the cloud, or you may use Talend Open Studio for free.

Top 4 consideration criteria for top 5 ETL tools

Let's visually compare the top 5 ETL tools analyzed in this article and select the most competitive one.

| Criterion | Tableau Prep | Skyvia | Hevo Data | Integrate.io | Talend |
|---|---|---|---|---|---|
| Focus | ETL tool for BI | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation | Data ingestion, ETL, reverse ETL | ETL, ELT, and reverse ETL | Data integration and management |
| Skill level | No-code and user-friendly tool | No-code and easy-to-use wizard | No-code solution | Low-code and no-code solutions | Low-code environment |
| Connectors | About 100 native connectors | 180+ | 150+ | 150+ | 1,000 |
| Pricing | Volume-based and feature-based pricing; starts from $70 per user per month | Volume-based and feature-based pricing; freemium model allows you to start with a free plan | Event-based pricing | ETL/Reverse ETL starts at $15,000/year; ELT/CDC starts at $199/month; 14-day free trial | Subscription-based pricing; 14-day free trial for cloud; free Talend Open Studio |

The best-balanced solution

So, despite these data integration platforms being pretty similar in their offerings, tools, and usability across various integration scenarios, Skyvia keeps the best balance between simplicity, competitive pricing, and high quality.

Join Skyvia now: [view the product tour](https://www.youtube.com/watch?v=41JIlV1bP9c).
[Data Integration](https://skyvia.com/blog/category/data-integration/) Top 10 ETL Tools: Compare Data Integration Solutions By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) April 22, 2025

Why do modern businesses need ETL (Extract-Transform-Load) tools? They're drowning in various types of data from apps, websites, CRMs, spreadsheets, etc. The problem? It's often in different formats and hard to use. This mess needs behind-the-scenes data wranglers, pulling info from different sources, cleaning it up, and sending it where it needs to go (usually a data warehouse, lake, or some analytics solution). With the increasing demand for efficient data processing, organizations need ETL tools to quickly and accurately handle massive datasets. However, choosing the right ETL solution for your specific needs can be overwhelming, given the many options available.
This guide covers the top ETL tools for 2025 to help you compare the latest ones and choose the solution that fits your business requirements.

Table of contents: What Are ETL Tools?, ETL in 3 steps, ETL vs ELT, Different Types of ETL Tools, List of Best ETL Tools in 2025 (Skyvia, Pentaho, Oracle Data Integrator, Talend Open Studio, Informatica PowerCenter, Fivetran, Stitch, Airbyte, Singer, Xplenty (Integrate.io)), ETL Tool Comparison, Factors for Choosing the Best ETL Tool, Key Considerations for ETL Software Selection, Conclusion

What Are ETL Tools?

ETL tools are essential for automating and managing the movement of information between systems. The term ETL stands for extract, transform, and load, and these platforms:
- Extract information from multiple sources.
- Transform it into a valid format.
- Load it into a target system like a DWH.

Instead of manually handling data, ETL tools connect to databases, pull the information, and ensure it's ready for analysis or reporting. Such systems provide testing of data pipelines and monitoring of their performance in real time. Once set up, such platforms can keep operating smoothly even if something goes wrong. Think of them as the backstage crew in a theater production, working tirelessly behind the scenes to ensure that everything is in its place when the curtain rises. These platforms are the invisible hands orchestrating the journey of data from raw, scattered sources to polished, actionable insights in your analytics dashboard. You don't have to worry about stepping in every time. The entire process is automatic, saving businesses time when dealing with large volumes of data.

ETL in 3 steps

Step 1: Extract
Imagine a journalist collecting interviews, facts, and figures from a variety of databases, cloud apps, spreadsheets, or APIs. ETL tools do the same, automatically gathering data from wherever it lives. ETL extraction can also mean extracting files generated at a specific location.
In such scenarios, a file is created, the information is written into it, and the ETL tool extracts the file from there. We can extract both [structured and unstructured](https://skyvia.com/blog/structured-vs-unstructured-data/) data into the DWH.

Step 2: Transform
Next, just as an editor refines a story, corrects grammar, clarifies meaning, and shapes the narrative, ETL solutions clean, standardize, and enrich the information. They resolve inconsistencies, fill gaps, and convert formats so everything speaks the same language and is ready for analysis. ETL transformation types include multiple methods like deduplication, joining/splitting, summarization, etc.

Step 3: Load
Finally, picture the finished article published in a newspaper or online. In the data world, this means loading the prepared data into a DWH or business intelligence system, where it's ready to drive decisions, fuel reports, or power machine learning models. Data can either be loaded all at once, which is commonly called a full load, or at regular intervals, i.e., an incremental load.

Example: Consider an e-commerce company juggling sales data from Shopify, payment records from Stripe, and customer info from a CRM. Manually stitching this together would be a nightmare. With ETL automation, the company can extract data from all these platforms, transform it to match business rules (like standardizing date formats or categorizing products), and load it into a central dashboard for real-time inventory and sales analytics.

ETL vs ELT

When it comes to streamlining data processing, ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are two common approaches organizations use. While both serve to move data from one system to another, they differ in workflow, architecture, and when they are best suited for use. In many cases, organizations use a combination of both methods depending on the type of data they're working with.
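To make the three ETL steps concrete, here is a minimal, illustrative Python sketch (not tied to any particular ETL product; the CSV data, table, and column names are made up). It extracts records from a raw CSV export, transforms them into a consistent shape, and loads them into a SQLite database standing in for a data warehouse:

```python
import csv
import io
import sqlite3

# Hypothetical raw export: inconsistent casing and date separators.
RAW_CSV = """order_id,customer,amount,order_date
1001,ALICE,19.99,2025/01/05
1002,bob,5.00,2025-01-06
"""

def extract(text):
    """Extract: read records from a CSV source (here, an in-memory file)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: standardize names and dates, cast amounts to numbers."""
    out = []
    for r in rows:
        out.append({
            "order_id": int(r["order_id"]),
            "customer": r["customer"].title(),                # "ALICE" -> "Alice"
            "amount": float(r["amount"]),
            "order_date": r["order_date"].replace("/", "-"),  # one date format
        })
    return out

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse table (SQLite stand-in)."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders"
                 "(order_id INTEGER PRIMARY KEY, customer TEXT,"
                 " amount REAL, order_date TEXT)")
    conn.executemany(
        "INSERT OR REPLACE INTO orders "
        "VALUES (:order_id, :customer, :amount, :order_date)", rows)
    conn.commit()

warehouse = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), warehouse)
print(warehouse.execute("SELECT customer, order_date FROM orders").fetchall())
```

Swapping the order of the last two functions (load first, transform inside the warehouse with SQL) would turn this same pipeline into the ELT pattern discussed next.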
Choose ETL if you need to apply complex transformations, clean and validate data before loading it into your destination, or if your storage system can't handle heavy processing tasks. It's perfect for smaller datasets or when working with legacy systems. Choose ELT if you're dealing with large volumes of data, cloud-based systems, or need faster loading, with transformations happening after the data is loaded. ELT is often more scalable and efficient for modern data environments.

ELT (Extract-Load-Transform)

Data is extracted from the source system and loaded directly into the target system (usually a warehouse) without transforming it beforehand. Once it's loaded, transformations are applied within the DWH. This method allows users to store raw, untransformed information in the DWH, which is more flexible for experimenting with different transformation strategies as needed.

Pros
- Faster loading. Since data is loaded directly into the DWH before transformation, the loading process is typically faster, making it ideal for large datasets.
- Flexibility in transformation. Storing raw information gives you the flexibility to apply different transformation strategies as you go, enabling more dynamic and ad-hoc transformations.
- Scalable for big data. ELT is well suited for cloud-based data warehouses (like Google BigQuery, Snowflake, or Amazon Redshift) and is designed to handle large-scale transformations efficiently.
- Simplified automation. By separating the extraction and loading processes from transformation, automation becomes easier. You don't need to worry about transforming data in real time.
- Cost-efficiency for cloud platforms. With modern cloud DWHs offering scalable compute resources, ELT can be more cost-effective for businesses with high volumes of data.

Cons
- Higher storage requirements. Storing raw, untransformed data means you'll need more storage space in the DWH, which can lead to higher storage costs.
- Potential data privacy risks. Raw data may contain personal or sensitive information, which can pose privacy and security risks if not handled properly, especially in industries with strict data protection regulations.
- Slower transformations. Although loading is faster, transformations occur after data is in the warehouse, resulting in slower performance when you need to analyze it quickly.
- Complexity in data management. Having raw data stored in the warehouse can make managing and cleaning it harder, especially if the transformation process is complex or inconsistent.
- Requires powerful computing resources. Cloud-based warehouses or on-premise systems need robust computing power to handle large-scale transformations efficiently, adding to operational costs.

Different Types of ETL Tools

There are various ETL solutions available in the market. Some organizations use them for big data analysis, writing SQL scripts along the way. These ETL platforms can also be used for business intelligence. ETL tools can be categorized based on their usage and cost. Among the different types of ETL systems are the following:

Cloud-based ETL tools
Such systems are like ride-share apps: you don't need to own a car or worry about maintenance. Just tap a button, and a car (ETL pipeline) arrives, ready to take the data wherever it needs to go. Solutions such as Skyvia, Fivetran, or Stitch are scalable, managed, and integrate easily with cloud services.

Example: An online retailer uses Skyvia to automatically extract customer data from its website, transform it for analysis, and load it into a cloud DWH. If traffic spikes during a sale, the tool scales up automatically, just like rideshare surge pricing ensures enough cars are available when demand is high.

Open-source ETL tools
Think of them as a community bike workshop. You get the basic tools and blueprints for free and can customize your bike (data pipeline) however you like.
Tools like Talend Open Studio and Pentaho Data Integration are flexible and cost-effective, perfect for organizations with technical teams who enjoy tinkering and tailoring their solutions.

Example: A startup uses Talend Open Studio to pull sales data from its e-commerce site, transform it to match its accounting system, and load it into a cloud database. The open-source nature lets them add custom connectors as their business grows, like adding new gears to their bike for different terrains.

Enterprise software tools
Imagine a big train moving containers across the country. Systems like Informatica PowerCenter or Oracle Data Integrator are built for heavy loads, complex routes, and strict schedules. They're robust, fast, and packed with features, ideal for large corporations moving huge volumes of data between many systems. Their top priorities are reliability and speed.

Example: A global bank uses Informatica PowerCenter to integrate customer transactions from dozens of countries. The tool's parallel processing and automated failure detection ensure that even if one "car" (data source) has an issue, the rest of the train keeps moving, and issues are flagged for quick repair.

Reverse-ETL tools
[Reverse ETL](https://skyvia.com/learn/reverse-etl-tools) refers to the process of syncing or copying data from the DWH to other operational tools used by business teams. In other words, it's the opposite of the ETL or ELT process, where the data flows into the warehouse. Such systems (e.g., Hightouch, Census, Polytomic, RudderStack, Grouparoo) automate extracting curated data from your DWH, transforming it to match the needs of operational systems, and loading it into platforms like CRMs, marketing automation, or customer support tools.

Example: Imagine a marketing team identifying high-value customer segments in their data warehouse. With reverse ETL, these segments are automatically synced to their email marketing platform.
When a customer makes a new purchase, the warehouse updates, and the marketing system instantly knows to stop sending them retargeting ads.

Real-time ETL tools
These are like an airport's air traffic control tower. They monitor, process, and direct streams of incoming and outgoing flights (data) in real time. Unlike [batch ETL](https://skyvia.com/blog/batch-etl-processing/), which processes data at scheduled intervals, real-time tools handle data the moment it "takes off" or "lands," ensuring immediate visibility and action. This approach is crucial for scenarios where delays could mean missed opportunities or risks, such as fraud detection, stock trading, or live customer analytics.

Example: Imagine a global e-commerce company during a major sales event. As customers place orders, update carts, or request support, real-time ETL tools instantly capture these actions from websites, mobile apps, and payment gateways. The tools transform this data into dashboards for live inventory tracking, fraud detection systems that flag suspicious transactions within seconds, and customer support platforms that provide up-to-the-moment order status.

Custom ETL tools
Some organizations build their own ETL solutions, like a custom Swiss Army knife. You design each blade and tool to meet your exact needs. These are often scripts or in-house platforms, perfect for unique requirements or when off-the-shelf tools just won't do.

Example: A logistics company with legacy systems writes custom Python scripts to extract shipment data, transform it for modern analytics, and load it into a new dashboard. This approach gives them complete control but requires skilled developers to maintain and upgrade the "knife" as new needs arise.
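The kind of custom script such teams write often hinges on the full-load vs. incremental-load distinction mentioned earlier: after the first run, you only want rows that changed. Here is a stdlib-only Python sketch of that watermark pattern (the `shipments` table, timestamps, and in-memory SQLite "source" and "target" are all hypothetical stand-ins for real systems):

```python
import sqlite3

# Source and target; in a real setup these would be different systems.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.executescript("""
    CREATE TABLE shipments (id INTEGER PRIMARY KEY, status TEXT, updated_at TEXT);
    INSERT INTO shipments VALUES
        (1, 'delivered',  '2025-01-01T10:00'),
        (2, 'in_transit', '2025-01-02T09:30');
""")
target.execute("CREATE TABLE shipments "
               "(id INTEGER PRIMARY KEY, status TEXT, updated_at TEXT)")
target.execute("CREATE TABLE etl_watermark (last_seen TEXT)")
target.execute("INSERT INTO etl_watermark VALUES ('')")  # empty = never run

def incremental_load():
    """Copy only rows changed since the stored watermark, then advance it."""
    (last_seen,) = target.execute("SELECT last_seen FROM etl_watermark").fetchone()
    changed = source.execute(
        "SELECT id, status, updated_at FROM shipments WHERE updated_at > ?",
        (last_seen,)).fetchall()
    target.executemany("INSERT OR REPLACE INTO shipments VALUES (?, ?, ?)", changed)
    if changed:
        # Remember the newest timestamp we've processed.
        new_mark = max(row[2] for row in changed)
        target.execute("UPDATE etl_watermark SET last_seen = ?", (new_mark,))
    target.commit()
    return len(changed)

print(incremental_load())  # first run copies everything (a full load)
source.execute("UPDATE shipments "
               "SET status='delivered', updated_at='2025-01-03T08:00' WHERE id=2")
print(incremental_load())  # second run copies only the one changed row
```

The watermark lives in the target so the script is safe to rerun: a run that finds no newer rows simply copies nothing.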
## List of Best ETL Tools in 2025

### Skyvia

[Skyvia](https://skyvia.com/) is a robust cloud-based SaaS (Software as a Service) platform offering integration, replication, sync, cloud backup, [data flow, and control flow](https://www.youtube.com/watch?v=U8Zbk03E58Q) solutions. It supports [over 200 connectors](https://skyvia.com/connectors), making it easy to integrate data from various cloud applications, databases, file storage services, and cloud DWHs. Skyvia's [ETL tool](https://skyvia.com/data-integration/import) allows users to perform data extraction, transformation, and loading (ETL), ELT, and reverse ETL processes, all without coding. Its user-friendly interface and cloud-based nature make it an attractive option for businesses of all sizes, as it requires no local installation and works directly from the web browser.

**Pros**
- Free trials and plans are available to test features before committing.
- A no-code interface simplifies complex tasks like data transformations and queries.
- Scheduled package runs and automatic monitoring for smooth, continuous operations.
- Failure alerts and detailed logs ensure users stay on top of their integrations and troubleshoot issues quickly.
- Easy setup and intuitive interface for both technical and non-technical users.

**Cons**
- Limited functionality on the free plans, with some features and connectors restricted.
- No streaming data source support for real-time integrations.
- It might not meet the needs of highly complex custom integrations or very specific data processing requirements.

**Pricing**
Skyvia offers several flexible [pricing plans](https://skyvia.com/pricing) based on features and usage requirements. A free plan with limited features and usage makes it ideal for small projects or testing purposes. Paid plans start from $79/month and scale depending on the number of integrations, data volume, and advanced features required (e.g., API access, custom connectors, etc.).
### Pentaho

Pentaho is an open-source integration and analytics platform widely used for ETL processes, data blending, and advanced analytics. It allows users to extract, transform, and load data from various databases, applications, and flat files. Pentaho provides enterprise-grade capabilities alongside an open-source community edition, giving businesses flexibility in choosing the solution that fits their needs. With its powerful integration tools, Pentaho can process data at scale and offers deep analytics capabilities, including reporting, dashboards, and predictive analytics. The platform's intuitive drag-and-drop interface and the ability to create complex workflows make it a popular choice for technical and non-technical users alike.

**Pros**
- Open-source version available for cost-effective data integration.
- Wide support for data sources and integration capabilities, including cloud services, databases, etc.
- Flexible transformation options for complex data manipulation and processing.
- Scalable architecture suited for both small and large data environments.
- Advanced analytics, including reporting, dashboards, and predictive capabilities.

**Cons**
- Requires technical expertise to set up and manage, especially for more complex transformations.
- Performance issues when working with huge datasets.
- The user interface can feel less intuitive compared to more modern tools.
- Limited cloud-native features compared to cloud-first platforms.

**Pricing**
Pentaho offers both free and paid options. The Community Edition is free and open-source, ideal for small to medium-sized businesses or those with a limited budget. The Enterprise Edition is priced upon request and includes additional support, scalability, and advanced features like high availability and extended connectors. For precise pricing, especially for the Enterprise Edition, contact Pentaho directly to receive a customized quote based on your business requirements.
### Oracle Data Integrator

Oracle Data Integrator (ODI) supports [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) processes across various data sources. It provides an open, standards-based approach to integrating data from multiple databases, cloud applications, and other sources into Oracle databases and other platforms. ODI offers high-performance transformations, real-time integration, and advanced data quality capabilities. Large enterprises widely use it for data migration, warehousing, and real-time analytics.

**Pros**
- High-performance ETL and ELT processing, especially with large datasets.
- Real-time data integration with minimal latency, ideal for businesses requiring up-to-date data.
- Support for a wide range of data sources, including databases, cloud services, and big data platforms.
- Advanced data transformation capabilities that allow complex data manipulation.
- Comprehensive data quality features to ensure clean, reliable data.

**Cons**
- Requires Oracle infrastructure, making it less suitable for non-Oracle environments.
- Complex setup and management, typically requiring technical expertise and specialized knowledge.
- Higher cost compared to some open-source and cloud-based ETL tools.
- Limited community support compared to more widely adopted tools.

**Pricing**
ODI has no fixed pricing model and offers custom pricing based on deployment size and business needs. The cost typically includes both licensing and support fees. For the most accurate pricing, contact Oracle directly or visit their website to request a quote tailored to your organization's needs.

### Talend Open Studio

Talend Open Studio is an open-source ETL and integration platform that provides businesses with a wide range of data processing and transformation capabilities. It enables users to extract, transform, and load data from multiple databases, cloud services, and flat files. Its drag-and-drop interface allows even non-technical users to design and execute data workflows.
The tool also supports integration with other Talend products for more advanced features, making it a flexible option for small to medium-sized businesses or those looking to scale their data integration efforts.

**Pros**
- Open-source and free to use, making it a cost-effective choice for businesses on a budget.
- A broad range of connectors supporting integration with various databases, cloud services, and applications.
- Flexibility to scale as businesses grow, with the ability to integrate additional Talend products for advanced features.
- Strong community support, offering extensive documentation and resources for users.
- Easy to use for developers familiar with the Eclipse IDE.

**Cons**
- Requires technical knowledge to set up more complex transformations and workflows.
- Performance can lag with larger datasets or complex integrations.
- Limited cloud-native features, especially compared to newer, cloud-first tools.
- May require additional plugins or Talend Enterprise products for advanced capabilities like big data processing.

**Pricing**
The free, open-source platform offers robust features without licensing fees. However, if you need additional features such as advanced support, big data integration, or cloud connectivity, Talend offers paid plans as part of its Talend Data Fabric suite. Enterprise-level pricing depends on the scale of the deployment and the specific capabilities users need, so it's recommended to contact Talend for a custom quote.

### Informatica PowerCenter

This enterprise-grade integration platform handles complex ETL workflows. It enables businesses to extract, transform, and load data from various sources into DWHs, cloud platforms, and other target systems. The platform supports high-volume data processing and is used by large enterprises for data migration, quality management, and governance.
With advanced transformation, scheduling, and error-handling features, Informatica PowerCenter is well suited for organizations that require sophisticated data integration at scale. It offers a graphical user interface for designing workflows, plus monitoring tools to track job performance and troubleshoot issues.

**Pros**
- Advanced data transformation capabilities, enabling sophisticated data processing and cleansing.
- Comprehensive monitoring and debugging tools to ensure smooth operations.
- Supports many data sources, including cloud, on-premise, and hybrid environments.
- Strong data governance features that help ensure compliance with regulations.
- Supports parallelism and easy job scheduling.

**Cons**
- Expensive, particularly for small businesses or those with budget constraints.
- Requires specialized knowledge for setup, maintenance, and complex transformations.
- Complex user interface that might be challenging for new users or non-technical teams.
- Performance can suffer when handling very large datasets without sufficient hardware.

**Pricing**
Informatica PowerCenter offers custom pricing based on the deployment size, number of users, and required features. The price point makes it more suitable for large enterprises with complex data integration needs. For detailed pricing, contact Informatica directly to receive a tailored quote for your business.

### Fivetran

This cloud-based data integration system provides automated ETL and ELT services to move data between various sources and destinations. It simplifies the process by automating data pipelines and offering pre-built connectors for a wide range of applications and databases. Fivetran is designed for seamless data extraction, transformation, and loading with minimal setup and ongoing maintenance. It's particularly known for its ability to handle high-volume, high-frequency data transfers.
It is the right choice for those looking to streamline their data workflows and maintain clean, synchronized data across platforms.

**Pros**
- Over 500 pre-built connectors, making it easy to integrate with various platforms.
- Automatic schema migrations and incremental data loading ensure up-to-date data.
- Minimal maintenance required; Fivetran takes care of data pipeline management.
- Fast, no-code setup, making it ideal for non-technical users.
- Scalable, handling high-frequency data transfers for large datasets.

**Cons**
- Expensive compared to some other ETL tools, especially at scale.
- Limited transformation capabilities compared to more advanced ETL tools.
- Data source restrictions; not all platforms are supported.
- Can be difficult to troubleshoot when data sync issues arise.

**Pricing**
Fivetran offers custom pricing, with costs based on the volume of data and connectors in use. The base plan starts at $1,000/month and can increase depending on data usage and specific needs. For detailed pricing, contact Fivetran directly for a personalized quote.

### Stitch

This cloud-based ETL service simplifies data integration and transfer between various data sources and destinations. It's easy to use and provides pre-built connectors for over 130 platforms, making it easy to extract, transform, and load data with minimal configuration. Stitch offers automated, reliable data pipelines with built-in scheduling, allowing businesses to focus on analytics while it handles the heavy lifting. The platform is a good solution for businesses looking to integrate and manage data across multiple cloud applications without extensive technical expertise.

**Pros**
- 140+ pre-built connectors, simplifying integration with a wide range of platforms.
- Automated scheduling of data pipelines, reducing the manual work for data engineers.
- Scalable solution for both small businesses and larger enterprises with growing data needs.
- Easy-to-use interface with a simple setup process and minimal technical requirements.
- Real-time data replication for most sources, ensuring up-to-date data in the DWH.

**Cons**
- Limited transformation capabilities compared to some other ETL platforms.
- No on-premise option; Stitch is cloud-based only, which might not work for some organizations.
- Can get expensive as the data volume grows or when using premium connectors.
- Basic error handling; some users report limited troubleshooting options when data fails to sync.

**Pricing**
Stitch offers a free plan with basic functionality for small datasets. Paid plans start at $100/month and increase depending on the volume of data processed and the number of connectors used. Pricing is based on the number of rows synced per month, and additional features like advanced connectors or support come with higher-tier plans.

### Airbyte

This open-source integration platform executes automated ELT pipelines and monitors their logs. It focuses on providing a flexible and scalable solution with over 550 pre-built connectors, making it easy to integrate with a variety of cloud applications, databases, and DWHs. One of Airbyte's key advantages is its open-source nature, which lets users customize and extend the platform to their specific needs. It also offers automatic schema migrations, ensuring that data pipelines stay up to date without manual intervention.

**Pros**
- Open-source and free to use, allowing for customization and no licensing fees.
- Flexibility, with the ability to customize connectors and transformations.
- Active community support and regular updates from Airbyte contributors.
- Cloud-native and scalable, ideal for small and large businesses with growing data integration needs.

**Cons**
- Limited support for data transformation; Airbyte focuses on data extraction and loading, leaving transformations to other tools or custom code.
- Setup and maintenance may require technical expertise, especially for customization.
- Still evolving as an open-source project; some features may not be as polished as those of more mature platforms.
- Requires monitoring; users must manage data pipelines and ensure smooth execution.

**Pricing**
Airbyte offers free open-source access as well as enterprise-level features with premium support and additional capabilities. It provides custom pricing based on an organization's requirements and usage for advanced features and dedicated support.

### Singer

Singer is a Python-based open-source tool that allows data extraction from different sources and consolidation into multiple destinations. Rather than being a complete ETL tool itself, Singer focuses on offering standardized connectors called "Taps" (for extracting data) and "Targets" (for loading data). Users can easily create custom connectors or use pre-built ones to integrate data from a variety of sources into their target systems. Singer is a fine choice for developers who prefer a modular, code-first approach, as it provides great flexibility in handling and processing data.

**Pros**
- Open-source and free to use, making it an affordable option for businesses.
- Modular design with reusable "Taps" and "Targets" for flexible data integration.
- Customizable connectors; developers can create their own taps and targets as needed.
- Active community and extensive documentation for support.
- Supports a wide range of sources and destinations, enabling seamless integrations.

**Cons**
- Requires development expertise, as it's primarily designed for developers and data engineers.
- Limited built-in transformation capabilities; users need to build their own logic or integrate with other tools for transformations.
- Not a fully managed solution, so it requires more effort in setup and maintenance compared to cloud-based ETL platforms.
- Lacks a native UI; as a code-first platform, there's no drag-and-drop interface for non-technical users.

**Pricing**
Singer is completely free and open-source, with no licensing fees or subscription costs. However, if you require additional enterprise-level support or features, you may need to integrate it with other tools or services that come at a cost.

### Xplenty (Integrate.io)

This cloud-based ETL system offers a friendly interface that lets users quickly create data pipelines without coding expertise. It supports a wide range of integrations, from cloud sources, SaaS apps, and databases to DWHs, making it a versatile choice for businesses looking to streamline their data workflows. Xplenty also provides real-time data processing and automated scheduling, enabling companies to keep their data up to date for analytics.

**Pros**
- A no-code interface makes it easy for non-technical users to build and manage data pipelines.
- Scalable for businesses of all sizes, with support for real-time data processing.
- Automated scheduling of ETL processes, reducing manual intervention and improving efficiency.
- Customizable transformation and data processing capabilities for more advanced users.

**Cons**
- Can become expensive at scale, especially with high data volumes or large numbers of integrations.
- Limited transformation options for complex data manipulations compared to more advanced tools.
- May require additional configuration for specific connectors, particularly non-standard data sources.
- Not as flexible as some open-source alternatives regarding custom integrations.

**Pricing**
Custom pricing is based on the size of the data and the specific features required. It typically starts from around $99/month for small to medium businesses and increases based on the number of data sources, destinations, and advanced features like real-time processing.
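To make Singer's "Tap" concept from the section above concrete: a tap is just a program that writes newline-delimited JSON messages of type SCHEMA, RECORD, and STATE to stdout, which any Target can consume. The following minimal, hand-rolled sketch (the `users` stream and its rows are invented for illustration) emits that message sequence; a real tap would pull the rows from an API or database instead of a hard-coded list.

```python
import json
import sys

emitted = []  # keep a copy of what was written (handy for inspection)

def write_message(msg):
    """Singer messages are newline-delimited JSON on stdout."""
    emitted.append(msg)
    sys.stdout.write(json.dumps(msg) + "\n")

# Describe the stream once, before any records.
write_message({
    "type": "SCHEMA",
    "stream": "users",
    "schema": {
        "type": "object",
        "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    },
    "key_properties": ["id"],
})

# Emit the extracted rows (hard-coded here; a real tap would query a source).
for row in [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]:
    write_message({"type": "RECORD", "stream": "users", "record": row})

# Emit state so an incremental run can resume where this one stopped.
write_message({"type": "STATE", "value": {"users": {"last_id": 2}}})
```

Because the protocol is just JSON over a pipe, a tap and target compose on the command line, e.g. `python tap.py | target-csv`, which is what gives Singer its modular, mix-and-match character.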
## ETL Tool Comparison

When choosing the right ETL tool, it's essential to consider factors like:

- Pre-built connectors.
- Ease of use (no-code/low-code).
- Deployment options.
- Key features.

Each tool offers a different balance depending on business needs, data complexity, and scalability requirements. The table below compares the key ETL tools against these criteria.

| Tool | Type | Pre-built Connectors | Usability | Deployment | Key Features |
|---|---|---|---|---|---|
| Skyvia | Cloud-based | 200+ | No-code. User-friendly, intuitive interface. | Cloud | Data integration, ETL, ELT, reverse ETL, backup, replication, sync, data flow, control flow. |
| Pentaho | Enterprise | 100+ | Low-code. Technical, powerful features. | On-premise | Data integration, big data, analytics, and real-time processing. |
| Oracle Data Integrator | Enterprise | 100+ | Low-code. Complex setup, high performance. | On-premise | High-performance ETL/ELT, real-time integration, and data governance. |
| Talend Open Studio | Open-source | 100+ | Low-code. Developer-focused, flexible. | Cloud, On-premise | Data transformation, cloud-native, batch processing. |
| Informatica PowerCenter | Enterprise | 200+ | Low-code. Complex, powerful, robust. | On-premise | High-volume data processing, real-time integration, data governance. |
| Fivetran | Cloud-based | 500+ | No-code. Easy setup, minimal maintenance. | Cloud | Real-time data replication, automated schema migrations. |
| Stitch | Cloud-based | 140+ | No-code. Easy to use, suitable for small datasets. | Cloud | Automated data syncing, real-time replication. |
| Airbyte | Open-source, cloud | 550+ | Low-code. Developer-centric, customizable. | Cloud, On-premise | Custom connectors, real-time data integration. |
| Singer | Open-source | 100+ | Low-code. Developer-focused, flexible. | Cloud, On-premise | Modular, customizable, supports custom connectors. |
| Xplenty (Integrate.io) | Cloud-based | 100+ | No-code. Intuitive, scalable. | Cloud | Data processing, integration, real-time sync, flexible transformations. |
## Factors for Choosing the Best ETL Tool

Selecting the right ETL software is like outfitting your team for a cross-country expedition. Each factor you consider is a piece of gear that ensures the journey through the data landscape is smooth, efficient, and free of unpleasant surprises. When choosing the best ETL tool:

- Check how well it works with your sources and the destinations you plan to send the data to.
- Users without deep technical knowledge may need a system that's easy and intuitive to use.
- Speed and performance also matter when handling big chunks of data.
- Check the cost and ensure it fits the company's budget, including any hidden fees.
- See what kind of support and community are behind the tool.
- Finally, check that it scales with you as the data grows.

### Key Considerations for ETL Software Selection

- **Compatibility: The Universal Travel Adapter.** Just as you pack an adapter to connect your devices anywhere in the world, your ETL system must plug seamlessly into all your data sources and destinations: cloud warehouses, SaaS apps, or legacy databases. The more pre-built connectors and integrations it offers, the less time you'll spend wrestling with custom setups.
- **Ease of Use: The All-Terrain Vehicle.** Some expeditions require expert drivers, while others benefit from vehicles anyone can handle. A user-friendly solution with no-code or low-code options lets technical and non-technical team members build, monitor, and adjust data pipelines, so you don't need a specialist at every turn.
- **Scalability & Performance: The Expandable Backpack.** As your journey progresses, you'll collect more "supplies" (data). The ETL software should handle growing volumes and complexity without slowing down, much like a backpack that expands to fit new gear. Look for tools that process large datasets quickly and reliably and scale up as your business grows.
- **Transformation Capabilities: The Swiss Army Knife.** Every adventure needs a tool that can adapt to unexpected challenges. The best platforms offer robust transformation features, handling everything from simple data cleaning to complex business logic, so you're ready for whatever the data throws at you.
- **Cost: The Trip Budget.** No journey is complete without a budget. ETL tools come with a variety of pricing models, such as subscription, pay-as-you-go, or open source. Ensure the costs align with your needs so you won't be hit with surprise fees as the data grows.
- **Support and Community: The Expedition Guide.** When you hit a rough patch, it helps to have an expert guide or a community of fellow travelers. Consider the quality of vendor support, documentation, and active user communities to ensure you're never stranded when issues arise.
- **Security and Compliance: The Sturdy Lockbox.** Protecting the data is like keeping your valuables in a lockbox. Ensure the system offers strong security features, encryption, and compliance with regulations like GDPR or HIPAA, especially if you're handling sensitive information.
- **Flexibility and Customization: The Modular Toolkit.** Every expedition is unique. The right software should let you customize workflows, transformations, and scheduling to fit your organization's distinct requirements, whether you need real-time streaming, batch processing, or both.

## Conclusion

ETL tools are a perfect way for organizations to streamline the data pipeline, maintain data governance, and monitor these processes daily. Choosing the right ETL tool depends on multiple factors, such as the organization's use cases, connections to the data sources, the skill sets needed to use the application, the ability to provide role-based access and data governance, budget, etc. Open-source ETL tools are free, but certain expertise is required to develop and maintain the workflows.
In the segment of cloud-based ETL tools, Skyvia excels by providing essential features that meet the demands of modern data integration, making it an excellent choice for businesses looking to streamline their data management processes.

## F.A.Q. for Best ETL Tools

**What are the best ETL tools available in 2025?**
Top ETL tools in 2025 include Skyvia, Fivetran, and Talend for cloud-based solutions, and Apache Airflow and Pentaho for more flexibility and scalability. Choose based on your team's expertise and integration needs.

**What types of ETL tools are there?**
ETL tools include cloud-based (Skyvia, Fivetran), on-premise (Informatica, Talend), and open-source (Apache NiFi, Airbyte) options, each serving different business requirements for integration, customization, and security.

**How do I choose the best ETL software for my business?**
Consider data volume, processing needs, and scalability. Cloud-based tools like Skyvia are user-friendly, while on-premise tools like Informatica offer better security control for sensitive industries.

**What are the top ETL tools for large-scale data migration?**
For large-scale migrations, Informatica PowerCenter and Talend Data Fabric handle high volumes efficiently, while Fivetran and Skyvia are ideal for cloud-based migrations with strong scalability.

**What is the difference between batch ETL and real-time ETL tools?**
Batch ETL processes data at scheduled intervals, while real-time ETL tools (e.g., Fivetran, Airbyte) offer continuous updates. Choose based on the need for immediate data consistency and speed.
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/). Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

---

[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

# 5 Ways to Export Data from Salesforce to Excel

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | January 24, 2025
Salesforce has become a cornerstone for managing customer relationships and data in businesses of all sizes. However, for many users, exporting Salesforce data to Excel remains an essential step for in-depth analysis, creating custom reports, or sharing insights with stakeholders. While Salesforce provides native options for exporting data, the diverse needs of businesses often demand more flexible and efficient solutions. This article explores five practical ways to export Salesforce data to Excel, covering scenarios ranging from simple one-time exports to automated workflows for seamless data synchronization. Whether you're a beginner or an advanced user, this guide will help you choose the right approach to streamline your data export process.

**Table of contents**
- Salesforce Interface Data Export Options
  - Salesforce Data Export
  - Exporting Salesforce Reports
  - Pros and Cons of Using Salesforce Interface Export Options
- Salesforce Data Loaders
  - Salesforce Data Loader
  - Skyvia Salesforce Data Loader
  - Pros and Cons of the Salesforce Data Loaders
- Excel Original Interface
  - Pros and Cons of Using the Excel Original Interface
- Excel Add-Ins
  - Popular Excel Add-ins for Data Export
  - Pros and Cons of Excel Add-ins
- ODBC Drivers and OData Endpoints
  - Exporting Salesforce Data Using ODBC Drivers
  - Pros and Cons of Using the ODBC Drivers
  - Exporting Salesforce Data Using OData Endpoints
  - Pros and Cons of Using the OData Endpoints for Salesforce to Excel Exports
- The Best Fits of Each Method
- Conclusion

## 1. Salesforce Interface Data Export Options

One of the easiest ways to [export data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) to Excel is by using the Salesforce interface's built-in export options: data export (for large exports) and reports export (for custom report data). Here's a step-by-step guide to each method.

### Salesforce Data Export

The native [Salesforce Data Export](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/exporting_data.htm?) method is appropriate when exporting large datasets from the Salesforce organization, for example, for a complete backup. This method provides flexibility for basic data backup or reporting needs directly within Salesforce and allows users to export all data from Salesforce in a structured format. Here's how this approach works:

1. Navigate to Setup.
2. In the Setup search bar, type Data Export and select Data Export from the results under Data Management.
3. Select either Export Now (for an immediate export) or Schedule Export (to export data at a specific time, such as weekly or monthly).
4. Select the desired data format (CSV or Excel) for easier use in Excel.
5. Choose the data objects you want to include in the export (e.g., Accounts, Contacts, Opportunities) or select Include All Data to export everything.
6. Click Start Export or Save to schedule.
7. Salesforce will begin preparing the data, which may take time depending on the data size. When the export is ready, Salesforce sends an email with a download link.
8. The exported data arrives as a ZIP archive with CSV files for each data object. Unzip the file, open the CSV files, and view the data in Excel.

### Exporting Salesforce Reports

[Salesforce Reports](https://help.salesforce.com/s/articleView?id=sf.reports_export.htm&type=5) allows users to create and export specific reports in Excel-friendly formats.
1. Navigate to the Reports tab from the Salesforce main dashboard and create a new report or open an existing one.
2. Add filters, fields, and groupings as needed to tailor the report so it includes only the data you want to export. This step is especially useful if you need specific subsets of data, like monthly sales or activity reports.
3. Once the report is ready, click the Export button.
4. Choose the Export View (Formatted or Details Only) and file format (XLS or CSV).
5. Save the exported file to your computer and open it directly in Excel for analysis.

### Pros and Cons of Using Salesforce Interface Export Options

**Pros**
- Simple and Accessible. No third-party tools or apps are required.
- Automated Export Schedules. The scheduled export option is perfect for regular data backups.
- Custom Reports. Salesforce Reports allow highly tailored exports, pulling only the specific data you need.

**Cons**
- Data Export Limits. Data export is limited to weekly or monthly frequency for non-premium users.
- File Size Constraints. Large exports may require manual handling due to file size limits.
- Formatting Limitations. Exported reports may require formatting adjustments for optimal readability in Excel.

## 2. Salesforce Data Loaders

Another popular way to export data is using data loaders. These tools allow you to [export data from Salesforce](https://skyvia.com/tutorials/how-to-export-data-from-salesforce) to Excel as CSV files, edit those files in Excel, and import the changed CSV files back into Salesforce. Many such data loaders are available, and many of them have free versions. Here, we'll review the [native Salesforce built-in data loader](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/) and the third-party Skyvia Salesforce data loader and consider their pros, cons, and most popular use cases.
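For readers curious what such tools do under the hood, the same export can be scripted. The sketch below uses the community `simple-salesforce` Python library; the credentials and query are placeholders, and the live API call is commented out so the CSV-writing part stands alone with sample records shaped like what `query_all` returns.

```python
import csv

# Live call, needs real credentials (placeholders shown):
# from simple_salesforce import Salesforce
# sf = Salesforce(username="user@example.com", password="password",
#                 security_token="token")
# records = sf.query_all("SELECT Id, Name, Industry FROM Account")["records"]

# Sample records mimicking the shape simple-salesforce returns.
records = [
    {"attributes": {"type": "Account"}, "Id": "001xx0000001",
     "Name": "Acme", "Industry": "Manufacturing"},
    {"attributes": {"type": "Account"}, "Id": "001xx0000002",
     "Name": "Globex", "Industry": "Energy"},
]

def records_to_csv(records, path):
    """Drop the 'attributes' metadata and write the fields to a CSV
    that Excel can open directly."""
    fields = [k for k in records[0] if k != "attributes"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for rec in records:
            writer.writerow({k: rec[k] for k in fields})

records_to_csv(records, "accounts.csv")
```

A script like this trades the data loaders' point-and-click convenience for full control over the query and output format, which is why the loader tools below remain the better fit for non-developers.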
### Salesforce Data Loader

The [Salesforce Data Loader](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/data_loader_intro.htm) is a native client application, primarily effective for exporting data in bulk and especially useful for large datasets. It supports several file formats, including CSV, XML, and Excel, and can import and export up to 5 million records via a wizard UI or a command line (for Windows users). The tool suits administrators and developers who want precise control over their operations, offering features like field mapping and error logging to ensure a smooth workflow. The user-friendly wizard simplifies the export process, while the command-line functionality supports task automation and scheduling. Follow the steps below to start working with it:

1. [Download](https://developer.salesforce.com/tools/data-loader) and install the Salesforce Data Loader tool.
2. Open Data Loader, log in with your Salesforce credentials, and select the object you wish to export (e.g., Accounts, Contacts).
3. Specify the file path where the exported data will be saved.
4. Follow the prompts to finish the export process.

### Skyvia Salesforce Data Loader

[Skyvia Salesforce Data Loader](https://skyvia.com/data-integration/salesforce-data-loader) is a tool for exporting, importing, and syncing data between Salesforce and other platforms, including direct data export into Excel. Unlike Salesforce's native Data Loader, Skyvia is fully cloud-based, so there is no installation and no manual updates. It also offers a no-code interface, so non-technical users can start exporting easily. The platform provides advanced filtering options, allowing users to export only the necessary records. Its intuitive design supports quick setup and seamless integration with [200+](https://skyvia.com/connectors#marketing) platforms, making the solution suitable for businesses of all sizes.
Automated scheduling lets users set up recurring export tasks, keeping data up to date without manual intervention. Here's a step-by-step guide to using Skyvia Data Loader to export Salesforce data to Excel.

1. Visit [Skyvia's website](https://skyvia.com/) and sign up for a free account if you don't already have one.
2. Once logged in, go to **Integration** from the Skyvia page.
3. Select **+ Create New** > **Connection**.
4. Select the Salesforce connector in the connectors list, set the appropriate parameters, and click **Create Connection**.
5. Select **+ Create New** and click **Export** in the Integration column.
6. Select the Source (Salesforce) and Target (CSV Download manually).
7. Go to the Task Editor and click **Add new**. Select the object (for example, Accounts) on the **Source Definition** tab. Note: in this step, you can adjust the object's column list. By default, all columns are exported, but you can deselect any you don't need.
8. Select the data for export in the **Object filter**, specify the **Target File Name**, and click **Next step**.
9. Check the data on the **Output Columns** tab and click **Save**. To add another object, repeat steps 7-9.
10. To execute the export task immediately, click **Run**. The file will be generated and made available for download. To automate exports, set up a **Schedule**. Skyvia's scheduling feature supports hourly, daily, or weekly exports, which is convenient for keeping Excel data updated with the latest Salesforce information.
11. Once the export is complete, download the file from the **Monitor** and **Log** tabs. Note: you can also have the export file delivered automatically to cloud storage or FTP/SFTP.

### Pros and Cons of the Salesforce Data Loaders

**Pros**

- **Bulk data handling.** Both tools excel at managing large datasets: the native Data Loader supports up to 5 million records, and Skyvia offers seamless bulk operations across platforms.
- **User-friendly interfaces.**
  The native Data Loader provides a wizard for guided workflows, while Skyvia features a no-code, intuitive cloud-based interface suitable for non-technical users.
- **Automation capabilities.** Skyvia supports scheduled exports and syncing, reducing manual intervention, while the native tool allows command-line automation for Windows users.

**Cons**

- **Native tool installation.** The native Data Loader requires installation and manual updates, which may not suit cloud-focused workflows.
- **Resource intensity.** Large-scale operations in the native tool can be resource-intensive, potentially affecting performance on lower-spec systems.
- **Feature gaps.** Skyvia lacks some advanced developer-centric features, like field-level error reporting, that are available in the native tool.

## 3. Excel Original Interface

Getting data from Salesforce into Excel through Excel's own interface is a good option for Microsoft Office Professional Edition users. Unlike with data loaders, the data lands directly in the Excel UI; there is no need for intermediate CSV files. To export data:

1. Open a blank Excel workbook.
2. Ensure [Power Query](https://learn.microsoft.com/en-us/power-query/power-query-what-is-power-query) is available in Excel. Note: Power Query is integrated into Excel under the **Data** tab in versions 2016 and later. For older versions, you may need to install the Power Query add-in.
3. Confirm that API access is enabled for your Salesforce account (available in Enterprise, Unlimited, Developer, and Performance editions). Generate a security token if necessary (via **Settings > My Personal Information > Reset My Security Token** in Salesforce).
4. Click the **Data** tab, then **Get Data > From Online Services > From Salesforce Objects**.
5. In the window that opens, choose between connecting to the Production or Custom environment and sign in to Salesforce.
6. Once signed in, select the objects you want to export to Excel in the Navigator window.
7. To export more than one object, select the **Select multiple items** check box. On the right, you can preview each table.
8. Click **Load** to export the data to the Excel worksheet. Since we selected two objects, Account and Attachment, the corresponding data was downloaded for both.

### Pros and Cons of Using the Excel Original Interface

**Pros**

- **Flexible data source options.** Excel can connect to various data sources, including databases, online sources, and cloud services.
- **Data transformation capabilities.** Power Query lets users clean and transform data before loading it into the worksheet, simplifying data prep.
- **Direct integration with Excel.** Data loads directly into Excel without third-party tools, giving immediate access to built-in features like charts and pivot tables.
- **Live data connections.** Excel offers data refresh options for some sources, so you can update data directly from the source without re-importing.

**Cons**

- **File size limitations.** Excel struggles with huge datasets, which can slow down the application or cause errors if a file exceeds Excel's row and column limits.
- **Limited automation.** While you can manually refresh connected data, Excel's native tools don't support fully automated exports or scheduling without macros or external scripts.
- **Data source compatibility.** Not all data sources and formats are natively supported, and setting up connections to some databases may require technical knowledge.

## 4. Excel Add-Ins

The next way to pull Salesforce data into Excel is to use Excel add-ins for Salesforce. They let companies work with Salesforce contacts, accounts, opportunities, leads, users, attachments, tasks, and other objects just like regular Excel worksheets. Add-ins also support data cleansing and deduplication and let companies apply Excel's powerful data processing and analysis capabilities to this data.
In this section, we'll look at useful Excel add-ins for data export, their pros, cons, and use cases, and walk through a step-by-step example with the most popular one to see how such connections work.

### Popular Excel Add-ins for Data Export

- [Skyvia Query](https://docs.skyvia.com/skyvia-query-excel-add-in/) connects to various cloud services like Salesforce, HubSpot, and Google Sheets to pull data into Excel using SQL queries.
- [Power BI Publisher](https://powerbi.microsoft.com/it-ch/blog/share-your-excel-data-with-power-bi-publisher-for-excel-preview/) allows users to export Power BI reports and visuals directly into Excel.
- [XL-Connector](https://appsource.microsoft.com/en-gb/product/office/WA104382023?tab=Overview) enables seamless export and import of Salesforce data directly within Excel, including support for large datasets and bulk operations.

Let's see how this kind of export works with the Skyvia Query Excel Add-In. This add-in is one of the most popular solutions in this area, pulling data into Excel directly from many data sources, such as Salesforce, HubSpot, Google Sheets, MySQL, etc. Users can select and filter data with SQL queries or with a visual query builder.

1. Open Excel, go to the **Insert** tab, and select **Get Add-ins**.
2. Search for *Skyvia Query*, then click **Add** to install the add-in. It will appear in the right sidebar, connecting you with various data sources.
3. Open the Skyvia Query pane in Excel. Log in with your Skyvia account or create a new one if you don't already have one.
4. Go to Skyvia and click **New Connection**. Select your data source (e.g., Salesforce, HubSpot, MySQL), enter the credentials for the selected source, and authorize access.
5. Once connected, the data source and its available objects (like Contacts and Accounts) appear in the add-in pane.
6. Use the visual query builder if you prefer a code-free experience, or write an SQL query for precise data control.
7. Select the desired fields and apply filters to refine your dataset.
8. Click **Run Query** to execute your selection. The data loads directly into your Excel worksheet, where you can refresh or modify it.
9. To keep your data up to date, click the **Refresh** button. Skyvia pulls the latest data from the source, which makes the add-in handy for live updates.

### Pros and Cons of Excel Add-ins

**Pros**

- **Seamless integration.** Tools like Skyvia Query, Power BI Publisher, and XL-Connector connect external platforms such as Salesforce or Power BI directly to Excel, streamlining workflows.
- **Real-time data access.** These add-ins synchronize up-to-date data, so users work with the latest information without manual exports.
- **Customization options.** Advanced filtering, aggregation, and visualization features let users tailor data views to specific needs.
- **Time-saving automation.** Automated updates and syncing reduce time spent on repetitive tasks.

**Cons**

- **Performance limitations.** Handling large datasets may cause Excel to lag, affecting the efficiency of all three tools.
- **Learning curve.** Tools like Power BI Publisher and XL-Connector may require training or prior familiarity, especially for non-technical users.
- **Cloud dependency.** Skyvia Query and Power BI Publisher rely on a stable internet connection.
- **Cost implications.** Advanced Skyvia Query and XL-Connector features may require premium plans, adding to operational expenses.

## 5. ODBC Drivers and OData Endpoints

ODBC drivers and OData endpoints offer dynamic methods for connecting Salesforce data to Excel. ODBC drivers provide a direct connection between applications like Excel and databases, while OData endpoints expose data via a standard REST API protocol. Both methods support near-real-time data interaction but suit different use cases. Let's consider each method in detail.
### 1. Exporting Salesforce Data Using ODBC Drivers

Open Database Connectivity (ODBC) drivers act as intermediaries between Excel and Salesforce, allowing Excel to pull Salesforce data in a structured format. You can use one of the leading ODBC drivers, such as the Progress DataDirect Salesforce ODBC Driver, the CData Salesforce ODBC Driver, or Skyvia's universal ODBC driver, to export data from Salesforce to Excel.

Let's see how this method works using Skyvia's driver as an example. It supports [200+](https://skyvia.com/connectors) data sources via [Skyvia Connect SQL endpoints](https://skyvia.com/connect/sql-endpoint), including Salesforce. Unlike most ODBC drivers on the market, it offers a pay-for-traffic [pricing model](https://skyvia.com/pricing), which can be helpful if you don't need to load large data volumes.

#### Steps to Use ODBC Drivers for Salesforce to Excel Exports

Note: before installing an ODBC driver, make sure the driver matches your Excel build; e.g., the 64-bit version of Excel requires a 64-bit ODBC driver. Once installed for a specific data source, such as Salesforce, an ODBC driver can serve Excel and many other tools, which is a great plus. Let's walk through the approach step by step.

1. Open Skyvia, click **+ Create New**, and [create a Salesforce connection](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html).
2. Choose **SQL Endpoint** from the Connect category in the menu and create it.
3. After you create an SQL endpoint, download the ODBC driver.
4. After you install the driver, register your SQL endpoint as an ODBC data source. Read more about [registering data sources here](https://docs.skyvia.com/connect/sql-endpoints/odbc-driver.html#registering-data-source).
5. In Excel, go to the **Data** tab, choose **Get Data > From Other Sources > From ODBC**, and select the Salesforce ODBC connection.
6. You can now use SQL queries in Excel to pull specific Salesforce tables, columns, or filtered records.
7. Run the query to load data directly into Excel. The data can then be refreshed periodically for updated records.

#### Pros and Cons of Using the ODBC Drivers

**Pros**

- **Direct data loading.** No intermediate files; data loads straight into Excel.
- **SQL query flexibility.** SQL is used to select and filter data, giving users control over what they retrieve.
- **Real-time updates.** Data can be refreshed to reflect current records without re-exporting.
- **Broad compatibility.** ODBC connectivity is supported across many third-party products.
- **Universal data mapping.** Salesforce data types and functions are mapped to ODBC data types and functions, which all ODBC-compatible tools understand.

**Cons**

- **Complex setup.** Configuration may require technical knowledge, especially for writing SQL queries.
- **API usage limits.** Frequent queries may exceed Salesforce API limits, affecting data retrieval.
- **Performance impact.** Large or complex queries can slow down Excel.
- **One-way data flow.** Modified data cannot be loaded back to the source.
- **Third-party reliance.** Salesforce provides no official ODBC driver; only third-party drivers are available.

### 2. Exporting Salesforce Data Using OData Endpoints

Open Data Protocol (OData) is an OASIS standard that defines best practices for building and consuming queryable, interoperable RESTful APIs in a simple and standard way. Unlike with ODBC, you create an endpoint that serves Salesforce data via OData (or reuse an existing one) and simply paste its URL into Excel. However, building an OData endpoint yourself is challenging. To save time, costs, and effort, you can use a third-party solution like [Skyvia Connect](https://skyvia.com/connect/), a cloud service that requires no downloading, hosting, or maintenance.
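Under the hood, an OData feed is plain authenticated HTTP: Excel sends system query options such as `$select`, `$filter`, and `$top` to the endpoint URL. A minimal, stdlib-only Python sketch of what such a request looks like (the endpoint URL and credentials below are placeholders, not real Skyvia values):

```python
import base64
from urllib.parse import urlencode

def odata_query_url(endpoint, entity, select=None, filter_=None, top=None):
    """Build an OData query URL similar to the ones Excel issues behind the scenes."""
    params = {}
    if select:
        params["$select"] = ",".join(select)  # restrict returned columns
    if filter_:
        params["$filter"] = filter_           # server-side row filter
    if top is not None:
        params["$top"] = str(top)             # cap the number of rows
    url = f"{endpoint.rstrip('/')}/{entity}"
    return f"{url}?{urlencode(params)}" if params else url

def basic_auth_header(user, password):
    """HTTP basic authentication header, the scheme Excel's OData connector expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Hypothetical endpoint URL for illustration only:
url = odata_query_url("https://connect.example.com/odata", "Contacts",
                      select=["Id", "Email"], top=100)
```

Any HTTP client (or Excel itself) can then issue a GET to that URL with the basic-auth header attached and receive the matching rows.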
With Skyvia Connect, you don't need to develop any API. Just follow a few steps: generate the endpoint, configure its settings, and set the data access options.

#### Steps to Use OData Endpoints for Salesforce to Excel Exports

**Set up a connection with the Skyvia Connect endpoint**

1. In Excel, click the **Data** tab, then **Get Data > From Other Sources**, and click **From OData Feed**.
2. In the pop-up window, select **Basic**, paste the required endpoint URL, and click **OK**.
3. Specify a link to the Salesforce endpoint in Skyvia Connect, fill out the **User Name** and **Password** fields, and click **Connect**. Microsoft Excel will attempt to connect to the specified URL and display the list of Salesforce objects added to the endpoint in Skyvia.

Note: Microsoft Excel requires an OData connection to be authenticated through Windows authentication or HTTP basic authentication. Here, you need HTTP basic authentication.

**Exporting data**

1. Select the Contacts and ContactFeeds tables and click **Load**.
2. Optionally, click the **Transform Data** button. It lets you change the data being loaded into Excel, i.e., filter rows, add columns, etc. These changes apply only to the data loaded into Excel; the data in Salesforce is not changed in any way.
3. The worksheet is filled with the data from the selected tables.

#### Pros and Cons of Using the OData Endpoints for Salesforce to Excel Exports

**Pros**

- **Real-time data access.** Data is accessed in real time from Salesforce without creating duplicates.
- **Wide compatibility.** Many tools, including Excel and BI applications, support OData, making it a versatile integration choice.
- **Easy setup.** Minimal technical setup is required compared to SQL or custom API connections.
- **Secure access.** Endpoint authentication restricts endpoint data to authenticated users, with the option to limit access by IP address.
- **No credential sharing.** There is no need to share Salesforce credentials.

**Cons**

- **API usage limits.**
  High-frequency queries may consume API allowances, affecting performance for real-time updates.
- **Compatibility issues.** Some Salesforce field types or custom data structures may not be compatible with OData.
- **Manual API work.** You need to build the API manually or use paid services.
- **Update limitations.** Salesforce data cannot be updated from Excel.

## The Best Fits of Each Method

The table below summarizes which method aligns best with specific business needs, helping companies choose the most suitable option for exporting Salesforce data to Excel.

| Method | Business Cases to Fit |
| --- | --- |
| Salesforce Interface Data Export Options | Suitable for one-time bulk exports of complete Salesforce datasets. Perfect for backups or archiving. Fits businesses requiring manual exports on a scheduled basis (weekly or monthly). |
| [Salesforce Data Loaders](https://skyvia.com/blog/salesforce-best-data-loaders/) | Fits large-scale data migrations or transfers (up to millions of records). Best for businesses needing custom SOQL queries or incremental updates. Works well for data cleansing and advanced mapping before export. |
| Excel Original Interface | Useful for quick ad hoc data imports for analysis and reporting. Fits cases where users want to manually process or visualize smaller datasets. Suitable for businesses needing basic manipulations without advanced automation. |
| Excel Add-Ins | Fits businesses needing real-time connectivity to Salesforce data within Excel. Good for dynamic reporting and interactive dashboards. Best for users who prefer no-code solutions with easy configuration. |
| ODBC Drivers and OData Endpoints | Excellent for businesses requiring continuous, automated data syncing between Salesforce and Excel. Fits advanced use cases involving SQL queries or live updates for dashboards. Best for organizations with hybrid, multi-platform integration needs. |
## Conclusion

Exporting data from Salesforce to Excel offers multiple approaches, each catering to different business needs and technical expertise levels. Choosing the right export method depends on your team's needs for data frequency, customization, and technical resources. With the right approach, exporting data from Salesforce to Excel can empower teams with up-to-date insights, streamline reporting, and enhance decision-making.

## FAQ for Export Data from Salesforce to Excel

**What is the easiest way to export data from Salesforce to Excel?**

The Salesforce Interface Export option is the easiest for quick and simple exports. You can use either the Data Export Wizard for bulk data or Salesforce Reports for specific, filtered exports. Both methods provide Excel-compatible formats (like CSV) and require no additional tools or technical expertise.

**Can I automate exports from Salesforce to Excel?**

Yes, several methods support automation. Third-party tools like Skyvia offer scheduling capabilities, letting you automate regular data syncs without manual intervention. The Salesforce Data Loader can also automate tasks if configured with its command-line options, though this requires more technical setup.

**Which method is best for real-time data access in Excel?**

ODBC drivers and OData endpoints come closest to real-time data access in Excel. With ODBC drivers, you can run SQL queries to retrieve and refresh data directly, while OData endpoints allow on-demand data access without storing duplicates, keeping Excel sheets up to date.

**Are there limits on data volume when exporting from Salesforce to Excel?**

Yes, different methods have volume considerations. For instance, Salesforce Data Loader handles bulk exports effectively but is subject to Salesforce's API usage limits.
Third-party add-ins may have volume limitations based on subscription levels, and Salesforce Reports may limit the number of rows per report.

**What's the best option for a non-technical user who needs frequent data exports?**

Third-party tools like Skyvia and the Salesforce Reports export are excellent for non-technical users. Skyvia provides a no-code interface with automated scheduling, and Salesforce Reports lets users customize and export data without complex setup, making both accessible and user-friendly.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
# Export Data from Salesforce into Excel, CSV and Google Sheets

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/), March 19, 2025

Salesforce is an excellent CRM platform that provides many business solutions, offering its services to companies of various sizes, from small startups to large enterprises in different industries. However, Salesforce data management sometimes spreads outside the platform. Businesses often use multiple tools that work on the same information: marketing automation platforms, e-commerce systems, data analytics, and financial software. Keeping all this information in sync manually is expensive and error-prone.
To simplify this process, this article explores the five most common Salesforce data export tools, their key differences, and how to set them up effectively.

**Table of Contents**

- Why Is Salesforce Data Export Important?
- How Does Data Export Work in Salesforce?
- Method 1: Export Salesforce Data Using Skyvia
  - Skyvia Query: The Fastest Way to Export Salesforce Data
  - Skyvia Export: For Scheduled and On-Demand Salesforce Data Export
  - Export Salesforce Data Using Skyvia Query Excel Add-In
  - Export Salesforce Data Using Skyvia Query Google Sheets Add-on
- Method 2: Export Salesforce Data Using Data Export Wizard
- Method 3: How to Export from Salesforce as CSV Using Data Loader
- Method 4: Dataloader.io for Salesforce Data Export
- Method 5: How to Export from Salesforce Using API
- Comparing Salesforce Data Export Tools
- Conclusion

## Why Is Salesforce Data Export Important?

Data export in Salesforce allows users to extract their data from the platform and save it externally for various purposes, such as:

- **Data backup.** Users often back up Salesforce data outside Salesforce because of the relatively expensive and limited storage capacity in Salesforce itself, and to add a security layer for the backed-up data.
- **Analysis and reporting.** Although Salesforce offers reporting and analysis tools, companies use third-party apps designed for these purposes that offer more flexible reporting services.
- **Data integration.** Export enables efficient data exchange across multiple platforms.
- **Compliance.** Export allows managing data according to data governance policies and regulatory requirements.
- **Collaboration.** Users export data to share it with colleagues, stakeholders, and partners.

## How Does Data Export Work in Salesforce?

Salesforce stores a large amount of information, which can be downloaded using both internal and external tools.
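External tools typically pull those records through Salesforce's REST API, sending a SOQL query over HTTPS. As a rough illustration (not one of the tools reviewed below), here is a stdlib-only Python sketch of the query URL such a tool would call; the instance URL and API version are placeholder values:

```python
from urllib.parse import quote

def soql_query_url(instance_url, soql, api_version="59.0"):
    """Build the Salesforce REST API query URL an external export tool would call."""
    return (f"{instance_url.rstrip('/')}/services/data/v{api_version}"
            f"/query?q={quote(soql)}")

# Placeholder instance URL for illustration:
url = soql_query_url("https://example.my.salesforce.com",
                     "SELECT Id, Name FROM Account")
```

A GET to that URL with an OAuth bearer token returns the records as JSON pages, which the tool then writes out as CSV, Excel, or Google Sheets data.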
### Native solutions

Almost all native tools (like the [Data Export Wizard or Data Loader](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/)) work in a similar manner: users select the objects to export, choose whether to include attachments and files, and then start the process. The tool gathers all required records and, once finished, lets users download the result in the specified format. Although all these tools export data from Salesforce, they differ in the details that matter: not every tool can handle different file formats, and scheduling and other specifics are limited.

### Third-party solutions

Software that connects to Salesforce via API counts as an external tool. Like internal tools, such software collects records based on the specified settings and then lets the user download the result in the specified format. External tools offer different levels of flexibility and allow fine-tuning of how objects appear in the result file.

## Method 1: Export Salesforce Data Using Skyvia

If you're looking for a quick way to export from Salesforce to spreadsheets, BI tools, or data warehouses, [Skyvia](https://skyvia.com) can help. The platform also offers features like import, synchronization, replication, and backup.

There are numerous situations when exporting information is necessary. For example, running an email marketing campaign for customers may require obtaining a contact list from the CRM. To export contacts from Salesforce using this method, you will need to create a Skyvia account if you do not have one.

### Skyvia Query: The Fastest Way to Export Salesforce Data

Best for: quick Salesforce data export with minimal effort. Perfect for small businesses seeking instant, one-time exports in CSV or PDF format without a complicated setup.

[Query](https://skyvia.com/query) is a simple yet powerful cloud-based SQL editor and query builder.
Skyvia obtains records from sources using connections, which are required for every method. Before proceeding, [create a connection to Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html#establishing-connection). Then, to export Salesforce data using Query, perform the following steps:

1. Log in to Skyvia and click **+ Create New**. Under Query, select **Builder**.
2. Choose the Salesforce connection from the list. If you don't have one, click **+New Connection** to add it.
3. Drag and drop the objects from the Tables list into the **Result Fields**.
4. To filter by object fields, click the object and drag and drop the field onto **Filters**. Then specify the filtering condition in the Filter section.
5. Click **Execute** to export from Salesforce. You can download the results in CSV or PDF format by clicking the corresponding buttons in the Result block.

### Skyvia Export: For Scheduled and On-Demand Salesforce Data Export

Best for: automated export of records to CSV files, either for manual download or direct storage in cloud file services. Ideal for small and medium businesses that need recurring copies of their information or have large datasets.

For example, a consulting firm that manages clients' accounts needs a recurring set of information to work on. The exported records are stored in cloud storage and used later to support targeted outreach and strategic planning. The firm can easily do this with Skyvia Export.

With [Skyvia Export](https://skyvia.com/data-integration/export), you can download files manually or automatically, placing them in a predefined folder on a supported storage service. To export from Salesforce using Skyvia Export, perform the following steps:

1. Log in to Skyvia and click **+ Create New** > **Export**.
2. In the **Source**, select the Salesforce connection if it exists, or click **+New Connection** to add a new one.
3. Set the **Target Type**: choose between *CSV download manually* and *CSV to storage service*.
4. Set the other available [integration options](https://docs.skyvia.com/data-integration/export/export-package.html#:~:text=result%20file(s).-,Additional%20Options,-If%20necessary%2C%20specify) and, in the Tasks, click **Add new** on the right.
5. Select the object and fields to extract. Optionally, set filters and sorting.
6. Click **Next Step** to change the names and order of the columns if needed, and save the task.
7. Click **Schedule** on the Model tab to set the recurrence, time, and other settings for automatic launches.
8. When the integration is ready, run it and observe the results on the **Monitor** or **Log** tabs. To get the result file, click the exported records number.

Skyvia offers different options for various Salesforce data export and integration scenarios.

### Other Skyvia Features

Skyvia's capabilities can help in multiple scenarios:

- [Import](https://skyvia.com/data-integration/import): Quickly and easily import processed entries back into Salesforce so your sales team can improve lead follow-ups and close deals faster.
- [Replication](https://skyvia.com/data-integration/replication): Automatically replicate your records to a cloud data warehouse for unified reporting, deep analytics, and real-time insights.
- [Backup](https://skyvia.com/backup): Keep copies of your information to protect it from data loss and human error.

For example, the [Cirrus Insight case study](https://skyvia.com/case-studies/cirrus) shows how they automated their Salesforce flow with Skyvia to streamline integrations and cut down on manual work. Their solution reduced costs and boosted efficiency by seamlessly syncing information across systems.

### Export Salesforce Data Using Skyvia Query Excel Add-In

Best for: seamless Salesforce data export into Excel with minimal setup.
Ideal for startups or small businesses that want to do a manual analysis of the records.

Want to get your Salesforce records right into your Excel spreadsheet? Skip all the boring setup steps with our [Skyvia Query Excel Add-In](https://skyvia.com/excel-add-in). To use this add-in, follow the steps below:

1. Get the Skyvia Query Excel Add-In from [Microsoft AppSource](https://appsource.microsoft.com/en-us/product/office/WA200001764?tab=Overview). Click Get it now, sign in to your Microsoft account, and follow the steps on the screen.
2. Open Excel and select the Home tab.
3. Click Add-Ins and select Skyvia Query.
4. Sign in to Skyvia. After logging in, you will see a gallery of predefined queries organized by connection.
5. To use a predefined query, select the connection from the list and choose the query you want to run.
6. After choosing the query, select the Salesforce connection. If you don’t have one, click New. You will be redirected to Skyvia, where you can create a connection to Salesforce via OAuth.
7. Click Run. After running the query, your records will appear in your spreadsheet.

You can also create your own query by clicking + Query. The add-in offers two ways of building a query: using a visual builder or writing your own SQL query.

1. Select Salesforce as a connection in the drop-down list.
2. Select the object to perform the query on. You can choose the fields that will be added to your spreadsheet.
3. If needed, add a filter by clicking + Condition.
4. Run or save the query. After the run, your records will appear in your spreadsheet.

Note: Visit [5 Ways to Export Data from Salesforce to Excel](https://skyvia.com/blog/5-ways-to-export-data-from-salesforce-to-excel/) to get more details on this method.
Export Salesforce Data Using Skyvia Query Google Sheets Add-on

Best for: Startups and small businesses that need an easy way to export Salesforce data directly into Google Sheets for quick reporting and analysis.

[Skyvia Query Google Sheets Add-On](https://skyvia.com/google-sheets-addon/) is a robust tool for building reports in your spreadsheets directly from supported cloud apps and databases. You can instantly copy and transform records from [Salesforce to Google Sheets](https://skyvia.com/blog/how-to-connect-salesforce-to-google-sheets/) using a visual query builder or a custom SQL command editor. The add-on lets you create your own queries and execute predefined ones stored in the [Skyvia Query Gallery](https://skyvia.com/gallery). To use the add-on, perform the following steps:

1. Install the [Skyvia Query Add-on](https://workspace.google.com/marketplace/app/skyvia_query/134536098526).
2. Open Google Sheets and, in your file, click Extensions > Skyvia Query > Login.
3. After logging in, click Extensions > Skyvia Query. You can pick between Query and Gallery: the first option opens a query builder, while the second shows predefined queries from our Gallery.
4. Choose Query and then select a Salesforce connection.
5. Choose the object to import into the spreadsheet.
6. If needed, add filter conditions by clicking +.
7. Click Run to add records to your file.

If you want to use a predefined query, click Extensions > Skyvia Query > Gallery. Choose the query you would like to use from the list and select a Salesforce connection. Click Run to add query results to your spreadsheet.

More information about the Skyvia Query Google Sheets Add-on is available [in our documentation](https://docs.skyvia.com/skyvia-query-google-sheets-add-on/).

Method 2: Export Salesforce Data Using Data Export Wizard

Best for: Small businesses or startups looking for a simple, built-in Salesforce solution for occasional exports in CSV format.
Ideal for those who don’t need frequent or large-scale information extraction.

The [Data Export Wizard](https://help.salesforce.com/s/articleView?id=xcloud.admin_exportdata.htm&type=5) is a built-in web tool offered by Salesforce. It allows users to perform manual ad-hoc exports or schedule weekly/monthly extraction of small information volumes to CSV files. You can select all the available Salesforce objects or only specific ones, for example, a list of accounts. Salesforce also allows you to include images and file attachments. To use this method, perform the following steps:

1. Log in to [Salesforce](https://login.salesforce.com/).
2. Click the gear icon in the top-right corner and select Setup.
3. In the Quick Find field, type Data Export. Under Data, click Data Export.
4. If you want to get your records now, select Export Now. If you want to schedule your exports, select Schedule Export.
5. Select the file encoding. If needed, include attachments, images, and documents.
6. Select objects under Exported Data.
7. Click Start Export. Salesforce will send you a download link via email when the process finishes.

This method works great if you need a quick, native solution for a one-time CSV download. However, this feature is limited: weekly exports are available on the Enterprise, Performance, and Unlimited Salesforce plans, while monthly exports are available on [all plans](https://www.salesforce.com/pricing/).

Method 3: How to Export from Salesforce as CSV Using Data Loader

Best for: Medium-sized businesses that require a tool for bulk data import and export in Salesforce. Ideal for handling large datasets via the command line or UI.

[Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader) is a client application for bulk import and export, available in the Enterprise, Performance, and Unlimited editions. It is a downloadable tool that supports up to 5 million records at a time.
You can work with data via the wizard UI or a command line (for Windows users). Data Loader exports data in CSV format. To use Data Loader, follow the steps below:

1. [Download Data Loader](https://developer.salesforce.com/tools/data-loader) and install it on your computer. Make sure that you have installed the [JRE](https://www.java.com/en/download/manual.jsp) and [JDK](https://www.oracle.com/java/technologies/downloads/): you can’t run Data Loader without them!
2. Launch the Data Loader application and click Export. A wizard will open, prompting you to log in.
3. Select your environment from the drop-down list and click Log in. Follow the authentication steps on the screen to continue.
4. Select an object. By default, your objects are exported to the folder in which you installed the loader. To change the result file location, click Browse. After that, click Next.
5. Choose the fields to export. Optionally, select conditions to filter your records. If you do not select any conditions, all the records to which you have read access will be included.
6. Click Finish and confirm the export.

You can learn more about Salesforce Data Loader in the official [guide](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/data_loader.htm).

Method 4: Dataloader.io for Salesforce Data Export

Best for: Small to medium-sized businesses that need a web-based solution for Salesforce data export. Ideal for users who require scheduled exports without installing software.

[Dataloader.io](https://dataloader.io/) is a MuleSoft web-based application that performs bulk import and export. With Dataloader.io, you can extract large volumes of entries, schedule exports, and perform automated imports. To use Dataloader.io, visit the website and click Login with Salesforce. After logging in, follow these steps:

1. Click NEW TASK and select Export.
2. Select your connection and object. Click Next to confirm your choices.
3. Select the required fields. If needed, specify the filters and order.
4. Review your settings. If everything is good to go, click Save or Save & Run.

You can learn more about all available features in the [Dataloader.io official documentation](https://dataloader.io/exporting-data-salesforce).

Method 5: How to Export from Salesforce Using API

Best for: Medium to large businesses and enterprises with development resources that need a fully customizable Salesforce data export.

Salesforce provides a comprehensive [set of APIs](https://developer.salesforce.com/developer-centers/integration-apis) that allow developers to operate on platform objects programmatically. The APIs, such as the SOAP API, REST API, Bulk API, and others, can be used to query and retrieve records from Salesforce objects. Developers can write custom code or integrate the platform with other systems to extract entries as needed. This method is highly customizable and allows you to precisely control every aspect of your export process. Because this approach requires advanced skills to set everything up, we will not go into detail about these APIs here. If you’re interested in exploring this option, refer to the [Salesforce documentation about APIs](https://developer.salesforce.com/developer-centers/integration-apis).

Comparing Salesforce Data Export Tools

Each tool comes with its own strengths and limitations depending on your business needs, data volume, and technical expertise.
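As a small taste of the API method, the sketch below builds a REST query endpoint for a SOQL statement and unpacks one page of a query response. It is a minimal Python illustration only: it assumes you have already obtained an OAuth access token elsewhere, and the instance URL and sample record are made up. Large results come back paginated via the `done` flag and `nextRecordsUrl`, as shown.

```python
# Minimal sketch of exporting records via the Salesforce REST API.
# Assumes an access token was obtained separately (OAuth 2.0);
# the instance URL and the sample response below are illustrative.
import urllib.parse

def query_url(instance_url: str, soql: str, api_version: str = "v59.0") -> str:
    """Build the REST query endpoint for a SOQL statement."""
    return (f"{instance_url}/services/data/{api_version}/query"
            f"?q={urllib.parse.quote(soql)}")

def extract_page(payload: dict):
    """Unpack one page of a query response: Salesforce paginates
    large results via the done flag and nextRecordsUrl."""
    records = [{k: v for k, v in r.items() if k != "attributes"}
               for r in payload["records"]]
    next_page = None if payload["done"] else payload["nextRecordsUrl"]
    return records, next_page

url = query_url("https://example.my.salesforce.com",
                "SELECT Id, Name FROM Account")
print(url)

# Abridged response shape, as returned by the Query resource:
sample = {
    "totalSize": 1,
    "done": True,
    "records": [{"attributes": {"type": "Account"},
                 "Id": "001xx000003DGbQAAW", "Name": "Acme"}],
}
rows, next_page = extract_page(sample)
print(rows)       # [{'Id': '001xx000003DGbQAAW', 'Name': 'Acme'}]
print(next_page)  # None
```

In a real export, you would send the URL with an `Authorization: Bearer` header, follow `next_page` until it is `None`, and write the accumulated rows to a CSV file.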
Below is a table comparing each tool:

| Method / Benefit | Ease of Use | Scheduling & Automation | Integration Capability | Data Transformation |
| --- | --- | --- | --- | --- |
| Skyvia | Intuitive, no-code, cloud-based interface for effortless setup | Fully automated scheduling with flexible recurrence options | Seamless integration with 200+ cloud apps, data warehouses, and BI tools | Advanced data transformation |
| Data Export Wizard | Native and straightforward, but a manual process | Limited to weekly/monthly exports | Basic, standalone export functionality | No |
| Salesforce Data Loader | Requires installation and some technical know-how | Manual or command-line scheduling | Focused on bulk export only | Minimal transformation capabilities |
| Dataloader.io | Cloud-based, relatively user-friendly | Built-in scheduling features | Integrates with some third-party systems, like cloud storage services or FTP | No |
| Salesforce APIs | Requires developer skills and custom coding | Custom scheduling through code | Custom integrations possible | Transformation via custom scripts |

Conclusion

Salesforce is a robust platform covering the needs of different business functions; however, companies may need Salesforce entries outside the platform for:

- backup
- reporting
- data integration
- compliance
- collaboration

There are multiple ways to retrieve records from Salesforce. You can manually export data using built-in tools like the Data Export Wizard or Data Loader, use cloud-based solutions like Skyvia or Dataloader.io, or access data programmatically through the REST API.

Skyvia is a cloud-based platform that helps users perform various data-related tasks without coding. Skyvia offers several methods of Salesforce data export. You can:

- export Salesforce data to CSV
- build a report from Salesforce directly in Excel or Google Sheets
- integrate Salesforce data with other apps with different record structures

You can use Skyvia to implement any data-related scenario online without coding and at reasonable [prices](https://skyvia.com/pricing/).
If you are still deciding what plan to choose, try any paid subscription free for two weeks. Sign up with Skyvia now and elevate your data management capabilities.

FAQ for Salesforce Data Export

Can I export from Salesforce without using third-party tools?
Yes, Salesforce offers native tools like the Data Export Wizard and Data Loader for exports.

Can I schedule automatic Salesforce data exports?
Yes, tools like Skyvia, Dataloader.io, and the Data Export Wizard allow you to set up a schedule for your export tasks.

Can I export Salesforce data in real time?
No, but third-party tools like Skyvia or the APIs can achieve near real-time export frequency.

Can I export from Salesforce to Google Sheets or Excel?
Yes, you can use Skyvia’s Excel and Google Sheets add-ons to export your data.

[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)

Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication to create engaging and accessible content. From a diverse and inclusive professional background, Olena excels in breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

Unlock the Power of Financial Data Analytics for Your Business

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), October 23, 2024

Any data reveals its full potential when analyzed. It’s like IKEA furniture: the individual pieces have little value on their own, but once assembled, they serve people for decades. Financial data is of particular value to all companies, especially large corporations and banks. Analyzing this data contributes to smart recommendations for increasing company profitability or investments. First, however, the financial data must be collected and organized using the right software, described in this article.
Table of Contents

- What Is Financial Data Analytics
- Comparing Financial Data Analytics Software
- Skyvia’s Role in Financial Data Analysis
- Choosing the Right Data Analytics Tool
- Conclusion
- FAQ

What Is Financial Data Analytics

The primary purpose of financial data analytics is to help organizations gain insights from their financial data. It also promotes informed investment decisions, helps in financial planning and budget allocation, advances risk management, and assures regulatory compliance. Day-to-day financial data analysis typically involves the following processes:

- Data collection. High-quality analysis is only possible if enough data is available. Analysts gather data from internal company sources and external resources.
- Research. Companies need to understand what’s going on outside their boundaries. It’s worth looking into macroeconomic trends and exploring industry changes.
- Data organization. Data for financial analysis is often unorganized and unstructured. Analysts use data cleansing tools and programming languages like Python and C++ to organize data.
- Forecasting. Financial professionals use historical data to predict what might happen in the future.
- Building financial models. Typical [financial models](https://corporatefinanceinstitute.com/resources/financial-modeling/types-of-financial-models/) are the discounted cash flow model, leveraged buyout model, mergers and acquisitions model, etc. These models are intensively used in investment banking and corporate finance.
- Analyzing financial results. This is the core activity, defining such metrics as gross profit margin, net profit margin, cost-to-sales ratio, and others.
- Providing recommendations. Everything a financial analyst has done until now serves as a base for recommendations to investors or executives. At this point, it’s possible to suggest reducing costs, tapping into new markets, increasing profits, etc.
- Presenting results. Finance professionals use numbers and jargon that not everyone can understand. That’s why it’s crucial to use [BI and visualization tools](https://skyvia.com/blog/top-bi-visualization-tools/) to present findings in an easy-to-understand manner by creating charts and graphs.

Benefits and Limitations of Financial Data Analysis

Benefits:

- Organizations can make better business decisions.
- Predicting trends and aligning business strategies with them.
- Analyzing historical datasets for improved planning.
- Detecting fraudulent transactions by comparing current and past activities.

Challenges:

- Gathering and structuring data scattered across organizational resources.
- Accessing sensitive financial data isn’t always possible.
- Incomplete data impacts the accuracy of analysis.
- Selecting the right security mechanisms to properly prevent data leaks and breaches.
- Employing a specialist with the essential technical and analytical skills.

Comparing Financial Data Analytics Software

The toolkit a financial analyst needs depends on the industry and the company’s size. Here, we present software products dedicated to gathering, processing, and elaborating on financial data; they can be classified into different categories depending on the access type, data processing speed, and purpose of analysis.

On-Premise vs. Cloud-Based Solutions

Each business has its own data stack, based either in the cloud, on-premises, or in a hybrid environment. Therefore, many modern [data analysis tools](https://skyvia.com/blog/top-data-analysis-tools/) are available in both online and offline versions to fit different businesses. Power BI, Tableau, Excel, and other solutions can be installed as desktop apps as well as accessed on the web. Note that online and offline versions differ slightly in functionality. Here are some major distinctions between them: since desktop apps are older, they have a richer feature set.
For instance, the Excel desktop app has macros, Power Query, and Power Pivot, which aren’t available in the online version. Online apps usually have limited [data management](https://skyvia.com/blog/best-data-management-tools/), cleansing, and modeling options. Desktop apps have [limited sharing capabilities](https://learn.microsoft.com/en-us/power-bi/fundamentals/service-service-vs-desktop), which deprives teams of report sharing; web-based tools, meanwhile, are excellent for collaboration. At the same time, there are plenty of BI tools available exclusively online. For instance, Looker Studio, Domo, and Qlik Sense can be accessed only from your browser and can’t be installed as desktop apps.

Real-Time Analytics vs. Batch Processing Solutions

The BI tools mentioned above can collect data either using built-in connectors or with [data integration tools](https://skyvia.com/blog/data-integration-tools/). In either case, they rely on [batch processing](https://skyvia.com/blog/batch-etl-processing/), where data is gathered and transferred over time intervals. Real-time analytics implies instant processing and analysis of new data as soon as it becomes available. This is critical for transaction management, IoT monitoring, and other cases requiring instant insights. Here are some tools suitable for real-time data collection, processing, and analysis:

- [Apache Hadoop](https://hadoop.apache.org/) is an open-source framework designed to operate on large amounts of data. It’s also very flexible, since it supports both [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/) of any format.
- [Apache Kafka](https://kafka.apache.org/) is a distributed stream-processing platform used for data integration, real-time analytics, and mission-critical applications. Its primary objective is to provide a high-throughput, low-latency platform for handling real-time data needs.
- [Amazon Kinesis](https://aws.amazon.com/kinesis/) is a service for collecting, processing, and analyzing real-time streaming data. It works well for user behavior monitoring and fraud identification. The platform can capture gigabytes of data per second and perform predictive analytics operations.
- [Altair RapidMiner](https://altair.com/altair-rapidminer) is an open-source, cloud-based application for streaming data analytics. It also embeds machine learning models used for predictive analytics.

Financial data analytics usually relies on applications that support real-time streaming analytics. BI tools with batch data processing are sufficient for generating ad-hoc reports.

Prescriptive vs. Predictive vs. Descriptive Analytics Tools

We have already touched upon predictive analytics above. Now, let’s explore the different types of analytics and the tools for each.

| Category | Description | Tools |
| --- | --- | --- |
| Descriptive analytics | Loads historical data to calculate the needed metrics. | Power BI, Tableau, and Looker Studio are the most popular solutions. |
| Predictive analytics | Finds patterns in the given data by applying [data mining techniques](https://skyvia.com/blog/data-mining-tools/) and statistical models to predict future business outcomes. | Kinesis, RapidMiner, KNIME, SAP, and IBM SPSS are popular tools for predictive analysis. |
| Prescriptive analytics | A financial data analyst uses descriptive and/or predictive analytics outcomes to develop recommendations that help a company achieve its financial goals. | All of the above-mentioned tools. |

Skyvia’s Role in Financial Data Analysis

You already know that analysts have to collect and structure data as part of their daily workflows. However, they often face challenges, since data is dispersed across different sources and comes in different formats. Therefore, it’s necessary to consolidate data in a centralized location and make it ready for analysis.
This process can be automated with dedicated [data integration tools](https://skyvia.com/blog/data-integration-tools/) like Skyvia. [Skyvia](https://skyvia.com/) is a universal cloud data platform designed for a wide set of data-related tasks. It contains five principal products: Data Integration, Query, Connect, Automation, and Backup. The [Data Integration](https://skyvia.com/data-integration/) product, in particular, has everything needed to gather and transform data, making it analysis-ready. Data Integration comprises the following tools:

- [Import](https://skyvia.com/data-integration/import) is the ETL-based tool for migrating data between cloud applications, databases, and data warehouses. It allows users to apply transformations to data, like lookup and expression, and perform mapping to match data structures.
- [Export](https://skyvia.com/data-integration/export) extracts data from cloud apps into CSV files that can be saved on a computer or in cloud storage.
- [Replication](https://skyvia.com/data-integration/replication) is the ELT-based tool for copying data from cloud apps into data warehouses and databases. It uses the CDC approach to capture data differences when making incremental updates on a schedule.
- [Synchronization](https://skyvia.com/data-integration/synchronization) performs bi-directional data sync between two sources. It also uses CDC to identify which data was changed in one source and make the appropriate changes in the other.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) allows users to build more complex data pipelines involving several sources and apply multistage data transformations. This tool also supports error processing logic.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) creates logic for task execution, performs preparatory and post-integration activities, and configures automatic error processing logic.
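To picture the CDC-based incremental updates mentioned above: conceptually, each scheduled run copies only the rows that changed after the previous run’s watermark. The sketch below is a generic illustration, not Skyvia’s internals; the row layout and the `modified_at` watermark field are assumptions.

```python
# Conceptual sketch of incremental (CDC-style) replication:
# each run copies only rows modified after the previous watermark.
# Field names are illustrative, not Skyvia internals.
from datetime import datetime, timezone

def incremental_batch(rows: list, last_sync: datetime) -> list:
    """Return only the rows changed after the previous sync watermark."""
    return [r for r in rows if r["modified_at"] > last_sync]

rows = [
    {"id": 1, "amount": 120.0,
     "modified_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    {"id": 2, "amount": 75.5,
     "modified_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
last_sync = datetime(2024, 5, 15, tzinfo=timezone.utc)

changed = incremental_batch(rows, last_sync)
print([r["id"] for r in changed])  # [2]
```

A full replication job would also advance the watermark to the newest `modified_at` value it saw, so the next run picks up from there.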
Take a look at how a Skyvia user successfully set up a Salesforce to QuickBooks integration that streamlined their financial reporting. Overall, Skyvia can be a good choice to assist you with financial data analysis because it:

- Has a highly understandable user interface, allowing even non-tech professionals to set up complex integration scenarios.
- Connects to more than [200 sources](https://skyvia.com/connectors), including databases, data warehouses, apps, storage services, etc.
- Grants access via a web browser.
- Covers a wide range of scenarios for collecting and processing financial data.
- Offers powerful data transformation and organization capabilities.
- Provides different pricing plans suitable for various companies.

Choosing the Right Data Analytics Tool

To select the proper tool for analyzing financial data in your organization, it makes sense to consider a number of factors:

- Business goals and data needs. Companies that aim to get instant insights about financial performance should implement real-time analytics tools. If occasional, non-urgent reports are enough, batch-processing tools will work fine.
- Scalability and flexibility. Explore whether a tool of interest has limits on data volumes. See how these numbers correlate with your expected data load, and decide whether the tool is flexible and scalable enough for you.
- Ease of use. Pay attention to the user interface and the overall complexity of the analytics service. If it’s too complex, be prepared to spend some time mastering it.
- Integration with other finance data sources. See how the analytics tool connects to the other sources where your financial data is stored. Consider using data integration tools that support both the analytics service of your interest and the apps containing financial data.
- Cost and budget. Decide whether a specific analytics tool aligns with your budget. Pricing details are provided on each tool’s official website.
- Advanced analytics and machine learning. If you need these functions, check whether the chosen solution has them.
- Security and compliance. Make sure that the analytics tool conforms to modern security standards and protocols, which is particularly crucial when working with financial data.
- Community and support. Check whether the preferred solution provides customer support and community forums. This might be helpful at the implementation stage and during further product use.

Conclusion

Financial data is a diamond in the rough that needs to be well refined to become a high-value gem in the shop window. To convert it from its raw state into valuable information, it must be collected, cleansed, structured, and analyzed. So, basically, you will need [data preparation](https://skyvia.com/blog/data-preparation-tools/) and analysis tools. The choice of software for data analysis depends on your business needs and resources. You may select from online or offline solutions, batch-processing or real-time services, and so on. Regardless of the analytics tools chosen, it’s worth using data integration tools like Skyvia to aggregate your financial data and prepare it for analysis.

FAQ for Financial Data Analytics

What is the approximate cost of the toolkit for financial data analysis?
If you select Skyvia for gathering, processing, and consolidating financial sources, it might cost you $0 up to a certain amount of data. Other spending depends on the chosen BI and analytics tools. Overall, you can operate on financial data at zero cost, but the spending will certainly grow as you need more data processed and more features to use.

What is the difference between financial data analytics and data analytics?
Data analysis is an umbrella term used across various sectors of the economy. Financial data analysis is focused on invoices, payments, budgeting, and other operations related to corporate finance.
It helps businesses make informed decisions and develop weighted investment strategies.

What are the data sources for financial analysis?
There are many different sources of financial data, though several are the most important: cash flow statements, income statements, and the balance sheet.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Salesforce Google Drive Integration: The Complete Guide

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/), February 19, 2025

Salesforce solves numerous customer-related problems for businesses worldwide. However, its storage capabilities may darken the overall impression of using Salesforce. Its storage capacity is limited, forcing users to buy additional space or manually keep track of data volumes by deleting data or transporting it elsewhere. Fortunately, external storage services solve this problem. Direct Salesforce Google Drive integration enables the unimpeded storing and sharing of files.
This article describes three different ways to connect Salesforce and Google Drive: the native Files Connect feature, third-party applications, and custom integration solutions.

Table of contents:

- What is Salesforce?
- What is Google Drive?
- Benefits of Google Drive Salesforce Integration
- 3 Methods to Connect Salesforce and Google Drive
- Method 1: Salesforce Files Connect
- Method 2: Third-party Applications
- Method 3: Custom Integration Solutions
- Conclusion

What is Salesforce?

[Salesforce](https://skyvia.com/connectors/salesforce) is a global platform providing customer relationship management services for Sales, Marketing, Customer Support, and other business functions. Beyond CRM services, Salesforce offers a host of additional opportunities for its users worldwide. You can create apps, integrate data from other systems, visualize data, and even study with Salesforce.

What is Google Drive?

[Google Drive](https://skyvia.com/connectors/google-drive) is a user-friendly storage service that allows users from all over the world to safely store their files on Google Cloud servers. It supports files of any type and size. Google Drive is part of the global Google environment, offering almost unlimited collaboration opportunities for its members. It also provides extra benefits like automatic file synchronization with any device, integration of your files with other tools, and the interactive help of Gemini, the AI assistant from Google.

Benefits of Google Drive Salesforce Integration

When working in Salesforce, users may create and process files, such as photos from events, product pictures, screenshots, and ticket attachments. Salesforce supports all audio, video, image, and archive formats. All these files have to be stored somewhere and accessed directly from Salesforce. Direct Salesforce Google Drive integration lets users store their files, synchronize them across devices, and share them with other users or teams.
Let\u2019s look at several key benefits of integrating Salesforce with Google Drive. Cost-efficiency and Storage Scalability Salesforce Google Drive integration helps to avoid additional costs on storage scalability: Feature Salesforce Google Drive Base Storage Allocation [Varies by edition](https://help.salesforce.com/s/articleView?id=xcloud.admin_monitorresources.htm&type=5) : 5 MB for Developer, 10 GB per organization for Enterprise. 15 GB of free storage shared across Google Drive, Gmail, and Google Photos. Additional Storage Additional storage can be purchased Paid plans available through Google One, starting at 100 GB for $1.99/month. Utilizing Google Drive\u2019s cloud storage capabilities within Salesforce provides a cost-effective solution for managing large volumes of documents while maintaining seamless access to the files directly from Salesforce. Centralized Document Management Integrating Salesforce with Google Drive helps organizations to centralize all their documents and files in one easily accessible location acting as a [single source of truth](https://skyvia.com/learn/single-source-of-true) . This approach allows teams to quickly retrieve and manage files stored in one place from within Salesforce or from external apps. It means that everyone works with the same files. Integration eliminates discrepancies and risks of version control issues, keeping teams on the same page. Real-time Collaboration Google Drive files are linked to Salesforce, thus, once a file is edited, the changes are caught by Salesforce. The most up-to-date information is available to everyone, improving consistency and accuracy. Streamlined Workflows and Automation Salesforce Google Drive integration impacts how sales teams manage and share documents. You can link files or folders to specific Salesforce records, minimizing manual data entry and eliminating the risk of human errors. 
Integration simplifies daily workflows by automating file synchronization and ensuring that documents stay consistently updated across platforms. You can automate various processes, increasing overall productivity and data accuracy:

- Notifications
- Version control
- Collaborative sharing
- Document approval

### Enhanced Security and Compliance

Connecting Salesforce with Google Drive helps keep your data safe. With centralized access controls, audit trails, and permission settings, businesses can protect sensitive data according to industry standards and compliance requirements.

## 3 Methods to Connect Salesforce and Google Drive

There are several ways to integrate Salesforce and Google Drive. You can use a native Salesforce tool, a third-party tool like Skyvia or others offered on [AppExchange](https://appexchange.salesforce.com/), or implement a custom integration. Let's briefly overview all of them before diving deep into each one.

- **Salesforce Files Connect.** This native tool allows users to share, search, and work with external data within the Salesforce UI. It supports connections to storage services like Quip, SharePoint, Box, and Google Drive, and lets users link Google Drive files to Salesforce records, providing direct access without duplicating data.
- **Third-Party Applications.** This no-code method uses third-party application features to integrate Google Drive and Salesforce.
  - [Google Drive Connector for Salesforce File Storage](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000Ecv16UAB) is a free tool from AppExchange that enables direct integration between Salesforce and Google Drive.
  - [Drive Connect](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000FMlVWUA1) is another tool from the Salesforce marketplace. It enables seamless Salesforce Google Drive integration and offers various useful features.
  - [Skyvia](https://skyvia.com/data-integration/integrate-salesforce-google-drive) is a cloud platform that moves data between Google Drive and Salesforce: it imports data from CSV files stored on Google Drive into Salesforce (or other supported connectors) and, with Export, extracts Salesforce data into CSV files placed in Google Drive storage.
- **Custom Integration Solutions.** If the other methods don't meet your specific business needs, you can build a custom integration using Salesforce's and Google Drive's APIs. This fully flexible approach requires technical expertise and resources for implementation and maintenance.

Let's look at each of these methods in detail.

## Method 1: Salesforce Files Connect

Salesforce keeps pace with the market and enables direct integration with the most popular external data storages, including Google Drive, through the native Salesforce Files Connect tool. It makes your files available directly in the Salesforce UI, but its implementation is complex. To connect Google Drive and Salesforce, perform the following steps.

### STEP 1. Enable Salesforce Files Connect for your organization

1. Sign in to Salesforce.
2. On the Setup page, search for Files Connect in the Quick Find box.
3. Click Edit and select the Enable Files Connect checkbox.
4. Choose file-sharing options applicable to your scenario.

More details are available in the Salesforce [documentation](https://help.salesforce.com/s/articleView?id=sf.admin_files_connect_enable.htm&type=5).

### STEP 2. Let users and administrators access Files Connect data sources

After enabling Salesforce Files Connect, grant your users and administrators [permission](https://help.salesforce.com/s/articleView?id=experience.admin_files_connect_perm.htm&type=5) to access the external data sources from Salesforce.

1. Search for Permission Sets in the Setup Quick Find box.
2. Create a new permission set and enable Files Connect Cloud.
### STEP 3. Create an Authentication Provider for Google Drive

To use Google Drive as an external data source, you must [create an authentication provider](https://help.salesforce.com/s/articleView?id=experience.admin_files_connect_google_auth.htm&type=5).

1. Go to [https://console.cloud.google.com](https://console.cloud.google.com) and sign in using your Google Workspace admin account.
2. Click Create Project, enter the project name and location, and click Create.
3. Navigate to APIs & Services in the project dashboard. Then, go to the Library tab and search for Google Drive API.
4. Select Google Drive API and click Enable API.
5. Click Credentials on the left.
6. Click Create Credentials, select OAuth client ID, select Web application, and click Create.

Note: You need the client ID and secret values to create an authentication provider in Salesforce, so we recommend saving them to a text file.

### STEP 4. Create an Authentication Provider in Salesforce

1. In Salesforce, search for Auth. Providers in the Quick Find box, select it, and click New.
2. Select OpenID Connect for Provider Type, and specify the following parameters:
   - Name: the name to display in Salesforce.
   - URL Suffix: the suffix at the end of the URL path; generated automatically.
   - Consumer Key: the client ID obtained from the Google project.
   - Consumer Secret: the client secret obtained from the Google project.
   - Authorize Endpoint URL: [https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force](https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force) (use this URL when editing the Google project).
   - Token Endpoint URL: [https://accounts.google.com/o/oauth2/token](https://accounts.google.com/o/oauth2/token)
   - Default Scopes: [https://www.googleapis.com/auth/drive](https://www.googleapis.com/auth/drive)
3. After you save the Auth. Provider, find the Callback URL at the bottom of the page and copy it to a text file.
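To sanity-check the Auth. Provider endpoint settings above, you can assemble the authorization URL that this configuration implies. This is a minimal illustrative sketch, not part of the Salesforce setup; the client ID and callback URL are placeholders you must replace with values from your own Google project and Auth. Provider page.

```python
from urllib.parse import urlencode

# Sketch: assemble the Google OAuth 2.0 authorization URL implied by the
# Auth. Provider settings above. CLIENT_ID and the callback URL below are
# placeholders -- substitute the values from your own Google project and
# Salesforce Auth. Provider page.
AUTHORIZE_ENDPOINT = "https://accounts.google.com/o/oauth2/auth"

def build_authorize_url(client_id: str, callback_url: str) -> str:
    params = {
        "access_type": "offline",     # ask Google for a refresh token
        "approval_prompt": "force",   # always re-show the consent screen
        "response_type": "code",      # standard authorization-code flow
        "client_id": client_id,
        "redirect_uri": callback_url, # must match an Authorized Redirect URI
        "scope": "https://www.googleapis.com/auth/drive",
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

url = build_authorize_url(
    "EXAMPLE_CLIENT_ID",  # placeholder
    "https://login.salesforce.com/services/authcallback/GoogleDrive",  # placeholder
)
print(url)
```

If the printed URL opens Google's consent screen in a browser and redirects back to your callback, the endpoint, scope, and redirect URI values are consistent.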
4. Edit the project in the Google Developer Console: in the API Manager, click Credentials and select the previously created web app. In the Authorized Redirect URIs section, add the Callback URL copied when creating the authentication provider in Salesforce, and click Save.

### STEP 5. Define an External Data Source for Google Drive

To access Google Drive files directly from Salesforce, [define the external data source](https://help.salesforce.com/s/articleView?id=sf.admin_files_connect_google_xds.htm&type=5).

1. In the Salesforce Setup menu, enter External Data Sources in the Quick Find box, then select External Data Sources.
2. Click New External Data Source. Then, set the following options: Label, Name, Type, Identity Type, Authentication Protocol, Authentication Provider, Scope, and Start Authentication Flow on Save.
3. Sign in with Google and allow it to access your Salesforce data.

Now, you can access your Google Drive files from the Files tab in Salesforce apps.

### Salesforce Files Connect Method Pros

- **Access to External Files.** Access, share, and search external data from multiple storages like Google Drive directly within Salesforce.
- **Real-Time Collaboration.** The files are embedded in Salesforce, so all involved team members access the most up-to-date documents directly from Salesforce.

### Salesforce Files Connect Method Cons

- **Complex Setup Process.** Setting up Files Connect can be quite complex, as it requires multiple configuration steps.
- **Limited Customization.** Files Connect provides basic integration capabilities but may lack advanced features offered by third-party connectors, such as automated folder creation or advanced document generation.

## Method 2: Third-party Applications

Many tools solve different tasks and cover various business needs. They can help you integrate Salesforce and Google Drive with minimal effort and resources. Let's look at one of them.
Skyvia is a cloud platform offering various services for data-related tasks. This method involves loading data from CSV files on Google Drive to Salesforce (or any other supported connector), or extracting Salesforce data into CSV files and placing those files in Google Drive storage. The integration works in both directions and requires no coding.

Note: To connect Salesforce and Google Drive using Skyvia, you need an active Skyvia account. Register now and get a two-week trial for any of its paid plans.

Below, we show two integration scenarios:

- Salesforce data export to Google Drive.
- Data import from Google Drive to Salesforce.

### Salesforce Data Export to Google Drive

Data loss is a significant risk with potentially severe consequences, and timely, safe backups help you avoid it. The example below shows how to perform daily backups of your Salesforce contacts to an external file on Google Drive. To [export data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) to Google Drive, perform the following steps.

#### STEP 1. Create Connections to Data Sources

1. For the [Salesforce connection](https://skyvia.com/connectors/salesforce), select the Environment and the Authentication type, and specify other connection parameters and settings.
2. For the [connection to Google Drive](https://skyvia.com/connectors/google-drive), you need only the access token, which is generated automatically when you sign in with Google.

#### STEP 2. Create Export

To export Salesforce data to Google Drive, use [Skyvia Export](https://docs.skyvia.com/data-integration/export/). Skyvia reads Salesforce data, writes it to a CSV file, and places this file in the needed folder on Google Drive. To set up the Export:

1. Create a new Export: click +Create New -> Export in the Data Integration block.
2. Select the Salesforce connection as a Source.
3. Select the CSV to storage service Target type and select your Google Drive connection as a Target. Optionally set the target folder.
4. Click More options to set additional CSV options for the target file, if needed.

#### STEP 3. Configure Export Task

1. Click Add new on the right to open the package task editor.
2. Select the Salesforce object to be exported from the dropdown list.
3. By default, all fields of the table are selected for Export. Clear the field checkboxes to exclude some fields from the Export.
4. Add filters if needed.
5. Optionally define the target file name.
6. On the Output Columns tab, drag and drop the fields to change their order in the resulting CSV file.

#### STEP 4. Schedule Automatic Integration Runs

Skyvia allows scheduling integrations for automatic execution. After saving the task, click Schedule in the top left corner of the package editor page. You can set the integration to run on specific weekdays and recur every few hours or minutes.

#### STEP 5. Integration Execution and Results

You can see the results of the last five integration runs on the Monitor tab. The Log tab shows the integration results for different periods. Click a certain run to check the number of successfully processed or failed records.

Businesses may need to export data from multiple objects at once in a single integration. To do that, just add more tasks to the same integration.

### Data Import from Google Drive to Salesforce

Fast and safe data exchange is an important process for businesses. With automated data import, you can avoid human errors and exchange data in a few clicks. Besides export, Skyvia can load data in the opposite direction, from Google Drive to Salesforce. For that, you need an Import integration: Skyvia opens the Google Drive file, extracts the data, and loads it into the needed object in Salesforce. To set up the Import integration, perform the following steps.
#### STEP 1. Create a New Import Integration

1. Click +Create New -> Import in the Data Integration block.
2. Select the CSV from storage service Source type.
3. Select your Google Drive connection as Source and the Salesforce connection as Target.

#### STEP 2. Add an Import Task

1. Click Add new on the right to create a new task.
2. Select the source file on the Source Definition tab. Skyvia reads the file and displays the list of its fields. Adjust the CSV options if necessary.
3. Select the target object and choose the operation:
   - Insert creates new Salesforce records from the source file.
   - Update modifies the existing records.
   - Upsert inserts new records and updates the existing ones.
4. The Use Primary Key and Use External ID marks indicate how Skyvia detects which records to change.
5. The source file and target table structure may differ. To perform a smooth import, map the source and target fields. Skyvia maps fields with equal names automatically.

#### STEP 3. Integration Execution and Results

After you configure your import, run the integration manually or set a schedule. When the run completes, check its results as described above.

### Third-Party Tools (Skyvia) Method Pros

- **Ease of use.** Skyvia offers user-friendly wizard-based and designer-based tools. You can set up an integration from scratch with no coding.
- **Cost-effectiveness.** Basic Skyvia features are available for free.
- **Variety of solutions.** Besides the Export and Import demonstrated above, Skyvia offers various other [products](https://skyvia.com/solutions) that help users solve different data-related tasks.
- **Data transformation capabilities.** With Skyvia, you can transform source data to fit the target data structure, reconciling formats and discrepancies between source and target.
- **Diversity.** Skyvia provides connectors for various data sources and allows their integration.
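The insert/update/upsert semantics described in the import task step can be illustrated with a minimal sketch. The records and the "Email" matching key below are invented for illustration only; in Skyvia the matching key is configured through the Use Primary Key / Use External ID options, and this is not Skyvia's actual implementation.

```python
# Minimal sketch of upsert semantics, as described in the import task step.
# The records and the "Email" matching key are invented for illustration;
# this is not Skyvia's implementation.

def upsert(target: dict, source_rows: list, key: str) -> dict:
    """Insert rows whose key is new; update rows whose key already exists."""
    stats = {"inserted": 0, "updated": 0}
    for row in source_rows:
        if row[key] in target:
            target[row[key]].update(row)  # Update: modify the existing record
            stats["updated"] += 1
        else:
            target[row[key]] = dict(row)  # Insert: create a new record
            stats["inserted"] += 1
    return stats

# Existing "Salesforce" records, keyed by the external ID (Email).
contacts = {"ann@example.com": {"Email": "ann@example.com", "Phone": "111"}}

# Rows read from the CSV file on Google Drive.
csv_rows = [
    {"Email": "ann@example.com", "Phone": "222"},  # key exists -> update
    {"Email": "bob@example.com", "Phone": "333"},  # new key -> insert
]

stats = upsert(contacts, csv_rows, key="Email")
print(stats)  # {'inserted': 1, 'updated': 1}
```

The same logic explains why field mapping matters: if the source column names don't line up with the target fields, the key lookup and the record update both operate on the wrong columns.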
### Third-Party Tools (Skyvia) Method Cons

- **File formats.** Skyvia supports loading data from and to CSV files only. It doesn't provide direct access to Google Drive files from the Salesforce UI.
- **API limitations.** Skyvia relies on Salesforce's API limits, so high-frequency data syncs or large data volumes could exceed Salesforce's quotas, impacting performance and accessibility.

## Method 3: Custom Integration Solutions

If your organization has very specific requirements and none of the described methods meets your business needs, you can build a custom integration for Salesforce and Google Drive. With this method, you can send web requests between Salesforce and Google Drive and operate on files directly through code. To create a custom integration, perform the following steps.

### STEP 1. Create a Project in the Google Developers Console

1. Go to [https://console.cloud.google.com](https://console.cloud.google.com) and sign in using your Google Workspace admin account.
2. Click Create Project, enter the project name and location, and click Create.

### STEP 2. Create an Authentication Provider for Google Drive

1. Navigate to APIs & Services in the project dashboard. Then, go to the Library tab and search for Google Drive API.
2. Select Google Drive API and click Enable API.
3. Click Credentials on the left.
4. Click Create Credentials, select OAuth client ID, select Web application, and click Create.

### STEP 3. Create an Authentication Provider in Salesforce

1. In Salesforce, search for Auth. Providers in the Quick Find box, select it, and click New.
2. Select OpenID Connect for Provider Type, and specify the following parameters:
   - Name: the name to display in Salesforce.
   - URL Suffix: the suffix at the end of the URL path; generated automatically.
   - Consumer Key: the client ID obtained from the Google project.
   - Consumer Secret: the client secret obtained from the Google project.
   - Authorize Endpoint URL: [https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force](https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force) (use this URL when editing the Google project).
   - Token Endpoint URL: [https://accounts.google.com/o/oauth2/token](https://accounts.google.com/o/oauth2/token)
   - Default Scopes: [https://www.googleapis.com/auth/drive](https://www.googleapis.com/auth/drive)
3. After you save the Auth. Provider, find the Callback URL at the bottom of the page and copy it to a text file.
4. Edit the project in the Google Developer Console: in the API Manager, click Credentials, select the previously created web app, add the copied Callback URL to the Authorized Redirect URIs of the web client created in STEP 2, and click Save.

### STEP 4. Create a Named Credential in Salesforce

1. Go to Setup and search for Named Credentials.
2. Specify Label and Name.
3. Use [https://www.googleapis.com/drive/v2](https://www.googleapis.com/drive/v2) for the URL.
4. For Identity Type, select Named Principal.
5. For Authentication Protocol, choose OAuth 2.0.
6. Select the Authentication Provider created in STEP 3.
7. Enable Start Authentication Flow and click Save.

After that, you are redirected to the Google login page to sign in with Google. Grant permission to access Google Drive. You are then redirected back to the Named Credential detail page, and the Authentication Status becomes Authenticated.

### STEP 5. Make a Callout

Now, you can make API callouts from Salesforce (for example, from Apex code referencing the Named Credential) to the Google Drive API to operate on your Google Drive files.

### Custom Integration Method Pros

- **Customization.** The coding approach allows designing an integration for the needs of a specific organization or team.
- **Control.**
This method gives you full control over your integration: its capabilities and limitations, data flow, and security protocols.

### Custom Integration Method Cons

- **Complexity.** This method is quite complex to set up and maintain; it requires programming skills and technical expertise.
- **Cost.** You may need to involve a specialist or a team to set up and maintain such an integration, which may lead to additional costs.

## Conclusion

We explored several methods of Salesforce Google Drive integration. All of them have different features and solve different business tasks. The native Files Connect solution is free and readily available but quite complex to set up. Third-party tools vary in capabilities and cost and offer limited customization. The custom integration method offers full customization but requires resources for setup and maintenance.

To decide which method is best for your organization, evaluate your requirements, compare them to each method's capabilities, and match both to the available budget. The best way to choose a tool is to try it. Skyvia's no-code features and wizard-based solutions make it one of the [easiest ETL tools on the market](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), so try it for free.

## FAQ for Salesforce Google Drive Integration

**How do I integrate Salesforce and Google Drive?**
There are several methods. You can use the native Salesforce Files Connect tool, a third-party application, or build a custom integration.

**Can I integrate Google Drive with Salesforce without coding?**
Yes. Third-party applications like Skyvia and Drive Connect allow seamless integration without requiring coding skills.

**Can I automate file management between Salesforce and Google Drive?**
Yes. Third-party tools like Skyvia can automate file synchronization, manage version control, and schedule data exports. Integrations built with the coding approach can also provide tailored automation features.
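A custom-coded integration like the one in Method 3 ultimately issues authorized HTTP requests to the Google Drive API. As a minimal sketch, here is how such a request to the Drive v2 files endpoint (the URL used for the Named Credential above) could be constructed; the access token is a placeholder obtained through the OAuth flow, and the request is only built here, never sent.

```python
import urllib.request

# Sketch: build an authorized request to the Google Drive API v2 "files"
# endpoint (the base URL used for the Named Credential in Method 3).
# ACCESS_TOKEN is a placeholder -- in a real integration it comes from the
# OAuth 2.0 flow configured earlier. The request is constructed, not sent.
API_BASE = "https://www.googleapis.com/drive/v2"
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder, not a real token

def build_list_files_request(max_results: int = 10) -> urllib.request.Request:
    url = f"{API_BASE}/files?maxResults={max_results}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        method="GET",
    )

req = build_list_files_request(5)
print(req.full_url)
print(req.get_header("Authorization"))
```

In a Salesforce-hosted implementation, the equivalent Apex callout would reference the Named Credential instead of handling the token itself, which is exactly what the Named Credential setup in Method 3 is for.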
**How do I choose the best integration method for my business?**
Consider factors like budget, required features, ease of implementation, maintenance effort, and customization needs. If you need advanced automation and features, third-party tools are a great option. If you need complete control, a custom integration may be the best choice.

**Is it safe to integrate Google Drive and Salesforce?**
Yes. Both Google Drive and Salesforce comply with security regulation standards. With a custom integration, you control access and user permissions yourself. If you use a third-party application, choose a tool with strong security measures.

## More Articles About Salesforce Integrations

See other capabilities of data integration with Skyvia:

- [HubSpot and Salesforce Integration: Best Practices for Connection](https://skyvia.com/blog/hubspot-salesforce-integration/)
- [Asana and Salesforce Integration: 2 Easiest Methods](https://skyvia.com/blog/asana-salesforce-integration/)
- [Snowflake to Salesforce Integration: Best Code-Free Connectors](https://skyvia.com/blog/snowflake-to-salesforce-integration/)
- [How to connect Shopify and Salesforce](https://skyvia.com/blog/shopify-salesforce-integration)
- [Salesforce QuickBooks Integration: 3 Different Ways](https://skyvia.com/blog/salesforce-quickbooks-integration-3-ways/)
- [Simple ways to connect Jira and Salesforce](https://skyvia.com/blog/jira-salesforce-integration/)

About the author: [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) is a skilled writer with a blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, honing her technical problem-solving skills. Before Skyvia, Olena held HR and IT roles at global FMCG companies such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and strong communication. Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.

# Healthcare Data Integration: Streamline Data from Numerous Channels

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), January 30, 2025
The healthcare industry requires fast reactions and instant decision-making. To make well-informed choices, medical workers need access to relevant data. However, medical information may be stored in fragmented systems, including legacy and custom-built ones, and bringing it all together is the first and most complex challenge. This is where healthcare data integration comes in, providing clinicians with the information they need.

In this article, we thoroughly explore the concept of data movement in clinical settings, provide some tips for implementing a high-performing integration system, and examine the obstacles related to data exchange between medical systems.

Table of Contents:

- What Is Healthcare Data Integration?
- Key Benefits of Healthcare Data Integration
- Steps to Implement Healthcare Data Integration
- Skyvia's Role in Healthcare Data Integration
- Use Case Examples of Healthcare Data Integration
- Future Trends in Healthcare Data Integration
- Conclusion
- FAQs

## What Is Healthcare Data Integration?

Healthcare data integration means collecting and blending data from different medical systems, such as:

- EHR (Electronic Health Records) systems with individual patient records.
- Broader health information systems (HIS).
- Medical devices.

The primary purpose of data integration is to merge information from multiple systems into a unified dataset, also known as a [single source of truth (SSOT)](https://skyvia.com/learn/single-source-of-true). In practice, this means loading data into databases or cloud data warehouses, from which it's later extracted into BI and analytics tools for reporting. That way, medical workers can obtain a holistic view of each patient in real time.
Data integration also supports interoperability, enabling effective communication between different systems through data standardization. Moreover, it facilitates medical data analytics, which helps reveal public health patterns.

## Key Benefits of Healthcare Data Integration

Data consolidation delivers enormous potential to change patient care for the better. It refines numerous processes across different healthcare organizations, such as hospitals, laboratories, and urgent care facilities. Here are some tangible benefits that come with data integration.

- **Improved Patient Care.** Since data is aggregated in one place, clinicians have access to each patient's medical history. Lab results, prescriptions, previous disease history, and other details enable doctors to design comprehensive treatment plans.
- **Streamlined Administrative Processes.** [Data integration](https://skyvia.com/blog/data-integration-trends/) takes the weight off administrative personnel by reducing manual entry of healthcare records into multiple systems. As a result, the likelihood of human error is minimized, and data inaccuracies in medical registries are nearly eliminated.
- **Enhanced Decision-Making and Analytics.** Keeping all data in a single system (database, data warehouse, etc.) provides a strong foundation for [data analysis](https://skyvia.com/blog/top-10-data-analysis-tools/). Clinicians can identify public health trends and manage medical programs more efficiently. Unified data storage is also a base for predictive analytics, helping medical workers anticipate patient needs and manage risks.

## Principles of a Successful Data Integration System

A successful data integration system must guarantee data integrity and security, with zero leaks and breaches.
Other aspects that ensure successful data integration are:

- Standardization
- Interoperability
- Sharing protocols

### Data Standardization

Data-related standards define rules for data collection, storage, exchange, and retrieval. They also provide terminology that applies to data in the healthcare sector. Here are some examples of data-related standards:

- **Interchange formats** provide standards for electronic data encoding, including structure types for medical documents.
- **Terminology** is a set of medical terms and concepts used to describe data elements, along with the syntax for expressing relationships among them.
- **Knowledge representation** includes standardized methods for delivering medical literature, clinical guidelines, and other documents.

Data standardization is a fundamental part of data management in any sector of the economy. It enables medical workers to get a clear picture of a patient's clinical history.

### Interoperability between Systems

[Interoperability](https://www.oracle.com/it/health/interoperability-healthcare/) enables dispersed health data systems to communicate and share data. It helps health workers access and utilize data regardless of its structure, format, or system of origin. National healthcare systems impose data processing standards on electronic exchange. For instance, the US federal government, together with the Centers for Medicare & Medicaid Services (CMS), has adopted [rules for secure data exchange](https://www.cms.gov/tra/Data_Management/DM_0060_Data_Sharing_Governance.htm) and interoperability between different sources.

### Secure Data Sharing Protocols

Since most healthcare data is sensitive, it requires enhanced protection and security. As many countries have adopted data security standards, medical institutions have to follow them strictly. Additionally, they can implement further safety measures and regulations to ensure data privacy and integrity.
Here are some standard practices used to ensure data safety in transit and at rest:

- Develop written policies and procedures for controlling data security, and review them regularly.
- Define role-based access levels to data for each employee and set the associated permissions.
- Respect the national regulations on data collection for research purposes.
- Limit physical access to hardware holding confidential information to authorized personnel only.
- Apply data encryption mechanisms when sending healthcare data to other systems.

## Steps to Implement Healthcare Data Integration

Before creating a data pipeline, you need to take some preparatory steps. First, audit your current data management systems. Then, explore [data integration platforms](https://skyvia.com/blog/data-integration-tools/) and the current healthcare regulations in your country.

### Assess Current Data Management Systems

Examine the actual architecture and infrastructure of your healthcare data management system. To judge whether it is adequate, pay attention to the following factors:

- **Real-time data access.** Make sure that medical professionals and patients have access to the most recent information.
- **Data sharing.** Check whether medical workers can share their professional insights with others and retrieve information about patients.
- **Data security.** Make sure that your healthcare data management system implements role-based access control, encryption mechanisms, and other expected security measures.
- **Patient identity privacy.** Check whether patients can access their accounts in the system with guaranteed privacy.

### Ensure Compliance with Healthcare Regulations

The regulations for data access and management vary from country to country. Here, we review some fundamental ones required for patient data protection.

- **HIPAA (Health Insurance Portability and Accountability Act).**
[HIPAA](https://www.hhs.gov/hipaa/index.html) defines strict standards for patient data confidentiality and security. The guidelines provided in this act must be implemented in all healthcare entities.
- **HITECH Act (2009).** The [HITECH Act](https://www.hhs.gov/hipaa/for-professionals/special-topics/hitech-act-enforcement-interim-final-rule/index.html) complements HIPAA with recommendations for protecting patient data during electronic information exchange.
- **GDPR (General Data Protection Regulation).** [GDPR](https://gdpr-info.eu/) imposes strict rules on data protection, including health data. It applies to the citizens of all countries within the European Union (EU). Companies that reside outside the EU but process EU citizens' data need to make their data management systems GDPR-compliant.
- **CCPA (California Consumer Privacy Act).** [CCPA](https://oag.ca.gov/privacy/ccpa) allows California residents to request the deletion of their personal data from healthcare systems.

**Choose the Right Data Integration Platform**

Select data integration solutions that comply with the fundamental regulations mentioned above. Examine how these tools connect to the required healthcare systems. Check whether a platform offers real-time data exchange or [batch processing](https://skyvia.com/learn/what-is-batch-processing) with short update intervals, depending on your data update needs. Then, focus on tools that fit your budget.

**Skyvia's Role in Healthcare Data Integration**

[Skyvia](https://skyvia.com/) is a universal cloud data platform designed for various data-related tasks. Skyvia's [Data Integration](https://skyvia.com/data-integration) product includes a number of tools that enable seamless data flow between different healthcare systems.

- **[Import](https://skyvia.com/data-integration/import)** is a wizard-based solution for crafting [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning) and Reverse ETL integration scenarios without coding.
This solution is useful for loading heterogeneous data from one system to another.
- **[Replication](https://skyvia.com/data-integration/replication)** is a wizard-based solution for creating [ELT pipelines](https://skyvia.com/learn/what-is-elt) without coding. It is a good fit for aggregating data in a database or data warehouse that serves as a single source of truth (SSOT).
- **[Data Flow](https://www.youtube.com/watch?v=Opc1W9lq-Es&list=PLE66g1kd4iq3OOjHBEEbhEaddZXfX2JB1&pp=iAQB)** is a powerful visual data pipeline designer that lets you move data across multiple data sources and build complex multistage transformations. This tool is useful when an integration scenario involves many different systems.

Key benefits of Skyvia:

- Intuitive GUI that makes the tool easy to set up and use.
- Pre-built connectors to over [200 sources](https://skyvia.com/connectors).
- Support for ETL, Reverse ETL, and ELT pipelines.
- Powerful [data transformation capabilities](https://skyvia.com/learn/what-is-data-transformation).
- Detailed error logs.
- Email notifications about integration status, exceeded limits, etc.

**Use Case Examples of Healthcare Data Integration**

**Case Study 1. Large Hospital Networks**

One of Skyvia's clients is a [global provider of online patient recruitment services](https://skyvia.com/case-studies/healthcare-company). This company handles patient surveys, feasibility studies, and site selection services for clinical trials. Its main challenge was collecting data from dispersed systems: the backend system, Facebook, a CRM system, Excel files, and other sources. Another challenge was preparing data for reporting in Tableau. With Skyvia, the client configured data extraction from all of these systems and created a strong base for business intelligence. The company has also reduced the resources spent on data operations.

**Case Study 2.**
**Telehealth Services**

Another Skyvia client is AHCA (American Health Care Association), a non-profit federation of affiliated state health organizations. [AHCA was looking for a solution to bring together its on-premises databases with Dynamics 365](https://skyvia.com/case-studies/ahca). AHCA used Skyvia's pre-built connectors for databases and Dynamics CRM to configure the integrations easily. The company has also automated data replication and reporting for analytical purposes.

[Explore Skyvia](https://app.skyvia.com/register)

**Future Trends in Healthcare Data Integration**

While the healthcare industry is quick to embrace the latest trends and technologies, data integration has only recently gained traction. The delay was due to the lack of a robust regulatory framework for handling sensitive patient data, which slowed down the adoption of integration tools. Thanks to recent laws and standards, data integration solutions can now be implemented safely across healthcare institutions. The IT industry continuously evolves, shaping new trends with the potential to transform workflows and enhance efficiency. These advancements will likely affect the medical sector in the near future. Let's look at some of the most promising ones.

**Growing Importance of Big Data**

Data volumes increase exponentially each year across industries. Wearable electronic devices and medical record digitization generate big data, making it essential for medical facilities to adopt solutions capable of handling such growing data volumes.

**Predictive Analytics for Better Outcomes**

Predictive analytics explores current and historical healthcare data, drawing on surveys, patient registers, EHRs, and other relevant medical sources. It helps medical professionals improve patients' experiences and allows clinicians to predict public health trends and even manage disease spread.
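The idea behind such forecasting can be illustrated with a tiny sketch: fit a linear trend to historical weekly admission counts and project the next value. The synthetic numbers and the straight-line model are purely illustrative; real clinical analytics uses far richer models and inputs:

```python
# Illustrative predictive-analytics sketch: fit a straight-line trend to
# historical weekly admissions and project the next week. Data is synthetic.

def linear_forecast(values, steps_ahead=1):
    """Ordinary least-squares line fit; returns the projected value."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

weekly_admissions = [120, 125, 131, 128, 136, 141]
print(round(linear_forecast(weekly_admissions), 1))  # → 143.7
```

Even this toy version shows the workflow: integrated historical data in, a forward-looking estimate out, which staff can use for capacity planning.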
**Expansion of IoT**

The Internet of Medical Things (IoMT) is the fastest-growing sector in healthcare. It comprises smart blood pressure devices, fitness watches, ECG monitors, and other devices. With [data integration tools](https://skyvia.com/blog/data-integration-tools/), data from IoMT devices is transferred to centralized storage. This approach helps researchers assess treatment effectiveness. Consolidated medical data also enables advanced analytics, which helps identify national public health trends and informs changes across the industry.

**Conclusion**

The healthcare sector has always been open to new technological advancements. However, the situation with data integration solutions was different until recently. The main concerns were the lack of standardization and interoperability issues. Today, data integration is widely adopted across medical institutions, made possible by data encryption, privacy regulations, and interoperability standards. Modern data integration platforms, such as Skyvia, have further facilitated the process. Feel free to try a fully featured version of Skyvia at no cost with its freemium tier.

**FAQs**

**What Is Data Integration in Healthcare?**

Data integration in healthcare combines data from different healthcare systems. These include, but are not limited to, EHR (Electronic Health Records) systems with individual patient records, broader health information systems (HIS), and medical devices. The main objective of healthcare data integration is to provide a holistic view of each patient, which in turn helps medical workers improve patient care and decision-making.

**What Are Some Data Integration Challenges in Healthcare?**

The most tangible challenges associated with healthcare data integration are:

- Data privacy concerns
- Interoperability issues
- Cost and complexity of the integration systems

**How Does Healthcare Data Integration Improve Patient Care?**
Bringing data from dispersed sources together enables healthcare providers to offer more personalized and efficient care to their patients. This is possible thanks to a comprehensive view of each patient and real-time access to the most recent data.

**What Are the Best Practices for Healthcare Data Integration?**

The most commonly used practices for integrating data in the healthcare sector are the following:

- Outline the objectives for which data integration is needed.
- Take advantage of modern cloud-based solutions and APIs.
- Train your team on how to use electronic systems.
- Inform your personnel about the national regulatory standards for data processing.
- Use analytics tools to derive patterns and make predictions based on medical data.
- Make sure that role-based access to patient data is adopted in your healthcare institution.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
**How AI Transforms Data Integration**

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), March 22, 2024

We live in the AI era, starting each day with questions like "ChatGPT, tell me, please, what is this, how do I do that, is it possible to solve something else?" and receiving answers: "Certainly! I can help you. Look…" Some people like it, and some are afraid; Hollywood has spent decades making scary movies about a horrible AI trying to destroy our planet. But, as usual, the truth is that people often fear anything new. At the same time, businesses must remain competitive, and that requires a new driving force to improve operational efficiency, automate processes, and [increase customer satisfaction](https://survicate.com/customer-satisfaction/tips/).
Emerging AI technologies are this kind of magic:

- [McKinsey](https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai) predicts that generative AI may contribute from $2.6 trillion to $4.4 trillion to the annual economy.
- [Forrester](https://www.forrester.com/blogs/predictions-2024-artificial-intelligence/) says that 85% of enterprises will use open-source AI models by 2025.
- According to [Gartner's](https://www.gartner.com/en/newsroom/press-releases/2023-10-16-gartner-identifies-the-top-10-strategic-technology-trends-for-2024) top 10 strategic technology trends for 2024, implementing Trust, Risk, and Security Management (TRiSM) may eliminate 80% of faulty and illegitimate information and raise decision-making to a new level.
- [Forbes](https://www.forbes.com/sites/moorinsights/2023/12/31/2023-the-year-generative-ai-transformed-enterprise-data-management/?sh=4ceb16ea63ae) calls 2023 the year generative AI revolutionized data management.

In this article, we'll explore how emerging AI impacts data integration and how it may benefit modern businesses.

**Table of Contents**

- Applications of AI in Data Management and Integration
- Benefits of AI
- Challenges and Considerations
- Skyvia's Role in AI-Driven Data Integration
- Conclusion

**Applications of AI in Data Management and Integration**

According to [Gartner](https://www.gartner.com/en/digital-markets/insights/2024-global-software-buying-trends)'s recent research, global software buying trends in 2024 center on increasing technology investment (61% of businesses) and considering AI-powered software (92%). The key selection criteria for a tool are its price (49%) and security compliance (48%). These statistics show that the world is ready to change its approaches and try something new. So, let's dive a bit deeper into specific areas within data management and integration where AI is applied, including its functions and business impact.
**Automated Data Cleansing**

- *Function:* AI algorithms automatically detect and rectify inaccuracies, inconsistencies, and missing values in datasets. They identify duplicate records, correct typographical errors, and fill in gaps where data might be missing.
- *Impact:* This automation ensures that datasets are accurate and reliable for analytics and decision-making, and it reduces the manual labor typically associated with data cleaning, making the process more efficient and less prone to human error.

**Integration Process Automation**

- *Function:* AI facilitates the seamless merging of disparate data sources, automating the recognition and reconciliation of differences in data formats, schemas, and structures. It can handle complex mappings and transformations, combining varied data into a coherent whole.
- *Impact:* Integration process automation accelerates data consolidation from various sources, improving businesses' agility in using their data for analytics, reporting, and operations, and enabling a more dynamic and flexible data infrastructure.

**Predictive Analytics**

- *Function:* AI analyzes historical data through machine learning models to identify patterns and predict future trends or outcomes. These models can be applied to various business activities, from forecasting customer behavior to anticipating market shifts.
- *Impact:* Predictive analytics allows businesses to make proactive decisions, optimize operations, and enhance customer experiences. By anticipating future trends, companies can devise strategies that capitalize on opportunities and reduce potential risks.

**Real-time Data Processing**

- *Function:* AI technologies are particularly adept at processing and analyzing real-time data streams. This allows for the immediate analysis of incoming data, such as transactions, social media feeds, or sensor outputs, to identify trends, anomalies, or opportunities as they happen.
- *Impact:* Real-time data processing makes it possible to react promptly to critical information, adjust strategies on the fly, and provide timely customer responses. This agility is crucial in fast-paced environments where delays can lead to missed opportunities or escalated problems.

**Enhanced Data Security and Compliance**

- *Function:* AI-driven security systems can monitor data access patterns and anomalies to identify potential security threats or breaches. AI can help classify sensitive data and ensure compliance with data protection regulations by automating the enforcement of governance policies.
- *Impact:* Implementing AI in data security helps companies protect sensitive information and respond to threats quickly, and it ensures compliance with increasingly complex regulatory environments, reducing the risk of legal penalties and reputational damage.

**Benefits of AI**

40% of respondents to the latest [McKinsey annual Global Survey](https://www.mckinsey.com/featured-insights/mckinsey-global-surveys) are ready to invest in AI to mitigate inaccuracy risks. But that's just one reason why incorporating AI into data handling is a good idea. So, how can AI solutions benefit data integration platforms and their users?

**Data Quality Improvement**

AI algorithms automatically clean and verify data. They identify and correct errors, remove duplicates, and enrich datasets. The result is more reliable data that can be trusted for analysis and decision-making.

*Example:* Banks and financial companies use AI to automatically cleanse and reconcile transaction data from multiple sources, reducing customer account errors and improving financial reporting accuracy. Watch a real-life example in the NISO Data Integration Breakthrough case study.

**Integration Processes Acceleration**

AI automates data recognition and reconciliation from disparate sources. It eliminates the bottlenecks associated with manual data mapping and transformation.
AI enables real-time or near-real-time data integration.

*Example:* Online retailers use AI to quickly integrate customer data from various touchpoints, such as websites, mobile apps, and customer service, to create a unified view of the customer, enhancing personalized marketing and customer service.

**Manual Labor Reduction**

AI automates repetitive data management tasks, freeing human resources for more complex and strategic activities. This reduces the time and cost associated with data management and minimizes human error.

*Example:* Hospitals and healthcare providers use AI to automate the integration of patient data from various systems, including EMR, lab systems, billing, and more, into a single patient record, reducing administrative overhead and improving patient care. Skyvia's ETL pipeline designer can help with such tasks.

**Enhanced Decision-Making**

AI-driven [data integration tools](https://skyvia.com/blog/data-integration-tools/) provide access to cleaner, more comprehensive datasets enriched with insights derived from AI's predictive analytics and pattern recognition capabilities. Such solutions empower decision-makers with actionable intelligence, facilitating more informed, data-driven decisions.

*Example:* Manufacturers integrate AI with their data systems to predict machine failures and maintenance needs, enhancing operational efficiency and reducing downtime through predictive maintenance strategies.

**Scalability and Flexibility**

AI technologies can handle increasing volumes and varieties of data without a proportional cost increase. This scalability ensures that businesses can adapt to changing data needs and capitalize on new opportunities without being constrained by their data infrastructure.

*Example:* Startups often use AI tools to quickly scale their data infrastructure as they grow, handling increasing volumes of data without raising costs or complexity.
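To make the hospital example above concrete, here is a minimal sketch of consolidating per-system patient records into one unified record. The source names (EMR, lab, billing) and field names are hypothetical:

```python
# Sketch of merging per-system patient records into one unified record,
# keyed on a shared patient ID. Sources and field names are illustrative.

def merge_records(*sources):
    """Later sources fill gaps left by earlier ones; existing values win."""
    unified = {}
    for source in sources:
        for rec in source:
            merged = unified.setdefault(rec["patient_id"], {})
            for field, value in rec.items():
                if value is not None and field not in merged:
                    merged[field] = value
    return unified

emr = [{"patient_id": "P-001", "name": "J. Doe", "allergies": "penicillin"}]
lab = [{"patient_id": "P-001", "last_hba1c": 6.1, "name": None}]
billing = [{"patient_id": "P-001", "insurer": "Acme Health"}]

print(merge_records(emr, lab, billing)["P-001"])
```

Real platforms add schema mapping, conflict resolution, and fuzzy identity matching on top of this first-wins merge, but the core consolidation step looks much like the above.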
**Real-Time Insights**

AI makes it possible to process and analyze data in real time, enabling immediate insight into operations, customer behavior, and market trends. This real-time analysis supports agile decision-making and helps businesses respond proactively to emerging challenges and opportunities.

*Example:* Retailers use AI to analyze sales, inventory, and customer data in real time, allowing for dynamic pricing, optimized inventory management, and personalized marketing campaigns that respond quickly to changing market conditions.

**Enhanced Security and Compliance**

AI algorithms can monitor data transactions and access patterns to identify potential security threats, unauthorized access, or compliance violations. By automating data governance and compliance processes, AI helps ensure that data management practices meet regulatory standards and protect sensitive information effectively.

*Example:* Banks employ AI to monitor transactions in real time for signs of fraudulent activity, improving security and regulatory compliance by identifying and addressing issues as they arise.

**Cost Efficiency**

By automating data integration and management tasks, AI reduces the operational costs of manual data handling and correction. This cost efficiency extends to better resource allocation, as staff can focus on higher-value tasks that contribute more directly to business goals.

*Example:* Logistics companies integrate AI into their data management systems to optimize routes and deliveries based on real-time data, reducing fuel costs and improving delivery times through more efficient operations.

**Challenges and Considerations**

Implementing AI in data integration can bring many benefits to businesses, but the challenges it brings are the other side of the coin. Let's walk through the issues companies must navigate carefully.
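To close out the benefits above with something concrete: the bank fraud-monitoring example can be approximated by a simple statistical outlier check. Real systems use ML models on streaming data, so treat this z-score version as a toy stand-in with made-up numbers:

```python
# Toy stand-in for fraud monitoring: flag transactions that deviate strongly
# from the batch mean. Amounts and the 2-sigma threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mu) > threshold * sigma]

transactions = [42.0, 38.5, 51.2, 44.9, 39.9, 47.1, 40.3, 2500.0]
print(flag_anomalies(transactions))  # → [7], the 2500.0 outlier
```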
**Data Privacy and Security Concerns**

AI systems often require access to vast amounts of data, some of which can be sensitive or personal. This raises privacy concerns, especially under strict data protection regulations such as GDPR in Europe or CCPA in the US. Ensuring that AI solutions comply with these regulations without compromising the utility of the data they process is a delicate balance.

*Solution:* Implement robust data governance frameworks and train AI models on anonymized data to ensure compliance and protect individual privacy.

**Data Quality and Quantity**

AI and ML models require large, high-quality datasets for training to ensure accuracy and reliability. Poor data quality or insufficient data can lead to biased or inaccurate models, undermining AI's effectiveness in data integration.

*Solution:* Implement comprehensive data cleaning, validation, and augmentation strategies to improve data quality. Techniques like data augmentation or synthetic data generation can help when data is insufficient.

**Complexity of AI Models**

AI models, especially deep learning networks, can be incredibly complex, making them difficult to understand and interpret. AI's "black box" nature can be an issue in industries where explainability and transparency are critical, such as healthcare and finance.

*Solution:* Use explainable AI (XAI) techniques and tools that provide insight into how AI models make decisions, improving transparency and trust.

**Need for Skilled Personnel**

The development, deployment, and maintenance of AI-driven data integration systems require a workforce with specialized skills, including expertise in AI, machine learning, data science, and domain-specific knowledge.

*Solution:* Invest in training and development programs to upskill existing staff, and collaborate with academic institutions to ensure a steady pipeline of skilled graduates.
Companies may also consider leveraging managed services or consultancy firms specializing in AI-driven data integration.

**Integration with Existing Systems**

Integrating AI into existing data management and integration infrastructure is challenging, especially with legacy systems. They don't interact well with AI technologies, leading to potential compatibility and performance issues.

*Solution:* Adopt a phased approach to integration, starting with pilot projects to identify and address potential issues. Use APIs and middleware data integration solutions, such as Skyvia, as bridges between AI technologies and existing systems.

**Cost Considerations**

Implementing AI in data integration often involves significant upfront costs, including technology, personnel, and training investments. For small and medium-sized companies, these costs can be prohibitive.

*Solution:* Select cost-effective AI solutions offering scalable cloud-based services that let you pay only for usage. Prioritize projects with a clear ROI to justify the investment.

**Skyvia's Role in AI-Driven Data Integration**

As mentioned, integrating AI with existing data ecosystems, especially legacy ones, can cause trouble for many businesses. Skyvia is a platform that helps solve this. It's a good choice for companies looking for a universal ETL, ELT, and Reverse ETL cloud-based data integration platform that is [easy to use](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), doesn't require coding or specific AI skills, and provides a solid set of capabilities for an honest [price](https://skyvia.com/pricing/). [Skyvia's data integration](https://skyvia.com/data-integration/) tools fit businesses of any size, from startups to enterprises, and help navigate the challenges of bringing AI into data integration.
Here's how Skyvia addresses common challenges in this area:

**Data Flow Automation**

Data flow automation allows users with minimal technical background to easily set up and manage data pipelines. It enables the seamless flow of data between sources and destinations, which is essential for training AI models and performing analytics.

*Capability:* Skyvia offers [180+](https://skyvia.com/connectors/) connectors for popular data sources and destinations, including CRM systems, cloud storage, and databases. They make it possible to automate data workflows in just a few clicks and provide the flexibility to adapt to new sources and destinations as businesses evolve and adopt new technologies.

**Ensuring Data Accuracy**

Data accuracy is the backbone of AI-driven systems because output quality depends heavily on input data quality. Skyvia offers data validation and cleansing tools as part of the integration process.

*Capability:* The platform includes data quality management features, allowing users to define data validation rules and apply transformations to clean data before it enters AI models, so the data used for AI and analytics is reliable and high-quality.

**Simplifying Complex Data Integration Tasks**

One of the most common hurdles in AI-driven data integration is the complexity of integration tasks, which often require strong coding skills. Skyvia's environment is code-free and simplifies complex tasks through a graphical interface.

*Capability:* Users can create, manage, and modify data integration tasks using a drag-and-drop interface, which makes it easier for companies to integrate AI into their data strategies without dedicated specialists.

**Conclusion**

AI isn't our future; it has become our present. Businesses aiming to stay competitive in any area cannot ignore it.
The mix of AI capabilities and data integration practices ensures that your AI models have the most accurate and timely data for analysis and learning. You don't need to reinvent what has already been invented, and you can instead focus on new challenges to keep your business on the right track.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
**How to Archive Data in Salesforce and Reduce Storage Costs**

By [Tim Combridge](https://skyvia.com/blog/author/timc/), December 9, 2021

**Table of Contents**

- Introduction
- What Is Data Archiving?
- Data Archiving Solutions
- Differences Between Data Backup and Archive
- What about Salesforce's Backup and Restore Tool?
- Summary

**Introduction**

As your business grows, you gather more and more clients, which means you're collecting more and more data about them as well: new Opportunities to do business with them, new Activities every time they send an email or give you a phone call, and new Cases to help them along their journey with your business.
It\u2019s no secret that [Salesforce](https://www.salesforce.com/) doesn\u2019t have the cheapest cloud data storage solution on the market, and with your data growing rapidly as your business grows, your Salesforce data bill is likely to grow as well. All that being said, your data is valuable and deleting it is not something you\u2019d want to do. Your business worked hard to collect customers and organise data related to them \u2013 why throw it away because you\u2019re facing additional storage charges? This is where Data Archiving comes in \u2013 a way to keep your old, less relevant data without having to remove it entirely or pay to have it kept on-platform. What Is Data Archiving? Imagine it this way: you have a pig-shaped money box, like the one you likely had as a child. Every day, you diligently save your hard-earned coins by putting them in the moneybox. Eventually, there comes a point in time where your money box is full \u2013 which is great! The only problem is that you want to keep saving. You have a few options: Stop putting money in the moneybox. This isn\u2019t ideal, as you want to keep saving and acquiring wealth. Keep buying bigger and bigger money boxes. Sure, you\u2019re able to keep building wealth but it comes at a cost (and for some reason, the money boxes in this example are more expensive than most!) Partially empty your moneybox, and put the money that you emptied into the bank. It\u2019s not as easy to access, but you don\u2019t really need it all the time anyway \u2013 that\u2019s what the moneybox is for. This option allows you to keep your additional money, even though it\u2019s not as easy to access, and continue to save and work with the money in your moneybox. The third option is essentially how Data Archiving works \u2013 you\u2019re still keeping all your money/data, but putting some of it in a different storage space (one that is cheaper to maintain, but does mean that it\u2019s not as easily accessible). 
There are a number of data archiving tools available that can help you manage your storage limits in Salesforce by offloading data to a supporting system.

**Data Archiving Solutions**

There are several data archiving methods and best practices to consider when creating a data archiving strategy and selecting the right Salesforce data archiving tool for your business. The first and biggest consideration is which data is taking up the most space in your Salesforce org, and which data could be shifted to another storage location and accessed less frequently. You'll also need to decide whether you need a standard UI (i.e., a Lightning Page and Page Layout) to view the data, whether you're able to build one using Lightning Components, or whether you don't need to access the data within the Salesforce user interface at all.

Here's an example: you have an integration that pulls in all EFTPOS transaction records from numerous locations, and you may see thousands or even tens of thousands of records created per day, with no major need to keep all of these transactions available on-platform. This is a perfect case for archiving your transaction data off the Salesforce platform. There are several ways to do this, depending on what your business needs:

- **Big Objects within Salesforce.** Technically, this is still on-platform, but there are a number of differences between Big Objects and Custom Objects. The main one: Big Objects are designed to store billions of records, while Custom Objects aren't. Big Objects also don't have a standard UI and need one built with Lightning Components (or Visualforce, if you're still into that). Check out this [Trailhead Module](https://trailhead.salesforce.com/en/content/learn/modules/big_objects) to learn more about the basics of Big Objects.
- Salesforce also allows [External Objects](https://help.salesforce.com/s/articleView?id=sf.external_object_define.htm&type=5) to be used to make data archiving easy. External Objects are similar to Custom Objects in that they support Lightning Pages and Page Layouts and don't require a custom UI to be built like Big Objects do. The difference is that External Objects hold their data off-platform in another database and are exposed in the Salesforce User Interface via a connection. This makes them an easy replacement for internal Custom Object data, as they behave similarly and have similar functionality. You can see in the image below an example of an External Object that connects to Google Drive to display files inside the Salesforce UI.
- Simply export your data from Salesforce to create a data archive in another database. This option is less functional but requires a lot less setup and maintenance effort. Unfortunately, if you use this method to create a Salesforce data archive, you'll need to manually search off-platform for records when you do need them. It's really only recommended when the data is required purely for regulatory or legislative retention purposes and will only need to be accessed in extreme circumstances (probably not a good option for a transaction data archive like the example above).
- Use a third-party tool like [Skyvia Data Replication](https://skyvia.com/data-integration/replication) to retain a full replica of your Salesforce data as a data archive. Tools like this are extremely helpful, as they can be set up with ease and configured to automatically copy your data according to a set of rules. Skyvia, for example, supports automatic schema creation and incremental updates, allowing you to create a full replica during the first run and upload only changed records during subsequent runs.
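Incremental replication of this kind generally follows a watermark pattern: the first run copies everything, and each later run copies only records modified since the previous run. The sketch below is a generic, hypothetical illustration of that pattern in Python; it is not Skyvia's actual implementation, and the record fields and in-memory "archive" are stand-ins for a real database:

```python
from datetime import datetime, timezone

def incremental_sync(source_records, archive, last_sync):
    """Copy only records modified since the previous run (the watermark).

    source_records: list of dicts, each with an 'id' and a 'modified_at' timestamp.
    archive: dict keyed by record id, standing in for the archive database.
    Returns the new watermark to persist for the next run.
    """
    new_watermark = last_sync
    for rec in source_records:
        if rec["modified_at"] > last_sync:        # changed since the last run
            archive[rec["id"]] = rec              # upsert into the archive
            if rec["modified_at"] > new_watermark:
                new_watermark = rec["modified_at"]
    return new_watermark

t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
t1 = datetime(2024, 2, 1, tzinfo=timezone.utc)
records = [
    {"id": "001", "name": "Acme", "modified_at": t0},
    {"id": "002", "name": "Globex", "modified_at": t1},
]

# First run: the watermark starts at the epoch, so everything is copied (full replica).
archive = {}
watermark = incremental_sync(records, archive, datetime.min.replace(tzinfo=timezone.utc))
print(len(archive))       # 2 records copied on the first run

# Subsequent run: nothing has changed, so nothing is re-copied.
before = dict(archive)
watermark = incremental_sync(records, archive, watermark)
print(archive == before)  # True
```

A real implementation would read the watermark from persistent storage and filter server-side (e.g., on Salesforce's `SystemModstamp` field) rather than scanning every record.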
Additionally, Skyvia can be used to get data from the database back into Salesforce, either by physical data loading with [Skyvia Import](https://skyvia.com/data-integration/import), or by a combination of tools, like [Skyvia Connect](https://skyvia.com/connect/) and [Salesforce Connect](https://developer.salesforce.com/docs/atlas.en-us.234.0.apexcode.meta/apexcode/platform_connect_about.htm), to display the data as Salesforce External Objects via an OData endpoint. Skyvia even has a dedicated [Salesforce data backup](https://skyvia.com/backup/salesforce-backup) product. However, there is a difference between a backup and an archive, which is explained below.

## Differences Between Data Backup and Archive

The difference between a data backup and a data archive is quite simple: when you back up your data, you're taking a copy that can be accessed if things go wrong. This will often be a full backup of all of your data within Salesforce that can be used to restore your org to a previous state, or to allow users to access important data in the case of an outage. A data archive is different: archiving means removing older or less relevant data from your live Salesforce org in order to reduce the amount of data you're paying for, or to reduce the amount of data your users need to sift through to access what they need.

A Salesforce data backup tool is definitely something you'll want to consider as well, but that's a topic for another article, as there are a number of things to consider and look out for when preparing your Salesforce data backup solution and strategy.

## What about Salesforce's Backup and Restore Tool?

Salesforce announced its own entry into the backup and restore space at Dreamforce 2021. The new platform tool, called [Backup and Restore](https://www.salesforce.com/products/platform/solutions/data-security/backup-restore-data-recovery/), was recently made available for purchase.
I haven\u2019t yet had a chance to have hands-on experience with Salesforce Backup and Restore just yet, but I have done some research and discovered quite a bit about the new tool. Again, Salesforce\u2019s new tool is a backup tool, not an archiving tool, so there are a number of key differences but it\u2019s still worth taking a look. Salesforce states that their Backup and Restore tool will help prevent loss of data (due to system or human errors), recover from any incidents in a small amount of time (and with only a few clicks), and make data management simple while still maintaining CCPA and GDPR compliance. Salesforce Backup and Restore runs as a native application within your Salesforce org, and allows you to backup your data in a separate physical location to where your production data exists. Salesforce Backup and Restore allows you to select which data to restore and when (i.e. if you\u2019re looking to just restore a record that was wrongfully deleted, you don\u2019t need to restore your entire database). As an admin, you\u2019ll be able to set up permissions to control who can create, manage, and restore backups in your org, and how frequently your data backups should be run. There\u2019s also an automated purging feature that will clear out older backups when they\u2019re no longer needed. Summary So, there you have it! Now you know all about Salesforce Data Archiving, methods of how to archive your Salesforce data off the platform, as well as some best practices in doing so. You\u2019re also now aware of the difference between data backup and archiving, and how to put together a data archiving strategy that best suits your business. Now, you are also aware of third-party tools that can help you copy and store your old Salesforce data safe and with minimum efforts. Hopefully you\u2019re confident in your understanding of the above concepts, and are able to make a decision around what is best for your business. 
If you\u2019d like to learn more, please don\u2019t hesitate to [contact us](https://skyvia.com/company/contacts) for more information about Skyvia Data Replication and how it can benefit your business. More Articles about Salesforce [7 Best Free and Paid Salesforce Data Loaders](https://skyvia.com/blog/salesforce-best-data-loaders/) [9 Best ETL Tools for Salesforce](https://skyvia.com/blog/best-etl-tools-for-salesforce/) [Salesforce Data Migration Best Practices](https://skyvia.com/blog/salesforce-data-migration-best-practices/) [Top 3 Ways to Mass Update Salesforce Records](https://skyvia.com/blog/top-3-ways-to-mass-update-salesforce-records) [Salesforce Data Migration Best Practices](https://skyvia.com/blog/salesforce-data-migration-best-practices) [5 Ways to Export Data from Salesforce to Excel](https://skyvia.com/blog/5-ways-to-export-data-from-salesforce-to-excel/) [How Salesforce Connect uses OData Options & API Calls](https://skyvia.com/blog/salesforce-connect-guide/) Share [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Fhow-to-archive-data-in-salesforce-and-reduce-storage-costs%2F) [Twitter](https://twitter.com/intent/tweet?text=How+to+Archive+Data+in+Salesforce+and+Reduce+Storage+Costs&url=https%3A%2F%2Fblog.skyvia.com%2Fhow-to-archive-data-in-salesforce-and-reduce-storage-costs%2F&via=Skyvia+Blog) [Linkedin](https://www.linkedin.com/shareArticle?mini=true&url=https://skyvia.com/blog/how-to-archive-data-in-salesforce-and-reduce-storage-costs/&title=How+to+Archive+Data+in+Salesforce+and+Reduce+Storage+Costs) [Tim Combridge](https://skyvia.com/blog/author/timc/) Salesforce Solutions Engineer" }, { "url": "https://skyvia.com/blog/how-to-connect-google-sheets-to-mysql/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Google Sheets to MySQL: Sync and Automate Data By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) 
May 21, 2025

We've all been there: you start managing everything in Google Sheets. At first, it works fine. But as the team grows, the cracks show: manual data updates, mismatched numbers, and multiple versions of the truth floating around. Reporting turns into a scavenger hunt, and by the time insights are pulled together, they're already outdated.

Why does this happen? Google Sheets is great for lightweight tasks, but it starts to fall short when you need scalability, rock-solid data integrity, complex queries, or centralized reporting. That's where MySQL comes in: a robust, structured database built for growth, analytics, and integration across the entire stack.

You don't have to choose between flexibility and power. Integrating Google Sheets with MySQL allows companies to:

- Automate data flows.
- Improve efficiency.
- Eliminate manual busywork.
- Make faster, smarter decisions.

There's no one-size-fits-all approach here. There are plenty of ways to bridge the gap, from manual exports and simple scripts to fully automated no-code solutions. In this guide, we'll walk through the best options so you can pick the method that fits your team, workflow, and future growth.

Table of Contents

- Why Move Data from Google Sheets to MySQL?
- Methods for Moving Data Between Google Sheets and MySQL
- Method 1: The Manual Approach
- Method 2: Using Google Apps Script
- Method 3: Custom Scripts with Programming Languages
- Method 4: No-Code/Low-Code Integration Platforms
- Conclusion

## Why Move Data from Google Sheets to MySQL?

As soon as your business picks up steam, the once-smooth workflow becomes a patchwork of manual updates, copy-pasting marathons, and "who-has-the-latest-version?" headaches. Moving data from Google Sheets to MySQL isn't just a technical upgrade; it's a business unlock. It helps organizations scale up, clean up, and smarten up their operations so users can focus on growth, not spreadsheet babysitting.

### Scalability and Performance

Google Sheets wasn't built to handle massive datasets or complex queries. It slows down, formulas break, and collaboration becomes a mess. With MySQL, you can crunch big data, run lightning-fast queries, and keep performance sharp as you grow.

**Real-life pain point:** Think of a retail company managing thousands of SKUs across multiple stores in Sheets. As sales ramp up, the system chokes, reporting lags behind, and stockouts creep in, leading to lost revenue and frustrated customers.

### Data Integrity and Structure

MySQL is a relational database, which enforces rules that spreadsheets can't touch, like data types, relationships, and constraints. No more worrying about accidental text in a numbers-only column or rogue duplicates throwing off your reports.

**Frustration you avoid:** A finance team scrambling to explain mismatched figures to auditors because someone tweaked a spreadsheet formula without telling anyone.

### Centralization

Instead of juggling multiple sheets, MySQL lets you pull everything into a single source of truth. You get one clean, centralized hub, whether it's sales, marketing, inventory, or customer data. No more jumping between tabs or spreadsheets to piece the story together.
**Common scenario:** A startup's marketing and sales teams work off separate Sheets, leading to mismatched customer records and missed follow-ups. MySQL helps pull it all together.

### Advanced Analytics and Reporting

When your data is in MySQL, plugging into business intelligence (BI) tools like Tableau or Power BI is a breeze. You unlock powerful dashboards, real-time reporting, and advanced data analysis that Sheets simply can't keep up with.

**Real-life example:** A SaaS company wants to track churn, lifetime value, or customer segmentation across millions of records. MySQL makes it seamless, while Sheets can't keep up.

### Integration Capabilities

MySQL plays nicely with tons of backend systems, CRMs, ERPs, and cloud platforms. Whether you're automating workflows, connecting customer touchpoints, or syncing e-commerce data, MySQL opens the door to a much broader integration ecosystem.

**Real-life example:** A fast-growing DTC brand wanting to connect Shopify, Stripe, and its fulfillment system can tap into MySQL as the central hub to keep everything running smoothly.

## Methods for Moving Data Between Google Sheets and MySQL

Now that we're clear on why this integration matters, let's talk about how to pull it off. Moving data between Google Sheets and MySQL doesn't have to feel like a maze, but the best path depends on your technical comfort, your team's resources, and how much automation you want baked in. Here's a quick look at the main approaches.

**Method 1: The Manual Approach.** The classic "export as CSV, import into MySQL" routine. It's dead simple and requires zero coding, but it's only suitable for occasional updates. Do this daily or weekly, and you'll quickly run into version headaches and human error.

**Method 2: Using Google Apps Script.** Google's built-in scripting tool lets you set up automated syncs straight from Sheets.
It\u2019s great for semi-technical teams wanting to automate without deep diving into a codebase. You get more control over what moves and when, but you\u2019ll need some JavaScript know-how. Method 3: Custom Scripts with Programming Languages . For the dev-savvy, Python, Node.js, or JavaScript scripts open the door to complete customization. You can tailor the sync to fit even the weirdest edge cases. But you\u2019ll need the time, skill, and resources to build, monitor, and maintain it. Method 4: No-Code/Low-Code Integration Platforms . This is the \u201cset it and forget it\u201d option. Tools like Skyvia, Zapier, or Integromat let you connect Google Sheets to MySQL with just a few clicks, no code, no fuss. It\u2019s perfect for business teams that want results fast without waiting on developers. The comparison table below summarizes all four methods for quick reference, keeping it engaging and easy to scan. Method Best For Pros Cons Manual Approach (CSV Export/Import) Small teams, one-off jobs, quick fixes. Simple, no coding, no extra tools or costs. Fully manual, error-prone, not scalable for regular syncs. Google Apps Script Semi-technical teams wanting light automation. Free, customizable, runs inside Google Sheets. Requires JavaScript knowledge, limited performance, needs upkeep. Custom Scripts with Programming Languages Dev teams needing full control and customization. Highly flexible, integrates deeply with APIs, tailored to specific needs. Requires strong coding skills, more development time, ongoing maintenance. No-Code/Low-Code Integration Platforms Businesses wanting fast, scalable, and automated integration. Easy to set up, no coding needed, user-friendly, scalable, comes with support. May involve subscription costs, less flexibility for highly custom or niche use cases. Method 1: The Manual Approach Let\u2019s start with the simplest option: the good old manual method . 
This approach is where you download data from Google Sheets as a CSV file and then upload it into MySQL using a tool like phpMyAdmin, MySQL Workbench, or the SQL command line. It's not fancy, but sometimes you need something that works right now.

**Best for**

- Small teams with occasional data updates.
- One-off migrations or quick backups.
- Anyone testing things out before investing time in automation.
- Teams without developer resources who need a quick fix.

### Step-by-Step Guide

**How to Export CSV from Google Sheets**

1. Open your Google Sheet.
2. Click File → Download → Comma-separated values (.csv, current sheet).
3. Save the file to your computer.

**How to Import CSV into MySQL**

You've got a few ways to bring that CSV into MySQL.

*Option 1: phpMyAdmin*

1. Log into phpMyAdmin.
2. Select your target database.
3. Go to the Import tab.
4. Upload your CSV file and set the format to CSV.
5. Map the fields if needed, then hit Go.

*Option 2: MySQL Workbench*

1. Open MySQL Workbench and connect to your database.
2. Use the Table Data Import Wizard to select your CSV and map the columns.
3. Configure the import settings.
4. Run the import and check the results.

*Option 3: SQL Command Line*

If you're comfortable with the command line:

```sql
LOAD DATA INFILE '/path/to/your/file.csv'
INTO TABLE your_table
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
```

Note that plain `LOAD DATA INFILE` reads the file from the server host and is restricted by MySQL's `secure_file_priv` setting; if the CSV sits on your own machine, use `LOAD DATA LOCAL INFILE` instead (with `local_infile` enabled on both client and server).

**Pros**

- Simple (no coding or extra tools required).
- No extra costs (you're using tools you already have).
- Suitable for occasional one-off jobs.

**Cons**

- Completely manual (you'll need to repeat the process every time).
- Risk of human error (one wrong file or a missed step can cause issues).
- No real automation (not scalable for teams that need frequent updates or real-time sync).
- Limited control (no data transformations or conditional logic during import).

## Method 2: Using Google Apps Script

Google Apps Script is like giving your Google Sheet a little automation brain.
Google\u2019s built-in scripting tool (based on JavaScript) lets you write custom functions, automate workflows, and push data into MySQL automatically. This method is great if you want more control and automation without jumping fully into heavy coding or external tools. Best for Teams with someone comfortable dabbling in JavaScript. Businesses that want basic automation without investing in complete dev resources. Cases where you need lightweight, scheduled syncs (but not super complex ones). Anyone wishing to reduce manual effort without leaving the Google environment. Step-by-Step Guide Step 1: Open Google Apps Script In your Google Sheet, go to Extensions \u2192 Apps Script . You\u2019ll land in the script editor, where you can start writing custom code. Step 2: Write a Script to Push Data to MySQL Use Google Apps Script to loop through your sheet rows. Connect to your MySQL database using JDBC (Java Database Connectivity). Insert or update records using SQL queries inside the script. Tip : You\u2019ll need to allow external connections and configure your MySQL server to accept connections from Google, so ensure your database firewall settings are ready. 
Script example:

```javascript
/**
 * Pushes Google Sheet data to a MySQL database.
 */
function pushSheetDataToMySQL() {
  // MySQL connection details (replace with your credentials)
  const server = "your-mysql-host";
  const port = 3306;
  const db = "your-database-name";
  const user = "your-username";
  const pwd = "your-password";
  const jdbcUrl = `jdbc:mysql://${server}:${port}/${db}`;

  // Google Sheet setup
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("YourSheetName");
  const data = sheet.getDataRange().getValues();

  // Declared outside try so the catch block can still roll back
  let conn;
  try {
    // Establish the MySQL connection and start a transaction
    conn = Jdbc.getConnection(jdbcUrl, user, pwd);
    conn.setAutoCommit(false);

    // Parameterized upsert, prepared once and reused for every row
    const sql = `
      INSERT INTO users (id, name, email)
      VALUES (?, ?, ?)
      ON DUPLICATE KEY UPDATE
        name = VALUES(name),
        email = VALUES(email)
    `;
    const pstmt = conn.prepareStatement(sql);

    // Process each row (skip the header row)
    for (let i = 1; i < data.length; i++) {
      const [id, name, email] = data[i];
      pstmt.setInt(1, id);
      pstmt.setString(2, name);
      pstmt.setString(3, email);
      pstmt.execute();
    }

    conn.commit(); // Commit the transaction
    conn.close();
    console.log("Successfully pushed", data.length - 1, "records!");
  } catch (e) {
    console.error("Error:", e.message);
    if (conn) conn.rollback(); // Roll back on error
    throw e;
  }
}
```

**Step 3: Schedule It (Optional)**

1. If you want to run it regularly, go to Triggers in Apps Script.
2. Set up a trigger to run your script on a schedule (hourly, daily, etc.).

**Pros**

- Automated (no more manual exports or imports).
- Flexible (you can customize the logic to fit your business needs).
- Free (built right into Google, no extra software or costs).

**Cons**

- Requires scripting knowledge (you need someone comfortable writing and maintaining the code).
- Limited performance (not ideal for very large datasets or super complex operations).
- Setup complexity (configuring database permissions, handling errors, and managing credentials can get tricky).
- Ongoing maintenance (if something breaks, someone has to fix the script).

## Method 3: Custom Scripts with Programming Languages

Now we're in developer territory. Writing custom scripts in languages like Python or Node.js gives you maximum control over how data moves between Google Sheets and MySQL. You (or your dev team) set the exact rules for when data syncs, how it's transformed, and where it lands; no off-the-shelf tool can match that level of flexibility. But fair warning: this route isn't for the faint of heart. You'll need coding chops, time, and the willingness to maintain everything long-term.

**Best for**

- Teams with experienced developers ready to dive in.
- Businesses with unique or complex workflows that no standard tool covers.
- Companies needing high customization or tight integration with internal systems.
- Projects where performance, security, or data handling require full control.

**Pros**

- Maximum flexibility (you decide exactly how, when, and what to sync).
- Handles complexity (perfect for edge cases, custom transformations, and niche business logic).
- Integrates deeply (can connect with other internal systems or APIs beyond Google Sheets and MySQL).

**Cons**

- Requires developer time (this isn't something you can set up and walk away from).
- Higher maintenance (scripts break, APIs change, and someone has to keep it all running).
- Takes longer to build (compared to no-code solutions, this approach needs more upfront investment).
- Not beginner-friendly (non-technical teams will struggle without dev support).

## Method 4: No-Code/Low-Code Integration Platforms

Here's the option many teams get excited about: the no-code/low-code route.
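For reference, the custom-script route described in Method 3 often boils down to a few dozen lines. Below is a minimal, hypothetical Python sketch: it parses a CSV export of a sheet and builds a parameterized MySQL upsert statement. The sample sheet contents, the `users` table, and its columns are invented for illustration, and the actual fetch and database calls (which need network and database access) appear only as comments:

```python
import csv
import io

def parse_sheet_csv(text):
    """Turn CSV text exported from Google Sheets into data rows (header skipped)."""
    return list(csv.reader(io.StringIO(text)))[1:]

def build_upsert(table, columns, key="id"):
    """Build a parameterized MySQL INSERT ... ON DUPLICATE KEY UPDATE statement."""
    placeholders = ", ".join(["%s"] * len(columns))
    updates = ", ".join(f"{c} = VALUES({c})" for c in columns if c != key)
    return (f"INSERT INTO {table} ({', '.join(columns)}) "
            f"VALUES ({placeholders}) ON DUPLICATE KEY UPDATE {updates}")

sample = "id,name,email\n1,Ada,ada@example.com\n2,Alan,alan@example.com\n"
rows = parse_sheet_csv(sample)
sql = build_upsert("users", ["id", "name", "email"])
print(sql)
# A real run would fetch the sheet's CSV export and execute the statement, e.g.:
#   import urllib.request, mysql.connector
#   text = urllib.request.urlopen(SHEET_CSV_URL).read().decode("utf-8")
#   conn = mysql.connector.connect(host=..., user=..., password=..., database=...)
#   cur = conn.cursor(); cur.executemany(sql, parse_sheet_csv(text)); conn.commit()
```

Everything beyond this core (scheduling, retries, logging, credential management) is exactly the maintenance burden the cons above refer to.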
Tools like [Skyvia](https://skyvia.com/data-integration/replicate-google-sheets-to-mysql) provide a clean visual interface where you map fields, set up schedules, and automate workflows with just a few clicks. This is the "I want power and simplicity" choice, perfect for busy teams that want results without waiting on developers or building from scratch.

**Best for**

- Non-technical teams that need data integration but don't want to code.
- Companies looking for speed and simplicity without sacrificing features.
- Teams that want to automate regular syncs without maintaining scripts.
- Businesses using multiple tools that need an integration hub.

### Step-by-Step Guide

**Step 1: Sign Up and Connect**

1. Head over to [Skyvia](https://skyvia.com/) and create a free account.
2. Click +Create New → Connection.
3. Connect your Google Sheets and MySQL accounts using the built-in connectors.
4. Click the Create Connection button.

In the screenshot below, you can see an example of a Google Sheets connection. Repeat the same steps for MySQL.

**Step 2: Set Up the Integration**

1. Once the connection is established, choose what you want to do: import, export, or continuous sync. To do it, click +Create New, go to the integration column, and select the scenario type.
2. Select the source (Google Sheets) and target (MySQL), name the connection, and click the Create button.
3. Schedule your integration to run hourly, daily, or at custom intervals.

**Step 3: Monitor and Manage**

Use Skyvia's dashboard to monitor runs, check logs, and troubleshoot any hiccups. You'll get alerts if something fails, so you can stay ahead of issues.

**Pros**

- No coding required (super accessible for non-technical users).
- Fast setup (you can get up and running in minutes).
- Built-in automation and monitoring (reduces manual work and keeps things smooth).
- Handles multiple platforms (great if you need to connect more than just Sheets and MySQL).
**Cons**

- Less customization (while flexible, no-code tools may not cover every niche scenario).
- Dependence on a third party (you're trusting the tool's reliability and updates).

## Conclusion

We've walked through several ways to sync Google Sheets to MySQL, from good old manual CSV imports to Google Apps Script hacks, custom-coded solutions, and no-code integration platforms like Skyvia. Each method has its place, depending on what you're working with. The best approach depends on:

- How technical your team is.
- Your budget.
- How often you need to sync.
- How much data you're moving around.

Manual might be fine if you're just doing a quick one-off update. If you've got in-house developers, custom scripts can get the job done, though they come with maintenance headaches. But if your goal is an automated, reliable, no-code solution, Skyvia is a good choice that lets you set up powerful, scheduled integrations without the hassle, so you can focus on the work that matters instead of chasing down data sync issues.

## F.A.Q. for Google Sheets to MySQL

**Can I sync Google Sheets to MySQL in real time?**
Not exactly. Most tools offer scheduled syncs at regular intervals, which keep data updated frequently but not instantaneously.

**How do I handle data type differences between Google Sheets and MySQL?**
Map and format your data carefully so that columns align with MySQL's data types, preventing mismatches or errors.

**Is connecting Google Sheets to MySQL secure?**
Yes, as long as you use secure connection methods with encryption to protect your data during transfers between systems.

**What if my Google Sheet has multiple tabs?**
You can typically choose which tab to sync, allowing you to target only the specific sheet you need without pulling everything.

**Can Skyvia handle updates and deletes, not just new rows?**
Yes, Skyvia supports inserting, updating, and deleting records, letting you fully manage your data between Google Sheets and MySQL.
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# How to Connect HubSpot to MySQL In Various Ways

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), February 18, 2025
There's a trick to extending HubSpot's reporting and automation capabilities beyond what's available, even beyond HubSpot's highest pricing tier. The trick? Linking HubSpot to MySQL! It could be the game-changer you've been looking for to manage customers, sales, and marketing. This article will teach you the basics of integrating HubSpot with MySQL.

Table of Contents

- What is HubSpot?
- What is MySQL?
- What is HubSpot to MySQL Integration, and How Does it Benefit Businesses?
- Use Cases for HubSpot Integration with MySQL
- Methods to Connect HubSpot and MySQL
- Method 1. HubSpot Manual Import & Export
- Method 2. Native HubSpot Tools (Operations Hub Data Sync)
- Method 3. API-based Integration
- Method 4. No-Code Integration
- Conclusion

Let's dive in.

## What is HubSpot?

HubSpot positions itself as an AI-powered customer platform with all the software needed to connect marketing, sales, and customer service. Its features include automating marketing, managing content, closing deals, scaling support, automating billing, and more. Businesses new to HubSpot can start quickly and easily because of its clean, simple interface and fast setup. If you're still skeptical, you can try the free tier first: a free CRM with email tracking, deal tracking, reporting, and live chat, all available without forcing you to upgrade. Relevant to the topic of integrating HubSpot with MySQL, it has native import and export tools, too. It's also developer-friendly, offering a dedicated developer account.

## What is MySQL?

MySQL is a popular open-source relational database that powers many websites and systems.
It ranks among the top five databases in both the [StackOverflow](https://survey.stackoverflow.co/2024/technology#1-databases) and [DB-Engines](https://db-engines.com/en/ranking) rankings. MySQL is best for structured data and complex queries. Your business can benefit from using MySQL as a data warehouse or as the data store for your operational system. Its large community is an assurance that you'll have the support you need to use MySQL to the fullest.

## What is HubSpot to MySQL Integration, and How Does it Benefit Businesses?

Integrating your HubSpot customer data with a relational database like MySQL links the two systems to drive your business further with new features. If you're integrating with an operational system powered by MySQL, you can streamline some of your operational procedures and save on costs. What can you get out of this integration?

- A unified view of customer data.
- Improved marketing automation.
- Enhanced data management for better decision-making.

The following capabilities are not available at the time of writing, even on HubSpot's highest pricing tier:

- **Deep Customer Segmentation & Lifetime Value (CLV) Analysis:** HubSpot lets you segment customers, but not at a granular level across multiple data sources. MySQL allows complex joins across sales, marketing, support, and finance data for real CLV analysis.
- **Real-Time Alerts & Automation Based on SQL Queries:** HubSpot has workflow automation, but you can't trigger alerts based on complex SQL conditions. MySQL allows real-time alerts (e.g., if lead engagement drops suddenly).
- **Churn Prediction & AI-Based Analysis:** There's no built-in AI churn prediction in HubSpot. MySQL enables machine learning models to analyze churn risk based on behavior and sentiment data.

The above should give you an idea of why integrating the two systems can level up your business with more features for decision-making.
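As a small illustration of the join-based CLV analysis described above, the sketch below totals revenue per customer across two tables. It uses Python's built-in SQLite driver so it runs self-contained (the same SQL works on MySQL); the table layout and data are invented for the example, standing in for tables populated from HubSpot:

```python
import sqlite3

# In-memory stand-in for a MySQL database populated from HubSpot
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 250.0), (3, 2, 75.0);
""")

# Lifetime value per customer: revenue summed across a join
clv = conn.execute("""
    SELECT c.name, COALESCE(SUM(o.amount), 0) AS lifetime_value
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.id
    ORDER BY lifetime_value DESC
""").fetchall()
print(clv)  # [('Acme', 350.0), ('Globex', 75.0)]
```

In practice the same query could join support tickets, marketing spend, and finance data as well, which is exactly the cross-source analysis HubSpot alone can't run.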
Use Cases for HubSpot Integration with MySQL

You might be wondering what else you can do by integrating MySQL with HubSpot. From [application integration](https://skyvia.com/learn/application-integration) to [process automation](https://skyvia.com/learn/what-is-process-automation) to business intelligence, you will find this integration worth the effort for your business. Below are some use cases:

Enhanced Data Synchronization

You might be using HubSpot and another system backed by MySQL to run your day-to-day business. If they hold copies of the same customer information, it's a good idea to sync them and avoid inconsistent copies. Data synchronization makes sure both systems carry the same, up-to-date customer records, and automating it avoids discrepancies and human errors. It can be done with scheduled or real-time syncs, depending on your requirements.

Example: Real-time updates of customer profiles between HubSpot and a legacy, custom-built system.

Advanced Reporting and Analytics

MySQL can host a data warehouse or data marts for customer data, sales, marketing, and more. You can periodically update them from HubSpot using your tool of choice. From there, you can create dashboards and other analytical reports.

Example: Monitor customer loyalty and churn through a Power BI dashboard that connects to a MySQL data warehouse. A daily integration job runs to update the data warehouse from HubSpot.

Streamlined Sales Processes

Sync HubSpot sales data to MySQL using an integration tool. Then, using MySQL queries, you can automate processes that handle custom customer alerts and communication.

Example: Automated lead scoring and follow-up reminders based on synchronized data.

Methods to Connect HubSpot and MySQL

Next, we'll discuss the tools you can use to connect HubSpot to MySQL. There are tools built into HubSpot, and APIs are available for extra flexibility.
Third-party tools, on the other hand, can offer easy but flexible integrations. Below is a comparison table for the different methods we're going to discuss:

| Method Group | Method | Best For | Skill Level | Customizability |
|---|---|---|---|---|
| Native HubSpot | Manual Import/Export | Simple, occasional data transfers | Beginner | Low |
| Native HubSpot | Operations Hub | Syncing with supported cloud apps and services | Beginner | Low |
| Custom Coding | API-Based | Full control over data transformation and flow | Advanced | Very High |
| Third-Party | No-Code (e.g., Skyvia, Talend, Apache Nifi) | Quick implementation and companies without dedicated developers | Intermediate | Medium to High |

Let's discuss each below.

Method 1. HubSpot Manual Import & Export

These are native methods to move data in and out of HubSpot. They help you populate data manually from flat files like CSV. You can also make a quick backup of contacts, deals, and more, or share them with another application for reporting and analysis. You can exchange flat files with MySQL by using an external tool to import and export.

Pros

- Ease of Use: It's plain and simple. You just upload or download a file. No heavy coding is required.
- Quick Data Migration: If you're switching systems or want a quick backup, it works fast.
- Control Over Data: You decide what to import or export. It's like choosing which files to keep or share.
- Flexibility: You can use exported data in any tool that supports CSV, Excel, or similar formats.

Cons

- Manual Process: It's not automated by default. You often have to run imports or exports manually, which can be time-consuming if done frequently. Since HubSpot Export produces flat files, you need an external tool to import these files to MySQL. Importing to HubSpot requires the external tool to export the records to flat files first. You also can't operate on contacts, deals, and other records at once; you have to do it one by one.
- Data Mapping Challenges: The fields in your file need to match HubSpot's fields. If they don't line up, you might see errors or missing info.
- Limited Real-Time Sync: Since you're manually handling the files, it doesn't provide real-time updates between systems.
- Data Size Constraints: Very large datasets might require extra care or chunking to avoid errors.

Best For

HubSpot Import and Export is best for ad hoc import and export requirements. For more details, check out HubSpot's documentation for [Import](https://knowledge.hubspot.com/crm-setup/import-objects) and [Export](https://knowledge.hubspot.com/data-export/export-your-data-from-hubspot).

How to Manually Import Data to HubSpot

Importing records from MySQL to HubSpot using the manual HubSpot Import needs two stages:

1. Export records from MySQL to a flat file like CSV.
2. Import the file to HubSpot. There's no direct import.

Let's say you want to import a list of new contacts. The following are the steps to do it:

1. Open your MySQL tool (e.g., dbForge Studio for MySQL) or an operational system using MySQL that will export to a CSV file. Note the location of the CSV file.
2. Sign in to HubSpot and point your mouse to CRM in the Navigation pane.
3. Click Contacts, then click Add Data from the upper right corner of the page.
4. On the next page, click Import a File.
5. Click Start Import on the next page.
6. The next page will ask you what kind of data is in your file. Check the Contacts box, then click Next.
7. Click Choose a File, locate your CSV file, and click Open.
8. Map the CSV columns to HubSpot contacts columns. Specify Don't import column if you don't want to import one or more columns. The rest should map to the correct HubSpot column or create a new HubSpot property. Click Next to proceed.
9. Check Create a contacts list for this import, and the I agree that all contacts in this import are expecting to hear from my organization… checkbox.
10. Click Finish Import.

Once the import is done, you will see how many rows were imported.
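The first stage above, getting records out of MySQL into a CSV file, can also be scripted instead of done through a GUI tool. A minimal sketch, with hard-coded sample rows standing in for a `cursor.fetchall()` result, and header names chosen so they map cleanly onto HubSpot contact properties in the import wizard:

```python
import csv

# Sample rows standing in for the result of a MySQL query such as
# SELECT email, firstname, lastname FROM customers (hypothetical data).
rows = [
    ("ann@example.com", "Ann", "Smith"),
    ("bob@example.com", "Bob", "Jones"),
]

# Header names that line up with HubSpot contact properties simplify
# the column-mapping step of the import wizard.
with open("contacts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Email", "First Name", "Last Name"])
    writer.writerows(rows)
```

The resulting `contacts.csv` is what you would hand to the Choose a File step above.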
How to Manually Export Data from HubSpot

Exporting HubSpot data to MySQL also needs two stages, like the manual import. First, export the data from HubSpot, then import the flat file into MySQL using a tool. Follow the steps below:

1. Sign in to your HubSpot account and navigate to CRM -> Contacts.
2. Click Export, and in the pop-up window, click Customize.
3. Check All properties on records. This will avoid associated columns outside of Contacts.
4. Click Export. An email will be sent to your HubSpot registered email address.
5. Download the CSV file from your email client and, using a MySQL tool, import the CSV file into your MySQL table. Use a tool with powerful column mappings like [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/).

Method 2. Native HubSpot Tools (Operations Hub Data Sync)

Data Sync from HubSpot's Operations Hub is an out-of-the-box, code-free integration package that allows two-way and historical syncing. HubSpot realizes that teams can use different tools, and to avoid silos, they need to sync. So, this tool emerged as a better option than the manual import and export tools. Data Sync keeps HubSpot and external systems up to date on autopilot.

Pros

- Ease of Use: It's built right into HubSpot. There's no need for heavy coding or installation of another app.
- Time Saver: Unlike HubSpot's Import and Export, this automates the sync process, so you don't spend hours on manual updates.
- Reliability: Being a native feature, it's designed to work smoothly with HubSpot.
- Security: Data Sync operates within HubSpot's strong security standards to keep your data safe.

Cons

- Limited Customization: It might not let you tweak every little detail. For highly specific needs, you might need an API-based solution or a third-party tool (discussed later).
- Plan Restrictions: The full range of features may only be available on paid tiers. The free version might give you only the basics.
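The MySQL side of the last step above can likewise be scripted: read the downloaded export and load it with parameterized INSERTs. A minimal sketch with the HubSpot export inlined as a string; SQLite stands in for MySQL so the example runs self-contained (with `mysql.connector`, the `?` placeholders would become `%s`):

```python
import csv
import io
import sqlite3

# A HubSpot contacts export, inlined here so the example is self-contained.
csv_text = (
    "Email,First Name,Last Name\n"
    "ann@example.com,Ann,Smith\n"
    "bob@example.com,Bob,Jones\n"
)

# SQLite stands in for MySQL in this demo.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (email TEXT, firstname TEXT, lastname TEXT)")

# Parameterized INSERTs avoid quoting/escaping problems in the data.
reader = csv.DictReader(io.StringIO(csv_text))
cur.executemany(
    "INSERT INTO customers (email, firstname, lastname) VALUES (?, ?, ?)",
    [(r["Email"], r["First Name"], r["Last Name"]) for r in reader],
)
conn.commit()

cur.execute("SELECT COUNT(*) FROM customers")
count = cur.fetchone()[0]
print(count)  # 2
```

In practice you would open the downloaded file instead of an inline string, and a GUI tool like dbForge Studio handles the same mapping visually.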
- Complex Setups: While basic sync is simple, setting up advanced scenarios can be a bit tricky.

Best For

HubSpot's Data Sync is best for basic syncing requirements with supported apps and services. For complex scenarios, you may need an efficient third-party ETL tool like [Skyvia](https://skyvia.com/data-integration/integrate-hubspot-mysql) that allows automated syncs to MySQL. For more information, check out HubSpot's [Operations Hub Data Sync documentation](https://knowledge.hubspot.com/integrations/connect-and-use-hubspot-data-sync).

How to Create a Sync in HubSpot

Using HubSpot Data Sync requires an app from the HubSpot Marketplace. Which data sources are supported, including MySQL, depends on the app you select. HubSpot can do one-way or two-way syncs. Follow the steps below:

1. Install the app in HubSpot. You can get the app from the HubSpot Marketplace or from Data Management -> Import & Export.
2. In your HubSpot account, navigate to Data Management > Integrations. Click the app you installed. The app should be capable of connecting to your MySQL database.
3. Click Set up your sync.
4. On the Select the data you want to sync page, choose the object to sync to or from in the app and the [HubSpot object](https://knowledge.hubspot.com/get-started/manage-your-crm-database?hubs_content=knowledge.hubspot.com/integrations/connect-and-use-hubspot-data-sync&hubs_content-cta=object#understand-objects-records-and-properties) you want to sync to or from in HubSpot (e.g., Contacts).
5. Select whether the sync is bi-directional or one-way between the app and HubSpot or vice versa.
6. Map the HubSpot columns to your MySQL database.
7. Set up the sync rules. By default, only records with a valid email address will sync. You have the option to sync even those without an email address. You can also limit the sync if you want to update only the existing contacts in HubSpot or add new ones.
8. Review the rules you configured, then click Save and Sync.
The initial sync will begin.

Method 3. API-based Integration

Integration using HubSpot APIs uses a programming language like Python or TypeScript to call HubSpot APIs and MySQL data access libraries to sync data. This allows full flexibility in syncing HubSpot to MySQL but requires coding skills from your team. You can build tailored solutions that native and third-party tools cannot provide.

Pros

- Flexibility: You can fully customize what data to move and when.
- Automation: Reduce manual work by scheduling regular API calls.
- Scalability: Handle large datasets with a flexible programming solution.

Cons

- Complexity: Requires coding, proper error handling, and best practices. If you don't have someone on your team with these skills, you're better off using native or third-party tools.
- API Rate Limits: You need to design around HubSpot's call limits.
- Maintenance: Code needs updates if APIs change or if there's a new requirement.

Best For

API-based integration is best for requirements tailored to your organization.

How to Import Data to HubSpot from MySQL Using API-based Integration

Below is sample code for importing MySQL customer data into HubSpot. Let's say you have a MySQL table for customer profiles with a structure like this:

```sql
CREATE TABLE customers (
    id INT AUTO_INCREMENT PRIMARY KEY,
    hubspot_id VARCHAR(50) NOT NULL,
    email VARCHAR(255),
    firstname VARCHAR(50),
    lastname VARCHAR(50),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

The above table uses a primary key and a hubspot_id column that connects each table row to a HubSpot customer record.
The following is the Python code for the import process:

```python
import requests
import mysql.connector

# HubSpot private app access token
# (the legacy hapikey API keys have been deprecated by HubSpot)
HUBSPOT_ACCESS_TOKEN = 'your_hubspot_access_token_here'

# MySQL connection details
MYSQL_CONFIG = {
    'user': 'your_mysql_username',
    'password': 'your_mysql_password',
    'host': 'localhost',
    'database': 'your_database'
}

def fetch_customers_from_mysql(cursor):
    cursor.execute("SELECT hubspot_id, email, firstname, lastname FROM customers")
    return cursor.fetchall()

def update_customer_in_hubspot(customer):
    hubspot_id, email, firstname, lastname = customer

    # Access the HubSpot API for Contacts
    url = f"https://api.hubapi.com/crm/v3/objects/contacts/{hubspot_id}"
    payload = {
        "properties": {
            "email": email,
            "firstname": firstname,
            "lastname": lastname
        }
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {HUBSPOT_ACCESS_TOKEN}"
    }
    response = requests.patch(url, json=payload, headers=headers)
    if response.status_code == 200:
        print(f"Updated customer {hubspot_id} successfully.")
    else:
        print(f"Failed to update customer {hubspot_id}. Status code: {response.status_code}")

def main():
    # Connect to MySQL
    cnx = mysql.connector.connect(**MYSQL_CONFIG)
    cursor = cnx.cursor()

    customers = fetch_customers_from_mysql(cursor)

    for customer in customers:
        update_customer_in_hubspot(customer)

    cursor.close()
    cnx.close()

if __name__ == '__main__':
    main()
```

The code above is a simple example of an import from MySQL to HubSpot. It uses the requests and mysql.connector libraries to interact with HubSpot and your MySQL database. You must replace HUBSPOT_ACCESS_TOKEN with your own private app access token and MYSQL_CONFIG with your MySQL database credentials and configuration. The function fetch_customers_from_mysql runs a SELECT statement against your MySQL customer table using the table structure shown earlier. You may need to modify this statement, like adding a WHERE clause to filter rows.
You could also add an ORDER BY clause to sort. Note that the sample above updates existing HubSpot customer records. You need to code further to update the hubspot_id column in MySQL after inserting new customer records into HubSpot. The update_customer_in_hubspot function calls the HubSpot API to update customer records. The main function calls fetch_customers_from_mysql to query your customer records from MySQL, loops through the rows, and calls update_customer_in_hubspot for each row. Note that this does not handle API limits yet.

How to Export Data from HubSpot to MySQL Using API-based Integration

The next code sample will export data from HubSpot to MySQL. It uses the same MySQL table structure as the sample earlier.

```python
import requests
import mysql.connector

# HubSpot private app access token
# (the legacy hapikey API keys have been deprecated by HubSpot)
HUBSPOT_ACCESS_TOKEN = 'your_hubspot_access_token_here'

# MySQL connection details
MYSQL_CONFIG = {
    'user': 'your_mysql_username',
    'password': 'your_mysql_password',
    'host': 'localhost',
    'database': 'your_database'
}

def fetch_customers_from_hubspot():
    # Access the HubSpot API for Contacts
    url = 'https://api.hubapi.com/crm/v3/objects/contacts'
    headers = {"Authorization": f"Bearer {HUBSPOT_ACCESS_TOKEN}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        data = response.json()
        return data.get('results', [])
    else:
        print("Failed to fetch data:", response.status_code)
        return []

def insert_customer_into_mysql(cursor, customer):
    # Extract sample customer fields
    customer_id = customer.get('id')
    properties = customer.get('properties', {})
    email = properties.get('email')
    firstname = properties.get('firstname')
    lastname = properties.get('lastname')
    sql = """
        INSERT INTO customers (hubspot_id, email, firstname, lastname)
        VALUES (%s, %s, %s, %s)
    """
    cursor.execute(sql, (customer_id, email, firstname, lastname))

def main():
    # Fetch customer data from HubSpot
    customers = fetch_customers_from_hubspot()

    # Connect to MySQL
    cnx = mysql.connector.connect(**MYSQL_CONFIG)
    cursor = cnx.cursor()

    for customer in customers:
        insert_customer_into_mysql(cursor, customer)

    cnx.commit()
    cursor.close()
    cnx.close()

if __name__ == '__main__':
    main()
```

The code above uses the same Python libraries. It also uses the same HubSpot access token and MySQL configuration that you need to replace. The fetch_customers_from_hubspot function fetches HubSpot customers and returns them as parsed JSON records. Meanwhile, insert_customer_into_mysql inserts a row into the MySQL table using an INSERT statement. The main function handles the call to fetch_customers_from_hubspot, loops through each customer, and calls insert_customer_into_mysql.

Method 4. No-Code Integration

No-code integration uses third-party tools with a graphical user interface. Some tools have a drag-and-drop builder for your data, so no deep coding is required. It lets you connect HubSpot to MySQL with a few clicks and minimal fuss. It usually follows an ETL process where you extract, transform, and load records to your target. If you don't have a developer team, this will be your go-to solution when the native tools don't fit your requirements. Many tools offer an intuitive interface where you can map HubSpot fields (like email, firstname, lastname) to your MySQL columns with simple drag-and-drop or dropdown lists. They also let you schedule the integration runtime, so you can sit tight and have the integration package do its thing. The next morning, you can check the logs and see whether things went well.

Popular tools include:

- Skyvia: A 100% cloud data integration platform that offers simple to complex integrations using a graphical, drag-and-drop interface. Integrations include HubSpot and MySQL.
- Apache Nifi: An open-source data integration tool that offers a client app with a graphical interface to design data pipelines.
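One gap in the export sample above: it reads only the first page of contacts, while HubSpot's v3 list endpoints page their results through a `paging.next.after` cursor. Below is a minimal pagination sketch with the HTTP call injected as a function, so the loop can be demonstrated offline with canned responses; in real use, `fetch_page` would issue `requests.get` with the `after` query parameter and the Bearer header.

```python
def fetch_all_contacts(fetch_page):
    """Follow HubSpot's paging.next.after cursor until pages run out.

    fetch_page(after) must return the parsed JSON of one
    GET /crm/v3/objects/contacts response: a dict with 'results' and,
    while more pages remain, 'paging': {'next': {'after': ...}}.
    """
    results, after = [], None
    while True:
        data = fetch_page(after)
        results.extend(data.get("results", []))
        after = data.get("paging", {}).get("next", {}).get("after")
        if not after:
            return results

# Offline demo: canned responses stand in for real API calls.
pages = {
    None: {"results": [{"id": "1"}, {"id": "2"}],
           "paging": {"next": {"after": "2"}}},
    "2": {"results": [{"id": "3"}]},
}
contacts = fetch_all_contacts(lambda after: pages[after])
print([c["id"] for c in contacts])  # ['1', '2', '3']
```

Injecting the fetch function also gives you one place to add retry/backoff handling for HubSpot's rate limits (e.g., on a 429 response).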
- Talend: Another cloud integration platform with various commercial offerings for different data management tasks.

Some use case examples include marketing teams needing custom segmentation and targeted campaigns; to do this, the setup should include a MySQL data warehouse. Sales teams, on the other hand, can use it to monitor pipeline performance and forecast sales.

Pros

- Ease of Use: No coding required. Perfect for non-techies.
- Quick Setup: Get your integration running in minutes with guided, visual workflows.
- Reduced Manual Errors: Automated syncs cut down on human mistakes from manual imports/exports.
- Lower Maintenance: Once configured, the system runs on its own without constant developer intervention.

Cons

- Limited Customization: Depending on the tool, advanced, complex workflows may be hard to configure compared to custom API integrations.
- Scalability Concerns: Some no-code tools might struggle with very large datasets or highly intricate data structures.
- Third-Party Dependency: You rely on the no-code platform's uptime and features. If they change or go down, your integration could be affected.
- Potential Additional Costs: Many no-code platforms work on a subscription model. Costs might add up as your needs grow.

Best For

No-code integration is best for quick implementation and companies without dedicated developers.

How to Integrate HubSpot to MySQL Using Skyvia

Skyvia is a no-code data platform that offers data syncs, import, export, backup, and more. You can build your integrations in Skyvia conveniently and easily with its drag-and-drop graphical user interface. It can connect to hundreds of data sources, including HubSpot and MySQL. It also doesn't matter if your MySQL database is hosted in the cloud or on-premises. Skyvia can handle simple imports and exports with flat files like CSV.
Meanwhile, Skyvia has [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) to handle complex integrations. This section will present Skyvia Data Flow to connect your HubSpot to MySQL. This integration task only needs two Skyvia Connections and one Data Flow. Let's begin.

STEP 1: Create a HubSpot Connection

The first connection is for HubSpot. Creating it in Skyvia is easy with the simple steps below:

1. From the upper left of your current Skyvia workspace, click + Create New, then click Connection.
2. From the list of Connectors, type hub or hubspot in the Filter box, then select HubSpot.
3. Configure the connection on the next page by signing in to HubSpot.
4. Click Test Connection to check if Skyvia can connect to your HubSpot. See a successful connection below:
5. Rename the connection from Untitled to HubSpot or any name you like, then click Create Connection (for a new connection) or Save Connection (for changing an existing connection).

Note the name of this HubSpot connection because you're going to need it later.

STEP 2: Create a MySQL Connection

Creating a Skyvia MySQL connection is easy as well. Follow the steps below:

1. Again, from the upper left of your current Skyvia workspace, click + Create New, then click Connection.
2. From the list of Connectors, type my or mysql in the Filter box, then select MySQL.
3. Configure the connection on the next page by entering your MySQL server credentials.
4. Click Test Connection. Below is a successful connection I made:
5. Rename the connection from Untitled to MySQL or any name you like. Then, click Create Connection (for a new connection) or Save Connection (for changing an existing connection).

The configuration above is a cloud MySQL database from FreeDB, so I named it MySQL FreeDB. Note the name of this MySQL connection because you're going to need it later, too.
From this server, I have the same customer table we used earlier, as seen below:

STEP 3: Set Up the Data Flow Integration

A Skyvia Data Flow allows you to link two or more data sources. In this case, our source is HubSpot, and our target is MySQL (export). If we want to import, we can reverse the order. In the following steps, we will export HubSpot contacts to our sample MySQL customer table.

1. From the upper left of your current Skyvia workspace, click + Create New, then click Data Flow.
2. Drag a Source component into the Data Flow canvas and configure it as a HubSpot source. Use the HubSpot connection created earlier.
3. Choose the Execute Query action.
4. Click Open Query Builder. Double-click the Contacts table and drag the Id, Email, Last Name, First Name, and Last Modified Date fields to the Result Fields box. Click Apply.
5. Drag a Target component onto the canvas and configure it as a MySQL target.
6. Connect the Source to the Target by dragging the small circle under the Source component onto the Target. An arrow should now point from the Source to the Target.
7. Use the MySQL FreeDB connection created earlier for the Connection box.
8. Choose an action to perform. For this example, we will use Insert.
9. Select a MySQL table. In this case, the customers table.
10. Scroll down further in the configuration pane until you see the Parameters section. Click the pencil icon to start mapping the source to target columns.
11. Click the Auto Mapping button to map the columns with the same names. Otherwise, map the columns manually: click the MySQL column, then, in the Value box, type the corresponding HubSpot column. Optionally, you can do minor transformations by using a function like upper(). Do that until all columns are mapped. Click Apply.
12. Rename your Data Flow from Untitled to something descriptive of your purpose. For example, Hubspot-to-MySQL.
13. Click Create to save your Data Flow.

STEP 4: Create a Schedule

You can run your Data Flow manually.
But if you want to run it on autopilot, you need to create a schedule. Follow the steps below to create one:

1. In your new Data Flow, click Schedule in the upper left corner of the Skyvia workspace.
2. Set your preferred schedule, then save it. Below is a daily schedule that will run at midnight:

STEP 5: Run and Monitor the Integration

Let's try to run it manually. Click Run from the upper right corner of the Skyvia workspace. Then, click Monitor in the upper center of the page and wait until it's done. You will see results similar to the following once it's successful:

But let's check by comparing the records from HubSpot and MySQL. Here are 201 records in HubSpot, as seen below:

Now, using dbForge Studio for MySQL, I ran a SELECT statement and confirmed the 201 records, as seen below:

The first visible records are also the same as the ones in HubSpot. If you need to integrate another HubSpot object, like Companies, into MySQL, you need another Data Flow. Then, if you want them to run in sequence or in parallel, you need to include each of the Data Flows in a Skyvia Control Flow. The above integration is one-way (from HubSpot to MySQL). If you need the reverse, create another Data Flow and simply switch the source and target. There's no code involved in the Data Flow package, just a few clicks and a little typing. That's how simple it is when using a no-code integration.

Conclusion

To recap, we discussed what HubSpot and MySQL are. Integrating the two extends reporting and analysis for data-driven decisions. You saw how it is done using HubSpot's own import and export tools and its Operations Hub Data Sync. We also utilized HubSpot APIs using Python code and used a no-code tool like Skyvia. You have several options to choose from. Study them carefully and see which of the methods is perfect for your requirements.

FAQ for HubSpot to MySQL

What are the main benefits of integrating HubSpot with MySQL?
It provides a unified data view, better reporting, and streamlined sales processes.

Why should I choose HubSpot MySQL integration over manual data management?

Integration automates data flow, reduces errors, and saves time.

What are the key features of HubSpot MySQL integration platforms?

Real-time data syncing, customizable reporting, and automation capabilities.

What are the potential challenges of integrating HubSpot with MySQL?

Data mapping complexities, maintenance of integrations, and potential technical skill requirements.

Can I integrate HubSpot with MySQL if I have limited technical expertise?

Yes, using no-code integration tools like [ETL platforms](https://skyvia.com/blog/etl-tools/) can simplify the process.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/)

Software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
How to Connect Hubspot to SQL Server: A Comprehensive Guide

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/), March 25, 2025

The growing number of tools available today makes it harder for businesses to put the disintegrated data pieces together into a coherent picture. Establishing a single source of truth is vital for teams to get advanced analytics and accurate reporting and to break down data silos. This is precisely what HubSpot SQL Server integration does. This guide will take you through various integration methods, highlighting their pros and cons to help you choose the best approach for your business needs and technical skills.

Table of Contents

- What is Hubspot?
- What is SQL Server?
- Why Do You Need to Connect HubSpot to SQL Server?
- SQL Server for HubSpot Integration Overview
- Method 1. Third-Party Cloud Integration Tools
- Importing Data to Hubspot
- Data Replication to SQL Server
- Method 2. ODBC Drivers
- Method 3. Custom API Integrations
- Conclusion

What is Hubspot?

As one of the leading Customer Relationship Management (CRM) platforms, [HubSpot](https://www.hubspot.com/) lives up to its name: it is a hub for core business activities in marketing, sales, and customer service. The platform includes a suite of six premium products, commonly referred to as "hubs," spanning six main directions:

- Marketing
- Sales
- Service
- Content
- Operations
- Commerce

The hubs are interconnected through HubSpot's Smart CRM, an AI-powered system that unifies customer data across teams.

The Benefits of Hubspot

HubSpot's prominence in the CRM and marketing automation landscape leaves no doubt. [According to Statista](https://www.statista.com/statistics/1134686/marketing-automation-solutions-market-share-world/), the platform commanded nearly 35% of the global marketing automation software industry in July 2024. [HubSpot officially declares](https://www.hubspot.com/newsroom) that it serves 238,000 customers across more than 135 countries, reflecting the trust it has earned among businesses worldwide. One of the key reasons for HubSpot's popularity is its affordability: essential tools are offered for free, with payment required only for advanced features. But behind such success must be something more than just cost-effectiveness. And that is the undeniable benefits the platform brings to businesses:

- Marketing automation. HubSpot provides mechanisms to design strategies based on the target audience's interests and customize marketing approaches.
- Customer insights & behavior tracking. The platform collects data on customer engagement and interactions with the brand. This helps businesses build detailed buyer personas, track the customer journey, and measure overall performance.
- Scalability.
HubSpot is suitable for all kinds of businesses, from small startups to large enterprises.
- Comprehensive reporting & analytics. The platform provides built-in reporting tools to track the performance of lead generation, sales campaigns, and email marketing.

What is SQL Server?

[SQL Server](https://www.microsoft.com/en-us/sql-server) is a relational database management system (RDBMS) developed by Microsoft. It is designed to store, retrieve, and manage data efficiently. The support for various data types, combined with a robust set of features and built-in security mechanisms, makes it a bedrock for a variety of applications.

The Benefits of SQL Server

- High performance & scalability. SQL Server is optimized for speed, offering fast data retrieval and processing. It scales from small applications to enterprise-level solutions.
- Comprehensive data handling. The support for structured, semi-structured, and unstructured data within a single platform provides flexibility in data management and integration processes.
- Advanced storage options. SQL Server offers powerful [data mining](https://skyvia.com/blog/data-mining-tools/), disk partitioning, and [data management tools](https://skyvia.com/blog/best-data-management-tools/) to efficiently store and organize vast amounts of information.
- Enhanced security. The platform employs enterprise-grade security mechanisms to protect database files, such as transparent data encryption (TDE) and role-based access control (RBAC).
- Backup & recovery. SQL Server guards against data corruption with automated backups, point-in-time recovery, and full database restoration.

Why Do You Need to Connect HubSpot to SQL Server?

Given the strengths of both HubSpot and SQL Server, it's clear that both excel in their respective domains.
Integrating these powerful tools combines their strong points, leading to enhanced data management and more informed decision-making, including:

- 360-degree vision across multiple channels. Consolidating data within a single SQL Server database provides an inclusive view of customer interactions, sales activities, and marketing campaigns.
- Scalability. SQL Server easily accommodates increasing data volumes, allowing businesses to scale their data infrastructure as they grow.
- Enhanced analytics and deeper insights. Leveraging integrated HubSpot data allows for more sophisticated analyses of customer behavior and business performance.

SQL Server for HubSpot Integration Overview

There are numerous ways to integrate HubSpot with SQL Server. Traditional SQL-based methods, such as the Import Wizard or BULK INSERT, remain trustworthy and work for one-time migrations or manual data transfers. However, they lack the flexibility, automation, and cloud compatibility required in modern integrated ecosystems. The methods explored in this article offer a versatile and scalable approach, enabling businesses to:

- Conduct continuous, bidirectional synchronization between HubSpot and SQL Server.
- Choose an option that fits their technical capabilities.
- Integrate HubSpot with cloud-based SaaS tools.

| Method | Best For | Complexity | Customizability |
| --- | --- | --- | --- |
| Third-party integration tools | Businesses needing quick, no-code integration with automation | Low to medium | Medium |
| ODBC-based integration | SQL Server users & DB admins needing direct access to HubSpot data | Medium | Low to medium |
| HubSpot REST API integration | Companies requiring full control over data sync and real-time updates | High | High |

Method 1. Third-Party Cloud Integration Tools

This method involves integrating data through cloud platforms such as Skyvia, Hevo Data, Fivetran, Stitch, and Coefficient.
While their functionalities may vary, these tools share common core principles: pre-built connectors, a pipeline designer, automated workflows, a no-code approach, and continuous synchronization between data sources. We'll demonstrate the advantages of this method using the Skyvia platform.

Best For

- Users without technical expertise.
- Businesses seeking a quick and easy setup.
- Teams that require automated data syncing.
- Businesses managing multiple data sources.

Importing Data to HubSpot

When evaluating key sales metrics, it is important to have a holistic view across multiple sources. For example, active sales are recorded in HubSpot CRM, while passive sales from an online shop are stored in SQL Server. Analyzing them separately can lead to a fragmented picture. That's why it's crucial to integrate all sales information into a single source of truth, with HubSpot CRM as the central system. In this scenario, we'll transfer passive sales data from SQL Server to HubSpot using [Skyvia Import](https://skyvia.com/data-integration/import), a wizard-based ETL tool for bidirectional, no-code integrations between supported sources.

Step 1. Create Connections

- [Sign in](https://app.skyvia.com/) to Skyvia, or, if you don't have an account yet, create one for free.
- Click +Create New, select Connection, and choose SQL Server.

Note: Skyvia supports two connection methods for SQL Server: direct and with an agent. Use a direct connection if your SQL database is accessible via the internet. If it is on a local computer or network, you'll need to install the [Skyvia Agent](https://skyvia.com/agent) application to make a secure connection.

- Provide SQL Server credentials: server address, user ID, password, and database name.
- Click Create Connection.

To connect Skyvia to HubSpot, perform the following steps:

- In Skyvia, go to Connections and click +Create New.
- On the Select Connector page, choose HubSpot.
- On the connection configuration page, click Sign in with Hubspot.
- Log in with your HubSpot credentials.

Note: The connection is enabled via OAuth authentication, meaning that you don't have to share your HubSpot credentials with Skyvia. Instead, Skyvia generates a secure authentication token, which is bound to the connection.

- Click Create Connection.

Step 2. Create an Import Integration

Once both connections are ready, let's create an integration to import sales information from SQL Server to HubSpot.

- In the top menu, click +Create New and select Import.
- Set the corresponding source type. For the SQL Server to HubSpot import, choose Data Source database or cloud app.
- Select the SQL Server connection as Source and the HubSpot connection as Target.
- Click Add new to create an import task. This will open the task editor window.
- On the Source Definition page, select a table to import data from (dbo.Deals for this example).
- On the Target Definition page, select an object to import your data to and one of the available operations. Since we're importing information about deals for the first time, we'll choose the Insert operation.
- Click Next step and proceed to mapping.
- On the Mapping Definition page, map target to source columns. Make sure to map all required columns; otherwise, the integration will fail.
- Click Save task.

Step 3. Run the Integration

- On the import configuration page, click Create to set up an integration.
- Click Run to start transferring deal information from SQL Server to HubSpot.
- You can schedule the integration to run at a specific time. To do this, click Schedule and configure the execution timing.

Data Replication to SQL Server

Such integrations are commonly conducted to ensure data consistency for strategic planning and forecasting. Large volumes of data are gathered from multiple sources and then processed with ML and AI algorithms to generate predictions.
SQL Server\u2019s extensive integration capabilities make it a popular choice for data consolidation with a further connection to ML/AI tools. For this scenario, we\u2019ll use Skyvia to replicate HubSpot data to SQL Server. This approach is convenient and straightforward. It requires no manual mapping plus offers [incremental updates](https://docs.skyvia.com/data-integration/replication/incremental-replication-and-schema-updates.html) to keep the data continuously up to date. Follow the steps below to configure the replication. Step 1. Set Up Configuration Parameters Note : Before you proceed with configuring replication, ensure you have valid connections for SQL Server and HubSpot. If not, perform the same actions as outlined in Step 1 of the first scenario . \u0421lick Create New in the top menu and select Replication . Select HubSpot as a Source and SQL Server as a Target connection. Under Options, select the necessary checkboxes: Note : For this replication, the Incremental Updates and Create Tables options are enabled by default. Incremental Updates allow tracking metadata changes and updating data without reloading the entire dataset. It also includes Update Schema as a sub-option. Create Tables instructs Skyvia to automatically create source tables in the target database during the initial replication. Step 2. Select Data for Replication Choose objects for replication from the table. You can select either specific objects or all displayed objects: Within the selected objects, specify the fields for replication: click the Edit icon next to an object and choose the fields you want to replicate. Note : By default, Skyvia replicates all fields of the selected objects. To exclude specific fields, simply uncheck their boxes. Select the replication mode as Standard . Note : Selecting Incremental Updates on the main configuration page lets you set a replication mode for each object. 
Choosing Standard mode automatically applies default replication behavior to each object. Click Save task.

Step 3. Run the Replication

- On the main configuration page, click Create to set up an integration.
- Click Run to start replicating data from HubSpot to SQL Server. You can track its progress on the Monitor or Logs tabs.
- Click Schedule to configure automatic execution.

Once the replication is completed, a copy of your HubSpot data will be available in SQL Server.

Advantages of Using Skyvia

Integrating SQL Server and HubSpot with Skyvia is fast and straightforward. But beyond ease of use, there is more to it:

- Multifunctionality: Skyvia is capable of a wide range of data-related tasks, including synchronization, replication, cloud backup, and process automation.
- User-friendly interface: The platform's intuitive design makes it easy to work with, regardless of the user's technical expertise.
- Flexible pricing: There is a [free plan](https://skyvia.com/pricing) to get started, along with scalable paid subscriptions.
- Secure data sharing: With Skyvia, which is hosted in the Microsoft Azure cloud, you can securely share information in-house or with external partners, without the need to move or duplicate data.

Considerations of Skyvia

While Skyvia is a powerful and user-friendly integration tool, there are a few factors to consider before choosing it as your solution:

- Cost: The free plan has limited functionality and is suitable only for small data volumes. Skyvia does offer affordable pricing, but advanced features require higher-tier paid plans.
- Limited customization: With over 200 pre-built connectors, Skyvia supports a wide range of applications. However, for other sources you will need to set up a connection manually by defining source API endpoints and metadata in JSON format via the REST connection editor.

Method 2. ODBC Drivers

Open Database Connectivity (ODBC) is a standardized database API.
Using SQL as a unified language, ODBC is a DBMS- and language-independent interface. This way, it provides a uniform method for applications to interact with a variety of databases, regardless of each system's specifics. Each DBMS requires a dedicated ODBC driver: you can think of it as a translator that interprets an application's ODBC calls into commands understandable by that particular database system. For this example, we'll demonstrate the method using the Devart [ODBC Driver for HubSpot](https://www.devart.com/odbc/hubspot/).

Note: For this scenario, you will also need SQL Server Management Studio (SSMS) and .NET Framework 4.5 or higher.

Best For

- Database administrators & data analysts.
- Organizations prioritizing on-premise control.
- SQL Server users looking for a direct connection.
- Companies focused on read-only data analysis.

Implementation Steps

Prerequisites

Before initiating the integration, make sure the following prerequisites are met:

- Bitness compatibility: The Devart ODBC driver, SQL Server, and SSMS must all share the same architecture, either 32-bit or 64-bit.
- Installation requirements: Both the Devart ODBC Driver for HubSpot and SQL Server should be installed on the same machine. Additionally, ensure that .NET Framework 4.5 or higher is installed.

Step 1. Set Up the System

- Download and install the Devart ODBC Driver.
- Set up a system DSN (Data Source Name): open the ODBC Data Source Administrator on your system, navigate to the System DSN tab, click Add, select Devart ODBC Driver for HubSpot from the list, and click Finish.
- Configure the connection settings by providing the necessary authentication details and specifying any required connection parameters.

Step 2. Configure a Linked Server in SSMS

- Launch SSMS and connect to your SQL Server instance.
- In the Object Explorer pane, expand Server Objects, right-click Linked Servers, and select New Linked Server.
In the New Linked Server dialog, populate the fields:

- Linked server: Enter a name for the linked server (e.g., "HUBSPOT").
- Server type: Select Other data source.
- Provider: Choose Microsoft OLE DB Provider for ODBC Drivers.
- Data source: Enter the name of the DSN configured in the previous step (e.g., Devart ODBC Driver for HubSpot).

Click OK to create the linked server.

Step 3. Enable the Allow in-process Option for the MSDASQL Provider

- In SSMS, expand Server Objects > Linked Servers > Providers.
- Locate MSDASQL, right-click it, and select Properties.
- In the Provider Options window, check the Allow in-process option and click OK.

Step 4. Query HubSpot Data

In SSMS, open a new query window and execute SQL queries against the linked server, either with four-part names (for example, `SELECT * FROM HUBSPOT...Deal`) or through the `OPENQUERY` function; the exact table names depend on the schema exposed by the driver.

Pros

- Direct SQL Server integration: Enables SQL Server to access HubSpot data in real time without third-party middleware.
- Standardized interface: Using ODBC-compliant methods makes it a widely accepted and stable integration approach.
- Supports SQL queries: Enables querying HubSpot directly with SQL syntax.
- No coding required: Unlike API-based integrations, the ODBC method doesn't need programming expertise.

Cons

- Performance considerations: Querying large HubSpot datasets directly from SQL Server may impact performance due to API response times.
- Limited write capabilities: ODBC drivers primarily support read operations; writing data back to HubSpot may require additional configuration or API usage.
- Manual setup: Unlike cloud-based tools, the ODBC method requires installation and further manual configuration.
- Dependency on external software: Employing third-party ODBC drivers may involve licensing costs.
- Potential compatibility issues: The driver, SQL Server version, and SSMS must match in bitness, which can sometimes lead to setup challenges.

Method 3. Custom API Integrations

As a highly compatible platform, HubSpot offers native integration capabilities with external systems like SQL Server through its [RESTful APIs](https://developers.hubspot.com/docs/reference/api). Considering the variety of CRM functionalities, these APIs are organized into several categories, reflecting the breadth of the platform:

- CRM API: Manages data related to contacts, companies, deals, and more.
- Marketing API: Handles marketing assets and campaigns.
- Sales API: Focuses on sales processes and tools.
- Service API: Relates to customer service interactions.
- Settings API: Provides information about account settings and daily API usage limits.

Best For

- Large-scale data transfers.
- Businesses with complex or unique integration needs.
- Tech-savvy teams with in-house developers.
- Companies seeking full control over their integrations.

Implementation Steps

- Define integration requirements: identify the data entities (e.g., contacts, deals) that need synchronization between HubSpot and SQL Server.
- Determine the direction of data flow: from HubSpot to SQL Server, use HubSpot APIs to extract data and insert it into SQL Server; from SQL Server to HubSpot, use HubSpot APIs to push data from SQL Server into HubSpot.
- Set up authentication: obtain the necessary credentials (e.g., API keys, OAuth tokens) from HubSpot to authenticate API requests.
- Develop data retrieval and transformation logic using programming languages or integration platforms: call HubSpot APIs to fetch the required data, then transform the data into a format compatible with SQL Server schemas.
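The retrieve-and-transform step above can be sketched in a few lines of Python. This is a minimal illustration, not HubSpot's official client: it assumes a HubSpot private-app token for the public CRM v3 contacts endpoint, and the SQL Server table name `dbo.HubSpotContacts` is hypothetical.

```python
def contacts_to_rows(page):
    """Flatten one page of a HubSpot CRM v3 'list contacts' response
    into (id, email, firstname, lastname) tuples, ready to feed a
    parameterized SQL Server INSERT."""
    rows = []
    for obj in page.get("results", []):
        props = obj.get("properties", {})
        rows.append((obj["id"], props.get("email"),
                     props.get("firstname"), props.get("lastname")))
    return rows

# Fetching a page would look roughly like this (requires the
# third-party 'requests' package and a private-app token; not run here):
#
#   resp = requests.get(
#       "https://api.hubapi.com/crm/v3/objects/contacts",
#       headers={"Authorization": f"Bearer {TOKEN}"},
#       params={"limit": 100, "properties": "email,firstname,lastname"},
#   )
#   rows = contacts_to_rows(resp.json())
#
# Loading could then use pyodbc, e.g.:
#   cursor.executemany(
#       "INSERT INTO dbo.HubSpotContacts (Id, Email, FirstName, LastName) "
#       "VALUES (?, ?, ?, ?)", rows)   # table name is hypothetical
```

For larger objects you would repeat the fetch while following the `paging.next.after` cursor that the v3 API returns, passing it back as the `after` query parameter.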
- Implement data loading into SQL Server: utilize SQL Server's data import capabilities to insert or update records based on the transformed data.
- Schedule regular data synchronization: if needed, set up automated tasks or services to perform data synchronization at defined intervals.

Pros

- Highly customized integrations: Full flexibility to tailor data sync based on unique business needs.
- Real-time synchronization: Enables instant data updates, ensuring information remains up to date across both platforms.
- Full control over integrations: Allows businesses to define data flow, security protocols, and performance optimizations.
- Automation: Supports workflow automation by programmatically triggering actions based on data changes.

Cons

- Requires programming expertise: Development skills are necessary to build and maintain API-based integrations.
- Ongoing maintenance & monitoring: Needs regular updates to adapt to API changes, business logic updates, and error handling.
- API rate limits & quotas: HubSpot's API call limits may restrict large-scale or frequent data transfers.

Conclusion

In this article, we explored what HubSpot and SQL Server are and discussed how integrating these platforms can facilitate data-driven decisions. We examined three practical integration methods: cloud integration tools, ODBC drivers, and HubSpot's APIs. Each is flexible enough to cater to various business scenarios and technical skill levels, whether you're a data analyst, marketing manager, or developer. Whichever method you choose, aligning HubSpot with SQL Server is a crucial step toward getting the most out of your data. HubSpot's versatility makes it a popular choice for integrating various tools, including payment processing systems, CRMs, e-commerce platforms, and sales solutions.
Check out similar integration cases on our blog:

- [How to Connect HubSpot to MySQL In Various Ways](https://skyvia.com/blog/how-to-connect-hubspot-to-mysql/)
- [How to integrate HubSpot and Dynamics 365](https://skyvia.com/blog/hubspot-dynamics-integration/)
- [HubSpot and Stripe Integration: Improve Your Customer Interactions](https://skyvia.com/blog/hubspot-and-stripe-integration/)

FAQ for HubSpot to SQL Server

What are the main benefits of integrating HubSpot with SQL Server?

Integration enables real-time data syncing, advanced analytics, centralized reporting, and automation. It helps businesses create a single source of truth, improve decision-making, and align workflows between sales, marketing, and customer service teams.

Can I synchronize data between HubSpot and SQL Server in both directions?

Yes, bidirectional synchronization is possible using custom API integrations or third-party cloud tools. ODBC-based methods primarily support read-only access, so writing back to HubSpot may require API configurations or additional integration layers.

How can I ensure data consistency during the integration?

Use incremental updates, schema mapping, and automated data validation to prevent duplicates and mismatches. Custom API integrations offer real-time syncing, while third-party tools provide pre-built data integrity checks for maintaining accuracy.

What are some common use cases for HubSpot and SQL Server integration?

- Sales & marketing alignment: Sync lead and deal data for better insights.
- Customer support: Centralize interactions for improved service.
- Advanced reporting: Combine CRM & SQL data for in-depth analysis.
- Predictive analytics: Use AI/ML models on integrated data for forecasting.
[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.

[Data Integration](https://skyvia.com/blog/category/data-integration/) How to Connect Salesforce to FTP Server In 4 Ways By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) March 18, 2025
Integrating Salesforce with legacy systems that don't support SFTP can be a pain. But if such a system accepts files via Salesforce FTP integration and the exchanges don't happen over the internet or a public network, security may not be a big deal. FTP provides an accessible way to move data between Salesforce and other systems. While SFTP is the gold standard for data exchanges from and to Salesforce, FTP has its place where security is not so much of a concern and other means exist to protect the transfer. This article will discuss 4 ways to use FTP in Salesforce integration. Rest assured that the basics are not so challenging, and you have more than one way to choose from.

Table of Contents

- What is FTP, and How Does it Compare to SFTP?
- What is Salesforce FTP Integration?
- Use Cases for FTP Integration with Salesforce
- Advantages of Connecting an FTP Server to Salesforce
- Considerations Before Data Integration
- Overview of the Methods to Connect FTP with Salesforce
- Native Methods to Connect FTP and Salesforce
- Custom Code Integration FTP with Salesforce
- Integrating FTP and Salesforce Using No Coding Approach
- Conclusion

Let's dive in.

What is FTP, and How Does it Compare to SFTP?

FTP stands for [File Transfer Protocol](https://en.wikipedia.org/wiki/File_Transfer_Protocol). It allows file transfers from your computer to an FTP server and vice versa. It is built on a client-server model, and data transmission is in clear text. So, if someone is skilled enough to snoop on your network, files like CSV are readable. FTP can be secured using the [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security) or Secure Sockets Layer (SSL) protocols; the result is FTPS, or [File Transfer Protocol Secure](https://en.wikipedia.org/wiki/FTPS).
SFTP stands for Secure File Transfer Protocol. Since it's secured, file transmissions are encrypted. Encrypting data transmission adds overhead to the file transfer process. So, if you're transmitting non-confidential files over internal networks or VPNs, FTP is a faster option. Below is a table of differences between FTP, SFTP, and FTPS:

| Feature | FTP (File Transfer Protocol) | SFTP (SSH File Transfer Protocol) | FTPS (FTP Secure) |
| --- | --- | --- | --- |
| Security | No encryption (data sent in plain text) | Encrypted via SSH | Encrypted via SSL/TLS |
| Authentication | Username/password (sent in plain text) | Username/password + SSH key | Username/password + SSL/TLS certificate |
| Port | Uses ports 21 (command) and 20 (data) | Uses port 22 | Uses port 21 (command) and random ports (data) |
| Firewall Compatibility | Harder to configure (uses multiple ports) | Easy (single port 22) | Complex (requires opening multiple ports) |
| Performance | Fast but insecure | Slightly slower due to encryption | Slower than FTP but secure |
| Use Case | Internal network transfers (not secure for internet use) | Secure file transfers over the internet | Secure transfers where SSL/TLS is required (e.g., banking, compliance needs) |

If secured file transfer is a must, consult our guide for [Salesforce SFTP integration](https://skyvia.com/blog/how-to-connect-salesforce-to-sftp-server/).

What is Salesforce FTP Integration?

Salesforce FTP integration is the process of linking Salesforce with an FTP server to send and receive files. It works by uploading or downloading files like CSV or JSON to an FTP server to update Salesforce records or store the data externally. This way, Salesforce and other systems can share and exchange data.
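To make the file-exchange idea concrete, here is a minimal Python sketch of the "export then upload" half of such an integration. The serialization helper is generic; the FTP host, credentials, and remote path in the comment are placeholders, not a real endpoint.

```python
import csv
import io

def records_to_csv(records, columns):
    """Serialize Salesforce-style records (a list of dicts) into CSV
    text with a fixed column order, ready to be sent to an FTP server.
    Keys not listed in `columns` are ignored."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Uploading the result with the standard library's ftplib; the host,
# credentials, and remote path below are placeholders (not executed here):
#
#   from ftplib import FTP
#   data = records_to_csv(records, ["Name", "Email"]).encode("utf-8")
#   with FTP("ftp.example.com") as ftp:
#       ftp.login(user="demo", passwd="secret")
#       ftp.storbinary("STOR /import/contacts.csv", io.BytesIO(data))
```

The reverse direction works the same way: download the file with `ftp.retrbinary` and parse it with `csv.DictReader` before pushing the rows into Salesforce.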
In the absence of SFTP, FTP can be a good choice when transferring files within:

- Local Area Networks (LANs)
- Wide Area Networks (WANs)
- Virtual Private Networks (VPNs)
- Virtual Private Clouds (VPCs)

Use Cases for FTP Integration with Salesforce

To understand Salesforce FTP integration, it helps to know when file-based exchanges come into play. Below are the common use cases.

Data Recovery and Backup

You can prevent data loss by storing historical data as backup files on an FTP server. This can be scheduled daily or weekly to archive records for compliance. But remember that to pass compliance, the FTP server should be secured through a VPN or other means.

Example: Create copies of sales transactions in CSV files as external backups.

Management of Documents

Salesforce can manage documents that relate to Salesforce objects. It can generate PDFs for reports and CSVs for report data. You can back up these documents on an FTP server.

Example: Syncing customer-uploaded documents between Salesforce and FTP.

Migration of Data from Legacy Systems

Legacy systems may only integrate through flat files like CSVs on an FTP server; they lack APIs and other modern integration capabilities. When moving from a legacy system to Salesforce, you can dump the records into CSV files stored on an FTP server. Then, Salesforce can migrate these files into its internal storage.

Example: You can migrate a legacy customer and sales system to Salesforce by transferring data in bulk using FTP file imports.

Advantages of Connecting an FTP Server to Salesforce

Using an FTP server as the staging area for data imports and exports benefits your business processes. Below are the three most common advantages:

Improved Data Administration

Having centralized storage for Salesforce-generated reports, customer records, and logs can simplify data management and administration.
Managing security, backups, imports, and exports all happens at one central point: the FTP server and its designated folder(s). Securing this location is easier for the administrator.

Improved Data Security

Although FTP lacks encryption, your company can implement security measures such as VPNs or FTPS to safeguard your data from unauthorized access. This allows you to control who can access the files, when they can access them, and the specific folders where the files are stored.

Streamlined Workflows

Automation of data imports/exports reduces manual work and ensures timely updates. Human errors are also avoided, and data consistency is ensured.

Considerations Before Data Integration

Salesforce FTP integration is not a walk in the park, though it's very doable. To make sure your integration will not go haywire, keep the following points in mind.

Data Compatibility

Salesforce data is structured. Incoming data in CSV, JSON, or XML should match the data types and sizes of Salesforce object columns. Use proper column mapping to tell Salesforce that a CSV column goes to the correct object and column.

Security Measures

Since FTP sends data over the network in clear text, it's advisable to integrate inside a private network or use FTPS. Then, limit the IP addresses that can get through the private network using IP whitelisting. Deny traffic coming from unused ports, and enforce other security criteria using an Access Control List (ACL).

Compliance Requirements

If handling personal data, consider GDPR, HIPAA, or other regulations that might require SFTP over FTP. Remember that FTP has no encryption, so don't insist on FTP at the risk of expensive penalties and lawsuits.

Performance Impact

Large file transfers can slow down systems, especially when done during office hours. So, schedule the imports/exports during off-peak hours when there is less operational load on Salesforce and the FTP server.
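The data compatibility checks described above can be automated before any transfer happens. Below is a minimal Python sketch that validates a CSV against an expected target schema before loading; the `EXPECTED` column names and kinds are hypothetical, for illustration only.

```python
import csv
import io

# Hypothetical expected target schema: column name -> "text" or "number".
EXPECTED = {"Email": "text", "AnnualRevenue": "number"}

def validate_csv(csv_text, expected=EXPECTED):
    """Return a list of problems that would make a load fail:
    missing columns, empty required values, non-numeric numbers."""
    reader = csv.DictReader(io.StringIO(csv_text))
    found = reader.fieldnames or []
    problems = ["missing column: " + c for c in expected if c not in found]
    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        for col, kind in expected.items():
            value = row.get(col)
            if value in (None, ""):
                problems.append("line %d: empty %s" % (line_no, col))
            elif kind == "number":
                try:
                    float(value)
                except ValueError:
                    problems.append("line %d: %s is not numeric" % (line_no, col))
    return problems
```

Running such a check before the upload (or before the import into Salesforce) turns silent load failures into an actionable error list.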
Data Quality

Some data columns in the target system, whether Salesforce or otherwise, require valid data. Make sure the source column is not empty and that the value fits the data type and size of the target column. In other words, the data should be as complete as the target system expects. Avoid duplicates in the source by making sure the source query returns unique rows.

Overview of the Methods to Connect FTP with Salesforce

We will classify the methods as native, custom coding, or no coding. There are pros and cons to each method. Here's a comparison table based on the groups, methods, and key factors for Salesforce FTP integration:

| Group | Method | Best For | Skill Level Required | Customizability |
| --- | --- | --- | --- | --- |
| Native | Salesforce Data Loader + FTP tools/workarounds | Simple CSV uploads/downloads | Low (basic configuration) | Low |
| Custom-Coding | Apex Development | Full control over file processing and automation | High (requires coding skills) | High |
| Custom-Coding | API-Based Integration | Advanced automation with external services | High (requires API and middleware knowledge) | High |
| No-Coding | No-coding tools (e.g., Skyvia, Hevo, Talend) | Business users or teams needing quick setup | Low (UI-based setup) | Medium to High (depends on platform) |

Let's examine each in the following sections.

Native Methods to Connect FTP and Salesforce

Native methods use tools made by Salesforce. We will examine the Salesforce Data Loader for native Salesforce integration.

Method 1: Salesforce Data Loader for FTP Integration

Salesforce Data Loader is a free desktop app for bulk imports and exports. It supports large CSV files with up to 5 million records in Bulk API 1.0 and 150 million in Bulk API 2.0. You can download and install this app on Windows or Mac. FTP isn't supported in Salesforce Data Loader, but you will learn a simple workaround later in this section.
Get your installer from the Salesforce Setup page in Integrations -> Data Loader -> Downloads, or use [this link](https://developer.salesforce.com/tools/data-loader). This will take you to the Salesforce Downloads page. The installer is wizard-based, so just follow the simple instructions to install the app. Then, run the app. On the start screen, you can choose between insert, update, upsert, delete, undelete, and export. This tool is best for large-scale, periodic data transfers from and to Salesforce.

Install the Workaround to Map the FTP Folder as a Local Drive

Before importing or exporting data using Salesforce Data Loader, you need to install this workaround. Note that this will work on Windows only. You can use the open-source [SSHFS-Win](https://github.com/winfsp/sshfs-win), [Raidrive](https://www.raidrive.com/), or a similar tool to map the FTP folder as a local drive. As an example, you can map the FTP server ftp.dlptest.com to drive Z using Raidrive. After a successful mapping, you can save exported CSV files to drive Z, just as you would to the local drive C, and use File Explorer in Windows to see the files within drive Z. In this example, drive Z is labeled as FTP because it's an FTP path, and it has an import folder with a customer.csv file inside it. At this point, we are ready to integrate Salesforce with FTP. Let's start by importing CSV data to Salesforce.

How to Import Data from FTP to Salesforce Using Data Loader

Using Salesforce Data Loader with a mapped drive for the FTP folder is nothing extraordinary. The process is the same as working on a local drive C. Check out the steps below:

- Run Salesforce Data Loader.
- Click Upsert. Log in to Salesforce if this is your first time.
- Select the Salesforce object you want the data to be imported to (e.g., Contact, Account).
- Choose the CSV file from the mapped FTP folder (Z:\import\customer.csv).
5. Click **Next**; a small window will appear showing the number of rows in the CSV, API usage, etc. Click **OK**.
6. Choose the column to use for matching existing rows, then click **Next**.
7. (Optional) Relate the data to other Salesforce objects using a lookup key.
8. Click **Next** and map the CSV columns to Salesforce columns. See below.
9. Choose a results folder where Salesforce will dump the processed rows in CSV format. Click **Finish**.
10. You can choose between **View successes**, **View errors**, or **OK**. Choose **OK** to close the wizard.

### How to Export Data from Salesforce to FTP Using Data Loader

The export process is likewise the same as working with a local drive. Check out the following steps:

1. Click **Export**, then select the Salesforce object you want to export; in this case, Contact.
2. Provide the path to the CSV file in the mapped FTP drive (Z:\exports\contact.csv).
3. Click **Next**, then choose the columns to include in the CSV file.
4. Click **Finish**, then click **Yes** when prompted to proceed. You will then see how many records were processed. Check it out below.

Using Salesforce Data Loader for imports and exports provides a basic way to integrate Salesforce with other systems. Consider the pros and cons of this approach below.

**Pros**

The following are the benefits of Salesforce Data Loader:

- No coding required.
- Works well for scheduled [batch processing](https://skyvia.com/blog/batch-etl-processing/) and ad hoc integrations between Salesforce and FTP.
- Easy to use.

**Cons**

Salesforce Data Loader has the following disadvantages:

- No real-time integration.
- Requires external FTP tools or workarounds.
- Suitable for simple integrations only.

## Custom-Code Integration of FTP with Salesforce

Custom-code integration means using a programming language to achieve the integration. There are two methods under custom coding.

### Method 2: Custom Apex Development

Apex development offers a flexible way to integrate Salesforce with FTP.
Use Apex if you have a team skilled in Apex development and you want full control over how the integration happens. Apex doesn't have native support for FTP, but if your FTP server has an API service (like CurlFTPfs, AnyClient, or a middleware proxy that exposes an FTP service over HTTP), you can use it to make HTTP callouts. If your FTP server doesn't support an API service, you can use middleware (e.g., AWS Lambda, Azure Functions, or a Node.js server) to accept Salesforce HTTP requests and forward the files via FTP.

We will use Visual Studio Code and the [Salesforce CLI](https://developer.salesforce.com/tools/salesforcecli) to prepare the Apex class.

### How to Integrate Salesforce and FTP Using Apex

Follow the steps below to use Apex development with Visual Studio Code.

**1. Install Visual Studio Code and Salesforce CLI**

If you don't have VS Code and/or Salesforce CLI, download and install the [Salesforce CLI](https://developer.salesforce.com/tools/sfdxcli), then install [VS Code](https://code.visualstudio.com/) and the Salesforce Extension Pack from the marketplace. You can skip this step if you already have these on your computer.

**2. Authorize Your Salesforce Org**

Open Visual Studio Code, then type the following command in the Terminal:

```
sfdx force:auth:web:login -a MyDevOrg
```

This logs you in to your Salesforce org. Replace MyDevOrg with any alias you want for your org. The command opens your browser so you can log in.

**3. Create a New Apex Class**

Again from the Terminal, type this command:

```
sfdx force:apex:class:create -n FTPContactUploader -d force-app/main/default/classes
```

This creates a new Apex class named FTPContactUploader. Below is sample code you can follow as a pattern for the class. It queries Salesforce Contacts and places a CSV file in an FTP location. You need to replace FTP_URL, FTP_USERNAME, and FTP_PASSWORD with your own FTP URL and credentials.
```apex
public class FTPContactUploader {

    private static final String FTP_URL = 'https://your-ftp-server.com/upload'; // Replace with actual FTP API endpoint
    private static final String FTP_USERNAME = 'your-username';
    private static final String FTP_PASSWORD = 'your-password';

    @future(callout=true)
    public static void uploadContactsToFTP() {
        // Step 1: Query Contacts
        List<Contact> contacts = [SELECT FirstName, LastName, Email FROM Contact LIMIT 100];

        // Step 2: Convert Contacts to CSV Format
        String csvContent = 'First Name,Last Name,Email\n';
        for (Contact c : contacts) {
            csvContent += c.FirstName + ',' + c.LastName + ',' + c.Email + '\n';
        }

        // Step 3: Encode CSV as Blob
        Blob csvBlob = Blob.valueOf(csvContent);
        String csvBase64 = EncodingUtil.base64Encode(csvBlob);

        // Step 4: Create HTTP Request
        HttpRequest req = new HttpRequest();
        req.setEndpoint(FTP_URL);
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setHeader('Authorization', 'Basic ' + EncodingUtil.base64Encode(Blob.valueOf(FTP_USERNAME + ':' + FTP_PASSWORD)));

        // Step 5: Send File in Body
        String requestBody = JSON.serializePretty(new Map<String, String>{
            'fileName' => 'Contacts.csv',
            'fileData' => csvBase64
        });
        req.setBody(requestBody);

        // Step 6: Send Request
        Http http = new Http();
        HttpResponse res = http.send(req);

        // Step 7: Handle Response
        if (res.getStatusCode() == 200) {
            System.debug('Upload Successful: ' + res.getBody());
        } else {
            System.debug('Upload Failed: ' + res.getStatusCode() + ' - ' + res.getBody());
        }
    }
}
```

**Breakdown of Key Sections**

- **Query Contacts from Salesforce:** Fetches the FirstName, LastName, and Email fields from Contact records. LIMIT 100 restricts processing to 100 rows; adjust as needed.
- **Convert Data to CSV Format:** Builds a String containing CSV-formatted data.
- **Encode CSV as Blob & Base64:** Converts the string to a Blob and encodes it in Base64 so it can be sent safely over HTTP.
- **Create HTTP Request for FTP Upload:** Uses a POST request to the FTP API endpoint, adds an Authorization header with Base64-encoded credentials, and formats the request body with fileName and fileData.
- **Send the Request & Handle Response:** Sends the data in an HTTP callout and logs success or failure.

**4. Deploy to Salesforce**

You need to deploy the code to Salesforce to run it. Type the following command in the Terminal:

```
sfdx force:source:push -u MyDevOrg
```

**5. Test the Apex Callout**

Log in to your Salesforce org in the browser and open the Developer Console -> Execute Anonymous window. Then run:

```
FTPContactUploader.uploadContactsToFTP();
```

Finally, check the logs for success or failure.

**Pros**

The following are the advantages of this approach:

- Flexible enough for any type of Salesforce integration requirement.
- Can be wired into Salesforce automation.

**Cons**

Using Apex has the following disadvantages:

- Requires development effort.
- Apex governor limits may restrict large file transfers.
- Takes longer to develop than no-coding solutions.

### Method 3: API-Based Integration

Aside from Apex, you can use any programming language that can call the Salesforce REST API. For example, the following Python code queries Salesforce Contacts:

```python
import requests

TOKEN = "your_access_token"
INSTANCE_URL = "https://your-instance.salesforce.com"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json"
}

query = "SELECT Id, FirstName, LastName, Email FROM Contact LIMIT 10"
response = requests.get(f"{INSTANCE_URL}/services/data/v59.0/query/?q={query}", headers=headers)

data = response.json()
```

The snippet uses your Salesforce org URL and queries the Salesforce Contacts using requests.get and the query API.
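One detail the raw-REST snippet glosses over: the response JSON nests rows under a `records` key (each record carrying an `attributes` housekeeping entry), and larger result sets are paged via a `nextRecordsUrl` field. A hedged sketch of unpacking such a payload; the sample payload below is illustrative, not real Salesforce data:

```python
def extract_rows(payload):
    """Flatten a Salesforce query response into plain dicts,
    dropping the per-record 'attributes' metadata."""
    rows = [{k: v for k, v in rec.items() if k != "attributes"}
            for rec in payload.get("records", [])]
    # For large result sets, the caller should keep requesting
    # payload["nextRecordsUrl"] until "done" is True.
    return rows, payload.get("done", True), payload.get("nextRecordsUrl")

# Illustrative payload shaped like the query API's JSON response.
payload = {
    "totalSize": 1,
    "done": True,
    "records": [
        {"attributes": {"type": "Contact"}, "Id": "003xx0000001",
         "FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"},
    ],
}
rows, done, next_url = extract_rows(payload)
print(rows[0]["LastName"], done)
```

Handling `done`/`nextRecordsUrl` yourself is exactly the kind of plumbing the library introduced next abstracts away.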
You can instead use a Python Salesforce library called [simple_salesforce](https://simple-salesforce.readthedocs.io/en/latest/index.html) to abstract and simplify the code, so you don't have to handle REST API endpoint formatting, headers, and session management yourself. We're going to use simple_salesforce in our example.

### How to Integrate Salesforce and FTP Using the API

Follow the steps below using Visual Studio Code as your development IDE.

**1. Install Visual Studio Code, Python, and simple_salesforce**

If you haven't done so, [download and install Python](https://www.python.org/downloads/). Then install [Visual Studio Code](https://code.visualstudio.com/) and the [VS Code Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python). Follow the OS-specific instructions at [this link](https://code.visualstudio.com/docs/python/python-tutorial). Then, install simple_salesforce using the following VS Code Terminal command:

```
pip install simple-salesforce
```

csv and ftplib are built into Python, so there is no need to install them.

**2. Create the Python Script**

Create a folder for the Python script. Then, open the folder in Visual Studio Code and create the script file salesforce_to_ftp.py.
Check out the code below:

```python
import csv
from simple_salesforce import Salesforce
from ftplib import FTP

# Salesforce credentials
SF_USERNAME = 'your_username'
SF_PASSWORD = 'your_password'
SF_SECURITY_TOKEN = 'your_security_token'

# FTP credentials
FTP_HOST = 'ftp.example.com'
FTP_USERNAME = 'ftp_user'
FTP_PASSWORD = 'ftp_password'
FTP_DIRECTORY = '/uploads/'  # Change this to your desired folder

# CSV file name
CSV_FILE = 'salesforce_contacts.csv'


def fetch_contacts():
    """Fetch contacts from Salesforce API and return as a list of dictionaries."""
    sf = Salesforce(username=SF_USERNAME, password=SF_PASSWORD, security_token=SF_SECURITY_TOKEN)

    query = "SELECT Id, FirstName, LastName, Email FROM Contact LIMIT 10"
    records = sf.query_all(query)['records']

    contacts = [
        {
            'Id': record['Id'],
            'FirstName': record.get('FirstName', ''),
            'LastName': record.get('LastName', ''),
            'Email': record.get('Email', '')
        }
        for record in records
    ]

    return contacts


def write_csv(contacts):
    """Write contacts to a CSV file."""
    with open(CSV_FILE, mode='w', newline='') as file:
        writer = csv.DictWriter(file, fieldnames=['Id', 'FirstName', 'LastName', 'Email'])
        writer.writeheader()
        writer.writerows(contacts)


def upload_to_ftp():
    """Upload the CSV file to an FTP server."""
    ftp = FTP(FTP_HOST)
    ftp.login(FTP_USERNAME, FTP_PASSWORD)
    ftp.cwd(FTP_DIRECTORY)

    with open(CSV_FILE, 'rb') as file:
        ftp.storbinary(f'STOR {CSV_FILE}', file)

    ftp.quit()


def main():
    """Main function to execute the workflow."""
    print("Fetching contacts from Salesforce...")
    contacts = fetch_contacts()

    print("Writing data to CSV...")
    write_csv(contacts)

    print("Uploading CSV to FTP...")
    upload_to_ftp()

    print("Process completed successfully!")


if __name__ == "__main__":
    main()
```

**Breakdown of Key Sections**
- **fetch_contacts():** Uses simple_salesforce to connect to your org with your username, password, and security token, then runs a SOQL query against the Salesforce Contact records.
- **write_csv(contacts):** Creates the CSV file from the result set of fetch_contacts().
- **upload_to_ftp():** Uploads the created CSV file to the FTP server and folder.
- **main():** Drives the workflow from connecting to Salesforce through uploading to the FTP server.

You should replace the Salesforce and FTP credentials with your own access information.

**Pros**

The following are the advantages of this approach:

- Works on Windows, Mac, and Linux distributions such as Ubuntu.
- Very flexible and more scalable.
- Works with modern cloud platforms.

**Cons**

This approach has disadvantages, too:

- Requires coding expertise in a programming language and the Salesforce REST API.
- Changes in Salesforce APIs require code modifications.
- Takes more time to develop than no-coding methods.

## FTP and Salesforce Integration Using a No-Coding Approach

The main goal of no-coding integration platforms is to simplify development and minimize the need for an expert developer. This can cut the development time of simple integrations from hours to minutes. Notable tools that can handle Salesforce FTP integration include MuleSoft (from Salesforce), Jitterbit, Talend, and Skyvia.

Then there's [AppExchange](https://appexchange.salesforce.com/), an enterprise cloud marketplace where you can extend your Salesforce org with integrations to more than 7,000 apps. For example, if you need to extend Salesforce with finance features, there's a [QuickBooks](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N300000016bTHEAY) app for that.
There\u2019s even a [Salesforce to FTP app](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000FnBMFUA3) in AppExchange with a 14-day trial and an annual subscription. In this section, we will discuss how it is done in Skyvia. Method 4: Integration of Salesforce and FTP Using Skyvia Skyvia is a 100% cloud data platform with hundreds of connectors to various data sources. Unlike AppExchange which deals with Salesforce integration to other data sources, Skyvia can integrate at least two sources other than Salesforce and another app. Their free tier allows you to test out [Salesforce and FTP integration](https://skyvia.com/data-integration/salesforce-csv-file-import-and-export) without spending a dime. This is one of the best free alternatives to Salesforce Data Loader. To integrate, you need 2 connections (one for Salesforce and another for your FTP server). Then, create a Skyvia integration package using the two connections. The simplest of these integrations are the Import and Export integrations. How to Create the Salesforce and FTP Connection You don\u2019t need to code to create a Salesforce connection in Skyvia. Assuming you already signed in, here\u2019s how: Click + Create New from the upper left corner of your Skyvia workspace, then click Connection . Choose from the list of connectors. Do that by typing sales or\u00a0salesforce\u00a0in the filter box. Then, click Salesforce . Fill out the connection form and sign in to Salesforce to get an OAuth token. Click Test Connection to test the connection to Salesforce. You will see a\u00a0Connection is successful\u00a0message in the upper right of the workspace. Name your connection. You will see\u00a0Untitled\u00a0beside the Salesforce icon. Click and change that to your preferred connection name. I named mine as\u00a0salesforce-developer. Finally, click the Create Connection button to save your new Salesforce connection. 
(You will see **Save Connection** instead if you're editing the connection to make changes.) Repeat steps 1 to 6 to create an FTP connection: have your FTP server credentials ready and use them to fill out the FTP connection form. Below is mine.

Creating a connection is plain and simple. If you experience connection problems, contact your system administrator.

### How to Import Data from FTP to Salesforce Using Skyvia

Please follow these steps to create a Skyvia Import integration:

1. Click **+ Create New** and, under INTEGRATION, select **Import**.
2. On the Import integration page, change the Source Type to **CSV from storage service**.
3. Choose the FTP connection you created earlier as your source, then choose the Salesforce connection you created earlier as your target.
4. Add a new task by clicking **Add New**.
5. Choose the CSV file you want to import on the next page.
6. On the next page, choose the Salesforce object and the operation to perform (insert, update, upsert, delete). In this example, we choose the Salesforce Contact.
7. Map the columns of the CSV file to Salesforce Contact.
8. Repeat steps 5 to 7 if you need more tasks that import other CSV files to Salesforce objects.
9. Name your Import integration and click **Create**.

That's it. Below is the final setup of my Import integration.

### How to Export Data from Salesforce to FTP Using Skyvia

This portion discusses [exporting data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) to a CSV file in an FTP location. Below are the steps:

1. Click **+ Create New** and, under INTEGRATION, click **Export**.
2. On the Export integration page, choose your Salesforce connection as the source.
3. Select the Target Type as **CSV to storage service**.
4. Choose the FTP connection you created earlier from the dropdown list, and select the target folder.
5. Click **Add New** to create a new task.
6. Choose a Salesforce object, such as Contact, and mark the columns to export.
7. Optionally, set a filter and sort order.
8. Click the **Next step** button and set the CSV filename and compression type.
9. Click **Next step** and review the output columns.
10. Click **Save task**.

Repeat steps 5 to 10 for more tasks that use different CSV files and Salesforce objects. Then name your Skyvia Export and click **Create** to save your export integration. Below is the final setup of the Export integration sample.

### How to Set Up Scheduling

Every Skyvia integration package can be scheduled to run at a specific time and recur, for example, every day. From inside the Import or Export integration, click **Schedule** in the top-left corner of the page. Below is a sample schedule in Skyvia that runs daily at midnight.

Once saved, Skyvia will keep this schedule and run your integration unattended.

To modify a schedule later, open the integration package: click **Integrations** in the upper part of the Skyvia workspace. The list is grouped by integration type, so look in the Import or Export groups and click the integration package you want. (In the sample below, the import and export packages we created earlier are boxed in green.) A new page appears where you can change the integration package; click **Schedule** in the upper-left corner and enter the modified schedule.

### Run and Monitor the Integration

You can monitor the runtime results of your integration package. Open the integration whose progress you want to check, then click **Monitor**. You will see a page similar to the one below when the integration is successful.

Below is a video tutorial for creating Import and Export integrations.

**Pros**

The following are the advantages of the no-coding approach:

- No coding, or very minimal coding, needed.
- Simple UI for integration.
- Easy to use.
- Faster integration development lifecycle.
**Cons**

These are the disadvantages of this approach:

- Usage limits may apply, depending on your tool's pricing plan.
- The tool's pre-built workflows or templates may not be as flexible as your requirements demand.
- Salesforce API changes may not be reflected immediately, depending on your tool of choice.

## Conclusion

In this article, we discussed what FTP is and how it differs from SFTP. FTP is the faster option but compromises security because it's unencrypted; to mitigate this, run the integration within a VPN or a similarly secured environment. We also covered the Salesforce Data Loader, two coding methods (Apex and Python), and a no-coding method using Skyvia. These are the four ways to integrate Salesforce with FTP; choose the one that best fits your needs.

## F.A.Q.

**Can Salesforce directly connect to an FTP server?**
No. Salesforce lacks built-in FTP support, so integration requires third-party tools or coding.

**Is it possible to automate file uploads from Salesforce to an FTP server?**
Yes, using middleware, Apex callouts, or external schedulers.

**What security measures should I consider when integrating Salesforce with an FTP server?**
Use FTPS, limit access permissions, and encrypt sensitive data before transfer.

**Can I transfer data from FTP to Salesforce without coding?**
Yes. Tools like Skyvia, MuleSoft, and Jitterbit offer no-code integrations.

**What data can be extracted from Salesforce?**
Reports, contacts, opportunities, cases, and any object data in structured formats.
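The FAQ above recommends FTPS. If you took the Python route from Method 3, that is a small change: Python's standard library ships `ftplib.FTP_TLS`, which speaks explicit TLS on the normal FTP control port. A minimal sketch, assuming your server supports explicit FTPS; host, credentials, and paths are placeholders:

```python
from ftplib import FTP_TLS

def upload_via_ftps(host, user, password, local_path, remote_name):
    """Upload a file over explicit FTPS instead of plain FTP."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)
    ftps.prot_p()  # switch the data channel to TLS as well
    with open(local_path, "rb") as fh:
        ftps.storbinary(f"STOR {remote_name}", fh)
    ftps.quit()
```

Swapping `FTP` for `FTP_TLS` (plus the `prot_p()` call) in the earlier script is all it takes to encrypt the transfer; for SFTP (SSH-based), a third-party library such as paramiko would be needed instead.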
[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

---

# Salesforce to Google Sheets: Ultimate Connection Guide

[Data Integration](https://skyvia.com/blog/category/data-integration/) | By [Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/) | March 27, 2025
**Summary**

- Salesforce provides powerful features for customer data storage, while Google Sheets offers simplicity and flexibility for data analysis and sharing.
- There are multiple ways to connect Salesforce to Google Sheets, including Salesforce Data Loader, add-ons, and third-party integration platforms.
- Salesforce Data Loader supports bulk data operations but requires technical expertise and lacks automatic scheduling.
- Add-ons like Salesforce Connector and Skyvia Query provide user-friendly interfaces for data transfers but may be limited in scheduling and complexity.
- For scalable and automated solutions, third-party integration platforms like Skyvia offer advanced features for seamless synchronization and complex transformations between Salesforce and Google Sheets.

Are you tired of manually exporting Salesforce to Google Sheets and wasting hours on routine? Inefficient data transfers between these two solutions are a problem for many companies. Luckily, there are multiple ways to simplify this process. In this article, we'll walk through the best integration methods, from native tools to third-party automation platforms, and look at both sides of the coin for each so you can choose the right approach for your workflow.

**Table of Contents**

- Why Connect Salesforce to Google Sheets?
- Best Ways to Connect Salesforce to Google Sheets
- Method 1: Salesforce Data Loader
- Method 2: Add-ons
- Method 3: Third-Party Integration Platforms
- Export Data from Salesforce to Google Sheets Using CSV
- Moving Data from Salesforce to Google Sheets Directly
- Conclusion

## Why Connect Salesforce to Google Sheets?

[Salesforce](https://www.salesforce.com/), with its powerful features, is good at storing customer data and reporting. But sometimes you need the simplicity and flexibility of [Google Sheets](https://workspace.google.com/products/sheets/): to analyze trends quickly, create a custom report, or share data with someone who doesn't have Salesforce access. With Google Sheets, you can sort, filter, and visualize SF data without dealing with complex CRM functionality.

Let's walk through some common use cases:

- **Track sales performance.** Pull live data into a spreadsheet and create a custom dashboard with formulas and charts.
- **Finance forecasting.** Suppose your finance team needs revenue data for forecasting but doesn't work in Salesforce daily. Instead of manually exporting reports, you can sync Salesforce with Google Sheets and let the numbers update automatically.
- **Marketing reporting.** Marketing teams might want to monitor lead data alongside ad spend, combining Salesforce insights with Google Ads data in a single spreadsheet.
- **External collaboration.** If you're working with external partners, sharing a Google Sheet is much easier than granting Salesforce access.

So, connecting Salesforce to Google Sheets gives you the best of both worlds: structured CRM data and spreadsheet flexibility.

## Best Ways to Connect Salesforce to Google Sheets

You can connect Salesforce and Google Sheets in several ways:

- using the native Salesforce Data Loader,
- with one of the add-ons, such as Salesforce Connector or a Google Cloud connector,
- using third-party tools like Skyvia.

Each of these methods has its benefits and drawbacks. Let's break them down in the comparison table.
| | Salesforce Data Loader | Add-ons | Integration Platforms |
| --- | --- | --- | --- |
| Ease of Use | Requires manual setup and technical knowledge | Easy setup, user-friendly interface | No-code, intuitive UI with guided setup |
| Automation | No scheduling; requires manual execution | Some add-ons offer scheduled syncs | Supports automated, scheduled syncs |
| Customization | Limited to predefined options | Some add-ons allow custom queries | Highly customizable with filters and transformations |
| Scale | Small to medium transfers | Small to medium transfers | Large transfers |
| Technical Knowledge Required | Salesforce admin skills | Minimal technical skills | No coding |

### Method 1: Salesforce Data Loader

[Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader) is a native solution for routine work with bulk data: inserting, updating, exporting, and deleting records. It reads, extracts, and loads data from CSV files for import to Salesforce and outputs CSV files on export. These CSV files can then be easily opened and edited in Google Sheets.

There are two ways to use it:

- **Via the UI.** The user sets the CSV files and configuration options for import/export and specifies the mapping between the field names in the import file and in Salesforce.
- **Via the command line (Windows only).** The user sets up configuration, data sources, mappings, and actions to automate the process.

**Pros**

- **Custom field mapping.** Users can define how Salesforce fields align with Google Sheets columns.
- **Works without additional add-ons.** Since Data Loader is an official Salesforce tool, it doesn't require third-party connectors or apps to extract data for Google Sheets.
- **Selective data transfer.** Users can apply filters to transfer only the relevant Salesforce data to Google Sheets.

**Cons**

- **No built-in automatic schedule for data synchronization.** It requires additional scripting with another automation tool.
- **No cloud version.**
You have to [install](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/loader_install_general.htm) the tool on your PC.

- **Not an end-user tool.** It requires some technical expertise.

**Best for:** Users with technical expertise who are comfortable with manual imports and exports.

### Method 2: Add-ons

Add-ons are lightweight extensions available in marketplaces like [Google Workspace Marketplace](https://workspace.google.com/marketplace). They provide direct data exchange between Salesforce and Google Sheets. After installing an add-on, the user can pull data from Salesforce, update it, and push it back to Salesforce without leaving Google Sheets. Among the various add-ons available, some of the most widely used are Salesforce Connector for Google Sheets and the Skyvia Query Google Sheets add-on. Let's explore them in detail.

#### Salesforce Connector

[Salesforce Connector](https://workspace.google.com/marketplace/app/salesforce_connector/857627895310) is a popular tool for transferring data between Salesforce and Google Sheets. It's free for Enterprise, Performance, Unlimited, and Developer subscriptions; on the Professional plan, you will need API access for an extra fee. With this tool, you can create, read, update, and delete Google Sheets data in SF and import reports on a schedule.

**Pros**

- **Direct integration.** Seamlessly connects Salesforce with Google Sheets without additional software.
- **Auto refresh.** Allows automatic updates every 4, 8, or 24 hours.
- **Query-based data extraction.** Users can filter and pull only the needed records using SOQL (Salesforce Object Query Language) queries.
- **Free to use.** Available to Salesforce users with appropriate permissions at no extra cost.

**Cons**

- **Limited scheduling options.** The recurring sync intervals are limited, and all reports share the same schedule.
- **Complex setup for beginners.** Advanced use requires some knowledge of Salesforce queries (SOQL).
- **Salesforce and Google Sheets only.** It cannot be integrated with other data sources or BI tools.

**Best for:** Salesforce admins looking for a simple, no-code solution to handle data transfers for small to mid-sized datasets (around 50,000 rows).

**Step-by-step guide**

To start working with Salesforce Connector, download it from Google Workspace Marketplace in a few simple steps.

1. Go to your Google Sheets, select **Extensions** in the top menu bar, and click **Get add-ons**.
2. Type *Salesforce Connector* in the Google Workspace Marketplace search bar and choose the add-on.
3. Open the add-on and click **Install**, then follow the installation prompts to finish the process.

Now you can open it from the list of installed extensions; it appears on the top right of the screen. It's ready to work after you select the Salesforce environment and click **Authorize**.

Let's look at a real-life example. Suppose you need to import report data from Salesforce to Google Sheets:

1. Go to the Data connector for Salesforce > **Select an operation** > **Reports**.
2. Select **Import to active sheet** to import your report data.

When your report is ready, you will see its rows and columns in your spreadsheet.

#### Skyvia Query Google Sheets Add-on

The [Skyvia Query add-on](https://skyvia.com/google-sheets-addon/salesforce) is a tool for smooth data transfer to Google Sheets reports. With it, you can import data from various cloud applications and relational databases to Google Sheets through [Skyvia Query](https://skyvia.com/query/). The add-on is cloud-based, no-code, and easy to use even for non-technical people, allowing them to create and run queries without knowing SQL.

**Pros**

- **Integration with other systems.** Users can import data from various cloud applications and relational databases, such as Google Analytics, MySQL, and Looker.
- **Advanced filtering.** Allows users to apply filters and run SQL-based queries for data extraction.
- **Automatic data updates.**
Queries can be scheduled to refresh data in near real time, keeping reports up to date.

- **No coding required.** A user-friendly interface makes it accessible to non-technical users.
- **Saved query gallery.** Users can store queries for future use.

**Cons**

- **Limited free tier.** Some advanced features require a paid plan.
- **Google Sheets limitations.** Large datasets may slow down or exceed Google Sheets' row limits.

**Best for:** Users who need advanced query-based data extraction with filtering, scheduled updates, and integration with other tools.

**Step-by-step guide**

Let's see how it works in three steps:

1. [Register](https://app.skyvia.com/register) for a free Skyvia account.
2. Create connections to [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Google Sheets](https://docs.skyvia.com/connectors/cloud-sources/googlesheets_connections.html#establishing-connection).
3. Install the Skyvia Query Google Sheets add-on from the [G-Suite Marketplace](https://gsuite.google.com/marketplace/app/skyvia_query/134536098526) and create reports from Salesforce anytime.

Visit the [Skyvia documentation portal](https://docs.skyvia.com/skyvia-query-google-sheets-add-on/) to learn more about the setup process.

### Method 3: Third-Party Integration Platforms

As discussed before, manual exports and add-ons often lack scalability, automation, and integration with other business tools. In this case, integration platforms can do the job: they automate data syncs, apply complex transformations, and integrate Google Sheets with other platforms and databases. Popular integration platforms like Skyvia, Zapier, and Coupler.io provide a no-code approach to managing Salesforce data without complex scripting.

Skyvia allows you to [integrate](https://skyvia.com/data-integration/integrate-salesforce-google-sheets) Salesforce and Google Sheets data without any limitations on data or file size.
As a bonus, it's intuitive, so you don't need additional technical knowledge. Skyvia is cloud-based and requires no installation or coding to set up data export/import as CSV files. You just open your laptop, connect, and start working.

Pros

- Simplicity. Skyvia is user-friendly, transparent, and convenient, even for non-tech people.
- Flexible data selection. Export/import specific objects, fields, or filtered records.
- Rich functionality. Supports integration scenarios of any complexity, such as ETL, ELT, reverse ETL, data migration, one-way and bi-directional data sync, workflow automation, data sharing via REST API, backups for cloud apps, etc.
- Easy automation. Enables scheduled automated exports/imports without manual effort.

Cons

- Limited free plan. While Skyvia offers a free tier, advanced features, higher data volumes, and more frequent syncs require a paid subscription.

Best for: Businesses of any size looking for scalability and automation with advanced transformations, bidirectional sync, and multi-platform connectivity.

Let's take a closer look at how Skyvia simplifies Salesforce to Google Sheets integration in two real-life examples.

Export data from Salesforce to Google Sheets using CSV

Suppose you need to export your Account data as a CSV file from Salesforce to Google Sheets. To do it, create connections to the source and target and configure the export in Skyvia.

Step 1. Create connections to [Salesforce and Google Drive](https://skyvia.com/blog/google-drive-salesforce-integration/):

1. [Register](https://app.skyvia.com/register) for a free Skyvia account.
2. Click +Create New > Connection.
3. Select the Salesforce connector.
4. Select the Authentication Type and Environment and set the appropriate parameters. Learn more about connecting Salesforce in the [Skyvia documentation](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html).
5. Similarly, set up a Google Drive connection.
Find additional details in the [Skyvia documentation](https://docs.skyvia.com/connectors/file-storages/googledrive_connections.html).

Step 2. Set up the export scenario:

1. Click +Create New in the upper menu.
2. Click Export in the Integration column.
3. Select Salesforce as a source and CSV to storage service > Google Drive as a target. Note: you can also download the CSV file manually.
4. Click Add new in the Task Editor.
5. Select the Account object in the Source Definition tab and click Next step.
6. Select the data for export in the Object filter, specify the Target File Name, and click Next step.
7. Check the data in the Output Columns tab and click Save.
8. Click Create and Save to finish the component creation.

Now, the component is ready to run on a schedule or manually:

- Click Run in the upper right section of the screen to start the export.
- Set your schedule in the upper left menu.

When the export is ready, pay attention to the Model, Monitor, and Log tabs. Model allows editing the existing connections and tasks, adding new ones, and rescheduling. Monitor shows the component's current status and the five most recent runs. Log provides a record of executed runs.

Moving data from Salesforce to Google Sheets directly

In contrast to the CSV method via Skyvia, we will now show the direct synchronization of Salesforce and Google Sheets. Let's consider importing Salesforce account data to Google Sheets. We might need it for various reasons, like reporting, analysis, or integration with other systems. To do it, create connections to the source and target and configure the import in Skyvia.

Step 1. Create connections

1. [Register](https://app.skyvia.com/register) for a free Skyvia account.
2. Click +Create New > Connection.
3. Select the Google Sheets connector.
4. Sign in with your Gmail account to receive the access token.
To learn more about connecting Google Sheets, visit the [Skyvia documentation](https://docs.skyvia.com/connectors/cloud-sources/googlesheets_connections.html#establishing-connection).

Step 2. Configure import

1. Click +Create New, select Import in the Integration section, and set the component name.
2. Choose Database or cloud app as a Source Type.
3. Choose Salesforce as the Source.
4. Choose Google Sheets as a Target.
5. On the upper right side of the screen, click Add new to open the Task Editor and set up the import component task.
6. Select the Source (Account) in the Source Definition tab, state the filter, and click Next Step.
7. Select the Target and operation type in the Target Definition tab, and click Next Step.
8. Map the fields in the Mapping Definition tab and click Save.
9. Click Create and Save to finish the component creation.

You can run it manually by clicking Run or automatically on the Schedule. After the import is complete, check the error logs and fix any issues. Note: similarly, you can set up the import to automate moving data from Google Sheets to Salesforce for reporting or analysis.

Conclusion

Salesforce and Google Sheets integration should be simple, automated, and scalable. We've looked at various options for connecting Salesforce and Google Sheets. If you need a quick, one-time transfer, a CSV export might be enough. If you want direct integration without coding, an add-on could work. For long-term automation, flexible data transformations, and seamless synchronization, an integration platform like Skyvia is a good choice.

F.A.Q.

How to integrate Salesforce with Google?

Salesforce can be integrated with Google services in multiple ways, depending on your needs:
- Native tools like Salesforce's built-in Google integrations.
- Add-ons like the Salesforce Connector for Google Sheets for simple data transfers.
- Integration platforms like Skyvia for advanced automation and more.

Does Google Sheets have an API?
Yes, Google Sheets has an API that allows developers to read, write, and modify data in spreadsheets. The Google Sheets API supports integration with third-party platforms, automation tools, and custom applications. However, using the API requires coding and knowledge of authentication processes.

How do I link Salesforce to Google Sheets?

You can link Salesforce to Google Sheets using different methods:
- Google Sheets add-ons: install a connector from the Google Workspace Marketplace.
- Integration platforms: use tools like Skyvia to automate data import/export without coding.
- Salesforce API: if you have developer skills, you can use the API to build a custom integration.

[Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/)

A dedicated technical writer, Liudmyla brings extensive experience in creating and managing diverse learning materials. Passionate about user-centered documentation, she thrives on enhancing user experiences through clear, engaging, and accessible content. With a keen analytical mindset and a collaborative approach, Liudmyla excels in bridging information gaps and simplifying complex concepts.
4 Ways to Connect SFTP to Salesforce for Seamless Integration

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), February 20, 2025

Negative findings could make life with auditors miserable. If you carelessly transfer exported files from Salesforce to an unsecured folder, you're up for a data protection policy violation. It's time to consider Salesforce SFTP integration. Unsecured file locations are breeding grounds for data breaches, which are costly mistakes. According to [IBM's Cost of a Data Breach Report](https://www.ibm.com/reports/data-breach), the cost of data breaches keeps increasing, reaching USD 4.88 million in 2024! SFTP is a secure way to transfer and store sensitive information from Salesforce, mainly exported CSV files.
It\u2019s a revamped FTP ( [File Transfer Protocol](https://en.wikipedia.org/wiki/File_Transfer_Protocol) ) with security. This article will discuss what SFTP is and the 3 approaches to Salesforce SFTP integration. Below are the main points we are going to discuss. Let\u2019s dive in. Table of Contents What is SFTP and Why Is It Important Why Use SFTP with Salesforce 6 Considerations Before Data Integration Overview of Methods to Connect SFTP and Salesforce Native Methods Method 1: Salesforce Data Cloud SFTP Integration Method 2: Salesforce Data Loader for SFTP Integration Using Third-Party Tools Method 3: How to Integrate Salesforce and SFTP Using Skyvia Custom Coding Method 4: Salesforce and SFTP Integration Using Apex and Node.js Use Cases for SFTP Integration with Salesforce Conclusion What is SFTP and Why Is It Important SFTP is a Secure File Transfer Protocol. It is a network protocol that allows access, transfer, and management of files over a secure connection, ensuring your data is encrypted. This is also known as SSH File Transfer Protocol because it is an extension of [Secure Shell (SSH)](https://en.wikipedia.org/wiki/Secure_Shell) protocol 2.0. Let\u2019s explain this a bit with an example. Files like CSV are readable. So, if you transfer files like this and there\u2019s a line that says \u201cIncome 48000000\u201d, it will be sent to the network that way. It will also be written in readable form. With SFTP, the transfer is encrypted, so those sniffing the network for sensitive information will read cryptic information. \u201cIncome 48000000\u201d may look like \u201c#$%^@ XY!!!!!!\u201d. Accessing the stored file will also make it difficult for cybercriminals. What a relief if you want this a secret. Unlike FTP, SFTP protects your data from tampering and theft. Protecting sensitive data is no longer just an option. It\u2019s a very important requirement. 
Companies use SFTP to transfer and store files because it's a tried and tested way to share data.

Why Use SFTP with Salesforce

Salesforce SFTP integration allows businesses to exchange files between Salesforce and external systems. The following are the reasons to consider SFTP for import and export in Salesforce:

- Data protection. SFTP safeguards sensitive data coming in and out of Salesforce and stored as files. Finance and healthcare data kept in an unsecured folder is an easy data leak.
- Compliance. SFTP will help your organization comply with data security standards like GDPR and HIPAA.
- Efficiency and automation. Tools from Salesforce and elsewhere allow easy automation through Salesforce SFTP integration. This reduces human error and slow manual processing.

6 Considerations Before Salesforce and SFTP Integration

Consider the following points before starting an integration project with Salesforce. They will affect the success of your integration.

1. Data Compatibility

You can import and export data into Salesforce with CSV files. Other supported formats are XML, JSON, and Parquet files. Third-party integration tools can extend this further: they offer more data connectors aside from the file formats mentioned. Note that Salesforce also enforces other restrictions, such as data type formats, so make sure you consult the [documentation](https://help.salesforce.com/s/articleView?id=data.c360_a_supported_file_formats.htm&type=5). Meanwhile, SFTP can store many file types. However, depending on the SFTP server configuration, it may restrict certain file types for security reasons. Consult the server administrator for information on supported file types.

The point: Ensure the file and data types are compatible between the source and target to avoid integration errors.

2. Requirements for Compliance

Importing and exporting data to and from Salesforce should follow data protection regulations.
Storing the data on an SFTP server will increase the likelihood of compliance. But make sure the server is configured to support the latest encryption methods. Access controls should also be in place and limited to authorized accounts. Another requirement is audit trails and logs, which keep tabs on who accesses or modifies data and when.

The point: Consider security in your integration from the start to avoid non-compliance and hefty penalties.

3. Safety Procedures

In addition to security for compliance, other configurations protect your data during transfer and storage. These include:

- IP whitelisting, so the SFTP server rejects any external machine except the ones you allow.
- Multi-factor authentication, because passwords are not enough anymore.
- Firewalls and intrusion detection systems.
- Regular security updates for the SFTP server and Salesforce.

The point: Make it harder for intruders to access your sensitive information by hardening security in SFTP and Salesforce.

4. Impact on Performance

Large imports and exports can affect the performance of both Salesforce and the SFTP server. Consider scheduling the integration to run during non-office hours. Use incremental updates instead of bulk batch updates if your tool allows it. Real-time integration can be harder to set up, but its impact on both systems is smaller.

The point: Examine the size of your data and determine the best integration approach to keep the SFTP server and Salesforce running smoothly.

5. Backup and Recovery Plans

Integrations can fail and can end up causing data corruption. The old but gold trick is to back up your data regularly using reliable tools. Then, test whether the backups can be restored completely.

The point: A backup and recovery plan can save your day when a data disaster strikes.

6. Data Quality

Part of the success of your data integration is data quality.
The resulting file in SFTP or the new and updated records in Salesforce should be correct, up to date, and clean. Common mistakes include duplicate records, null values, missing data, and wrong [mapping](https://skyvia.com/learn/what-is-data-mapping).

The point: Make sure the data you export or import is clean and reliable.

Summary of Methods to Connect SFTP and Salesforce

Several options exist to make a Salesforce SFTP connection and integrate data between them. Salesforce itself and third-party vendors provide integration tools for this purpose. You can also code the integration in your favorite programming language. Each method has its advantages and disadvantages; your expertise, use cases, and preferences determine which one to choose. Below is a table summarizing the approaches we will discuss:

| Method Group | Method | Best For | Skill Level | Customizability |
| --- | --- | --- | --- | --- |
| Native Methods | Salesforce Data Loader | Periodic, large-scale data transfers | Low | Low |
| Native Methods | Salesforce Data Cloud SFTP Integration | Secure, scalable transfers with Data Cloud | Medium | Medium |
| Using 3rd Party Tools | Third-Party Integration Tools | Complex integrations, enterprise-level workflows | Medium to High | High |
| Custom Coding | Apex Code | Full control over custom workflows | High | High |
| Custom Coding | Platform Events + SFTP Integration | Event-driven, real-time file transfers | High | High |
| Custom Coding | Custom Middleware with Node.js or Python | Complex, cross-platform integrations | High | High |

Let's describe each method in detail.

Native Methods to Connect SFTP and Salesforce

Native methods mean the integration tools you use are already in the Salesforce ecosystem. These tools are tightly integrated into Salesforce CRM, so you don't need to set it up as a source or target. We will discuss two: Salesforce Data Cloud and Salesforce Data Loader. The former is a cloud-based integration tool, and the latter is a client app.
Method 1: Salesforce Data Cloud SFTP Integration

[Salesforce Data Cloud](https://www.salesforce.com/data/what-is-data-cloud/) is the part of Salesforce for integrating its data with external systems, including files on an SFTP server. It supports APIs and [data management tools](https://skyvia.com/blog/best-data-management-tools/) for streamlined file transfers.

Best for: companies already using Salesforce Data Cloud to centralize customer data and automate secure file sharing.

Let's explore a step-by-step guide on integrating Salesforce with SFTP using Salesforce Data Cloud.

Step 1. Setting up an SFTP Connection in Salesforce Data Cloud

You need an SFTP connection before you can import or export using Salesforce Data Cloud. Below are the steps to create it, assuming you already have a valid Salesforce Data Cloud account and are signed in:

- Go to Data Cloud Setup.
- Look for External Integrations and select Other Connectors.
- Click New, choose Secure File Transfer Protocol (SFTP), then click Next.
- Enter a connection name, a connection API name, and the authentication details (authentication method, username, password, etc.).
- Review your setup and click Test Connection.
- Once the test connection is successful, click Save.

For hardening the security of the SFTP connection, consult the [documentation](https://developer.salesforce.com/docs/data/data-cloud-int/guide/c360-a-set-up-sftp-connection.html).

Step 2. Setting Up an SFTP Data Stream for Import

After setting up an SFTP connection, you can prepare a Data Stream to import a file like CSV. Here's how it goes:

- On the Data Stream tab, click New.
- Select Secure File Transfer (SFTP) and click Next.
- Use the SFTP connection created in the previous section.
- Enter the import details, like directory or folder, filename, and source.
- Click Next and create a Data Lake Object (DLO) or [use an existing DLO](https://help.salesforce.com/s/articleView?id=data.c360_a_guardrails_existing_data_lake_object.htm&type=5). Edit the fields identified in the table if you're creating a new DLO.
- Input a DLO label, API name, and object details like Primary key, Category, Record Modified Field, and Organization Unit Identifier. Optionally add formula fields, then click Next.
- Enter the deployment details, and click Deploy.

The new Data Stream is now created. For more details, consult the [documentation](https://developer.salesforce.com/docs/data/data-cloud-int/guide/c360-a-create-sftp-data-stream.html).

Pros

- Built into Salesforce. There's no need for external tools.
- Highly secure and scalable for enterprise use cases.
- Seamlessly integrates with Salesforce's data management features.

Cons

- Requires familiarity with Salesforce Data Cloud features and setup.
- Some Data Cloud features need add-on licenses. Consult your Salesforce Account Executive for more information.
- Available only to Salesforce Data Cloud users. [Salesforce Data Cloud is free for Enterprise Edition and above](https://help.salesforce.com/s/articleView?id=000396380&type=1) only.

Method 2: Salesforce Data Loader for SFTP Integration

[Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader) is a client app available on Windows and Mac that allows bulk data import and export. It supports large CSV files with up to 5 million records via Bulk API 1.0 and 150 million via Bulk API 2.0.

Best for: companies needing periodic, large-scale data transfers.

Salesforce Data Loader does not support SFTP directly, but this section will discuss workarounds for using it with SFTP. I will use a Windows 11 machine for this discussion. You can access the installer from the Salesforce Setup page. Then, navigate to Integrations -> Data Loader -> Downloads.
See it below: You will be taken to the Salesforce Data Loader Downloads page. Alternatively, you can go there directly by using [this link](https://developer.salesforce.com/tools/data-loader). Download the app, install it, and follow the instructions. After a successful installation, you should see a screen similar to the one below. You can see that it supports several operations: import (Insert, Update, Upsert), Delete, Undelete, Export, and Export All.

Let's proceed to the step-by-step guide for integrating Salesforce with SFTP using the native Salesforce Data Loader.

Step 1. SFTP Workaround for Salesforce Data Loader

You can avoid manually uploading and downloading CSV files after an import or export in Salesforce Data Loader. One way to do that is to make your SFTP server a mapped drive in Windows. If you map the SFTP server to, say, drive Z, you can import or export CSV files from that drive as if it were local to your PC. You can use tools like the open-source [SSHFS-Win](https://github.com/winfsp/sshfs-win) or a similar tool to make this possible. The following are the steps:

- Open the Command Prompt and install SSHFS-Win and WinFsp using Winget: winget install SSHFS-Win.SSHFS-Win
- Open File Explorer and right-click This PC.
- Click Map Network Drive and select a drive letter, like Z.
- In the folder box, type the path to the SFTP server in the format \\sshfs\sftp_username@sftp_hostname. Replace sftp_username and sftp_hostname with your SFTP username and hostname.
- Click Finish and enter the SFTP password in the prompt. If you encounter connection or permission problems, consult your administrator.

The SFTP server will now act like a regular Windows or USB drive.

Step 2. How to Import Data from SFTP to Salesforce Using Data Loader

You can use Insert, Update, or Upsert in Salesforce Data Loader to import CSV files into Salesforce. Let's say you want to Upsert. Follow the steps below:

- Click Upsert.
- Log in to Salesforce if asked.
- Select the Salesforce object the data should be imported into (e.g., Contact, Account).
- Choose the CSV file from the mapped SFTP server.
- Click Next; a small window will appear showing the number of rows in the CSV, API usage, etc. Click OK.
- Choose the column to use for matching, then click Next.
- (Optional) Relate the records to other Salesforce objects using a look-up key.
- Click Next, and map the CSV columns to Salesforce columns. See below:
- Choose a results folder where Salesforce will dump result rows in CSV format. Click Finish.
- You can choose between View successes, View errors, or OK. Choose OK to close the wizard.

As you can see, nothing special happened within Salesforce Data Loader itself; the mapped SFTP server made the difference.

Step 3. How to Export Data from Salesforce to SFTP Using Data Loader

You can also export from Salesforce to a CSV file on the mapped SFTP server. The steps are plain and easy to follow:

- Click Export, then select the Salesforce object you want to export and the path to the CSV file on the mapped SFTP drive.
- Click Next, then choose the columns to include in the CSV file.
- Click Finish, then click Yes when a prompt appears asking if you want to proceed.

The results will display the number of rows processed. You have the option to click View Extraction or OK. Choose OK to close the window and finish the export.

These are the native options Salesforce has in store for you. If you need more advanced but easy-to-use workflows, Skyvia is a first-class Salesforce [Data Loader alternative](https://skyvia.com/blog/salesforce-best-data-loaders/). It supports SFTP and has a scheduler for automated runs. This, along with other third-party methods, will be discussed next. But first, let's look at the advantages and disadvantages of the approach we just discussed.

Pros

- Easy to use, wizard-based import and export.
- Minimal setup for businesses already using Salesforce.
- Automatically maps CSV columns with the same name as the Salesforce column.

Cons

- Supports the CSV file format only.
- Requires workarounds if the source or target is an SFTP folder.
- Scheduling integrations is not supported.
- Not ideal for real-time data sync or complex workflows.

Integrate SFTP and Salesforce Using 3rd Party Tools

Third-party integration tools provide easy-to-use and flexible solutions, reducing the need for custom development or coding. They have pre-built connectors, including a Salesforce SFTP connector. Moreover, they provide graphical interfaces, making it easier to design integration pipelines. Here are some popular tools:

- MuleSoft is a product from Salesforce that offers robust integration solutions. Its Anypoint Platform provides pre-built connectors and APIs to make Salesforce SFTP integration seamless. It is ideal for enterprises with deep pockets.
- Jitterbit simplifies Salesforce-to-SFTP integration through its intuitive drag-and-drop integration builder. It's designed for fast implementation without requiring coding skills. Advanced features are available in higher paid tiers.
- Dell Boomi is an [Integration Platform as a Service](https://skyvia.com/blog/what-is-ipaas/) (iPaaS) that supports robust Salesforce and SFTP integration. It supports real-time Salesforce data processing and file sharing but has a steeper learning curve.
- Skyvia is a no-code/low-code integration platform offering simple to complex integrations. It's a 100% cloud solution to [integrate Salesforce SFTP](https://skyvia.com/data-integration/integrate-salesforce-sftp). It offers a free tier that you can try now, and you can extend its capabilities through paid tiers. It is ideal for small to large data integration projects.

Best for: using third-party tools is best when you have limited development resources but need a quick implementation. It also allows real-time integration if the tool has that feature.
If you\u2019re on this path, make sure the tools you choose are reliable, easy to use, and with reliable support. Let\u2019s have examples using Skyvia. Method 3: How to Integrate Salesforce and SFTP Using Skyvia Skyvia is a cloud-based data platform that can do simple import and export, complex data flows, automation, and more. It uses a graphical user interface to create integration packages. Two hundred plus reviewers in G2 gave it a [4.8 out of 5 stars](https://www.g2.com/products/skyvia/reviews#reviews) . Ease of use is the most prominent in the reviews. But being easy doesn\u2019t mean it\u2019s lacking flexibility. It offers several ways to integrate 2 or more data connections, including SFTP and Salesforce. From simple to complex, you can find an approach that will suit your needs. This tutorial will start with creating 2 connections. Then, moving on to creating Salesforce SFTP integrations. But before we proceed, [register for free](https://app.skyvia.com/register) . You can follow along if you have your Salesforce and SFTP credentials provided to you. Let\u2019s begin with the 2 connections. How to Create Skyvia Connections Connections refer to your Salesforce and SFTP connections. You can\u2019t make integrations without these connections. So, you need to create these first. To do that, log on to Skyvia and follow these steps: Click the + Create New button from the top of the Skyvia workspace, then click Connection . Select from the list of connectors. There are hundreds of them. So, filter the list by typing \u2018sales\u2018 or \u2018salesforce\u2019 from the filter box. Then, click Salesforce . Fill out the form and sign in to Salesforce to get an OAuth token. Click Test Connection . Once the test is good, you will see a Connection is successful in the upper right . Rename the Untitled connection name with your desired connection name (e.g., Salesforce prod, Salesforce test, etc.). Below is mine, and I named it salesforce-developer . 
- Finally, click the Create Connection button to save your new connection.
- Repeat from the first step, but this time, select the SFTP connector and fill out the form. Have your valid SFTP credentials with you. Below is my successful SFTP connection:

Creating Skyvia connections is simple fill-in-the-blanks. If you're having problems with the credentials, consult your system administrator.

How to Import Data from SFTP to Salesforce Using Skyvia

Skyvia Import is one of the simplest methods to integrate SFTP with Salesforce. It is a straightforward process with the following steps:

- Click + Create New and select Import.
- Once you are on the Import integration page, choose CSV from Storage Service. Then, select the SFTP connection you created earlier as your source.
- Next, select the Salesforce connection you created earlier as your target.
- Click Add New for a new task, then choose the CSV file that you need to import on the next page.
- Choose the Salesforce object and the operation to perform (insert, update, upsert, delete).
- Map the CSV columns to Salesforce object columns. Ensure your CSV columns match the data types of the Salesforce object columns.
- Add more tasks depending on the number of CSV files you want to import, repeating the previous three steps for each.
- Give your Import integration a name and click Create to save it.
- If you want to run this import regularly, create a schedule by clicking Schedule in the upper-left corner of the page. Otherwise, run the Import integration by clicking Run in the upper-right corner of the page.

Below is a sample of setting up a schedule in Skyvia. The schedule will run daily at midnight. That's how to do the import. Once you get used to it, you can create a simple one-task import in Skyvia in under 3 to 5 minutes. That's how easy it is. Below is the result of the above Skyvia Import setup:

How to Export Data from Salesforce to SFTP Using Skyvia

Exporting Salesforce data to SFTP is also easy using Skyvia Export.
Below are the steps:

- Click + Create New and select Export.
- Once you're on the Export page, choose your Salesforce connection as the source.
- Then, select CSV to Storage Service.
- Choose your SFTP connection from the dropdown list. If there's a folder, choose one too; the CSV file will be written there.
- Then, click Add New to create a new task.
- Choose a Salesforce object like Contacts and mark the columns to export. Optionally, set a filter and sort order.
- Click the Next step button and set the CSV filename and compression type. Leave the compression at None if you don't want the result compressed.
- Click Next step, and review the output columns. You can still rename the output columns here if you like. Click Save task.
- If there are more tasks, repeat the task steps with a different CSV file and Salesforce object.
- Name your Skyvia Export and click Create to save your export integration.
- If needed, create an export schedule by clicking Schedule in the top-left corner. The setup is the same as scheduling a Skyvia Import.
- Run your Skyvia Export by clicking Run in the upper-right corner.

Below is the result of the above setup: The export process extracts the Contacts data and writes a CSV file into the MyFolder folder on the SFTP server. If you need to see all this in action with a Skyvia Data Flow example, watch the video below.

Now let's summarize the advantages and disadvantages of using third-party tools for Salesforce and SFTP integration.

Pros

- Ease of use: Drag-and-drop interfaces and pre-configured templates make it easy even for non-technical users. Pre-built connectors, components, and templates significantly reduce setup time compared to custom development.
- Automation: Scheduled data exports/imports reduce manual effort and the errors you could make.
- Scalability: Tools can handle increasing data volumes and complex workflows as your business grows.
- Error Handling: Advanced tools provide logging and error notifications to troubleshoot failed integrations efficiently.

Cons
- Cost: Some tools have hefty subscription fees, which is hard on startups and small businesses. Advanced features like API customization or support for specific file formats may only be available in higher pricing tiers.
- Learning Curve: Unfamiliar interfaces may require initial user training.
- Limited Customization: Some pre-built workflows and templates may not cover every unique business requirement, which means workarounds and additional setup.
- Dependency on Vendor: Relying on a third party means you're at their mercy for support, update schedules, and potential downtime.

Integrate SFTP and Salesforce Using Custom Coding

Custom coding refers to using a programming language to build a tailored solution. This approach lets you build solutions unique to your business that aren't possible with native or third-party tools. You could say this is the most flexible approach to Salesforce SFTP integration. However, it requires deep expertise in Salesforce, SFTP, and the chosen programming language. The following are your options:

- Apex Coding: Salesforce's Apex programming language offers custom coding with deep ties to Salesforce itself.
- Platform Events + SFTP Integration: For real-time, event-driven integration, you can use Apex code to subscribe to events, trigger file transfers, and communicate with the SFTP server.
- Custom Middleware with Node.js or Python: You can develop custom middleware written in Node.js or Python. This middleware handles the SFTP and Salesforce connections, file transfers, and other integration logic. Your Apex code then passes the CSV content to this middleware, which saves it to the SFTP folder.

Best for: Custom coding suits organizations with experienced developers who need complete control over the integration process, beyond what other tools can provide.
Method 4: Sample Code Snippet Using Apex and Node.js

Apex doesn't directly support SFTP, so you need a workaround. Using Node.js, you can build a middleware service that handles the SFTP connection and transfer. Apex then calls the middleware and passes the generated CSV. Here's a snippet for the Apex part of the code:

```apex
public with sharing class ContactSftpExporter {

    // Query Contacts and convert them to CSV.
    // Note: this naive concatenation doesn't quote fields, so values
    // containing commas would need proper CSV escaping in production.
    public static String generateCSV() {
        List<Contact> contacts = [SELECT Id, FirstName, LastName, Email FROM Contact LIMIT 10];
        String csvHeader = 'Id,FirstName,LastName,Email\n';
        String csvBody = '';

        for (Contact c : contacts) {
            csvBody += c.Id + ',' + c.FirstName + ',' + c.LastName + ',' + c.Email + '\n';
        }

        return csvHeader + csvBody;
    }

    public void uploadCSVToSFTP() {
        // Create an HTTP request to the external web service
        String csvContent = generateCSV();
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://your-external-service.com/upload');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');

        // Prepare the data to send (the CSV content, URL-encoded to keep the JSON valid)
        String requestBody = '{"csvData": "' + EncodingUtil.urlEncode(csvContent, 'UTF-8') + '"}';
        req.setBody(requestBody);

        // Send the request
        Http http = new Http();
        HttpResponse res = http.send(req);

        // Process the response (optional)
        if (res.getStatusCode() == 200) {
            System.debug('File uploaded successfully to SFTP.');
        } else {
            System.debug('Failed to upload file: ' + res.getBody());
        }
    }
}
```

It queries Salesforce Contact records and builds the CSV. Replace the URL in req.setEndpoint to point to the Node.js web service. Below is the sample Node.js code to receive it. You need the ssh2 library and the Express framework.
```javascript
const express = require('express');
const { Client } = require('ssh2');
const app = express();
const port = 3000;

app.use(express.json());

app.post('/upload', (req, res) => {
  // The Apex side URL-encoded the CSV content; decode it before writing
  // ('+' represents a space in form-style encoding).
  const csvData = decodeURIComponent(req.body.csvData.replace(/\+/g, ' '));

  const sftpClient = new Client();
  sftpClient.on('ready', () => {
    sftpClient.sftp((err, sftp) => {
      if (err) return res.status(500).send('SFTP connection failed.');

      // Upload the file to the SFTP server
      const writeStream = sftp.createWriteStream('/path/to/remote/contact.csv');
      writeStream.on('close', () => {
        sftpClient.end();
        res.status(200).send('File uploaded successfully!');
      });
      writeStream.write(csvData);
      writeStream.end();
    });
  }).connect({
    host: 'your-sftp-server.com',
    port: 22,
    username: 'your-sftp-username',
    password: 'your-sftp-password'
  });
});

app.listen(port, () => {
  console.log(`SFTP middleware service listening at http://localhost:${port}`);
});
```

Replace /path/to/remote/contact.csv with the actual SFTP folder path. Then, replace the host, username, and password with your SFTP credentials. Ask your system administrator for these details.

Pros
- Full control over the integration process.
- Tailored solutions for unique business needs.
- Scalable and extendable for future use cases.

Cons
- Requires advanced development skills and familiarity with Salesforce's Apex programming, Platform Events, or another programming language (e.g., Python).
- Initial development can be time-consuming.
- Ongoing maintenance and updates are necessary to keep the integration secure and compatible with Salesforce updates.
- Overkill for simple integrations.

Use Cases for SFTP Integration with Salesforce

SFTP has some interesting use cases when integrated with Salesforce. Consider the following:

- Data Migration. When your company is moving from a legacy CRM system built in-house to Salesforce.
For example, a company transitioning from an on-premise CRM to Salesforce can use SFTP to transfer bulk customer records securely.
- Data Backup and Recovery. When you want to maintain your own backup by exporting your Salesforce org data. For example, a business can set up automated nightly backups of its Salesforce data to an SFTP server as a fail-safe.
- Document Management. You can store documents related to your Salesforce records securely on SFTP. For example, a real estate company managing property documents can use SFTP to store sensitive files, then access them securely through Salesforce.
- Automated Data Sync. You can synchronize your Salesforce data with another system regularly by storing it on an SFTP server, from which the external system accesses the files securely. For example, an e-commerce company can use SFTP to update Salesforce with inventory data from its ERP system nightly.
- Regulatory Compliance. This is the scenario presented at the beginning. SFTP's secure file management can help with data security compliance. For example, a healthcare provider can use SFTP to transfer patient data securely, ensuring compliance with healthcare data protection regulations.

Conclusion

To recap our discussion, we described what SFTP is and how it differs from FTP. We also discussed why SFTP is a good location for sharing data between organizations. Then, we presented the use cases and, finally, the methods, namely native, third-party, and custom coding, for integrating Salesforce and SFTP. You can choose whichever method best addresses your scenario.

FAQ for Salesforce SFTP Integration

What are the main differences between FTP and SFTP?
FTP transfers files in plain text, making it insecure, while SFTP uses encryption for secure file transfers.

Why should I choose SFTP over FTP for file transfers in Salesforce?
SFTP ensures data security with encryption, meeting compliance requirements and safeguarding sensitive business information.
How can I ensure data integrity during SFTP transfers in Salesforce?
Use file checksums or hashes (e.g., MD5, SHA-256) to verify file integrity before and after transfer.

How can I monitor and troubleshoot SFTP connections in Salesforce?
Implement logging for file transfer events and errors, and use Salesforce tools like Debug Logs and Email Alerts for monitoring.

What are the compliance considerations when integrating SFTP with Salesforce?
Ensure encryption standards align with GDPR, HIPAA, or other applicable regulations, and audit data access regularly.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/)
Software developer and project manager with 20+ years of software development experience. His recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
How to Connect Microsoft SQL Server to Google Sheets
By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/), May 13, 2025

Imagine a fast-growing retail company relying on Google Sheets for sales tracking and SQL Server for inventory management. The sales team manually exports orders from a spreadsheet, sends them to operations, and waits for someone to upload the data into a database. It takes hours to update inventory and leads to data discrepancies, duplicates, and missed opportunities. In today's fast-paced business environment, such inefficiencies are costly and unsustainable. There is a remedy, though: connect Google Sheets to SQL Server. It will help you implement fast data flow and automate collaboration.
This integration transforms disconnected workflows into a unified, efficient, and data-driven operation. This article explores the ways to connect SQL Server to Google Sheets, from simple manual data transfer to fully automated integration provided by dedicated data platforms.

Table of Contents
- Why Connect SQL Server to Google Sheets?
- Methods for Moving Data from SQL Server to Google Sheets
- Method 1. The Manual Way: CSV Export/Import
- Method 2. The Developer Way: Scripting
- Method 3. No-Code/Low-Code Integration Platforms
- Method 4. Alternative No-Code Option: Skyvia Google Sheets Add-on
- Best Practices and Considerations
- Conclusion

Why Connect SQL Server to Google Sheets?

When businesses connect SQL Server and Google Sheets, a range of powerful improvements becomes available across teams and workflows.

Real-Time Reporting and Analytics

One of the most significant benefits is eliminating manual exports and outdated reports. Businesses finally work from a [single source of truth](https://skyvia.com/learn/single-source-of-true).
- Sales teams can see live order numbers without waiting for weekly spreadsheets.
- Finance teams can track revenue and cash flow in real time.
- Executives can make faster, better-informed decisions based on the latest numbers.

Free Collaboration Across Teams

Different teams work with live, reliable data, no SQL skills required.
- Marketing teams can pull customer lists, product catalogs, or inventory levels straight into Google Sheets without directly accessing the SQL database.
- Support teams can update customer case notes or order statuses in Google Sheets, with changes automatically syncing back to SQL Server.

Automated Data Entry

Field employees like sales reps, service staff, and event teams need simple ways to capture information fast. This saves time and improves data accuracy.
- Sales reps can update service records from their phones using a simple spreadsheet, which instantly sends data to the main database.
- Operations teams can set up automatic alerts if stock levels drop below a safe threshold.

Logical Flows and Data Validation

Beyond basic syncing, integration can help you implement more complex flows for your data.
- New customer signups can be checked for missing data or errors inside the spreadsheet.
- Updated pricing lists can be verified collaboratively before going live.

Backup and Auditing

Scheduled syncs between SQL Server and Google Sheets can help track user activity and provide:
- Lightweight backup snapshots for critical datasets.
- Audit trails showing who changed what and when.

Methods for Moving Data from SQL Server to Google Sheets

There are several ways to connect Google Sheets to SQL Server:
- Manual export from the SQL Server database and manual import to Google Sheets.
- Custom scripting for developers.
- No-code/low-code integration platforms.
- The Skyvia Google Sheets Add-on.

We review and compare these common methods below.

| Method | Ease of Setup | Flexibility | Maintenance | Best For |
|---|---|---|---|---|
| Manual Export/Import | Easy | Low | High | One-time tasks or simple use cases |
| Custom Scripting | Complex | High | High | Advanced use cases and complex logic |
| No-Code/Low-Code Platforms | Easy | Moderate | Low | Fast setup of easy or complex use cases |
| Skyvia Google Sheets Add-on | Easy | Moderate | Low | Google Sheets users who need quick sync |

Let's look at each method in more detail.

Method 1. The Manual Way: CSV Export/Import

You export data from SQL Server using SQL Server Management Studio or another tool into a CSV file, then manually import this file into Google Sheets.

Best For

Small businesses or individuals with one-time tasks where automation and customization aren't required.

Method Pros
- No setup or tools required.
- Simple and accessible for non-technical users.

Method Cons
- No automation or customization support.
- No real-time data; static snapshots only.
Below, we show how to export query results from an existing table in SQL Server and import the exported file into an existing spreadsheet in Google Sheets.

Step 1. Export Data from SQL Server
1. Compose a query that selects the required data for export and execute it.
2. Right-click the query results and select Save Result As.
3. Specify the file name and click Save.

Step 2. Import the CSV File to Google Sheets
1. Go to Google Sheets and click File -> Import.
2. Select or drag the CSV file.
3. Select the import location and separator type, and click Import data.

As a result, you get your data imported to Google Sheets manually in a few clicks.

Method 2. The Developer Way: Scripting

You can build a fully custom integration using [Google Apps Script](https://developers.google.com/apps-script/guides/sheets) to connect to SQL Server via JDBC or a REST API. It lets users build custom apps to query and write data between the services.

Best For

Tech-savvy teams or businesses with complex workflows or advanced logic.

Method Pros
- Total control over the integration.
- Highly customizable.
- Can be fully automated.

Method Cons
- Requires strong coding skills.
- Needs ongoing maintenance and ownership.
- Implementation complexity.

Step 1. Prepare Your SQL Server for External Connections
1. Enable TCP/IP connections on the SQL Server.
2. Make sure your SQL Server accepts remote connections.
3. Open the necessary firewall ports.
4. Ensure the server is accessible from Google's [IP address range](https://www.gstatic.com/ipranges/goog.json).

Step 2. Create a Google Sheet
1. Open Google Sheets.
2. Go to Extensions → Apps Script.

Step 3. Write Google Apps Script Code
1. In the Apps Script editor, write a script using the JDBC service to connect.
2. Use the [available functions and methods](https://developers.google.com/apps-script/reference/jdbc) to manipulate data.
3. Run the script and check View → Logs for results.

For example, the script below establishes a connection to SQL Server.
```javascript
function testSQLConnection() {
  var url = 'jdbc:sqlserver://<server>:1433;databaseName=<database>';
  var user = '<username>';
  var userPwd = '<password>';
  try {
    var conn = Jdbc.getConnection(url, user, userPwd);
    Logger.log('Connection successful!');
    conn.close();
  } catch (err) {
    Logger.log('Connection failed: ' + err);
  }
}
```

To use this script for your case, replace `<server>`, `<database>`, `<username>`, and `<password>` with your real connection details.

Method 3. No-Code/Low-Code Integration Platforms

Cloud [data integration platforms](https://www.g2.com/search?utf8=%E2%9C%93&query=data+integration&filters%5Bstar_rating%5D%5B%5D=5) like Skyvia, Zapier, Integrate.io, and others offer user-friendly interfaces and prebuilt connectors to link SQL Server and Google Sheets with minimal effort.

Best For

Companies of all sizes, from small startups to enterprises, that need frequent syncs for advanced scenarios without coding.

Method Pros
- Fast and easy to set up without coding.
- Flexible scheduling and data transformation options.
- Integration maintenance and support.

Method Cons
- Costs can grow with usage or scale.
- Less control over advanced logic.
- May have data volume limits.

Let's look at how to do that using [Skyvia](https://skyvia.com/), a no-code cloud data platform. In this example, we demonstrate copying data from a Google Sheets table to SQL Server with [Replication](https://skyvia.com/data-integration/replication). Skyvia requires no installation; only a web browser is needed. You can get a two-week trial or use a free plan. Register in Skyvia now and explore its solutions for your data-related tasks.

Step 1. Create a Connection to Google Sheets
1. Go to Skyvia and click +Create New -> Connection.
2. Type Google Sheets or select it from the Spreadsheet category.
3. Sign in with Google, select the desired spreadsheet, and save your connection.

Step 2. Create a Connection to SQL Server

If your server is available for external connections, use a direct connection.
If your server is protected from external connections, you need an [Agent application](https://skyvia.com/agent) to allow Skyvia to connect to a remote SQL Server. Let's create an Agent connection.
1. Click +Create New -> Agent.
2. Download the Agent application and follow the installation instructions on the page.
3. Run the installed agent.
4. Create a new connection and select SQL Server.
5. Set the Agent connection mode and select the agent you created earlier from the dropdown list.
6. Enter the server's name, user ID, password, and database name.

Step 3. Create the Integration
1. Create a new integration from the menu and select Replication.
2. Set the Google Sheets connection as the Source and the SQL Server connection as the Target.
3. Select one or more sheets to replicate.
4. Click Edit to configure replication settings for a specific sheet. Here you can choose specific columns, set filters, or apply data hashing.
5. Save the task and the integration.

Step 4. Run the Integration and Check the Results
1. Set the schedule to run automatically or launch the integration manually. You can set the schedule to run on specific days or weekdays, or every few hours or minutes.
2. Check the integration results on the Monitor and Logs tabs.

Method 4. Alternative No-Code Option: Skyvia Google Sheets Add-on

Skyvia offers one more option, specifically for Google Sheets users. This method allows users to query and refresh data from SQL Server directly within a spreadsheet. You can compose queries without SQL knowledge using a visual query builder, use prebuilt queries from the gallery, or write custom queries against your data.

Best For

Users who prefer working inside Google Sheets but need direct access to SQL Server data.

Method Pros
- Works entirely within Google Sheets.
- Support for various data sources.
- Automated data refresh.
- Visual query builder.
- Supports both import and export of data.

Method Cons
- Requires initial setup.
- May not support complex use cases.
Note: You need an active Skyvia account and valid connections to Google Sheets and SQL Server. See how to create them above. Look how easy integration can be: create your Skyvia account now and start syncing your data today.

To query data using the visual query builder, do the following.

Step 1. Install the Skyvia Add-on
1. Open Google Sheets and click Extensions -> Add-ons -> Get add-ons.
2. Type Skyvia and search for Skyvia Query.
3. Click Install -> Continue to launch the add-on installation.
4. Sign in with Google and give Skyvia permission to process your data.
5. Return to Extensions, click Skyvia Query, and click Login.
6. Sign in with Skyvia.

Step 2. Query Data Using the Skyvia Google Sheets Add-on
1. Click Extensions -> Skyvia Query -> Query.
2. Select the workspace and choose the source connection.
3. Define the object you want to query and set filters if needed.
4. Click Run.

Your query results will be displayed directly in Google Sheets. If you need to refresh the results, you don't have to rebuild the query; just refresh it from the add-on menu.

Best Practices and Considerations

A reliable integration between Google Sheets and SQL Server depends on careful planning across several key areas. Let's break down what you should keep in mind before you dive in.

Data Security

Whenever you connect a cloud tool like Google Sheets to a database, security should be top of mind. You're dealing with sensitive business data, and the last thing you want is to open up vulnerabilities. With third-party platforms like Skyvia, you can breathe a little easier: they use strong encryption and secure authentication methods and comply with industry standards like GDPR. Still, it's good practice on your end to limit permissions, use strong passwords or keys, and regularly review access rights.

Data Volume

When choosing a method, always consider how much data you are moving and whether your chosen approach can handle it reliably.
Manual CSV exports might work fine for a few hundred rows, but once you're dealing with large datasets, you need a tool designed for the job. [Data integration platforms](https://skyvia.com/blog/data-integration-tools/) and custom scripting are particularly strong here: both can handle large data volumes efficiently, moving big chunks of data without choking your system or hitting limits.

Frequency

How often do you really need your data updated? Some businesses can live with daily updates, while others (like sales or inventory teams) might need near real-time syncing. With custom scripts, you can schedule triggers manually, but no-code platforms let you set flexible, automatic schedules, from hourly refreshes to once-per-minute syncs, ensuring the data is always as fresh as your business demands.

Data Mapping

One often-overlooked detail is data type compatibility. Think about matching fields: date formats, number precision, or text encoding. Without careful mapping or transformations, you might end up with broken formulas or corrupted records. Third-party integration tools offer built-in features to map and transform data between systems, helping ensure accurate alignment. Even if you're handling this manually or via scripts, take the time to define mappings clearly and test thoroughly before going live.

Error Handling

The key to smooth integration is knowing what happens when something goes wrong. Will you be alerted if a sync fails? How will you trace the issue and correct it? Data platforms often offer detailed logging and alerting, so you'll know immediately if something breaks and can quickly diagnose the problem. If you're using scripts or manual methods, make sure to build in error capture, retries, and logging to avoid silent failures that go unnoticed.

Conclusion

Integrating Google Sheets with SQL Server transforms the way your business works with data.
From one-time exports to fully automated pipelines, there's no one-size-fits-all method. The best approach depends on your goals, technical resources, data volume, and how frequently your data changes. Manual exports might work for small, occasional tasks, while custom scripting offers ultimate flexibility for teams with in-house development power. No-code and low-code platforms fill the middle ground, and tools like Skyvia go a step further, offering a secure, scalable, and user-friendly way to sync your data.

Whichever method you choose, focus on the fundamentals: security, reliability, smart scheduling, and clean data mapping. Done carefully, your Google Sheets to SQL Server integration becomes a core part of your business improvement. Start exploring your integration options today; Skyvia is a great place to start.

F.A.Q. for Google Sheets to SQL Server

Can I sync data from Google Sheets back to SQL Server?
Yes. You can export the spreadsheet as a CSV file and [load the CSV file to SQL Server](https://skyvia.com/blog/import-csv-file-to-sql-server/), use third-party tools to do it automatically, or compose a custom script.

Is connecting my SQL Server to a cloud tool secure?
Yes. Data platforms like Skyvia, Zapier, Fivetran, and others use encryption and secure authentication, and comply with industry standards to protect your data.

Can Skyvia handle custom SQL queries as a source?
Yes. Skyvia lets you write custom SQL queries to pull exactly the data you need from your SQL Server database.

How often can I schedule the data sync?
Third-party data platforms support flexible scheduling, from hourly syncs to daily or weekly runs, depending on your business needs and plan.
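The FAQ above notes that you can compose a custom script to push spreadsheet data back to SQL Server. In Apps Script, that would use the same Jdbc service shown earlier; the reusable piece is turning a sheet's 2-D value range (header row first) into a parameterized INSERT statement plus its parameter rows. A minimal sketch (the `buildInsert` function and the `Customers` table are illustrative names, not from the article):

```javascript
// Convert a sheet-style 2-D array (first row = column headers) into
// an INSERT statement with placeholders, plus one parameter set per
// data row. Placeholders avoid SQL injection from cell values.
function buildInsert(tableName, values) {
  const headers = values[0];
  const placeholders = headers.map(() => '?').join(', ');
  return {
    sql: `INSERT INTO ${tableName} (${headers.join(', ')}) VALUES (${placeholders})`,
    rows: values.slice(1),
  };
}
```

In Apps Script, you would read `values` with `sheet.getDataRange().getValues()`, then bind each row to a Jdbc prepared statement and execute it in a batch.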
[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)
Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Before Skyvia, Olena held HR and IT roles at global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and the communication skills to create engaging, accessible content. Drawing on this diverse professional background, Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
SQL Server to SQL Server: Migration & Replication Guide
By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), April 29, 2025

Have you ever felt stuck on which method to use for your SQL Server to SQL Server integration? Perhaps you need to migrate to the latest SQL Server version. Or you just need a copy for testing or debugging purposes. But you're new to this kind of thing. So, you sat down to work and clicked Start in Windows. There's SSMS, and there's Visual Studio. Which method and tool is right for you? Staring at your screen won't help, but this article can. You will learn 7 methods and the scenarios each is best for. And don't worry: you've got this. Choosing will be a piece of cake.
Here's what we will discuss:

Table of Contents
- Why Move Data Between SQL Servers? (Common Use Cases)
- Key Considerations Before You Start a SQL Server to SQL Server Data Movement
- 7 Methods for Moving Data Between SQL Servers
- Method 1: Backup and Restore
- Method 2: SQL Server Replication
- Method 3: SQL Server Integration Services (SSIS)
- Method 4: Import and Export Wizard in SSMS
- Method 5: Using BCP and SQLCMD
- Method 6: Linked Servers
- Method 7: Third-Party Tools
- Best Practices for SQL Server Data Movement
- Conclusion

Roll up your sleeves and let's dive in.

Why Move Data Between SQL Servers? (Common Use Cases)

There's more than one reason to shift data between SQL Servers, from keeping your setup modern to making sure your team isn't working in production by accident. Let's break down the most common situations where data movement makes sense.

1. Migration & Modernization
- Server Upgrades: Older versions of SQL Server eventually fall out of support, so you need a SQL Server upgrade. Moving to a newer version keeps you secure, fast, and compatible with newer tools. Example: Migrating from SQL Server 2016 to SQL Server 2022 to take advantage of Query Store hints.
- Cloud Adoption: Many teams are moving from on-premises servers to the cloud to reduce hardware costs and improve scalability. Azure SQL Managed Instance is a popular target. Example: Moving a legacy ERP database from an on-prem server to Azure SQL MI.
- Server Consolidation: Managing too many SQL Servers can get messy. Database consolidation reduces maintenance and licensing overhead. Example: Combining five underused SQL instances into one high-capacity SQL Server 2022 environment.

2. Development & Testing
- Environment Setup: Dev and QA teams need safe environments to test code and schema changes. Cloning data from production avoids accidents. Example: Restoring a recent prod backup to a staging server before a major deployment.

3. Performance & Load Optimization
- Offloading Reporting or Analytics: Running heavy queries on production can slow down real-time users. Moving data to a separate reporting database helps balance the load. Example: Copying nightly transactional data to a reporting server used by Power BI.
- Load Balancing: Distributing reads across multiple servers keeps things snappy and improves user experience during high traffic. Example: Using replication to sync product data to a read-only SQL Server for your eCommerce app.

4. Availability & Recovery
- High Availability and Disaster Recovery (HA/DR): You need a plan for when something breaks. Syncing data to a standby server gives you a quick way to recover. Example: Using log shipping to keep a standby server ready in case the primary goes down.
- Data Distribution: Sometimes you need to send data to remote offices or partners. Replication helps push changes where they're needed. Example: Using transactional replication to send customer order data to a remote warehouse system.

Key Considerations Before You Start a SQL Server to SQL Server Data Movement

You can't just dive in and move data. A little planning now can save hours of cleanup later. Here's what you need to think through before making a move.

Planning and Scope
- Planning is Crucial: Start with a clear goal. What are you moving, and why? The method you choose depends on the answer. Example: Backup and restore might be a good bet for a one-time migration, but go with replication for ongoing sync.
- Schema vs. Data: Sometimes you just need the table structure. Other times, it's the actual data, or both. Example: When setting up a dev environment, you may copy only the schema without production data.
- Objects to Move: It's not always just tables. Think about stored procedures, views, logins, Agent jobs, and more. Example: Migrating a reporting database might mean exporting views and linked server logins, too.
### Performance and Logistics

- **Downtime Tolerance**: Can the app go offline during the move? If not, you'll need a near-zero-downtime approach. *Example: Transactional replication is a good call to avoid downtime for your customer-facing apps.*
- **Data Volume**: Is your database too fat? Moving it might need advanced tools. *Example: A 10GB database can go via [BACPAC](https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/data-tier-applications/overview?view=sql-server-ver16#bacpac-operations), but a 2TB one might need [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) or a third-party tool with chunked loads.*
- **Network Bandwidth**: Slow or unstable networks turn data movement into slow motion, a sure headache, especially with cloud or remote servers. *Example: Syncing data to Azure over a weak VPN can crawl without proper compression or staging.*
- **SQL Server Versions & Editions**: Some features work only on certain versions. Make sure the source and target match or are compatible. *Example: You can restore a SQL 2016 backup to SQL 2022, but not the other way around.*

### Data Handling and Workflow

- **Need for Transformation (ETL)?** Do you need to clean, reshape, or map data during the move? ETL adds complexity, but it might be necessary. *Example: You might need to convert text fields into date fields when moving old CRM data.*
- **Ongoing Synchronization vs. One-Time Move**: Is this a one-time shift, or will you need continuous updates? Your approach depends on that. *Example: A dev server just needs a one-time restore. A BI dashboard needs nightly updates via a third-party tool like Skyvia.*

### Security and Reliability

- **Security**: Don't forget logins, roles, and encryption. Data access needs to work on the target server too. *Example: Migrating logins with mismatched SIDs can break app authentication if you don't fix them.*
- **Testing and Validation**: Did the data reach the target? That's not the end of the story. Always double-check that the move worked. Run checks on row counts, referential integrity, and app behavior. *Example: After a large migration, run [CHECKSUM_AGG](https://learn.microsoft.com/en-us/sql/t-sql/functions/checksum-agg-transact-sql?view=sql-server-ver16) on key tables to compare source and target.*

### SQL Server to SQL Server Pre-Migration Checklist

Before you hit "Go," make sure you've ticked these off:

- **Defined the goal**: one-time move or ongoing sync?
- **Chosen what to move**: schema, data, or both?
- **Listed all needed objects**: tables, views, procs, logins, Agent jobs.
- **Checked SQL Server version compatibility.**
- **Estimated the downtime** and planned for it.
- **Measured data volume** and picked the right method.
- **Verified network bandwidth**, especially for remote/cloud moves.
- **Identified any ETL or transformation needs.**
- **Reviewed security settings**, permissions, and encryption.
- **Planned testing and validation** after the move.

## 7 Methods for Moving Data Between SQL Servers

Moving data between SQL Servers isn't a one-size-fits-all deal. It's more like picking the right tool from a toolbox. Let's open it up and see what works best for your job.
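Whichever method you pick, the Testing and Validation step described above can be sketched in T-SQL. This is a minimal sketch with a hypothetical table name (`dbo.Orders`); run the same query on the source and the target and compare the two result rows:

```sql
-- Run on both source and target, then compare the results.
-- CHECKSUM_AGG over BINARY_CHECKSUM is order-insensitive, but it can miss
-- some changes (and BINARY_CHECKSUM skips noncomparable column types),
-- so treat a match as a strong hint, not absolute proof.
SELECT
    COUNT(*)                         AS RowCnt,
    CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS TableChecksum
FROM dbo.Orders;
```

If the row counts match but the checksums differ, drill down with a column-level comparison (for example, `EXCEPT` queries over a linked server) to find the mismatched rows.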
But before that, below is a comparison table for the 7 methods:

| Method | Best For | Skill Level Required | Customizability |
| --- | --- | --- | --- |
| Backup and Restore | Full database migrations, upgrades | Beginner to Intermediate | Low |
| SQL Server Replication | Real-time sync, reporting, data distribution | Advanced | High |
| SQL Server Integration Services (SSIS) | Complex ETL, data cleansing, automation | Intermediate to Advanced | Very High |
| Import/Export Wizard in SSMS | Quick one-time moves, small datasets | Beginner | Low |
| Scripting with BCP and SQLCMD | Moving large tables, automation via scripts | Intermediate | Medium to High |
| Linked Servers | Ad hoc queries, cross-server joins, sync tasks | Intermediate | Medium |
| Third-Party Tools (Skyvia) | UI-based migration, monitoring, hybrid scenarios | Beginner to Intermediate | Medium to High |

**Quick Highlights**

- **SSIS** offers the most flexibility for complex transformations and automation. It has a Script component that allows skilled coders to embed C# code in the integration package.
- **Import/Export Wizard** is great for beginners, but limited in control and features.
- **Replication** and **Linked Servers** support ongoing data sync, but need deeper SQL know-how.
- **Third-party tools** like Skyvia, Stitch, and others offer the best options for beginners and experts alike.

## Method 1: Backup and Restore

This is the old reliable. You take a snapshot of your database (backup), then bring it back to life somewhere else (restore). It works great when you're moving everything as-is.

**Best For**

- Full database migrations.
- Upgrading from one server version to another (e.g., SQL Server 2016 to SQL Server 2022). *Note: After restoring a database from an older version, change the compatibility level to take full advantage of the new SQL Server features.*
- Moving databases to a new environment (like test or staging).

**Step-by-Step Guide**

Migration involves two parts: backup and restore. Follow the steps for each part below using SQL Server Management Studio (SSMS).

**Part 1: Back Up the Database**

1. Open SSMS.
2. Right-click the source database, then select **Tasks > Back Up…**
3. Choose **Full** as the backup type.
4. Set the destination path for the .bak file.
5. Click **OK** to start the backup.

**Part 2: Restore the Database**

1. Copy the .bak file to the target server.
2. Open SSMS on the target server.
3. Right-click the target database, then select **Tasks > Restore > Database…**
4. Select **Device** as the source, click the **…** button, and add your .bak file.
5. Choose the target database from the dropdown list and set the restore options.
6. Click **OK** to complete the restore.

**Pros & Cons**

Like everything else, this has advantages and disadvantages:

Pros:

- Simple and reliable.
- Great for full migrations.
- Works across versions (upgrades).

Cons:

- Not for partial moves (e.g., specific tables only).
- Downtime is needed during backup and restore.
- Can be slow for large databases.
- Permissions and logins are not carried over; they require manual recreation on the target server.

## Method 2: SQL Server Replication

Replication is syncing your data across multiple SQL Servers. It's great for cloning data to different servers or maintaining copies of it. It's always on standby for changes, so you don't need to worry about manually moving data once it's running.

**SQL Server Replication Types**

Replication comes in different flavors:

- **Transactional Replication**: Ideal for real-time, one-way replication of transactional data.
  - *Use case*: When you need to replicate changes from one Publisher to multiple Subscribers without delays.
  - *Key feature*: Data is replicated immediately as it changes in the source database.
- **Snapshot Replication**: Best for scenarios where data doesn't change frequently, or you can afford not-so-fresh data from time to time.
  - *Use case*: Replicating static data or data that doesn't change often (like reports).
  - *Key feature*: A snapshot of the entire dataset is taken and replicated at specified intervals.
- **Merge Replication**: Good for environments where data can be modified at multiple nodes.
  - *Use case*: Ideal for mobile applications or remote offices where each node can both modify and receive data.
  - *Key feature*: Supports bi-directional replication and allows updates at both the Publisher and the Subscriber.
- **Peer-to-Peer Transactional Replication**: Ideal for High Availability (HA), distributed systems, and environments where all nodes should have the same data.
  - *Use case*: Perfect for geographically distributed data centers, or when you need bidirectional data flow between multiple nodes.
  - *Key feature*: Each server (node) acts as both Publisher and Subscriber, allowing changes from any node to be replicated across all nodes in the system.

**Best For**

- **Real-time data distribution**: Keeping multiple copies of your data synchronized.
- **High Availability (HA)**: Replicating data to standby servers for failover.
- **Data distribution across locations**: Distributing data to remote locations.
- **Multiple active copies**: Merge replication allows multiple servers to write to the same data.

**Step-by-Step Guide**

Replication involves setup and management, each with steps of its own.

**Part 1: Setting Up Replication**

1. Open SQL Server Management Studio (SSMS).
2. In the **Object Explorer**, right-click **Replication > Configure Distribution**.
3. Set up the Publisher (the source server): in the **Object Explorer**, expand the **Replication** folder, right-click **Local Publications**, and select **New Publication**.
4. Follow the wizard to set up the Publisher, including the replication type (Snapshot, Transactional, etc.).
5. Configure Subscriptions (target servers that will receive data).
6. In the **Object Explorer**, expand the **Replication** folder, right-click **Local Subscriptions**, and select **New Subscription**.
7. Follow the wizard to set up the Subscriber.

**Part 2: Managing Replication**

1. Monitor replication status via SSMS and the Replication Monitor: in the **Object Explorer**, right-click **Replication** and select **Launch Replication Monitor**.
2. Review the Replication Monitor to track the flow of data.
3. Troubleshoot any issues with the Replication Agents and logs.

**Pros & Cons**

SQL Server replication also has a good and a bad side:

Pros:

- Real-time or near-real-time data movement.
- Keeps multiple servers in sync without manual intervention.
- Excellent for high availability and disaster recovery.

Cons:

- Complex setup, especially for Transactional and Merge replication.
- Can be resource-intensive (especially with large datasets).
- Requires ongoing monitoring to ensure replication is working smoothly.
- Potential for replication conflicts in Merge replication.
- Available on the Windows operating system only.

## Method 3: SQL Server Integration Services (SSIS)

When you need serious control over how data moves and changes along the way, SSIS is your toolbox. It's a full-blown ETL (Extract, Transform, Load) tool built right into SQL Server. You design packages, and it handles everything from pulling data to reshaping it before it lands on the target.

**Best For**

- Complex data flows with transformations.
- Cleaning, merging, or splitting data mid-transfer.
- One-time migrations or recurring automated moves.
- Moving between different databases or servers.
- Upgrading while converting schema/data to match new standards.

*Example: Moving customer data from multiple regional servers into one clean structure in a central database.*

**Step-by-Step Guide**

The steps below apply to simple integrations. One Data Flow Task is good for one SQL Server database table. Repeat the steps for multiple tables.
Then, arrange the Data Flow Tasks in a Control Flow to define which task(s) run first, second, and so on.

1. Open SQL Server Data Tools (SSDT) or Visual Studio with the SSIS extension.
2. Create a new **Integration Services Project**.
3. Drag a **Data Flow Task** into the control flow.
4. Double-click it, then add a **Source** (e.g., an OLE DB Source pointing to the old server).
5. Add a **Destination** (e.g., an OLE DB Destination pointing to the new server).
6. Optionally insert transformations like **Derived Column**, **Data Conversion**, or **Lookup** between them.
7. Save and deploy the package to SSISDB, or run it directly.
8. Schedule the package using SQL Server Agent if needed.

**Pros & Cons**

SSIS also has advantages and disadvantages.

Pros:

- Highly customizable for complex scenarios.
- Built-in support for transformations, loops, conditions, and scripting.
- Integrates with other sources (CSV, Excel, Oracle, etc.).
- Automatable and reusable.

Cons:

- Requires Visual Studio and some development effort.
- Can get complex for large projects with many transformations.
- Limited native support for non-Microsoft cloud services (like AWS RDS or GCP SQL); requires extra setup or custom connectors.
- May run slower over long distances or if cloud firewalls/networks aren't properly configured.
- Designing and configuring integration packages requires Windows and Visual Studio; there's no equivalent on other platforms like Linux and Mac.

## Method 4: Import and Export Wizard in SSMS

Sometimes, you just want to move data fast, without diving into code or packages. That's where the Import/Export Wizard in SSMS comes in. It's a simple, guided tool to copy data between SQL Servers in just a few clicks.

**Best For**

- Quick, one-time moves of specific tables or views.
- Moving data between different SQL Server instances, local or remote.
- Developers or DBAs who need to transfer sample data or prep a test environment.
**Step-by-Step Guide (Importing or Exporting Data)**

1. Open SQL Server Management Studio (SSMS).
2. Right-click the source database > **Tasks > Export Data** (or **Import Data** on the target).
3. Choose the source and destination data sources.
4. Select whether to copy data from tables/views or write a custom query.
5. Choose your tables, then map columns and data types if needed.
6. Select whether to run immediately or save the job as an SSIS package.
7. Review your choices, click **Finish**, and watch the wizard do the work.

**Pros & Cons**

The Import and Export Wizard is a simple tool with pros and cons.

Pros:

- Built into SSMS; no extra setup needed.
- Simple, visual step-by-step process.
- Great for quick wins or testing.

Cons:

- Doesn't move schema objects like stored procedures or triggers.
- No fine-grained control over transformations or logic.
- Can be tricky with large datasets or special data types.
- Logins and permissions are not handled and must be set up manually.
- Works on Windows and SSMS only.

## Method 5: Using BCP and SQLCMD

Need something lightweight and scriptable? [BCP (Bulk Copy Program)](https://learn.microsoft.com/en-us/sql/tools/bcp-utility) and [SQLCMD](https://learn.microsoft.com/en-us/sql/tools/sqlcmd/sqlcmd-utility) are command-line tools that get the job done without the bells and whistles. Perfect when you want speed, control, and automation.
Here's a comparison between the two:

| Feature | BCP (Bulk Copy Program) | SQLCMD |
| --- | --- | --- |
| Main Purpose | Export/import table data | Run T-SQL scripts and commands |
| Schema Support | No (data only) | Yes (can run full schema scripts) |
| Data Movement | Excellent for bulk table data | Limited to what's in the script |
| Command Style | Command-line utility focused on data transfer | Command-line utility focused on SQL scripting |
| Transformations | Not supported | Only basic logic via T-SQL in scripts |
| Best For | Large flat table exports/imports | Schema setup, scripting logic, automation |
| Output Format | .bcp, .csv (with switches) | .sql, .txt, result sets |
| Learning Curve | Medium | Low (if familiar with T-SQL) |
| Windows/Linux Support | Yes | Yes |

**Quick Highlights**

- Use **BCP** when moving data, especially large tables.
- Use **SQLCMD** when applying a schema or running SQL scripts.
- Combine both to move schema + data in scripted migrations.

**Best For**

- Moving large tables quickly without using SSMS or Visual Studio.
- Automating exports/imports using batch or PowerShell scripts.
- Environments where GUI tools aren't available (e.g., headless servers).

**Step-by-Step Guide**

We will cover the syntax for export and import.

**Export & Import a Single Table Using BCP**

1. Export the table to a file from the source server. The following bcp command creates a .bcp file for the employee data, as indicated by `out`. It connects to SourceServer (`-S`) using a trusted connection (`-T`) and performs the bulk copy using Unicode characters (`-w`):

```
bcp SourceDB.dbo.Employees out Employees_data.bcp -S SourceServer -T -w
```

2. Import into the target server. The following bcp command imports the employee .bcp file created earlier into TargetServer (`-S`) using a trusted connection (`-T`). The `in` keyword tells it to load the data into the Employees table:

```
bcp TargetDB.dbo.Employees in Employees_data.bcp -S TargetServer -T -w
```

**Export Schema Using SSMS + All Data Using SQLCMD + BCP**

Use a mix of BCP and SQLCMD to create a copy of the source database on the target server: script the schema with SSMS, export the Customers and Orders table data into .bcp files, run the generated script on the target server with SQLCMD, and finally import the data from the .bcp files.

1. Script out the schema (tables, procedures, etc.) using SSMS. You can generate a script of the entire SQL Server database via **Tasks > Generate Scripts**, then let SQLCMD run it on the target server (see the sample SQLCMD command in step 3).

2. Export each table's data using BCP, with the same parameters as earlier:

```
bcp SourceDB.dbo.Customers out Customers_data.bcp -S SourceServer -T -w
bcp SourceDB.dbo.Orders out Orders_data.bcp -S SourceServer -T -w
```

3. Run the script on the target server. This starts the migration. Run SQLCMD with the script (schema.sql) generated by SSMS in step 1:

```
sqlcmd -S TargetServer -d TargetDB -E -i schema.sql
```

This connects to TargetServer (`-S`) using a trusted connection (`-E`), uses the TargetDB database (`-d`), and runs schema.sql (`-i`).

4. Import the table data to the target. This is the final leg of the migration; it loads the .bcp data files into the target tables:

```
bcp TargetDB.dbo.Customers in Customers_data.bcp -S TargetServer -T -w
bcp TargetDB.dbo.Orders in Orders_data.bcp -S TargetServer -T -w
```

**Pros & Cons**

Scripting has its place, but it may not be the best fit for every case.

Pros:

- Fast for bulk inserts/exports.
- Easy to script with SSMS and automate.
- Both BCP and SQLCMD work on Windows, Linux, and Mac.
- You can re-run the script on a target SQL Server the next time you need another copy (like a test environment).

Cons:

- BCP only handles data; you must script the schema separately.
- No support for complex logic or transformations.
- Manual setup of permissions and logins.
- Not ideal for non-technical users or huge numbers of tables.

## Method 6: Linked Servers

Linked Servers let you connect one SQL Server instance to another, then run queries across them like they're part of the same database. With the right permissions, this can work like a charm. In your queries, you need to add the server name of the linked server, like this:

```
SELECT * FROM LinkedServer.DatabaseName.dbo.TableName
```

Notice the LinkedServer prefix above? That's what you need to add when you run a query against a linked server.

**Best For**

- Accessing remote SQL Server data without moving it.
- Running cross-server queries (e.g., joining TableA on Server1 with TableB on Server2).
- Situations where data stays in place but needs to be queried together.
- Read operations or occasional updates.

*Example: You have a reporting app on Server A, but the sales data sits on Server B. With linked servers, you can just `SELECT * FROM ServerB.SalesDB.dbo.Orders` right from Server A.*

**Step-by-Step Guide**

1. Open SSMS and connect to your source SQL Server.
2. Expand **Server Objects > Linked Servers**.
3. Right-click **Linked Servers > New Linked Server**.
4. Set these:
   - **Linked Server name**: a friendly name (e.g., SalesServer).
   - **Server type**: SQL Server or Other data source.
   - **Provider**: SQLNCLI or the Microsoft OLE DB Provider for SQL Server.
   - **Product name**: SQL Server.
   - **Data source**: server name or IP.
5. Go to the **Security** tab and choose how to map local logins to remote server logins.
6. Test the connection and click **OK**.

**Pros & Cons**

Linked servers also have a good and a bad side:

Pros:

- Easy to set up inside SSMS.
- Lets you run cross-server queries with T-SQL.
- Good for integrating with other servers, including Oracle or Access.
- Supports stored procedures, views, and joins.

Cons:

- Performance drops on large joins or heavy cross-server queries.
- Not ideal for bulk data movement.
- Security setup can get tricky (especially with login mapping and Kerberos).
- Limited error handling when the remote server fails.

## Method 7: Third-Party Tools

Think of this as hiring a moving company instead of carrying boxes yourself. Third-party tools take care of the heavy lifting (migration, sync, automation), all wrapped in a user-friendly package. They save time and reduce the chances of missing something important.

Many third-party tools can do [SQL Server to SQL Server](https://skyvia.com/connectors/sql-server) integration. Here, we will consider Skyvia, a cloud-native data platform that offers replication, data migration, import/export, automation, data and control flows, and more.

**Best For**

- When you want several integration options depending on your requirements.
- Simple to complex moves involving transformations, filtering, or ongoing sync.
- Scheduling and automating data movements.
- Moving between different environments, like on-prem to Azure SQL.
- Beginners and experts alike.

**Step-by-Step Guide**

Skyvia can do simple integrations like imports and exports, replication, and complex ones like Data Flow and Control Flow. All you need are two SQL Server connections (one for the source and another for the target) plus the integration package that suits your requirements best. We will consider the Import and Replication integrations in the following subsections.

**Using Skyvia Import for Quick Migration Between SQL Servers**

You can import one or more tables using the Skyvia Import integration. This is good for creating development and testing/debugging copies, migration for server upgrades, and more. The steps:

1. Create two SQL Server connections: one for the source and another for the target.
To create a connection, click **+ Create New** and select **Connection**. Then, choose SQL Server as the connector and configure it. The Skyvia SQL Server connector supports SQL Server authentication and needs the following:
   - Server name/IP address
   - Username
   - Password
   - Database name
   - The [Skyvia Agent](https://docs.skyvia.com/agents.html#creating-using) installed and running (needed for on-premises SQL Server only).

2. Create an Import integration:
   - Click **+ Create New** and select **Import**.
   - Choose **Data Source** as the **Source Type**.
   - Choose one of your SQL Server connections as the Source and the other as the Destination.
   - Click **Add New** to add a new task, then choose the source table on the next page.
   - Choose a **State Filter**: **All**, **Inserted**, or **Update**. **All** is the default and means all rows in the source table.
   - Click **Next step** and select the target table from the dropdown list.
   - Choose the **Operation** (Insert, Update, Upsert, Delete).
   - Map the source columns (source SQL Server) to the target columns (target SQL Server). Source columns with the same name as target columns map automatically.
   - Click **Save Task** and add more tasks, if applicable. In our sample, we only have one table, so it ends after saving the first task.
   - Give your Skyvia Import a name and click **Create** to save it.

**Using Skyvia Replication for SQL Server to SQL Server Replication**

Replication is the process of making exact copies of databases across different servers. Skyvia offers replicating SQL Server tables to another SQL Server in an easy-to-set-up interface. The steps:

1. Create the two SQL Server connections: one for the source and another for the target. You can reuse the connections from Skyvia Import, or follow those steps if you haven't created them yet.
2. Create the Skyvia Replication: click **+ Create New**, then select **Replication**.
3. Choose the Source and Target SQL Server connections from the dropdown lists. The tables will appear.
4. Check the tables that you wish to replicate.
5. Optionally change the **Options** applicable to your requirements.
6. Name your replication and click **Create** to save it the first time. Click **Save** if you are modifying the replication setup.

**Pros & Cons of Third-Party Tools**

Not everything is bells and whistles with third-party tools.

Pros:

- Easier UI with powerful features.
- Handles schema, data, logins, and permissions.
- Robust error handling and logging.
- Often supports cloud platforms and older SQL versions.
- Some tools, like Skyvia, support scheduling and automation.

Cons:

- High licensing cost of some tools (some have free/trial versions).
- May require installation and setup.
- Features vary across tools, so research is needed.

## Best Practices for SQL Server Data Movement

Moving data isn't just about hitting "copy" and "paste." SQL Server migration best practices help keep your data moving smoothly and drama-free.

**1. Plan Before You Move.** Don't wing it. Know what you're moving, where it's going, and how you'll verify success. Make a checklist and map out the process.

**2. Define Downtime Tolerance.** Can users wait 5 minutes? Or 5 hours? Knowing your downtime window helps you pick the right method; some are fast, some aren't.

**3. Know Your Data Volume.** A 50GB database moves differently than a 500GB one. Big databases need better planning, compression, or chunking. Apply performance tuning to handle big data.

**4. Check Network Bandwidth.** Slow network = slow transfer, especially for cross-location or cloud moves. Test it ahead of time if possible.

**5. Match SQL Server Versions and Editions.** Older to newer? You're good. But newer to older may break things. Also, features vary across editions; watch for that.

**6. Don't Forget Logins and Permissions.** Databases move.
Logins don't, at least not automatically. Script them out or sync them manually to avoid access issues. Part of security best practice is to make sure this works after the migration.

**7. Test and Validate.** Move it, then test it. Compare row counts, run queries, and check for broken procedures. Trust, but verify.

**8. Monitor the Growth of Your Transaction Log.** If you're using the Full recovery model, watch out: the log can quickly grow and fill the entire disk. Consider using a fixed-size transaction log based on your average daily log accumulation.

## Conclusion

Moving data between SQL Servers isn't a one-size-fits-all task. Each method, from Backup and Restore to Third-Party Tools, has its sweet spot. Whether you're upgrading, consolidating, or setting up a reporting environment, there's a tool or technique built just for it. Avoid the pitfalls by following the key considerations and best practices above.

Why not try Skyvia for testing your SQL Server data movement? There's nothing to install; all you need is a browser and a Skyvia account. It offers a robust, easy-to-use solution for various SQL Server data movement scenarios.

## F.A.Q. for SQL Server to SQL Server

**What is the easiest way to copy a SQL Server database to another server?**

Try Backup and Restore in SQL Server Management Studio (SSMS). It's straightforward and doesn't need extra tools: just back up the DB, copy the file, and restore it on the target server. Skyvia is also great for backups in the cloud, with no manual copying either.

**How can I migrate a SQL Server database with minimal downtime?**

Use Log Shipping or Transactional Replication. Set them up ahead of time, sync the data, then do a quick final cutover. This way, users won't feel much of a bump.

**I need to keep two SQL Server databases continuously synchronized. What should I use?**

Go with Transactional Replication, Peer-to-Peer Replication, or Skyvia Replication.
They're built for ongoing sync between servers and can handle changes in near real-time or on a schedule.

**Which method is best for moving very large SQL Server databases (terabytes)?**

Use Backup and Restore with compression, SQL Server Integration Services (SSIS) with tuning, or a third-party tool like Skyvia. They're better equipped to handle big data without choking your network or resources.

---

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
# How to Connect Stripe with Shopify: A Full Guide

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), October 9, 2024

Traditional cash desks in physical shops still exist, but little real cash goes through them. In online shops, the cash desk is the checkout, and it obviously accepts only online payments. So, the main question for shop owners is which payment gateway to implement. It's not an easy choice, since each payment provider imposes different commissions on transactions at the checkout. What's more, the availability of payment gateways depends on the country. Stripe could be a decent option since it offers acceptable transaction fees and is available in many countries.
It integrates with CRM systems like HubSpot and Pipedrive, e-commerce platforms like Shopify, and many other services. In this article, learn how to connect Stripe with Shopify and uncover the pitfalls of such an integration.

**Table of Contents**

- Benefits of Integrating Stripe with Shopify
- Methods of Integrating Stripe with Shopify
- Direct Integration through Shopify Admin
- Connect Stripe with Shopify Using Skyvia
- Tips for Managing Shopify Payments via Stripe
- Conclusion

## Benefits of Integrating Stripe with Shopify

[Stripe](https://stripe.com/), together with Shopify, makes up a hit mix with benefits that businesses can't resist.

- **Simplicity.** The embedded payment module on Shopify is intuitive and makes the checkout process straightforward. It helps users navigate the payment window with no fatigue and stress.
- **Security.** It's safe to say that [Stripe offers strong security](https://docs.stripe.com/security) compared to other payment providers. Payments made on Shopify with this gateway are encrypted, protecting sensitive data.
- **Diversity.** Even though Stripe accounts can be registered in around 50 countries, businesses can still accept payments from all over the world. This payment provider supports 135+ currencies and various payment methods, including credit cards, debit cards, Apple Pay, and Google Pay. It's a great opportunity for businesses to sell their products worldwide and expand their reach.
- **Customization.** The appearance of the payment window can be aligned with your website's branding. A unified design plays a significant role in consistency across audience interactions and marketing communications.
- **Support.** Stripe guarantees 24/7 customer support, so you can address your issue anytime and get a response shortly.

Seems impressive, right? Along with all these advantages of the Stripe to Shopify integration, don't forget about the costs the payment provider imposes.
This payment gateway can be added to Shopify for free, but it will [charge extra for each transaction](https://stripe.com/en-it/pricing). The amount depends on a number of factors, including the pricing plan, the card's country of issue, and so on.

## Methods of Integrating Stripe with Shopify

There are two types of integration between Stripe and Shopify:

1. Inserting a Stripe payment gateway on Shopify.
2. Exchanging data between Stripe and Shopify using third-party [data integration tools](https://skyvia.com/blog/data-integration-tools/) like Skyvia.

These two approaches are different and can't be used interchangeably, but they can be combined to ensure better transaction management and financial data analysis.

## Direct Integration through Shopify Admin

The Stripe service is available in nearly 50 countries around the world. You can create a Stripe account only if your bank account and business are registered in one of the [supported countries](https://stripe.com/en-it/global). Still, Stripe allows you to accept payments from all over the world: even if your business operates in the US, you can sell your products worldwide and receive payments in different currencies.

Before we proceed to the tutorial on how to integrate Stripe with Shopify, bear these nuances in mind:

- Even though Stripe is supported in 50 countries, it can't be added to Shopify in all of them. The reason is that Shopify implemented its own gateway, Shopify Payments, which runs on the Stripe engine. For that reason, it's impossible to add Stripe as a separate payment provider in the countries listed below. See Case 1 for details.
- You can add the Stripe payment gateway only in those countries that are supported by Stripe but not by Shopify Payments. See Case 2 for details.
- In countries supported neither by Stripe nor by Shopify Payments, you can select and embed any other available payment gateway.

### Case 1. Activate Shopify Payments

For the countries listed below, Shopify Payments is the default payment method on Shopify, and Stripe is unavailable in the list of payment options: [Australia](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/australia), [Austria](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/austria), [Belgium](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/belgium), [Canada](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/canada), [Czechia](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/czechia), [Denmark](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/denmark), [Finland](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/finland), [France](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/france), [Germany](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/germany), [Hong Kong SAR](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/hong-kong), [Ireland](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/ireland), [Italy](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/italy), [Japan](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/japan), [Netherlands](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/netherlands), [New Zealand](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/new-zealand), [Portugal](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/portugal), [Romania](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/romania),
[Singapore](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/singapore), [Spain](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/spain), [Sweden](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/sweden), [Switzerland](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/switzerland), [United Kingdom](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/united-kingdom), [United States](https://help.shopify.com/en/manual/payments/shopify-payments/supported-countries/united-states).

All you have to do is activate Shopify Payments if your business operates in one of the countries mentioned above:

1. In your Shopify store settings, go to Payments in the left menu.
2. Click Activate Shopify Payments.
3. Proceed with the on-screen instructions.

### Case 2. Activate Stripe Gateway

The Stripe payment gateway option is available in the following countries: Bulgaria, Brazil, Croatia, Cyprus, Estonia, Gibraltar, Ghana, India, Indonesia, Hungary, Greece, Kenya, Latvia, Lithuania, Liechtenstein, Luxembourg, Malaysia, Malta, Mexico, Nigeria, Norway, Poland, Slovakia, Slovenia, South Africa, Thailand, and the United Arab Emirates.

To add the Stripe gateway to Shopify:

1. In your store settings, go to Payments in the left menu.
2. Click Choose a provider and type Stripe in the text box.
3. Click on the option that appears and proceed with the in-app instructions.

When everything is set up, make a test order to verify that the chosen payment option is available at the checkout on your Shopify store.

## Connect Stripe with Shopify Using Skyvia

Another approach to integrating Stripe with Shopify implies data exchange between the two systems using [Skyvia](https://skyvia.com/). This platform contains a wide set of tools for various data-related tasks.
The [Data Integration](https://skyvia.com/data-integration) and [Automation](https://skyvia.com/automation) solutions are the most suitable for the Shopify to Stripe integration. Given that the Stripe Dashboard provides a detailed overview of each particular transaction, it makes sense to consolidate all payment data there, making it a [single source of truth](https://skyvia.com/learn/single-source-of-true). If you have Shopify Payments, PayPal, or other gateways at the checkout on Shopify and use Stripe to process other business transactions, Skyvia will help you consolidate all payments in Stripe.

### Automation Tool

The [Automation](https://skyvia.com/automation) tool allows you to create event-driven workflows between Shopify and Stripe in a visual builder. For instance, once a new transaction is made on Shopify, a trigger invokes an action that copies the transaction data to Stripe.

Sample scenario: create a new Stripe payment from a Shopify order.

First, establish connections to Shopify and Stripe from Skyvia in a few clicks:

1. [Log into your Skyvia account](https://app.skyvia.com/).
2. [Connect Shopify to Skyvia](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html#establishing-connection).
3. [Connect Stripe to Skyvia](https://docs.skyvia.com/connectors/cloud-sources/stripe_connections.html#establishing-connection).

Then, follow the steps below to create an automated workflow:

1. In the top menu, click +Create New and select Automation.
2. Name the automation workflow and set Connection as the trigger type.
3. Click on the trigger and select the Shopify connection from the list.
4. Under the trigger, select New Record and the preferred table and column with transaction data.
5. Add the Action component from the panel on the left to the workflow and click on it.
6. Select the Stripe connection.
7. Select the required DML operation and the table where the information from Shopify needs to be stored.
Finally, save the automation workflow and switch to the overview mode. To start the automation, set it to the Enabled state. Check the progress and outcomes of the workflow under the Monitor and Log tabs. See the [official Skyvia documentation](https://docs.skyvia.com/automation/) for detailed guides.

### Data Integration Tool

With Skyvia, you can import data from Shopify to Stripe on a regular basis. The [Import tool](https://skyvia.com/data-integration/import) offers a user-friendly interface for connecting both services, mapping their data structures, and scheduling regular data transfers from Shopify to Stripe. Another option is the Replication tool, which copies transaction data from Shopify and Stripe to [data warehouses](https://skyvia.com/connectors), creating a solid base for financial analytics and reporting.

Sample scenario: send payment data from Shopify to Stripe.

Here, we explore the scenario of sending payment transaction data from Shopify to Stripe using the Import tool. It is useful if you use Shopify Payments, PayPal, or another payment gateway on the Shopify store but record other transactions in Stripe.

First, establish connections to Shopify and Stripe from Skyvia:

1. [Log into your Skyvia account](https://app.skyvia.com/).
2. [Connect Shopify to Skyvia](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html#establishing-connection).
3. [Connect Stripe to Skyvia](https://docs.skyvia.com/connectors/cloud-sources/stripe_connections.html#establishing-connection).

Then, follow the steps below to transfer transaction data from Shopify to your Stripe account:

1. In the top menu, click +Create New and select Import.
2. Select Shopify as the source and Stripe as the target.
3. Click Add task.
4. In the source settings, select Orders from the drop-down list of Shopify objects.
5. In the target settings, select the PaymentIntents object.
In the mapping settings, establish the correspondence between fields in the two systems to ensure correct integration. Save the task, then save the integration flow. If you want to make the data transfer recurring, click Schedule and select the interval or particular times. That way, you automate the Shopify to Stripe integration and facilitate financial data management.

Overall, Skyvia is an excellent tool for building integrations since it:

- Remains among the [top user-friendly data integration tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1).
- Can be accessed from anywhere at any time.
- Connects to 190+ sources.
- Has pricing plans for companies of any size in any industry.

[Explore Skyvia pricing plans](https://skyvia.com/pricing)

## Tips for Managing Shopify Payments via Stripe

### Currency Conversions

Stripe supports 135+ currencies, so it can accept payments from all over the world. If your business sells products in different countries, Stripe accumulates a separate balance for each currency. If the transaction currency differs from your bank account currency, Stripe automatically converts it for a small fee. The actual conversion rate for every transaction is available on the payment overview page.

### Financial Reporting

Stripe offers financial reports that help with accounting management. They summarize everything that has impacted your Stripe balance and can be generated on a daily, weekly, or monthly basis.

### Fraud Prevention

Stripe includes the [Radar module](https://stripe.com/en-it/radar), which helps identify fraudulent transactions. It is based on a machine learning engine that scores transactions in real time and evaluates the likelihood that a payment is fraudulent. Each payment includes the outcome of the risk evaluation, available on the Stripe Dashboard. Payments with high risk are blocked by default.
It\u2019s possible to set custom rules on Radar to block transactions from certain locations or cards. This helps to elevate security and enhance the overall reliability of payment processing. Conclusion Payment gateways like Stripe are integral parts of online shopping engines nowadays. It\u2019s important to select the right payment option that would go well with the business locations and geographical markets in which you operate. There are many nuances in adding Stripe as a payment gateway on Shopify, and we have listed them in this article. Also, you may consider data integration between these two services to maintain consistency in transaction management and cash flow. This could be done with the Skyvia platform, which is easy to use and has a free version for newcomers! Share [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Fhow-to-connect-stripe-with-shopify%2F) [Twitter](https://twitter.com/intent/tweet?text=How+to+Connect+Stripe+with+Shopify%3A+A+Full+Guide&url=https%3A%2F%2Fblog.skyvia.com%2Fhow-to-connect-stripe-with-shopify%2F&via=Skyvia+Blog) [Linkedin](https://www.linkedin.com/shareArticle?mini=true&url=https://skyvia.com/blog/how-to-connect-stripe-with-shopify/&title=How+to+Connect+Stripe+with+Shopify%3A+A+Full+Guide) [Liliia Levk\u043e](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences. 
[How to Connect Tableau with Jira (Step-by-Step Guide)](https://skyvia.com/blog/how-to-connect-tableu-and-jira/)

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), March 4, 2025

Users who work in Jira and analyze insights in Tableau have probably dreamed of bringing Jira data into Tableau for better reporting and a fuller picture for decision-making. At the same time, manually exporting Jira details into Tableau is time-consuming, error-prone, and lacks real-time updates. With the proper integration, businesses can track real-time project progress, automate data syncing, and enhance project tracking using Tableau's advanced visualizations.
This article explores different ways to connect Jira with Tableau, including direct no-code integration, database syncing for advanced analytics, and custom API solutions for full flexibility.

Table of contents
- What Are Tableau and Jira?
- Why Integrate Tableau with Jira?
- Methods for Integrating Tableau with Jira
- Connect Tableau with Jira using Tableau Connector for Jira (by Alpha Serve)
- Integrate Tableau with Jira using Skyvia Data Replication
- Integrate Tableau with Jira using Custom Coding
- Conclusion

## What Are Tableau and Jira?

Before starting with the integration, let's quickly walk through [Tableau](https://www.tableau.com/) and [Jira](https://www.atlassian.com/software/jira) to see what level they can take your business to when combined.

### Tableau: Turning Data into Insights

Tableau visualizes information and helps businesses transform raw data into interactive dashboards and reports. Depending on the use case, it offers various products, including Tableau Desktop, Tableau Server, Tableau Public, Tableau Prep, etc. Whether you're tracking sales performance, monitoring project progress, or analyzing trends, Tableau shows the bigger picture with clear, customizable visualizations:

- **User-friendly.** A drag-and-drop interface, no coding needed.
- **Real-time insights.** Dashboards update instantly with fresh data.
- **Connects to multiple sources.** Databases, cloud apps, and spreadsheets.

### Jira: The Ultimate Project Management Tool

Jira is a powerful project management platform for agile development, task tracking, and team collaboration. It integrates many products used in different segments and helps teams organize work, set priorities, and monitor progress efficiently. Whether you're managing software development, IT projects, or business workflows, Jira keeps everything structured and transparent:

- **Task and issue tracking.** Manage projects with detailed workflows.
- **Collaboration & automation.** Keep teams aligned and automate repetitive tasks.
- **Customizable dashboards.** Track performance, deadlines, and bottlenecks.

## Why Integrate Tableau with Jira?

Jira is great for managing projects, but its built-in reporting tools can feel limiting when it comes to in-depth analysis and real-time insights. That's where Tableau comes in. By connecting it with Jira, companies unlock a powerful way to visualize, analyze, and optimize their project data like never before.

### Project Performance Monitoring

Jira captures vast project-related information, but its built-in reporting can be limited. With Tableau, teams can:

- Visualize real-time project progress across multiple teams and sprints.
- Track task completion rates and identify bottlenecks.
- Compare planned vs. actual timelines to improve project delivery accuracy.

### Sprint and Agile Metrics Analysis

For agile teams, understanding sprint progress is crucial. Integrating Jira with Tableau allows teams to:

- Analyze sprint velocity over time and predict future performance.
- Track story points, burndown rates, and backlog trends in an intuitive dashboard.
- Identify blockers and unplanned work that impact sprint goals.

### Team Productivity and Workload Balancing

Overloaded teams can lead to burnout and project delays. Tableau helps by:

- Tracking workload distribution among team members.
- Identifying underutilized or overworked employees to optimize assignments.
- Analyzing resolution times for different issue types to improve efficiency.

### SLA Compliance and Issue Resolution Analysis

Many companies track Service Level Agreements (SLAs) to ensure timely responses. With Tableau, teams can:

- Monitor SLA adherence and flag potential breaches.
- Analyze ticket resolution times by category, priority, or assignee.
- Improve customer support workflows by identifying recurring issues.
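The resolution-time analysis described above is straightforward once Jira issues are available in tabular form (for example, from a CSV export or a replicated warehouse table). A minimal pandas sketch; the column names and sample rows are illustrative, not Jira's actual export schema:

```python
import pandas as pd

# Hypothetical Jira export; in a real setup these rows would come from a
# CSV export or a replicated warehouse table. Column names are illustrative.
issues = pd.DataFrame({
    "key":      ["PROJ-1", "PROJ-2", "PROJ-3", "PROJ-4"],
    "priority": ["High", "High", "Low", "Low"],
    "created":  pd.to_datetime(["2025-01-01", "2025-01-02", "2025-01-03", "2025-01-04"]),
    "resolved": pd.to_datetime(["2025-01-02", "2025-01-05", "2025-01-10", "2025-01-06"]),
})

# Resolution time per issue, then the average per priority bucket --
# the kind of SLA metric the section above describes.
issues["resolution_days"] = (issues["resolved"] - issues["created"]).dt.days
avg_by_priority = issues.groupby("priority")["resolution_days"].mean()
print(avg_by_priority)  # High: 2.0 days, Low: 4.5 days for this sample
```

Grouping by category or assignee works the same way, and swapping `mean` for `quantile` gives percentile views for SLA breach analysis.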
### Resource Allocation and Budget Tracking

For managers and executives, integrating Jira with Tableau provides insights into resource allocation and financial impact, helping to:

- Measure project costs vs. budget to avoid overspending.
- Analyze time spent on different tasks for more accurate project estimations.
- Optimize resource allocation by identifying inefficiencies.

### Stakeholder Reporting and Data-Driven Decision-Making

Jira data can be overwhelming for non-technical stakeholders. Tableau makes it accessible and actionable by:

- Providing interactive dashboards instead of static Jira reports.
- Automating real-time reporting for leadership teams.
- Combining Jira data with other business metrics (e.g., sales, finance, customer success).

### Cross-Departmental Insights: Bridging Development and Business Teams

Jira often contains technical project details, but decision-making requires a business-wide view. With Tableau, companies can:

- Merge Jira information with CRM, finance, and sales reports to connect technical and business insights.
- Identify trends between project progress and business outcomes (e.g., how delays impact revenue).
- Support cross-functional collaboration by making project data easily accessible across departments.

## Methods for Integrating Tableau with Jira

There are multiple ways to bring Jira details into Tableau, and the right choice depends on users' technical skills, business needs, and how much control they want over the integration. Below, we review three main integration methods, each with its advantages and trade-offs.

| Approach | Description | Pros | Cons |
| --- | --- | --- | --- |
| Direct Connectors | Plug-and-play solutions that directly pull Jira data into Tableau (e.g., Tableau Connector for Jira). | No coding required. Quick setup. Live data sync. | Limited customization. May have licensing costs. |
| Data Replication | Syncs Jira data to a database or data warehouse before connecting it to Tableau (e.g., Skyvia Data Replication). | Great for handling large datasets. Enables advanced data modeling. Works with multiple BI tools. | Not real-time (depends on sync frequency). Requires database setup. |
| Custom Coding (API-based) | Uses Jira's REST API to pull data into Tableau with custom scripts. | Full customization and flexibility. Control over data structure. Works for complex use cases. | Requires development skills. More time-consuming to set up and maintain. |

Each approach has its strengths and weaknesses. Direct connectors offer a fast and easy solution but may have limited customization options. Data replication provides flexibility for handling large datasets but doesn't support real-time updates. If an organization needs complete control, API-based integration is ideal, though it requires technical expertise to implement and maintain. Now, let's dive into each one in detail and see how to set it up.

## Connect Tableau with Jira using Tableau Connector for Jira (by Alpha Serve)

As mentioned above, bringing Jira data into Tableau with a direct connector is fast and easy. This approach lets users extract Jira insights, apply filters, and send them straight to Tableau without coding. You can find Tableau Connector for Jira and other direct connectors on the [Atlassian Marketplace](https://marketplace.atlassian.com/), but for this guide, we'll focus on one of the most popular options: the Tableau Connector for Jira by [Alpha Serve](https://www.alphaservesp.com/products/atlassian/tableau/).

**Best for:** No-code users who want real-time dashboards directly from Jira.

**How it works:**

- Extracts Jira data and formats it for Tableau.
- Applies JQL filters to refine the dataset before export.
- Generates a Tableau Data Source URL, which you can use to connect Jira data to Tableau.

**Pros:**

- No coding required; beginner-friendly.
- Works directly within Jira; no need for extra software.
- Real-time filtering using JQL to get the exact data you need.

**Cons:**

- Paid tool; requires a subscription.
- Limited data transformation options.
It exports data as-is, without advanced modeling.

**Prerequisites:**

- Jira admin permissions (needed to install the app).
- A Tableau Connector for Jira license (subscription required).
- Tableau Desktop or Tableau Server installed.

### Step-by-Step Guide to Setting It Up

**Step 1: Log into Your Jira Instance**

Log into the Jira instance that you want to integrate with Tableau. Ensure you sign in as an administrator, as you'll need admin rights to install the plugin and grant it the necessary permissions.

**Step 2: Find the Plugin on the Atlassian Marketplace**

To locate the Tableau Connector for Jira plugin, go to the Administrator menu and click Manage Apps. In the Find new apps tab, type "Tableau Connector for Jira Alpha Serve" and hit Search. Then, click on the correct version of the plugin to proceed with the installation.

**Step 3: Install Tableau Connector for Jira**

Click Try it free to activate a 30-day free trial, or select Buy now to purchase a full license. The installation process will begin automatically.

Note: You can also install the plugin directly by visiting its Atlassian Marketplace page and clicking Get it now.

**Step 4: Configure the Plugin Settings**

After installation, configure the connector to match your needs. Go to Jira Admin > Manage Apps and find Tableau Connector for Jira in the list. Then, open the Administration section to adjust the necessary settings and manage permissions.

**Step 5: Create and Import Jira Data into Tableau**

Navigate to the Tableau Connector for Jira app in Jira and click Connectors to start adding new data sources. Then select Create a Data Source.

Note: This step allows you to create, view, edit, and share data sources, and to select and filter Jira data before exporting it to Tableau.

**Step 6: Load Jira Data into Tableau**

Once your data source is created, importing it into Tableau is simple. Copy the Data Source URL generated by the connector.
After that, open Tableau Desktop and select Web Data Connector. Paste the Data Source URL into the Web Data Connector field, enter the appropriate Jira credentials when prompted, then click Log in and Export. The Jira data is now loaded into Tableau and ready for analysis, visualization, and reporting.

**Why needed:** Companies should choose Tableau Connector for Jira if they need a fast, no-code integration that eliminates manual data exports and ensures real-time reporting. It allows users to customize and filter Jira data directly within the tool, making analytics more flexible and insightful.

## Integrate Tableau with Jira using Skyvia Data Replication

Integrating Jira with Tableau using [Skyvia Data Replication](https://skyvia.com/learn/what-is-data-replication) offers a more advanced and flexible approach than a Marketplace direct connector. Instead of simply pulling live data, this method automatically [replicates](https://skyvia.com/data-integration/replication) Jira data to a database or data warehouse, enabling users to build complex data models, store historical data, and refine insights with additional sources. This ensures greater customization, long-term data retention, and the ability to connect Jira data with other business metrics, creating a more comprehensive and strategic view. With scheduled updates, teams always have up-to-date, structured data in Tableau, supporting in-depth analysis, trend tracking, and better decision-making.

Note: Skyvia has a freemium pricing plan that allows scheduling two data integration operations to run once per day. Users receive it automatically when signing up for Skyvia. Click [here](https://skyvia.com/pricing/) to review and compare all available plans.

**Best for:** Users who need automated data sync between Jira and Tableau for advanced modeling, historical analysis, and cross-platform data integration.

**How it works:**

- Syncs Jira data with a database or data warehouse.
- Runs automated replication to keep data fresh.
- Connects Tableau to the database, allowing advanced modeling and historical analysis.

**Pros:**

- No manual exports: data is automatically replicated on a schedule.
- Works with multiple BI tools, not just Tableau.
- Supports large datasets and handles extensive Jira data efficiently.

**Cons:**

- Not real-time: updates depend on the sync schedule.
- Requires a database setup: needs a target destination like BigQuery or Redshift.

**Prerequisites:**

- A Skyvia account (sign up for free at [Skyvia.com](https://skyvia.com/)).
- Jira admin permissions to access and export data.
- A database or data warehouse (e.g., Google BigQuery, Amazon Redshift, Snowflake, etc.).
- Tableau Desktop or Tableau Server installed.

### Step-by-Step Guide to Setting It Up

**Step 1: Sign Up for Skyvia**

1. Go to [Skyvia's](https://app.skyvia.com/register?) sign-up page.
2. Register with your email, Google, or Microsoft account.
3. Log in to the Skyvia dashboard.

**Step 2: Connect Jira to Skyvia**

1. In the Skyvia dashboard, click Connections → + Create New.
2. Choose Jira from the list of connectors.
3. Enter your Jira domain, email, and API token.
4. Click Create Connection to establish it.

Note: If you need help getting your Jira API token, check [Skyvia's Jira Connection Guide](https://docs.skyvia.com/connectors/cloud-sources/jira_connections.html).

**Step 3: Connect Your Data Warehouse to Skyvia**

1. In Connections, click + Create New again.
2. Select a data warehouse (Google BigQuery, Amazon Redshift, Snowflake, etc.).
3. Provide the necessary credentials and database connection details.
4. Click Create Connection to confirm it.

**Step 4: Set Up a Data Replication Package**

1. In the Skyvia dashboard, navigate to Integration.
2. Click + Create New and choose Replication as the package type.
3. Select the Jira connection as the source.
4. Select the data warehouse you need as the target.
5. Choose the objects to replicate and click Validate.
A popup message "The package is valid" will appear. Click Create or Save. Once saved, click Run to perform the first sync and ensure everything works as expected.

Note: Go [here](https://skyvia.com/blog/snowflake-to-salesforce-integration/) to read more about replication.

Then set a replication schedule to automate updates hourly, daily, or weekly, and click Save and then Run to initiate the sync.

Note: You can find full configuration details in the [Skyvia Replication Documentation](https://docs.skyvia.com/data-integration/replication/).

**Step 5: Connect Tableau to Your Data Warehouse**

1. Open Tableau Desktop.
2. Click Connect → select the data warehouse type (BigQuery, Redshift, Snowflake, etc.).
3. Enter the database credentials.
4. Once connected, select the Jira tables replicated by Skyvia.

**Step 6: Build Your Tableau Dashboards**

1. Drag and drop Jira fields into Tableau's visualization workspace.
2. Create project tracking dashboards to analyze issues, team performance, and workflows.
3. Blend Jira data with other business metrics for deeper insights.
4. Save Tableau reports and share them with the team.

**Why needed:** Companies should choose Skyvia if they need a powerful, automated data replication solution that provides more flexibility than direct connectors. By syncing Jira data to a database or data warehouse, Skyvia enables advanced data modeling, historical data storage, and merging Jira data with other business sources for deeper insights.

## Integrate Tableau with Jira using Custom Coding

Integrating Tableau with Jira through custom coding offers a tailored solution for organizations seeking flexibility and control over the integration process. This approach leverages Jira's REST API to extract data and import it into Tableau for advanced visualization and analysis.

**Best for:** Businesses that need customized data extraction and transformation tailored to unique user needs.
It\u2019s also a great alternative when third-party plugins aren\u2019t an option due to security or compliance restrictions. How It Works Use scripts to send requests to Jira\u2019s REST API, retrieving data in JSON format. Process and structure the JSON data to fit relational database schemas or Tableau\u2019s data requirements. Import the transformed data into Tableau using supported connectors or interfaces. Pros Highly Customizable. Tailor the integration to meet specific data needs and business logic. Cost-Effective. Eliminates the need for paid third-party connectors or plugins. Enhanced Security. Direct control over data handling reduces reliance on external tools. Cons Development Time. Requires significant time investment to develop, test, and maintain the integration. Complexity. Involves handling API authentication, data pagination, and error management. Maintenance. Ongoing updates may be necessary to accommodate changes in Jira\u2019s API or data structures. Prerequisites Access to Jira\u2019s REST API. Ensure you have the necessary permissions to interact with Jira\u2019s API. Programming Knowledge. Proficiency in languages such as Python or JavaScript to script API calls. Tableau Desktop or Server. Installed and ready to import external data sources. Intermediate SQL Skills. You need it for data manipulation and structuring within Tableau. Step-by-Step Guide to Setting It Up Step 1: Generate Jira API Token Log in to your Jira account. Navigate to Account Settings > Security > [API Tokens](https://id.atlassian.com/manage-profile/security/api-tokens) . Click Create API Token , provide a label, and save the token securely. Step 2: Develop a Data Extraction Script Choose a programming language (e.g., Python) and install the necessary libraries (requests, Pandas). Write a script to authenticate with Jira\u2019s API using your email and API token. Define the API endpoint and parameters to extract desired data (e.g., issues, projects). 
Handle pagination to retrieve all records. Convert the JSON response to a structured format (e.g., CSV). Example in Python:

```python
import requests
import pandas as pd

# Jira credentials
email = 'your_email@example.com'
api_token = 'your_api_token'
domain = 'your_domain.atlassian.net'

# API endpoint
url = f'https://{domain}/rest/api/3/search'

# Headers and authentication
headers = {
    'Accept': 'application/json'
}
auth = (email, api_token)

# Parameters; startAt advances on each request to page through results
params = {
    'jql': 'project=YOUR_PROJECT_KEY',
    'maxResults': 100,
    'startAt': 0
}

# Data extraction: loop until every page of results has been retrieved
issues = []
while True:
    response = requests.get(url, headers=headers, auth=auth, params=params)
    response.raise_for_status()
    data = response.json()
    issues.extend(data['issues'])
    if not data['issues'] or len(issues) >= data['total']:
        break
    params['startAt'] = len(issues)

# Data transformation
df = pd.json_normalize(issues)

# Save to CSV
df.to_csv('jira_issues.csv', index=False)
```

Step 3: Import Data into Tableau Open Tableau and select Text File as the data source. Import the jira_issues.csv file generated by your script. Configure data types and relationships as needed. Step 4: Automate the Process Schedule the data extraction script to run at desired intervals using task schedulers (e.g., cron jobs). Set up Tableau to refresh data extracts automatically to reflect the latest information. Why Needed Businesses may choose custom coding for Jira-Tableau integration if they need full control over data extraction, transformation, and integration. This approach allows users to customize queries, structure data precisely, and merge Jira data with other sources for deeper analysis. Conclusion Integrating Jira with Tableau unlocks advanced analytics and reporting, giving teams a clearer picture of how they work. Tableau Connector for Jira is the best choice for those needing a quick, no-code solution with real-time data access. Skyvia Data Replication offers greater flexibility, allowing businesses to store historical data, build complex models, and refine insights in a data warehouse.
Custom coding with Jira's REST API means full control and customization, making it a perfect fit for companies with specific security, compliance, or integration needs. Each method addresses different use cases, so users can select the best approach based on their technical expertise, scalability needs, and reporting goals. FAQ for Tableau with Jira Why should I integrate Jira with Tableau? Integrating Jira with Tableau allows teams to visualize, analyze, and optimize their project data beyond Jira's built-in reports. It enables real-time tracking, advanced analytics, and cross-functional insights, helping businesses confidently make data-driven decisions. What are the best ways to connect Jira with Tableau? There are three main methods: 1. Direct Connectors (e.g., Tableau Connector for Jira). A no-code solution that allows direct integration. 2. Data Replication (e.g., Skyvia). Syncs Jira data with a database or warehouse for deeper analysis. 3. Custom API Integration. Jira's REST API is used for a fully customizable solution that requires coding skills. Can I use Tableau with Jira without coding? Yes. If you're not a developer, the Tableau Connector for Jira provides a no-code way to bring Jira data into Tableau. Alternatively, Skyvia allows you to automate Jira data replication without a complex setup. How frequently can I update Jira data in Tableau? The update frequency depends on the integration method: – Direct Connectors provide real-time or near-real-time updates. – Skyvia's free plan allows two automated daily data updates, while paid plans support more frequent syncs. – Custom API integrations allow complete control over update schedules but require scripting. How can I choose the best method for my team? The correct integration method depends on your needs.
– If you need a quick and easy setup → use a direct connector like Tableau Connector for Jira. – If you need advanced data modeling and cross-platform analysis → use Skyvia's Data Replication. – If you want complete customization and full control over data handling → use the Jira REST API for a custom-coded solution. [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Data Integration](https://skyvia.com/blog/category/data-integration/) MySQL to SQL Server Migration: A Comprehensive Tutorial By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) May 22, 2025 Imagine a mid-sized retail company's operations data stored in MySQL while finance and reporting are run off SQL Server. Every week, someone manually exported CSVs, emailed them across teams, and re-imported them just to generate reports. It was slow, error-prone, and left everyone asking, "Why isn't this automated by now?" That's the reality for many businesses juggling multiple database systems. Eventually, the need arises to migrate MySQL data to SQL Server, maybe to standardize platforms, take advantage of SQL Server's enterprise features, or simply follow a company-wide tech mandate.
But let\u2019s be real: database migration sounds scary. Between schema mismatches, data integrity concerns, and the fear of downtime, it\u2019s easy to stall before you even begin. And depending on the scale, one wrong move can turn into hours of cleanup. This guide will break down the most common methods (manual scripting, SQL Server Migration Assistant, ODBC-based transfers, and cloud-based [ETL tools](https://skyvia.com/blog/etl-tools/) like Skyvia). Each approach is covered with clear pros, cons, and best-use scenarios, so you can choose based on your technical comfort level, business needs, and future scalability. Table of Contents Why Migrate From MySQL to SQL Server? Essential Pre-Migration Steps: Planning for Success How to Migrate from MySQL to SQL Server Method 1: Manual Export/Import (Scripting) Method 2: Using SQL Server Migration Assistant (SSMA) for MySQL Method 3: Migrating MySQL to SQL Server Using ODBC Driver Method 4: Using Cloud-Based ETL/Migration Tools Key Challenges and How to Address Them Conclusion Why Migrate From MySQL to SQL Server? So why do so many teams decide to move their data from MySQL to SQL Server?\u00a0Here are a few of the most common reasons that tend to spark that shift. 1. Tighter Integration with the Microsoft Ecosystem. SQL Server fits right in if your organization leans on Azure , builds apps in .NET , or uses Power BI for reporting. It plays nicely with Microsoft\u2019s broader stack, which can streamline everything from identity management to cloud scaling and native analytics. 2. More Advanced Features, Out of the Box. SQL Server brings serious muscle regarding analytics , security , and high availability . Built-in features like row-level security , partitioning , columnstore indexes , and Always On availability groups aren\u2019t just nice. They\u2019re often mission-critical in enterprise environments. 3. Handling More Demanding Workload. 
At a certain point, performance starts to matter more, whether it\u2019s for complex reporting, concurrent queries, or large transactional loads. SQL Server tends to scale better in high-demand scenarios, with features optimized for enterprise-class performance tuning and resource management. 4. Vendor Consolidation or Internal IT Mandates. Sometimes, the decision isn\u2019t technical at all. It\u2019s strategic. Your IT team might be consolidating platforms for easier support and license management, or you\u2019ve merged with a company already standardized on SQL Server. Either way, aligning tools across teams can simplify long-term operations. 5. End of Life or Support Concerns with MySQL. Support can get sketchy fast if you\u2019re running an older MySQL version, especially community builds. Losing critical patches or compatibility updates is a ticking time bomb for data teams. Migrating to SQL Server ensures continuity with a fully supported platform and long-term roadmap. Essential Pre-Migration Steps: Planning for Success Before you think about moving data, let\u2019s talk about planning. A solid migration plan can save you from last-minute surprises, data loss, and weekend-long fire drills. Preparation makes all the difference, whether you\u2019re doing a quick one-off migration or setting up a long-term sync. Database Assessment Start with a deep dive into your current MySQL setup. Think of this like surveying a building before remodeling. Review your schema. List all tables, views, stored procedures, triggers, and functions. Complex logic in MySQL won\u2019t always translate cleanly to SQL Server, so note anything that could be tricky. Watch out for data type mismatches. Not all MySQL types have a direct match in SQL Server. For example, TINYINT(1) often maps to a BIT, and TEXT fields might need to become VARCHAR(MAX) depending on usage. Tip : Keep a reference table of standard mappings handy; it\u2019ll save you time later. Size matters. 
Estimate how much data you\u2019re moving and how fast your network can realistically handle it. A few gigabytes over Wi-Fi? Probably okay. Terabytes over a slow VPN? That needs a rethink. Define Migration Strategy Not every migration looks the same. Before you choose your method, get clear on your goals and constraints . One-time move or a phased rollout ? Or you may need ongoing sync until the complete switch is done. This choice affects the tools you\u2019ll use and how you\u2019ll test. What\u2019s your downtime window? Can the system be down for an hour? A weekend? Or not at all? Knowing this upfront helps you rule out some options early. Who\u2019s involved? Consider the resources you\u2019ll need: developers, DBAs, infrastructure support, and ensure everyone\u2019s aligned. You don\u2019t want to scramble for permissions or tools mid-migration. Backup Everything Yes, everything. And yes, even if it\u2019s \u201cjust a test.\u201d Before you touch anything, take a complete, verified backup of your MySQL database. This step is your safety net if something goes sideways. Test restoring it too, just to be safe. Prepare the Target Environment Your SQL Server should be ready before the first data packet flies across the wire. Ensure you\u2019re running a compatible version of SQL Server. Double-check that it has enough storage, memory, and CPU headroom to handle the incoming load. Don\u2019t forget permissions. Depending on your method, you\u2019ll need write access, the ability to create objects, and possibly manage linked servers or external data sources. How to Migrate from MySQL to SQL Server There\u2019s no single \u201cright\u201d way to migrate. It all depends on your team\u2019s skill set, how much control you need, and how hands-off (or hands-on) you want to be. Let\u2019s look at the four main approaches we\u2019ll cover in this guide to make things easier. Manual Export/Import (Scripting) . Using SQL Server Migration Assistant (SSMA) for MySQL . 
Migrating MySQL to SQL Server Using ODBC Driver. Using Cloud-Based ETL/Migration Tools. You can think of them as sitting on a scale, from fully manual to completely automated. Each comes with trade-offs, so choosing the right one means matching the tool to your specific business case. Here's a quick comparison to help you get a feel for what fits best:

| Method Group | Method | Best For | Skill Level | Customizability | Automation |
|---|---|---|---|---|---|
| Manual | Export/Import with SQL Scripts | Small databases, one-time jobs, full control | Intermediate+ | Very High | None |
| Native Tool | SQL Server Migration Assistant (SSMA) | Structured migrations with schema + data handling | Intermediate | Moderate | Partial (semi-automated) |
| ODBC-Based | ODBC Driver + Linked Server | One-off transfers or hybrid setups | Intermediate–Advanced | Moderate | Limited |
| Cloud-Based/ETL | No-Code Tools (e.g., Skyvia) | Ongoing sync, low-code teams, rapid deployment | Beginner–Intermediate | Low–Moderate | High (fully automated) |

Method 1: Manual Export/Import (Scripting) This is the old-school, roll-up-your-sleeves method. You export the schema and data from MySQL as SQL scripts, tweak them as needed, and then run them against SQL Server. It gives you complete control over every step, but it also means more hands-on work. Best For Small to mid-size databases. Projects where precision matters more than speed. Developers who like knowing exactly what's going on under the hood. Pros Total control. You see and manage every change to the schema and data types. No surprises. You can catch and fix issues before they get into SQL Server. Lightweight. No need for extra tools, just your SQL editor and a little patience. Cons Time-consuming. You'll spend a fair chunk of time massaging those scripts, especially if your schema is complex. Error-prone. One small oversight in a script, and the migration hits a wall. Manual type mapping. MySQL and SQL Server don't speak the same dialect.
You\u2019ll have to translate column types manually. Method 2: Using SQL Server Migration Assistant (SSMA) for MySQL SSMA for MySQL is the go-to tool for those looking to streamline their MySQL to SQL Server migration without getting bogged down in manual scripting. It\u2019s a free utility from Microsoft that automates much of the heavy lifting, making the migration process more efficient and less error-prone. Best For Medium to large databases where manual migration would be cumbersome. Teams seeking a guided, semi-automated migration process. Situations where identifying and resolving schema incompatibilities upfront is crucial. Step-by-Step Guide Install SSMA for MySQL. Download and install the latest version from Microsoft\u2019s official site. Ensure that the MySQL ODBC driver is also installed on your system. Create a Migration Project. Launch SSMA and create a new project. Specify the target SQL Server version and project location. Connect to MySQL (Source). Provide the connection details for your MySQL database: Provider : Select the appropriate MySQL ODBC driver (e.g., MySQL ODBC 5.1 Driver). Mode : Choose Standard mode to enter server details manually. Server name : Enter your MySQL server\u2019s hostname or IP address. Server port : Typically, this is 3306 for MySQL. User name and Password : Provide credentials with at least CONNECT, SHOW, and SELECT privileges. SSL : If your MySQL server requires SSL, check this option and configure the necessary certificates. Note : Once connected, SSMA will display the available schemas. Connect to SQL Server (Target). Enter the connection information for your SQL Server instance: Server name. Enter your SQL Server instance\u2019s name or IP address. Authentication. Choose between Windows Authentication or SQL Server Authentication. User name and Password. If using SQL Server Authentication, provide the necessary credentials. Database. 
Select an existing database from the dropdown list, or enter a new name to create a new database on the target server. Encrypt connection. Optionally, check this box to encrypt the connection. Note : If the target database doesn\u2019t exist, SSMA can create it for you. Assess MySQL Database Objects. Use the \u201cCreate Report\u201d feature to analyze the MySQL schema. SSMA will generate an HTML report that provides conversion statistics, highlights potential issues, and offers recommendations for resolving them. Convert Schema. After reviewing the assessment report, convert the MySQL schema to SQL Server format. Address any issues that SSMA flags during this process. Note : To adjust data type mappings (if necessary), navigate to Tools > Project Settings > Type Mapping . Here, you can customize how MySQL data types are mapped to SQL Server data types. Synchronize Schema with SQL Server. Apply the converted schema to the target SQL Server database by synchronizing it through SSMA. Migrate Data. Right-click the database or specific object you want to move in the MySQL Metadata Explorer and choose \u201cMigrate Data.\u201d Or head over to the Migrate Data tab. Note : Check the box next to the database name if you\u2019re migrating everything in one go. To cherry-pick tables, expand the database, then the Tables node, and tick only the ones you want. View the Data Migration Report when migration is finished. Pros Automated Schema and Data Migration. SSMA handles schema conversion and data transfer, reducing manual effort. Detailed Assessment Reports. Provide insights into potential issues before migration begins. Customizable Type Mappings. It provides adjustments to data type conversions to fit specific users\u2019 needs. Cons Learning Curve. It may require some time to learn the tool\u2019s interface and features. Not Foolproof. Complex schemas or unsupported features might still need manual intervention. Requires Setup. 
Installation and configuration of prerequisites, like ODBC drivers, are necessary. Method 3: Migrating MySQL to SQL Server Using ODBC Driver This method relies on the ODBC (Open Database Connectivity) driver to hook MySQL up to SQL Server. It allows users to pull in data with tools like SQL Server Management Studio (SSMS) or Linked Servers. It's not a one-click solution, but it gives plenty of flexibility, especially if you need to move specific tables or run queries without diving into a full migration process. Best For One-off or partial data transfers. Scenarios where you need to query MySQL data from SQL Server. Users comfortable working in SSMS and writing a bit of SQL. Step-by-Step Guide 1. Install MySQL ODBC Driver Download and install the MySQL ODBC driver on the machine where SQL Server is installed. You can download the MySQL ODBC driver [here](https://www.devart.com/odbc/mysql/). 2. Create a DSN Go to Control Panel -> Administrative Tools -> Data Sources (ODBC). Navigate to System DSN and click Add. Choose the MySQL ODBC driver from the list and click Finish. Enter the server name, database name, username, and password to connect to your MySQL database. Click OK to save the DSN. Once the ODBC DSN is set up, it allows applications, including MySQL export tools, to connect to the MySQL database through the ODBC driver. The export tool can use this ODBC connection to access the MySQL database and export the data into a SQL script or dump file. 3. Export MySQL Data Use MySQL's export tools like mysqldump or a MySQL GUI to export the MySQL database as an SQL script or dump file. If you use mysqldump, you can run a command like: mysqldump -u username -p database_name > dump.sql 4. Import Data into SQL Server Open SQL Server Management Studio (SSMS) on the machine where SQL Server is installed. Connect to your SQL Server instance. Create a new database where you'll import the MySQL data, if necessary. Open a new query window.
Use SQL Server's sqlcmd utility to execute the SQL script generated from MySQL. Execute the script with the following command: sqlcmd -U username -P password -S server_name -d database_name -i dump.sql Replace username, password, server_name, and database_name with appropriate values. Alternatively, you can open the SQL script in SSMS and execute it directly. Pros Flexible access. Easily query MySQL data without migrating everything up front. No extra tools needed. Just SSMS and the ODBC driver. Good for hybrid environments. Works well when you're integrating systems. Cons Not ideal for full migrations. Better suited for selective transfers or integration. Performance can lag, especially with large datasets or complex joins. Manual setup. Users must configure DSNs and write the queries. Method 4: Using Cloud-Based ETL/Migration Tools If the company prefers to take the load off its team and let the cloud handle the heavy lifting, ETL and migration platforms like Skyvia can be a lifesaver. These tools are built to move data at scale, often with minimal setup and a slick UI that keeps things smooth. Best For Larger, ongoing migrations or syncing setups. Teams who'd rather focus on outcomes than wrangling SQL scripts. Organizations that are already invested in cloud platforms or SaaS tools. Step-by-Step Guide With Skyvia, you can import data from MySQL to SQL Server in three easy steps: Create a connection to MySQL. Create a connection to SQL Server. Create and run an import integration. To proceed with the steps described below, you need a Skyvia account. Obtain it for free [here](https://skyvia.com/). Create a Connection to MySQL To establish a connection to MySQL in Skyvia: Go to + Create New > Connection. Choose MySQL. Specify the database server hostname, port, user name, password, and database. Click Create Connection. Optionally, you can use the SSL and SSH connections to MySQL.
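Before typing these connection details into Skyvia, it can save a troubleshooting round trip to confirm the MySQL server is actually reachable from outside your network. Below is a minimal pre-flight sketch using only Python's standard library; the hostname is a placeholder for your own server, and 3306 is MySQL's default port:

```python
# Quick pre-flight check: can we open a TCP connection to the MySQL host?
# 'mysql.example.com' is a placeholder hostname; 3306 is MySQL's default port.
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("mysql.example.com", 3306))
```

If this prints False, check firewall rules and whether the server accepts remote connections before troubleshooting the connection inside Skyvia.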
Create a Connection to SQL Server To establish a connection to SQL Server in Skyvia: Go to + Create New > Connection. Choose SQL Server. Specify the database server hostname, port, user name, password, and database. Click Create Connection. Optionally, you can use the SSL and SSH connections to SQL Server. Create Import Integration Go to + Create New > Integration > Import. Choose the Data Source type. Select MySQL as the Source and SQL Server as the Target. Click Add new to add a new import task. You can add any number of import tasks to your integration. Select the table to import data from and click Next. Select the table to import data to and click Next. Map the fields in Source and Target. Skyvia will try to map fields automatically. Save and Run the import integration. Additionally, you can create a schedule for the automated runs of your integration. On the Monitoring tab, you can check the progress of your import and view previous import logs and details. Get More With Skyvia Importing is just one slice of what Skyvia can do. It unlocks a toolkit built to make managing, syncing, and automating data less challenging. Here's the kind of stuff you can do with it: Back it all up. Set up automatic backups and restores so cloud data is always safe and recoverable. Keep systems in sync. Need MySQL and SQL Server to stay in lockstep? Skyvia can sync them both ways, no sweat. Build smarter pipelines. Connect multiple sources, run complex transformations, and route data where needed. Automate everything. Trigger actions based on changes, add logic and steps, and build workflows. Create live data access points. Turn data into secure OData endpoints for BI tools and apps to connect to your data in real time. Replicate for real-time reporting. Pipe data straight into your warehouse for up-to-the-minute dashboards. Swap CSVs like a pro. Upload, download, and move CSVs between systems without the usual hassle. Stay on top of it all.
Get detailed logs, alerts, and error handling to smooth data flow. Connect to pretty much anything . With [200+](https://skyvia.com/connectors) connectors (Salesforce, MySQL, Google Sheets, etc.), you\u2019ve got options. And the best part? It\u2019s no-code and cloud-based, so whether you\u2019re an IT pro or just data-curious, you can handle serious workflows from your browser, no installs or downloads. The flexible [pricing](https://skyvia.com/pricing) starts from $79 for basic integration and ETL scenarios. You may also use the platform for free. Pros Easy to set up . You don\u2019t need to be a DBA to get rolling. Great UI/UX . Most platforms are clean, guided, and full of helpful prompts. Automation-ready . Built-in scheduling, error handling, and incremental loads. Cons Limited customization . Advanced users might find the customization options limited compared to scripting solutions. Less control . You trade some hands-on precision for convenience. Platform lock-in . Some tools want you to stay in their ecosystem. Key Challenges and How to Address Them No migration is ever totally pain-free, but knowing the usual trouble spots and how to handle them makes a big difference. Here\u2019s a quick rundown of the most common headaches when moving from MySQL to SQL Server and how different tools stack up when handling them. Data Type Mapping Data type mismatches are among the first things that\u2019ll trip you up. MySQL and SQL Server don\u2019t always see data eye to eye. For example : ENUM in MySQL? SQL Server doesn\u2019t have it. You\u2019ll usually map it to a string or a lookup table. DATETIME ? MySQL allows funky values and looser precision than SQL Server will tolerate. How tools handle it: Manual scripting. You\u2019re on your own here. Expect to tweak column types line by line. SSMA. Flags incompatible types in its assessment report and suggests conversions. ODBC Driver. Think of this as the raw data pipe that just moves bytes. 
There\u2019s no type validation or mapping help built in, so if a MySQL field doesn\u2019t align nicely with SQL Server\u2019s expectations, you\u2019ll hit errors or get weird data unless you handle the mapping yourself in queries. Skyvia. Handles many common conversions behind the scenes, especially with standard data types, but it\u2019s less transparent about every detail, so test results are key. Schema Object Conversion Tables and columns are one thing, but when you start talking about stored procedures , triggers , and functions, that\u2019s a different game. MySQL and SQL Server use completely different procedural languages (MySQL SQL/PSM vs. T-SQL), and direct translations are rarely perfect. How tools handle it: Manual scripting. You\u2019ll need to rewrite these manually, especially if they contain control-of-flow logic (IFs, loops, etc.). SSMA. It flags what it can\u2019t convert, gives you a heads-up, and lets users copy out the raw MySQL code to start reworking it in T-SQL. ODBC Driver. Completely out of scope. It\u2019s not even aware these objects exist. Skyvia. Keeps things simple. It focuses on migrating schemas and data , not procedural logic. So, if you\u2019ve got stored procs and triggers, plan to move those separately. Data Integrity and Validation Just because the data \u201cmigrated\u201d doesn\u2019t mean it landed correctly. You need to confirm: Row counts match. Key data fields are intact. No nulls, duplicates, or corrupted values. How tools handle it: Manual scripting. It\u2019s up to you to write post-migration checks and validation scripts. SSMA. No built-in validation, but it gives enough transparency for you to write your own. ODBC Driver. No validation at all. It executes your query, and that\u2019s it. You\u2019ll need custom checks to confirm anything. Skyvia. Provides clear logs, row-level stats, and error tracking for every import. Downtime Management Nobody wants extended downtime. 
You've got two ways to go: Complete cutover. Migrate everything at once; switch over. Phased. Migrate in stages, sync data as needed, then flip the switch. How tools handle it: Manual scripting. Slow, manual, and more complex to automate, but gives you full control over timing. SSMA. Better for test-and-switch cutovers than phased syncs. ODBC Driver. It works best for partial loads or ongoing access but is not designed for full migrations or sync scenarios. Skyvia. Made for this. You can schedule syncs, automate retries, and minimize downtime. Conclusion Migrating from MySQL to SQL Server might sound like a tall order, but it's totally doable, even smooth with the right tools and a clear plan. We looked at four solid methods: Manual Export/Import gives you full control but is hands-on and time-consuming. SQL Server Migration Assistant (SSMA) offers a guided path with helpful diagnostics and schema conversion tools. ODBC Driver is lightweight and flexible, ideal for targeted data pulls or linking systems. But it's raw and low-level. Skyvia might be a shortcut. It's cloud-based, no-code, and moves data quickly, reliably, and with far less stress. Each method is valid, depending on the team's skills, the complexity of data, and how much downtime you can afford. If you're looking to skip the grunt work and avoid getting buried in scripts or config files, Skyvia brings a lot to the table. You get automation, scheduling, built-in validation, and a clean UI from the browser. F.A.Q. for MySQL to SQL Server How long does it take to convert MySQL to SQL Server? It depends on the database size and complexity. Small databases can take minutes; larger ones might take hours or more, especially if you're rewriting logic or validating data. Can I convert MySQL stored procedures to SQL Server automatically? Not fully.
Tools like SSMA can identify and extract them, but you'll usually need to rewrite the logic manually since MySQL and SQL Server use different procedural languages. What is the easiest tool to migrate MySQL to SQL Server? For no-code, hassle-free migrations, Skyvia is one of the easiest options. If you're okay with a guided but more technical approach, SSMA is also pretty solid. Is SSMA for MySQL free? Yes, it's a free tool from Microsoft designed to help migrate MySQL databases to SQL Server, including schema and data conversion.
[Data Loader](https://skyvia.com/blog/category/data-loader/) Redshift Data Loading: 5 Ways to Import & Export CSV Files By [Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/) February 20, 2025 Summary. Efficient CSV import and export processes are crucial for getting the most from Amazon Redshift. Discover five reliable methods to handle CSV data transfers, each explained with clear advantages and potential drawbacks. Follow recommended best practices to streamline workflows and enhance data accuracy within your Redshift environment. Businesses are generating more data than ever and moving their infrastructure to the cloud.
But what does this mean in practice? On one hand, it enables better analysis and leads to increased revenue. On the other, it brings challenges like data errors, duplication, and complexity. With various cloud infrastructure solutions available, it's important to have the right [tools for data management](https://skyvia.com/blog/best-data-management-tools/). In this article, we will focus on Amazon Redshift, a fully managed cloud data warehouse offered by Amazon Web Services (AWS). When it comes to loading data into Redshift, things can get tricky. There are many moving parts to handle: checking compatibility, maintaining data accuracy, and managing file sizes. These challenges can slow down workflows and affect the results. This is where the GIGO (Garbage In, Garbage Out) principle comes into play: the quality of the output depends on the input. If the data isn't clean and well-prepared, it's hard to expect meaningful insights or results. In this article, we will learn how to work with CSV files in Redshift. We will look at the ways of importing data into the Redshift cluster from an S3 bucket and exporting it from Redshift to S3. The instructions are aimed at beginners and intermediate users and assume a basic understanding of AWS and Python. Table of contents What is Amazon Redshift? How to Load Data to and from Amazon Redshift Load Data from Amazon S3 to Redshift, Using COPY Command Auto Import Data into Amazon Redshift with Skyvia Load Data from S3 to Redshift Using Python Export Data from Redshift, Using UNLOAD Command Unload Data from Redshift to CSV with Skyvia Best Practices for Redshift CSV Operations Conclusion What is Amazon Redshift? [Amazon Redshift](https://aws.amazon.com/redshift/) is one of the world's most popular data warehouses. It supports a Massively Parallel Processing (MPP) architecture, which distributes query work across multiple nodes simultaneously.
According to a study by [AWS & IDC](https://pages.awscloud.com/GLOBAL-field-DL-Redshift-Whitepaper-2021-learn.html), Redshift users' analytics teams are 30% more productive, and their data management platform expenses are 47% lower. Redshift supports distributed workloads on large datasets and can store up to [16 petabytes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in a single cluster. Users can load and transform data and make it available for analytics or business intelligence tools. Key features include: Flexible Scaling [Concurrency Scaling](https://aws.amazon.com/blogs/big-data/scale-read-and-write-workloads-with-amazon-redshift/) automatically adjusts query processing power to absorb unexpected surges in workload. For hundreds of concurrent queries, it adds resources in seconds and releases them when they are no longer required. Additionally, each cluster receives free concurrency scaling credits for up to one hour daily. Query Performance Redshift supports numerous users and analyzes information from operational databases, data lakes, and warehouses while providing consistently high [performance](https://docs.aws.amazon.com/redshift/latest/dg/c_challenges_achieving_high_performance_queries.html) at any scale. To improve query execution time and cut costs, it makes use of zone maps, columnar storage, a query optimizer, result caching, and compression encodings such as LZO and AZ64. Data Sharing With real-time [data sharing](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html) across accounts, organizations, and regions, Amazon Redshift improves access to data and partner collaboration. Query Editor V2 helps developers and analysts explore, analyze, and share queries. This supports cross-functional analysis and organization-wide data exchange.
AWS Integration Native [integration](https://docs.aws.amazon.com/redshift/latest/dg/using-redshift-with-other-services.html) with other AWS services like S3, DynamoDB, SSH, and AWS DMS allows users to securely move, transform, and load their data. Built-In AI Tools Users can build, train, and deploy SageMaker models using SQL queries with [Amazon Redshift ML](https://aws.amazon.com/redshift/features/redshift-ml/). It allows users to extract useful information from their data for financial forecasting, churn detection, personalization, and natural language processing. How to Load Data to and from Amazon Redshift In this article, we will explore different methods for importing and exporting CSV files to and from Amazon Redshift, including the benefits and drawbacks of each. The industry-standard flat file CSV (Comma-Separated Values) is among the most widely used formats in Amazon Redshift. In CSV, columns are separated by commas, and the data is stored in plain text. Because of its compatibility and ease of use, CSV is a preferred choice for data transfer and storage. Loading Methods Using the COPY Command to Transfer Data from Amazon S3 to Redshift. [COPY](https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html) is a command that allows users to import large volumes of data from S3 buckets. Since Redshift supports parallel imports, it doesn't require much time to ingest large datasets. Auto Import Data into Redshift Using Third-Party Tools. [ETL tools](https://skyvia.com/blog/etl-tools/) such as Skyvia, Fivetran, or AWS Data Pipeline automate the CSV import process for Redshift. Users can set up an import task with no code, map CSV fields to tables, and schedule recurring loads. This is ideal for businesses that require continuous data synchronization with minimal oversight. Building a Custom Data Pipeline with Python.
[Python](https://docs.aws.amazon.com/redshift/latest/dg/udf-python-language-support.html) allows advanced users to create custom ETL pipelines to transfer data into Redshift. Developers can connect to Redshift, extract information from multiple sources, modify it as needed, and then move it using libraries like psycopg2 and boto3. This method works best for large-scale automation and custom ETL routines. Unloading Methods Extracting CSV from Redshift Using UNLOAD Command . The [UNLOAD](https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html) command allows you to quickly extract data to an S3 bucket. It has options to filter and separate the exported rows. This method is commonly used for backup, reporting, and sharing with other systems. Automated Export from Redshift to CSV using third-party tools . ETL tools such as Skyvia, Hevo, or Airbyte enable automated Redshift exports to CSV. These tools help businesses schedule exports with a user-friendly interface, filtering, and mapping features. Load Data from Amazon S3 to Redshift, Using COPY Command One of the most common ways to import from a CSV to Redshift is by using the native COPY command. Using it, you can import flat files directly to the Redshift data warehouse. For this, the CSV file needs to be stored within an S3 bucket in AWS. S3 , short for [Simple Storage Service](https://aws.amazon.com/s3/) , allows you to store files of any type. To import from a CSV to Redshift using the COPY command, follow these steps: Create the schema on Amazon Redshift. Load the CSV file to the Amazon S3 bucket using AWS CLI or the web console. Import the CSV file to Redshift using the COPY command. Note. Generate AWS Access and Secret Key to use the COPY command. Let\u2019s see a few examples of how to use the Redshift COPY command. REDSHIFT COPY COMMAND EXAMPLES Create a cluster in Redshift. Create the schema as per your requirements. We will use the same sample CSV schema as mentioned above. 
To create the schema in Redshift, you can simply create a table using the following command. Load data into an S3 bucket, which can be done either using the AWS CLI or the web console. If your file is large, consider using the AWS CLI. Now, when the CSV file is in S3, you can use the COPY command to import it. Head over to your Redshift query window and type in the following command:

COPY table_name
FROM 'path_to_csv_in_s3'
credentials
'aws_access_key_id=YOUR_ACCESS_KEY;aws_secret_access_key=YOUR_ACCESS_SECRET_KEY'
CSV;

Once the COPY command has been executed successfully, you can query your data using a simple SELECT statement. If you do not want to import all the columns from the CSV file into your Redshift table, you can specify only the needed ones in the COPY command; in this case, only the data from the selected columns will be imported. You can explicitly list the names of the columns that need to be imported to the Redshift table. REDSHIFT COPY COMMAND TO IGNORE HEADER FROM TABLE Another scenario using the COPY command is that your CSV file might contain a header you want to ignore while importing into the Redshift table. In such a case, you must add the IGNOREHEADER parameter to the COPY command and specify the number of lines to be ignored. Usually, if you want to ignore the header, which is the first line of the CSV file, you provide the number 1. PROS OF LOADING DATA USING COPY COMMAND Optimized for bulk loading; Supports various file formats: CSV, JSON, AVRO, and more; Parallel processing; Direct AWS integration. CONS OF LOADING DATA USING COPY COMMAND Complex setup; No built-in scheduling; Error handling is limited; Requires AWS credentials.
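The column-list and IGNOREHEADER variants described above differ only in the clauses appended to the statement. As an illustrative sketch (the `build_copy_statement` helper and all table, path, and key values are hypothetical placeholders, not part of the article), the pieces can be assembled like this:

```python
def build_copy_statement(table, s3_path, access_key, secret_key,
                         columns=None, ignore_header=0):
    """Assemble a Redshift COPY statement for a CSV file in S3.

    `columns` restricts the import to the listed columns;
    `ignore_header` skips that many leading lines (1 skips a header row).
    """
    column_list = f" ({', '.join(columns)})" if columns else ""
    parts = [
        f"COPY {table}{column_list}",
        f"FROM '{s3_path}'",
        f"credentials 'aws_access_key_id={access_key};"
        f"aws_secret_access_key={secret_key}'",
    ]
    if ignore_header:
        parts.append(f"IGNOREHEADER {ignore_header}")
    parts.append("CSV;")
    return "\n".join(parts)

sql = build_copy_statement(
    "test.sample_csv",
    "s3://your-bucket/employees.csv",   # placeholder S3 path
    "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY",
    columns=["EmployeeID", "EmployeeName", "Department"],
    ignore_header=1,
)
print(sql)
```

The generated string can then be run in the Redshift query window or passed to `cursor.execute()` from the Python method shown later.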
Auto Import Data into Amazon Redshift with Skyvia Regularly moving data into Amazon Redshift manually can be time-consuming, and it's important to minimize errors and guarantee data consistency. Third-party ETL tools like Fivetran, Integrate.io, or Skyvia automate the process and make data integration more reliable. However, not all ETL tools are the same: some demand coding expertise, while others are expensive or lack specific features or flexibility. Skyvia provides a no-code solution that works for large enterprises and for SMBs with limited technical resources. It integrates with 200+ sources and can perform ETL/ELT and Reverse ETL, build data pipelines, share data via REST API, and more. To start the process with Skyvia, [sign up](https://app.skyvia.com/) for a free trial. Then, follow these three simple steps: Set up an [Amazon Redshift connection](https://docs.skyvia.com/connectors/databases/redshift_connections.html); Configure import and mapping settings between the CSV file and Redshift; Schedule the import. STEP 1. CONNECTION SETUP From the main menu in Skyvia, click Create New > Connection. Select Redshift from Skyvia's list of data warehouses. In the connection window, enter the required parameters: Server, Port, User ID, Password, and Database. Click Advanced Settings and set the parameters for connecting to the Amazon S3 storage service, including the S3 region and either an AWS Security Token or an AWS Access Key ID and AWS Secret Key. Check whether the connection is successful and click Create. You have completed the first step and connected to [Amazon Redshift](https://skyvia.com/connectors/redshift). STEP 2. IMPORT SETTINGS AND MAPPING In Skyvia's main menu, click Create New > Import. Select CSV as the source and your Redshift connection as the target. You can upload CSV files from your PC or from a file storage service like Dropbox, Box, FTP, etc. In the Task area, click Add new. You are free to add as many tasks as you need.
Skyvia can run several import tasks in one package, letting you import several CSV files to Redshift in a single import operation. In the task editor, upload a prepared CSV file. Once the file is uploaded, Skyvia displays a list of detected columns and allows you to explicitly specify column data types. Next, select the target object in Redshift and choose an operation type: Insert, Update, Upsert, or Delete. Columns with the same names in CSV and Redshift are mapped automatically. Map all required source columns to target ones using expressions, constants, and lookups, then save the task. Add another task for each additional CSV file. Read more about [CSV import to Redshift](https://skyvia.com/data-integration/redshift-csv-file-import-and-export) with Skyvia. STEP 3. JOB AUTOMATION Automate uninterrupted data movement from CSV to Redshift by setting a schedule for your import package: click Schedule and enter the required parameters. The first time, we recommend running the import manually to verify that the package executes successfully. If some columns are mapped incorrectly, you will see errors in the logs and can adjust the settings. Moreover, Skyvia can send error notifications to your email. PROS OF AUTO IMPORT DATA USING SKYVIA User-friendly interface and no coding required; Flexible mapping; Automated scheduling; Multiple data sources: can import from CSV, databases, cloud apps, and file storage. CONS OF AUTO IMPORT DATA USING SKYVIA Dependent on a third-party service; Supports only the CSV format for file importing. Load Data from S3 to Redshift Using Python Python is one of the most popular programming languages in the modern data world. Almost every service on AWS is supported by a Python framework, and you can build your own integrations with it. You can use available libraries to connect to Redshift with Python; here, we use the "psycopg2" library.
This library can be installed by running the following command:

pip install psycopg2

Once the library is installed, you can start with your Python program. Import the library and then prepare the connection object. It is prepared by providing: the hostname of the Redshift cluster; the port on which it is running; the name of the database; and the credentials to connect to the database.

import psycopg2 as rd

conn = rd.connect(
    host='host_example',
    port='port_example',
    database='database_example',
    user='user_name',
    password='user_password'
)

Once the connection is established, you can create a cursor that will be used while executing the query on the Redshift cluster.

cursor = conn.cursor()

In the next step, you need to provide the query that needs to be executed to load data into Redshift from S3. This is the same query that you have executed on Redshift previously.

COPY test.sample_csv (EmployeeID, EmployeeName, Department)
FROM 'file_path'
credentials 'aws_access_key_id=your_key;aws_secret_access_key=your_key'
IGNOREHEADER 1
CSV;

Once the query is prepared, the next step is to execute it. You can execute and commit the query by using the following commands:

cursor.execute(query)
conn.commit()

Now, you can go back to your Redshift cluster and check if the data has been copied from the S3 bucket to the cluster. PROS OF LOADING DATA USING PYTHON Highly customizable; Integration with other Python libraries; No third-party dependency. CONS OF LOADING DATA USING PYTHON Requires development effort; Performance optimization needed; No built-in scheduling.
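Putting the snippets above together, a minimal end-to-end sketch might look as follows. The connection values are placeholders, and the psycopg2 import is done inside the function so the module loads even where the library isn't installed:

```python
# The COPY statement from the walkthrough above; 'file_path' and the
# credential values are placeholders to fill in for your environment.
COPY_SQL = """COPY test.sample_csv (EmployeeID, EmployeeName, Department)
FROM 'file_path'
credentials 'aws_access_key_id=your_key;aws_secret_access_key=your_key'
IGNOREHEADER 1
CSV;"""

def load_csv_into_redshift(conn_params, copy_sql=COPY_SQL):
    """Execute a COPY statement against Redshift and commit on success.

    `conn_params` is a dict of psycopg2.connect() keyword arguments
    (host, port, database, user, password).
    """
    import psycopg2  # lazy import; install with `pip install psycopg2`
    conn = psycopg2.connect(**conn_params)
    try:
        with conn.cursor() as cursor:
            cursor.execute(copy_sql)
        conn.commit()   # COPY runs in a transaction; commit makes it visible
    finally:
        conn.close()    # release the connection even if COPY fails
```

The try/finally guarantees the connection is closed even when the COPY statement raises an error, which matters for long-running pipelines.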
Once the query is ready, use the following command to unload from Redshift to S3:

UNLOAD ('SELECT * FROM test.sample_csv')
TO 's3://csv-redshift-221/Unload_'
credentials 'aws_access_key_id=YOUR_ACCESS_KEY;aws_secret_access_key=YOUR_SECRET_KEY'
CSV;

Once the UNLOAD command is executed successfully, you can view the new file created under the S3 bucket. The file is now available in the S3 bucket and can be downloaded. PROS OF EXPORTING DATA USING UNLOAD COMMAND Optimized for large exports; Supports parallel exporting; Supports custom queries. CONS OF EXPORTING DATA USING UNLOAD COMMAND Requires AWS credentials; No automation for periodic exports; Large exports can be expensive; Manual cleanup is required. Unload Data from Redshift to CSV with Skyvia With Skyvia, you can unload data from Redshift just as you loaded it. To do this: [Sign in](https://app.skyvia.com/) to Skyvia. Click Create New > Export. Select Redshift as a source and storage or manual downloading as a target. Filter the data you want to export and configure other file settings. Set a schedule for your integration and run the export. Read more about [Redshift export to CSV](https://skyvia.com/data-integration/export). PROS OF UNLOADING DATA USING SKYVIA User-friendly for those without coding skills; Automated, scheduled workflows; Export of related objects and binary data; Data filtering; Integration with 200+ pre-built sources. CONS OF UNLOADING DATA USING SKYVIA Dependent on external services; Limited customization compared to native Redshift UNLOAD. Best Practices for Redshift CSV Operations Large datasets might take a long time to load and require a lot of processing capacity. Make the most of loading and unloading data by using these approaches. Choose the right loading method.
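UNLOAD also accepts options such as HEADER (write a header row in each file), PARALLEL OFF (produce a single output file instead of one per slice), and ALLOWOVERWRITE (replace existing files at the prefix). As a sketch, a hypothetical helper for assembling the statement, with placeholder credentials:

```python
def build_unload_statement(query, s3_prefix, access_key, secret_key,
                           header=False, parallel=True, allow_overwrite=False):
    """Assemble a Redshift UNLOAD statement that exports a query
    result to CSV files under an S3 prefix."""
    escaped = query.replace("'", "''")   # escape quotes inside the SELECT
    parts = [
        f"UNLOAD ('{escaped}')",
        f"TO '{s3_prefix}'",
        f"credentials 'aws_access_key_id={access_key};"
        f"aws_secret_access_key={secret_key}'",
        "CSV",
    ]
    if header:
        parts.append("HEADER")           # header row in each output file
    if not parallel:
        parts.append("PARALLEL OFF")     # single file instead of per-slice files
    if allow_overwrite:
        parts.append("ALLOWOVERWRITE")   # overwrite existing files at the prefix
    return "\n".join(parts) + ";"

sql = build_unload_statement(
    "SELECT * FROM test.sample_csv",
    "s3://csv-redshift-221/Unload_",
    "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY",
    header=True, parallel=False,
)
```

Keeping PARALLEL on (the default) is usually faster for large exports; PARALLEL OFF is convenient only when a downstream consumer needs exactly one file.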
Amazon Redshift supports multiple loading options, including the COPY command for bulk imports, INSERT for small datasets, and AWS Data Pipeline or third-party ETL tools for more complex workflows. Select the method that best suits your data volume and uploading frequency. Compress your files. Compressing data files before loading speeds up the process. Amazon Redshift supports various compression formats, such as GZIP and BZIP2. Use parallel processing. When using COPY to load data or UNLOAD to export it, break large datasets into smaller files to enable parallel processing. This will reduce the time needed for transfers. Verify data. Before loading, validate file formats, column mappings, and encoding to prevent errors. When unloading, confirm that exported files maintain the correct structure and consistency across multiple destinations. Schedule and monitor data transfers. Automate loading and unloading tasks using AWS Lambda, Amazon Redshift Scheduler, or external ETL tools. Monitor logs for errors and resource usage. Conclusion In this article, we\u2019ve described several ways to import CSV to Redshift and vice versa. The COPY and UNLOAD commands are built-in options to handle large data transfers, while Python scripts allow customized workflows with more flexibility. If you\u2019re looking for an automated solution, ETL tools like Skyvia make the process easier with an intuitive interface and scheduling features. Try out [Skyvia\u2019s free plan](https://app.skyvia.com/) and discover how it can work for you. 
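The "compress your files" and "use parallel processing" practices above can be combined: split a large CSV into several gzip-compressed parts so COPY can load them in parallel. A sketch using only the Python standard library (the helper name and part size are illustrative choices, not Redshift requirements):

```python
import csv
import gzip
import os

def split_and_gzip(csv_path, out_dir, rows_per_part=100_000):
    """Split a large CSV into gzip-compressed parts for parallel COPY.

    The header row is stripped from the parts (so IGNOREHEADER is not
    needed) and returned alongside the list of part file paths.
    """
    os.makedirs(out_dir, exist_ok=True)
    parts = []
    with open(csv_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)            # keep the header out of the data parts
        part, writer, out = 0, None, None
        for i, row in enumerate(reader):
            if i % rows_per_part == 0:   # start a new part file
                if out:
                    out.close()
                part += 1
                path = os.path.join(out_dir, f"part_{part:04d}.csv.gz")
                out = gzip.open(path, "wt", newline="")
                writer = csv.writer(out)
                parts.append(path)
            writer.writerow(row)
        if out:
            out.close()
    return header, parts
```

After uploading the parts under a common S3 prefix, a single COPY with the GZIP option pointed at that prefix lets Redshift ingest all parts in parallel.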
[Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/) A dedicated technical writer, Liudmyla brings extensive experience in creating and managing diverse learning materials. Passionate about user-centered documentation, she thrives on enhancing user experiences through clear, engaging, and accessible content. With a keen analytical mindset and a collaborative approach, Liudmyla excels in bridging information gaps and simplifying complex concepts.
[Data Integration](https://skyvia.com/blog/category/data-integration/) How to export Jira data to Excel By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) February 16, 2024 Everyone in the software development industry knows Jira as a popular tool that allows teams to manage agile projects and track issues to review a project's progress. Although its native dashboards and reporting offer a wealth of data on projects, tasks, and workflows, Jira has limitations in data sharing, migration, backup, and custom reporting. In this article, we'll compare two data export methods from Jira to Excel, considering their key differences and ideal use cases. Table of Contents Method 1: Standard Jira Export Functionality Method 2: Excel Plugins for Jira Export Comparison of Methods Conclusion Exporting Jira projects' data to Excel is a common practice.
It covers primary project management and data analysis goals and offers advantages in different data processing areas, such as: Data Analysis and Reporting Customized Analysis: Advanced formulas, pivot tables, and custom filtering help analyze project data deeply. Custom Reports: Design reports unavailable in Jira, including various visual data representations, aiding in better data interpretation. Data Manipulation and Organization Sorting and Filtering: Flexibly sort and filter Jira data in Excel. Data Consolidation: Consolidate data from multiple Jira projects or different sources into a single Excel document for a unified view. Sharing and Collaboration Ease of Sharing: Share Excel files with team members or stakeholders who may not have access to Jira. Accessibility: Excel files can be accessed offline, an advantage in environments where Jira access is limited or unavailable. Familiar Format: Many users are familiar with Excel, making it a convenient tool for sharing and collaborating on project data. Historical Data Tracking and Archiving Record Keeping: Exporting data to Excel allows historical data tracking. Data Archiving: Excel files can serve as an archive for project data, valuable for future reference and compliance purposes. Integration with Other Business Processes Advanced Business Analysis: Teams may combine exported data with other business tools for broader analysis and planning. Workflow Integration: Integrate Excel data into business workflows and processes beyond Jira's capabilities. Project Tracking and Management Project Overview: Excel can provide a quick snapshot or overview of project status, progress, and resource allocation for effective project management. Method 1: Standard Jira Export Functionality Jira allows the export of issues, users, groups, boards, etc., into various file formats, including CSV, which you can easily use in Excel.
Let's do it in a few simple steps: Go to Jira and select the project to export the data from (for example, issues). Click Issues on the left part of the screen. The complete list of issues related to this board will appear. Select Export Issues -> Export Excel CSV to download the CSV file to your PC or laptop. Go to Excel and drag the previously downloaded CSV file into your Excel workspace. Your issues are now exported into Excel and ready for report creation, cleanup, and sharing with team members or third parties. Limitations This method is simple and practical for those who occasionally need a direct data export from Jira to Excel and have a workflow for cleaning up the data afterward. However, there are a few limitations. You cannot export full issue content, such as comments and attachments. Manual export is static, so it reflects the data only at the moment of export. The maximum number of issues for export is limited to 1000. You cannot import data back from Excel to Jira with this method. The exported data might need cleanup to become clear to the reader. Case Studies Let's consider how this method works in real-life scenarios. Agile Project Reporting in a Tech Company Background A tech company uses Jira for agile project management but needs to provide weekly progress reports to stakeholders not using Jira. Solution The project manager manually exported the current sprint's task data from Jira to a CSV file and imported it into Excel. Results A custom report in Excel with visualizations that clearly show the sprint's progress to share with stakeholders. Compliance Auditing in a Financial Services Company Background A financial services company is seeking to audit its project management practices to comply with internal policies.
Solution The compliance officer exports relevant project data from Jira to a CSV file and uses Excel to review and audit the activities against compliance checklists. Results The Excel-based review helps the officer identify non-compliance areas and suggest improvements. Method 2: Excel Plugins for Jira Export Using Excel plugins for data export from Jira to Excel is also a simple and convenient method. Jira offers a native add-in for this approach; however, the Jira Cloud plugin is only available to users with an Office 365 subscription, which is a significant limitation. By contrast, the add-in provided by Skyvia supports a wide variety of databases and cloud apps, including Office 365. Let's walk through the integration scenario step by step with the [Skyvia Excel add-in](https://skyvia.com/excel-add-in/): Go to [Microsoft AppSource](https://appsource.microsoft.com/en-us/home). Enter Skyvia in the search bar. Click Get it now. You will be redirected to another page. Click Open in Excel. Go to the Home tab in Excel and click the Skyvia Query icon. Click Query on the Skyvia Query screen. Select the workspace, connector, and the object to import. Specify the filtering and order criteria if needed. Your Excel spreadsheet is now populated with data from Jira. Limitations Add-ins for exporting data from Jira to Excel offer automation, customization, and advanced data manipulation capabilities directly within Excel. However, there are several limitations and considerations to keep in mind: The subscription fee for some powerful add-ins might be a stopping factor for small businesses and individual users. They can be too complicated for end users who are just starting out with data integration. Depending on third-party software for critical data processes may raise concerns about data security, privacy, and service reliability. Case Studies The scenarios below show how this method is used in business reality.
Software Development Reporting Background A software development company tracks its processes in Jira. The management team wants detailed weekly progress reports to monitor project milestones, bug fix rates, and resource allocation. Solution By implementing an add-in to connect Jira with Excel, they automate the export of issue data, sprint progress, and team workload into Excel spreadsheets. Results With automated Excel reports, the management team gains insights into development efficiency, helping identify bottlenecks and improve sprint planning. The reports also enhance transparency with stakeholders. Customer Support Ticket Analysis Background A technology company uses Jira to manage customer support tickets. The customer service team needs to track ticket resolution times, customer satisfaction, and recurring issues. Solution Using an add-in, the team exports data from Jira to Excel to perform in-depth analysis and trend identification. Results The analysis in Excel reveals patterns in customer issues, guiding improvements in product development and support resources. It also helps in preparing detailed customer service reports for executive review. Comparison of Methods You may find the key differences and ideal use cases for each method in the table below.

| | Manual Export to Excel | Using Add-ins for Export |
| --- | --- | --- |
| Key Differences | Straightforward; requires no additional tools beyond Jira and Excel. Exports are snapshots of data at a specific point in time, without dynamic updates. Manually exported data may require significant post-export manipulation in Excel for formatting and analysis. | Add-ins automate the export process, enabling scheduled exports and real-time data synchronization. Many add-ins offer advanced data manipulation and transformation features directly within Excel. They support custom reports with dynamic data updates, often with richer formatting options than manual exports. |
| Ideal Use Cases | One-time or infrequent reports where dynamic data updates are unnecessary. Small projects with manageable data sizes that don't justify the complexity or cost of add-ins. Users who prefer the simplicity of CSV files and manual data manipulation without needing real-time data or automated processing. | Regular, up-to-date reports without manual intervention each time. Complex projects where advanced data analysis, manipulation, and visualization are needed. Environments that benefit from tightly integrated data workflows between Jira and Excel, enabling seamless data updates. |

Conclusion The choice of data export method depends on your business scenario and the specific requirements of your data analysis and reporting needs. For occasional exports, manual CSV export is perfectly fine. For more frequent and complicated business scenarios, add-ins are a must-have; in this case, you may use the [Skyvia Add-in](https://skyvia.com/excel-add-in/jira) to avoid native Jira-to-Excel data export limitations.
[Data Loader](https://skyvia.com/blog/category/data-loader/) How to Import CSV File into MySQL Table in 5 Different Ways By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) January 25, 2025
Efficient data management starts with importing CSV files into MySQL tables. Doing so streamlines workflows by automating data transfer, reducing manual effort, and ensuring accurate, up-to-date information for better decision-making. Consolidating data in MySQL enables advanced analysis, uncovering insights that drive growth and revenue. With the flexibility of CSV files and MySQL's scalability, this integration bridges tools and teams, fostering seamless collaboration and preparing businesses for future data challenges.

This tutorial explores several methods for importing CSV files into MySQL, tailored to a wide range of expertise levels and preferences. By the end of this guide, you'll understand them well and discover ways to automate the import process without needing to code.

Table of contents

- How to Import CSV Files to MySQL Tables
- Method 1: Importing CSV into MySQL Using Command Line
- Method 2: Importing CSV into MySQL Using Workbench
- Method 3: Importing CSV Using phpMyAdmin
- Method 4: Importing CSV to MySQL Using Python
- Method 5: Importing CSV into MySQL Automatically Using Skyvia
- Summary

How to Import CSV Files to MySQL Tables

Let's explore simple, effective ways to import CSV files into MySQL tables.

Command Line Import

Are you comfortable with a tech-focused approach? The command line is a fast and efficient way to import CSV files. With just a few [SQL commands](https://www.codecademy.com/article/sql-commands), data is ready to be used. It's ideal for users who enjoy a hands-on experience.

MySQL Workbench

If you prefer a user-friendly interface, [MySQL Workbench](https://www.mysql.com/products/workbench/) is a great option. Its simple wizard makes importing CSV files intuitive and straightforward.
This method works well for beginners and those who prefer a visual tool over the command line.

phpMyAdmin

[phpMyAdmin](https://www.phpmyadmin.net/) is a good solution if you're working through a web browser. The platform's import feature simplifies uploading CSV files without requiring any coding. Upload the file, map your fields, and let phpMyAdmin handle the rest.

Python Scripts

If automation is your thing, [Python](https://www.python.org/) is a fantastic tool. Using libraries like [Pandas](https://pandas.pydata.org/) and [SQLAlchemy](https://www.sqlalchemy.org/), you can write scripts that import CSV files into [MySQL](https://www.mysql.com/) seamlessly. However, this method requires Python knowledge and technical expertise, so it might be challenging for business users. If you're unfamiliar with Python scripts, a data integration tool that imports such files automatically is the better choice.

No-Code Platforms

Looking for a hassle-free solution? No-code tools like [Skyvia](https://skyvia.com/data-integration) make the process effortless. Users can integrate data from various systems into their data storage with just a few clicks. Such platforms cater to both technical experts and business users. For instance, when business users need an ad-hoc report but can't wait for analysts, they can quickly handle it themselves with these intuitive tools.

Method 1: Importing CSV into MySQL Using Command Line

This approach addresses common challenges like handling large datasets efficiently and avoiding the limitations of GUI tools. It offers precise control over the import process, ensuring faster performance and fewer errors, especially for bulk data operations. It also provides a powerful, resource-friendly solution for users seeking a reliable and straightforward way to manage server-side imports without relying on third-party tools. Follow these steps for a smooth data import.

Step 1: Prepare the CSV File

Ensure your CSV file is structured correctly.
Include column headers that match the column names in the MySQL table, and use consistent delimiters (e.g., commas for separating fields). Example:

```csv
id,name,email
1,John Doe,johndoe@example.com
2,Jane Smith,janesmith@example.com
```

Save the file with a .csv extension, such as data.csv, and keep it accessible on the same server as your MySQL instance.

Step 2: Create a MySQL Table

Open your terminal or command prompt and enter the following command:

```bash
mysql -u your_username -p
```

Replace your_username with your MySQL username and enter your password when prompted. Once inside the MySQL command line, choose the database where you want to import the data:

```sql
USE your_database_name;
```

Here, replace your_database_name with the name of your target database. Define the table structure to match the CSV file columns:

```sql
CREATE TABLE your_table_name (
  id INT,
  name VARCHAR(100),
  email VARCHAR(100)
);
```

Replace your_table_name with the desired table name. This table will hold the imported data.

Step 3: Place the CSV File

Ensure the CSV file is saved on the server where MySQL is installed and note the absolute file path (e.g., /path/to/data.csv). You will need this path in the next step. If the CSV file is located on your local machine, upload it to the server using tools like SCP, SFTP, or any file transfer method supported by your server.

Step 4: Import the CSV File

While still in the MySQL command line, use the LOAD DATA statement to import the CSV file:

```sql
LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE your_table_name
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
```

Enter this SQL command directly at the MySQL prompt. Explanation of options:

- FIELDS TERMINATED BY ',': specifies that fields in the file are separated by commas.
- ENCLOSED BY '"': indicates text fields are wrapped in double quotes (if applicable).
- LINES TERMINATED BY '\n': sets the line break format for each row.
- IGNORE 1 ROWS: skips the header row in the CSV file.

Note: Ensure the MySQL user has sufficient privileges to access the file location.

Step 5: Verify the Data

After the import, run a query to confirm that the data has been loaded correctly:

```sql
SELECT * FROM your_table_name LIMIT 10;
```

Here, replace your_table_name with the name of your table. This command displays the first 10 rows, letting you quickly verify that the data matches the CSV file. If the data is incorrect or incomplete, double-check the table structure, the CSV formatting, and the LOAD DATA statement.

Limitations

While the command-line method is powerful and efficient, it requires users to understand SQL syntax and server permissions, which can be challenging for non-technical users. It also lacks a user-friendly interface, making it less suitable for those who prefer GUI-based or automated solutions for managing data imports.

Best For

This method is ideal for businesses managing large-scale imports, such as migrating bulk records or performing regular updates on extensive datasets. It's particularly well suited to scenarios requiring precise control over the process, such as handling complex structures or automating server-side operations. By bypassing GUI limitations, it offers a fast, resource-efficient solution for technical users working with high-volume or frequent transfers.

Method 2: Importing CSV into MySQL Using Workbench

This method addresses the need for a user-friendly, GUI-based approach to data imports. It simplifies the process for users who prefer visual tools, making it easier to map columns and validate data before importing. It fits small- to medium-sized datasets and suits those looking for a straightforward way to manage imports without writing SQL commands. Let's set it up, create a target table, and import CSV data.
Step 1: Install MySQL Workbench

Before starting, download the [Community Edition](https://dev.mysql.com/downloads/workbench/) and open it. Then navigate to Database > Manage Connections from the top menu. Create a new connection by providing the following details:

- Connection Name: any name you prefer.
- Hostname: your server's IP address or hostname.
- Port: use the default port 3306 unless configured otherwise.
- Username and Password: credentials for accessing the database.

Afterward, go to the Advanced tab in the connection settings and add OPT_LOCAL_INFILE=1 in the Others text input field. This setting ensures Workbench can upload data from local files. Click Test Connection to verify it. If successful, save the connection settings by clicking OK.

Step 2: Create a Table in MySQL Workbench

Prepare your CSV file. Example:

```csv
Date,Band,ConcertName,Country,City,Location,LocationAddress
2023-05-28,Ozzy Osbourne,No More Tours 2 - Special Guest: Judas Priest,Germany,Berlin,Mercedes-Benz Arena Berlin,"Mercedes-Platz 1, 10243 Berlin-Friedrichshain"
2023-05-08,Elton John,Farewell Yellow Brick Road Tour 2023,Germany,Berlin,Mercedes-Benz Arena Berlin,"Mercedes-Platz 1, 10243 Berlin-Friedrichshain"
2023-05-26,Hans Zimmer Live,Europe Tour 2023,Germany,Berlin,Mercedes-Benz Arena Berlin,"Mercedes-Platz 1, 10243 Berlin-Friedrichshain"
2023-07-07,Depeche Mode,Memento Mori World Tour 2023,Germany,Berlin,Olympiastadion Berlin,"Olympischer Platz 3, 14053 Berlin-Charlottenburg"
```

In the main dashboard, click the + icon next to MySQL Connections to create a new connection to your database in Workbench. In the Setup New Connection dialog box, enter:

- The connection name (e.g., "MyDatabaseConnection").
- The IP address or hostname of your database server (e.g., localhost or 127.0.0.1 for a local server).
- The port: the default for MySQL is 3306; leave it as is unless your database uses a different one.
- The username you use to log in to your database (e.g., root).
Click the Store in Vault… button to securely save your password, or leave it blank to enter the password manually each time you connect. Then click the Test Connection button to verify it. If successful, you'll see a confirmation pop-up. If not, review the entered details for accuracy and ensure your database server is running. Click OK to save the details. Your new connection will now appear on the main dashboard.

Select your database schema in the Schemas panel, right-click Tables, and choose Create Table. Name the table (e.g., concerts) and define columns to match the CSV file:

- Date: DATE
- Band: VARCHAR(100)
- ConcertName: VARCHAR(200)
- Country: VARCHAR(100)
- City: VARCHAR(100)
- Location: VARCHAR(200)
- LocationAddress: VARCHAR(255)

When finished, click Apply to save the table.

Step 3: Import the CSV File

Right-click the target table (e.g., concerts) and choose Table Data Import Wizard. Then select the CSV file to import and click Next. Here, ensure each column in the CSV file maps correctly to a corresponding table column. Adjust the mappings if necessary and proceed by clicking Next. Review the settings and click Start Import. You can monitor the progress in the wizard window. If the import succeeds, a confirmation message appears; in case of errors, review the log messages for explanations.

Step 4: Verify the Import

When the Import Wizard is closed, right-click the table in the Schemas panel and choose Select Rows. Confirm that the data matches your CSV file, and, voilà, success. The data is now imported into MySQL and ready for use.

Limitations

Despite being user-friendly, this method may struggle to handle large datasets efficiently, making it less suitable for bulk imports. It also requires manual intervention for each import, which can be time-consuming and impractical for recurring or automated data workflows.
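If the manual setup becomes a bottleneck, the table definition from Step 2 can also be generated in code rather than clicked through. A minimal sketch in Python (the `create_table_sql` helper is illustrative, not a Workbench feature; the types follow the list above):

```python
# The MySQL types chosen for the concerts table in Step 2.
COLUMN_TYPES = [
    ("Date", "DATE"),
    ("Band", "VARCHAR(100)"),
    ("ConcertName", "VARCHAR(200)"),
    ("Country", "VARCHAR(100)"),
    ("City", "VARCHAR(100)"),
    ("Location", "VARCHAR(200)"),
    ("LocationAddress", "VARCHAR(255)"),
]

def create_table_sql(table: str, columns) -> str:
    """Assemble a CREATE TABLE statement from (column, type) pairs."""
    body = ",\n  ".join(f"{name} {sql_type}" for name, sql_type in columns)
    return f"CREATE TABLE {table} (\n  {body}\n);"

ddl = create_table_sql("concerts", COLUMN_TYPES)
print(ddl)
```

The generated statement can be pasted into Workbench's SQL editor, which keeps the table definition versionable alongside the CSV files it serves.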
Best For

This method is a good choice for businesses that need a user-friendly, GUI-based solution for managing small to medium-sized datasets. It especially suits non-technical users or teams working on ad-hoc data imports, like uploading customer lists, product inventories, or transactional records. It provides a straightforward interface with minimal setup, making it suitable for occasional imports or scenarios where simplicity and ease of use are priorities.

Method 3: Importing CSV Using phpMyAdmin

This approach provides a web-based solution for managing database imports. It benefits users working in shared hosting environments or without direct access to command-line tools. It's perfect for quick, ad-hoc imports, offering an accessible interface for uploading and mapping data. Follow these steps to import data effortlessly.

Step 1: Prepare the CSV File

Before starting, ensure that each column in the CSV file matches the structure of the target table in the database. Use consistent delimiters (e.g., commas) to separate values, include a header row if required (e.g., id, name, email), and save the file with a .csv extension, for example, data.csv. Example:

```csv
id,name,email
1,John Doe,john.doe@example.com
2,Jane Smith,jane.smith@example.com
3,David Johnson,david.johnson@example.com
```

Step 2: Access phpMyAdmin

Open your web browser and navigate to the phpMyAdmin login page, usually located at http://yourdomain.com/phpmyadmin or http://localhost/phpmyadmin for local installations. Enter your username and password, then click Go to log in.

Step 3: Select the Database

In the left-hand menu, click the database into which you want to import the CSV file. If you need to create a new table, follow Step 4.

Step 4: Create a Table (if not already created)

If the table you want to import data into does not exist yet, click the SQL tab in phpMyAdmin and run a CREATE TABLE statement to create it.
Example:

```sql
CREATE TABLE your_table_name (
  id INT,
  name VARCHAR(100),
  email VARCHAR(100)
);
```

Click Save to create the table.

Step 5: Import the CSV File

In the phpMyAdmin interface, click the Import tab in the top menu. Under the File to Import section, click the Browse button and select your CSV file from your computer. The next step is to configure the import options:

- Format: select CSV from the dropdown menu.
- Fields terminated by: enter , if commas separate the fields in your CSV file.
- Fields enclosed by: enter " if text values are enclosed in double quotes.
- Lines terminated by: keep the default \n for line breaks.
- Column names: if your CSV file has a header row, tick the option to ignore the first line.

When finished, click Go to start the import.

Step 6: Verify the Import

Once the import is complete, phpMyAdmin displays a success message indicating the number of rows imported. To verify, go to the Browse tab of the target table and review the imported records. If the import failed or the data appears incorrect, double-check the CSV structure and import settings, correct any issues, and repeat the process.

Step 7: Troubleshooting (if needed)

If you encounter errors, phpMyAdmin provides a log with details. Common issues and fixes:

- Ensure the table columns match the CSV file structure.
- Ensure your CSV file uses UTF-8 encoding.

Limitations

The solution may not handle large datasets effectively, often timing out during bulk imports. Additionally, its web-based interface can be less secure for sensitive data and lacks advanced automation features, making it unsuitable for recurring or complex import tasks.

Best For

Importing CSV files with phpMyAdmin is an excellent choice for small to medium-sized data imports where a simple, GUI-based approach is preferred. It suits businesses or teams without extensive technical expertise, handling tasks such as managing customer records, uploading product catalogs, or updating inventory data.
This method works well for ad-hoc imports and is particularly useful for users who want a straightforward solution without relying on command-line tools or additional software.

Method 4: Importing CSV to MySQL Using Python

This method is perfect for users looking to automate repetitive imports, transform data during the process, or integrate imports into larger workflows. Python's powerful libraries give developers precise control over how data is read, cleaned, and inserted into MySQL databases. This guide uses the mysql-connector-python and pandas libraries to handle CSV-to-MySQL imports.

Step 1: Prepare Your Environment

Download Python from [python.org](https://www.python.org) and install the mysql-connector-python and pandas libraries using pip:

```bash
pip install mysql-connector-python pandas
```

- mysql-connector-python: a Python library for connecting to MySQL databases.
- pandas: a library for data manipulation and handling CSV files.

Step 2: Prepare Your CSV File

Your CSV file should have headers that match the MySQL table column names. Save it as a .csv file, for example, data.csv. Example:

```csv
id,name,email
1,John Doe,john.doe@example.com
2,Jane Smith,jane.smith@example.com
3,David Johnson,david.johnson@example.com
```

Step 3: Create a MySQL Table

Open MySQL and log in with your credentials:

```bash
mysql -u your_username -p
```

Switch to the database where you want to import the data:

```sql
USE your_database_name;
```

Ensure the table matches the CSV structure:

```sql
CREATE TABLE your_table_name (
  id INT,
  name VARCHAR(100),
  email VARCHAR(100)
);
```

Step 4: Write the Python Script

Open your preferred text editor or IDE and create a new Python script file (e.g., import_csv.py).
Copy and paste the following code:

```python
import mysql.connector
import pandas as pd

# Database connection
connection = mysql.connector.connect(
    host='localhost',          # Replace with your host
    user='your_username',      # Replace with your MySQL username
    password='your_password',  # Replace with your MySQL password
    database='your_database'   # Replace with your database name
)

cursor = connection.cursor()

# Load the CSV file into a pandas DataFrame
csv_file = 'data.csv'  # Replace with the path to your CSV file
data = pd.read_csv(csv_file)

# Iterate through the DataFrame and insert records into the MySQL table.
# Values are cast to built-in Python types because the connector cannot
# serialize NumPy types such as int64.
for _, row in data.iterrows():
    cursor.execute(
        "INSERT INTO your_table_name (id, name, email) VALUES (%s, %s, %s)",
        (int(row['id']), str(row['name']), str(row['email']))
    )

# Commit the transaction
connection.commit()

print("Data imported successfully!")

# Close the connection
cursor.close()
connection.close()
```

Then replace your_username, your_password, your_database, and your_table_name with your database credentials and table details.

Step 5: Run the Script

Save the Python script, open a terminal or command prompt, navigate to the folder containing it, and run:

```bash
python import_csv.py
```

Step 6: Verify the Import

Log in to your MySQL database and verify the imported data by running a query:

```sql
SELECT * FROM your_table_name LIMIT 10;
```

Check that the records from your CSV file are correctly loaded into the table.

Limitations

Using Python to import CSV files requires programming knowledge, which can be a barrier for non-technical users. Setting up and maintaining import scripts can be time-consuming and error-prone, especially for those unfamiliar with Python or database operations.

Best For

Importing CSV files into MySQL using Python fits businesses that require automation, scalability, and flexibility in their workflows.
This method works well for technical users handling large volumes, recurring imports, or intricate transformations. It allows seamless integration into broader pipelines, making it perfect for use cases like automated reporting, real-time processing, or syncing information across platforms.

Method 5: Importing CSV into MySQL Automatically Using Skyvia

Importing CSV files into MySQL automatically with [data integration platforms](https://skyvia.com/blog/data-integration-tools/) like [Skyvia](https://skyvia.com/data-integration/import) solves the pain points of managing recurring data imports without technical expertise. It provides a no-code solution for users who want to:

- Automate the process.
- Eliminate manual work.
- Ensure consistent, error-free imports.

With its [intuitive interface](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) and cloud-based platform, Skyvia simplifies handling large datasets and integrates seamlessly with cloud storage services like Google Drive or Dropbox. Using Skyvia, you can automatically [import](https://skyvia.com/data-integration/mysql-csv-file-import-and-export) CSV files into MySQL with minimal setup. Follow these steps to complete the process.

Step 1: Sign Up and Log In to Skyvia

Visit the [Skyvia website](https://skyvia.com/) and sign up for a free account if you don't have one. Then log in to access the dashboard.

Step 2: Create a New Integration Task

From the dashboard, click Integration, select + Create New, and choose the Import option to set up a data import process.

Step 3: Connect to the MySQL Database

Click New Connection, select MySQL from the available options as a target, and enter the required credentials. Next, click Create Connection, test it, and save. Then return to the import setup screen and select the MySQL connection you created.
Step 4: Prepare the CSV File

Ensure the CSV file has a header row with column names matching the MySQL table structure. Store the file in a location accessible for uploading, or connect a cloud storage service like Google Drive or Dropbox. Example CSV file preview:

```csv
id,name,email,phone
1,John Doe,john.doe@example.com,1234567890
2,Jane Smith,jane.smith@example.com,9876543210
3,David Johnson,david.johnson@example.com,5678901234
4,Lisa Brown,lisa.brown@example.com,4561237890
```

Step 5: Create a New Import Package

Click Integration → + Create New → Import from the dashboard. Select the CSV file as the source: upload the file or link a cloud storage account, then preview and confirm the file structure. Next, select the MySQL connection created earlier and specify the target table where the data will be imported.

Step 6: Configure Import Settings

Now select the import mode:

- Insert: adds new rows.
- Update: updates existing rows based on a matching key.
- Upsert: combines insert and update.

Enable options like error logging and duplicate record handling if needed.

Step 7: Automate the Import

To schedule the import, navigate to the Schedule tab during package setup. Set a frequency (daily, weekly, monthly, or custom intervals) and specify the time and timezone. After that, enable the schedule and save.

Limitations

Skyvia is highly user-friendly and requires no coding, but it relies on a stable internet connection since it's a cloud-based platform. It also may not offer the same granular control or customization as scripting solutions like Python for highly specific or complex import scenarios.

Best For

Importing CSV into MySQL automatically using Skyvia is a fine choice for businesses that value automation, ease of use, and cloud-based solutions.
This method is perfect for non-technical users or teams that need to schedule recurring imports, synchronize data from cloud storage services, or handle integrations between platforms without manual intervention. Skyvia's no-code interface and powerful automation features make it an excellent choice for streamlining operations like customer data updates, inventory management, or regular reporting.

Summary

CSV files can be imported into MySQL tables through various methods, each tailored to different expertise levels, use cases, and business requirements. Whether companies prefer command-line efficiency, GUI-based tools, or code-driven approaches, there's a solution for everyone:

- Command Line Import is ideal for advanced users looking for performance, precision, and control over large-scale imports directly from the server.
- MySQL Workbench offers a user-friendly GUI option for small-to-medium data imports and visual database management.
- phpMyAdmin fits shared hosting environments for quick and straightforward imports directly through a browser.
- Python Scripts are best for developers creating automated pipelines or handling complex data transformations.
- No-code platforms like Skyvia are perfect for non-technical users, offering a secure, scalable, and automated solution for seamless data integration without writing code.

Choosing the right method depends on your technical expertise, project scale, and need for automation. With the proper approach, importing CSV files into MySQL becomes a streamlined process, enabling teams to manage data efficiently and smoothly.

FAQ on Importing CSV Files into MySQL

Which method is best for handling large CSV files?

The command-line approach (LOAD DATA INFILE) is optimized for performance and is the best choice for large datasets. It operates directly on the server, avoiding GUI-related overhead and ensuring faster imports.

Can I schedule automatic imports of CSV files into MySQL?
Yes, automation is possible with tools like:

- Skyvia, which offers built-in scheduling for recurring imports.
- Python scripts, which can be run from cron jobs for automated execution.
- Command-line scripts, which allow using LOAD DATA INFILE in shell scripts combined with scheduling tools.

These methods enable seamless, hands-free data imports into MySQL on a schedule.

What makes Skyvia a good choice for importing CSV files into MySQL?

Skyvia is a no-code platform designed for simplicity and automation:

- It supports large datasets and recurring imports with scheduling.
- It integrates with cloud storage services like Google Drive.
- It provides detailed logs for error tracking and monitoring.

Skyvia is perfect for non-technical users or those looking for a scalable, hands-free solution.

How do I ensure data accuracy when importing CSV files into MySQL?

To ensure data accuracy:

- Validate the CSV file. Check for missing values, mismatched data types, and duplicate entries.
- Match columns correctly. Ensure that the column headers in your CSV match the structure of the MySQL table.
- Preview before import. Many tools, like Skyvia and MySQL Workbench, allow you to preview data and mappings before importing.
- Use error logs. Review import logs provided by tools like Skyvia or command-line scripts to catch and resolve errors.

What are the most common errors when importing CSV files into MySQL?

Some common errors include:

- Data type mismatches. Ensure that the data types in the CSV match the column data types in MySQL (e.g., no text in numeric fields).
- Encoding issues. Use UTF-8 encoding for the CSV file to avoid character misinterpretation.
- Incorrect delimiters. Verify that the delimiter in the CSV matches the one specified in the import settings.
- Permission denied. Ensure the MySQL user has the required permissions to import files and that the file is accessible to the MySQL server.
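Two of the checks above, matching column headers and UTF-8 encoding, can be automated before any import attempt. A minimal sketch using only the Python standard library (the helper names and the three-column layout from the earlier examples are illustrative):

```python
import csv
import io

EXPECTED_HEADER = ["id", "name", "email"]  # must match the MySQL table columns

def is_utf8(raw: bytes) -> bool:
    """Return True if the file bytes decode cleanly as UTF-8."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def header_matches(text: str, expected=EXPECTED_HEADER) -> bool:
    """Return True if the CSV header row equals the expected column list."""
    reader = csv.reader(io.StringIO(text))
    return next(reader, []) == expected

good = "id,name,email\n1,John Doe,john.doe@example.com\n"
print(is_utf8(good.encode("utf-8")))      # True
print(header_matches(good))               # True
print(is_utf8("José".encode("latin-1")))  # False: 0xE9 alone is not valid UTF-8
print(header_matches("id,name\n1,A\n"))   # False: the email column is missing
```

Running such a pre-flight check in the same script (or cron job) that performs the import catches the two most common failure modes before any rows reach the database.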
[Data Integration](https://skyvia.com/blog/category/data-integration/)

How to Import CSV Into QuickBooks Online and Desktop

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), January 31, 2025
Hey there, financial adventurers! Ever wondered how to import CSV files into QuickBooks without hassle? If the very thought of it makes you procrastinate, don't worry: this blog post is here to brighten your day. Uploading CSV to QuickBooks may sound tricky, but it's a game-changer for organizing your financial data. In this article, we'll guide you through the process step by step, breaking it down into plain, simple terms. Whether it's about income reports or vendor payments, you'll see how easy it is to format, upload, and manage your data. So, let's roll up our sleeves and dive in!

Table of Contents

- What is QuickBooks Online?
- What is QuickBooks Desktop?
- QuickBooks Online vs. Desktop: Which One is Right for You?
- What's a CSV File, and How Does It Differ from Excel?
- How to Prepare CSV Files for Importing into QuickBooks?
- Import CSV to QuickBooks: What Transactions Can You Import?
- Method #1: Import CSV to QBO
- Method #2: Import CSV to QBD
- Method #3: Automagically Import CSV to QuickBooks with Skyvia
- Why Is Importing CSV Files Essential for Businesses?
- How to Handle Errors During the Import Process?
- Conclusion

What is QuickBooks Online?

Think of QuickBooks Online as a digital organizer for your money matters. It's like having a virtual wallet that you can access anywhere with the internet. You can track your cash flowing in and out and keep tabs on your business finances without carrying a physical ledger, all from your computer or phone.

What is QuickBooks Desktop?
QuickBooks Desktop is like an old, reliable toolbox sitting on your computer. It's packed with all the financial tools you need (crunching numbers, creating invoices, and managing expenses) without needing the internet. Everything's right there, ready to help you fix up your finances. Just keep in mind that it doesn't get updates as often as QuickBooks Online, so you might miss out on some of the latest features and tricks.

QuickBooks Online vs. Desktop: Which One is Right for You?

It's like choosing between a smartphone and a trusty old flip phone. Both get the job done, but one might fit your style better. Let's dive in and find out which of the [QuickBooks](https://quickbooks.intuit.com/global/) products matches your financial needs!

QuickBooks Online (QBO)

Pros

- Anywhere, anytime. It's like having your money office in your pocket: access your finances from anywhere with the internet.
- Automatic updates. Like a self-updating app, QBO gets new features regularly without hassle.
- Collaboration ease. Perfect if your team needs to work on your financial data together.

Cons

- Monthly cost. Just like a subscription service, you pay regularly to keep the virtual office open.
- Internet hang-ups. If the net's down, so is your access.
- Learning curve. Getting comfortable in the online playground might take a bit longer.

Best for: businesses on the move that need team access and don't mind paying a little extra for the convenience.

QuickBooks Desktop (QBD)

Pros

- Standalone champion. Like a trusty old desktop computer, QBD doesn't rely on the internet; it's always there.
- No subscriptions. Pay once, and it's yours: no regular money leaks.
- Mature features. Like a wise wizard, it has a toolbox built from years of financial magic.

Cons

- Location bound. You can't work from home or your favorite coffee shop; no financial globetrotting.
- Manual updates.
No auto-magic here – you update when you choose.
- Team tango trouble. Collaborating can be a bit more like a slow dance.

Best for: users who are okay with staying put and want full control without a subscription – and who have a thing for old-school reliability.

What's a CSV File, and How Does It Differ from Excel?

Let's stroll down Data Lane and chat about CSV and Excel files and how they fit right into QuickBooks. A CSV file, short for "Comma-Separated Values," is basically a simple spreadsheet where commas separate each piece of information – like a neat, plain shopping list. Each row in the CSV represents a transaction, while the columns hold all the details: date, amount, description, and more.

Now, Excel is like the Swiss Army knife of spreadsheets. Unlike the simple, no-frills CSV plain-text format, Excel is fully-fledged spreadsheet software that saves files in .xlsx (modern Excel files) or .xls (older version) formats. It's packed with tools – formulas, charts, and fancy formatting – that let you analyze, organize, and present data like a pro. Think of it as your go-to for everything from crunching numbers to crafting colorful reports.

CSV vs Excel: main differences

To better understand the difference between these tools, let's have a concise breakdown of their core features:

- File format. CSV: a plain text file with data separated by commas – lightweight and simple, with no formatting, special characters, or images. Excel: a binary file format (.xls) or a modern XML-based file format (.xlsx) with advanced features like formulas, formatting, and charts.
- Purpose. CSV: designed for data transfer and compatibility across platforms. Excel: built for data analysis and presentation.
- Structure. CSV: only raw data, no formatting or formulas. Excel: includes formatting, styling, and complex calculations.
- Compatibility. CSV: universally compatible with most software.
Excel: requires software like [Microsoft Excel](https://www.microsoft.com/en-us/microsoft-365/excel) or similar tools.
- File size. CSV: smaller due to its plain text nature. Excel: larger because of additional features.

As businesses often use both CSV and Excel to organize their financial data, their seamless integration with QuickBooks remains a top priority. The software handles CSV files natively, treating each line as a separate transaction and organizing the details perfectly – just like fitting puzzle pieces into place. As for Excel, you can either upload data directly or convert your worksheet to a CSV file first: that's as easy as selecting "Save As" and choosing "CSV (*.csv)."

How to Prepare CSV Files for Importing into QuickBooks?

Getting your files ready is fairly easy. Just follow a few simple rules, and your CSV import will run like clockwork.

1. Set the Stage with the Right Columns

Ensure your CSV file has the essential columns:

- Date: when the transaction took place.
- Description: a brief note about the transaction.
- Amount: the transaction value.

Note: for some imports, you might need Credit and Debit columns instead of a single Amount column.

2. Keep the Formatting Simple and Clean

- Dates: use a consistent format, like MM/DD/YYYY.
- Numbers: avoid currency symbols and commas; just the digits and decimal points.
- Text: steer clear of special characters that might trip up QuickBooks.

3. Mind the Details

- No blank rows: empty rows can confuse the import process.
- Consistent columns: each column should have a clear, consistent purpose.

4. Save with Care

Once your data is polished, save the file in CSV format.

5. Give It a Once-Over

Before importing, open the CSV file with a text editor like Notepad to ensure everything looks tidy and in order.

Import CSV to QuickBooks: What Transactions Can You Import?

Overall, QuickBooks allows importing the most common transaction types via CSV files, such as bills, invoices, and checks.
But there is a catch – some data types require a bit of manual setup. Read on to discover which transactions are QuickBooks-friendly and which ones are a no-go.

What Can You Import?

QuickBooks supports importing these types of transactions from a CSV file:

- Invoices
- Bills
- Checks
- Expenses
- Payments
- Credit Card Charges
- Deposits
- Journal Entries
- Money Transfers

What Can't You Import?

Let's face it – nothing is perfect. Here's the list of data types that just don't make the cut:

- Payroll Records. Why? Payroll involves sensitive and complex data. QuickBooks prefers you set up payroll within the system to ensure accuracy and compliance.
- Sales Orders. Why? Sales orders are not directly importable. You'll need to recreate them from scratch to maintain proper workflow and tracking.
- Invoice Templates and Other Custom Templates. Why? Custom templates don't transfer over. You'll get to flex your creative muscles by redesigning them again.
- Memorized Transactions. Why? These personalized shortcuts won't make the journey. You'll need to set them up anew in QuickBooks.
- Budgets. Why? Budget data doesn't import directly. Instead, consider inputting your financial plans directly into the system.
- Attachments. Why? Documents or files attached to transactions won't come along. Keep them handy to reattach as needed.
- Non-Posting Entries (e.g., Estimates). Why? Entries that don't impact your accounts, like estimates, need to be recreated within QuickBooks.
- Audit Trail. Why? Your historical audit trail stays behind, but you can start a fresh log for all future activities within the system.
- Reconciliation Reports. Why? Past reconciliation reports won't import. Save them elsewhere and continue reconciling in QuickBooks from where you left off.

Remember, while these data types need manual entry, QuickBooks offers reliable features to handle them once on board.
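Before moving on to the import methods, note that the preparation rules from the previous section (required columns, consistent MM/DD/YYYY dates, plain numeric amounts, no blank rows) are easy to check automatically. Here's a minimal Python sketch using only the standard library – the column names are the ones from the checklist above, and the checks themselves are illustrative, not anything QuickBooks officially enforces:

```python
import csv
from datetime import datetime

REQUIRED = ["Date", "Description", "Amount"]

def check_csv(path):
    """Return a list of problems found in a transactions CSV (empty list = looks clean)."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # Rule 1: the essential columns must be present in the header
        missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
        if missing:
            return ["missing columns: " + ", ".join(missing)]
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            # Rule 3: no blank rows
            if not any((v or "").strip() for v in row.values()):
                problems.append(f"row {i}: blank row")
                continue
            # Rule 2a: dates in one consistent MM/DD/YYYY format
            try:
                datetime.strptime((row["Date"] or "").strip(), "%m/%d/%Y")
            except ValueError:
                problems.append(f"row {i}: bad date {row['Date']!r}")
            # Rule 2b: amounts are digits and a decimal point only -- no $ signs or thousands commas
            try:
                float((row["Amount"] or "").strip())
            except ValueError:
                problems.append(f"row {i}: bad amount {row['Amount']!r}")
    return problems
```

Running `check_csv("transactions.csv")` before every upload is a quick way to catch the formatting snags that most often break an import.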
Method #1: Import CSV to QBO

If you're looking for a simple way to manage your financial data, the QuickBooks Online import CSV feature makes it easy to upload transactions, invoices, and more directly into your account. And guess what? It is plain simple!

Best for: users of QBO doing a one-time import.

Before You Start

First things first – before you upload your CSVs, here's what you need to prepare:

- CSV file. This is your recipe card. Make sure it's formatted correctly with columns for the date, description, and amount. Double-check for any typos.
- Backup plan. Like an extra umbrella for a rainy day, have a backup of your QuickBooks data ready. You probably won't need it, but it's better to be safe than sorry.

Piece-of-Cake Steps to Import CSV

1. Log in to QuickBooks Online.
2. Find and click the gear icon at the top of the page. Then, click Import Data.
3. Pick the type of data to import. Be it customers, vendors, or bank data, it's all in QuickBooks. You can only pick one type per import.
4. Select CSV as the file type you want to upload.
5. Locate your file. Click the button to find your CSV file on your computer.
6. Map columns. This is like matching puzzle pieces: the software will ask you to match the columns in your CSV to the right places.
7. Review and import. Click Review, and QBO will show you what it plans to do. If everything looks good, proceed to Import.

Method #2: Import CSV to QBD

Ready to take on the CSV journey in QuickBooks Desktop? Fear not – it's as easy as ordering your favorite pizza!

Best for: users of QBD who need a one-time import.

Before You Start

The checklist is the same as for the QuickBooks Online import.

Easy-Peasy Steps to Import CSV

1. Open QuickBooks Desktop.
2. Click File and select Utilities. Choose Import.
3. Choose the type of data, like Customers, Vendors, and more.
4. Choose CSV as the file type to import.
5. Locate your CSV file on your computer.
Match your CSV columns with their QuickBooks equivalents, then click Review, followed by Import.

Method #3: Automagically Import CSV to QuickBooks with Skyvia

The two methods described above are already good at importing CSV data. But there's a catch: if you have another CSV with the same column structure, you must repeat all the steps. What if your CSVs lived in a fixed place, like a folder on a shared drive, and software imported the data on a recurring schedule? That's where a cloud solution like Skyvia comes in handy. It's like construction power tools that do the job faster. And Skyvia can deal with [QuickBooks Online](https://skyvia.com/data-integration/quickbooks-csv-file-import-and-export) as well as [QuickBooks Desktop](https://skyvia.com/data-integration/quickbooksdesktop-csv-file-import-and-export).

Best for: users of either QuickBooks platform who need a one-time or recurring CSV import.

If you have data in other formats from services like [Salesforce](https://skyvia.com/blog/salesforce-quickbooks-integration/) or Zendesk, Skyvia can handle it, too.

Before You Start

Here's your checklist of Skyvia import essentials:

- [Sign up for Skyvia](https://app.skyvia.com/). This is free, and once you're in, you can start immediately.
- Write down your QBO or QBD credentials. You'll need them when creating the connection.
- Prepare your CSV files. The CSV files should be stored in a single location that Skyvia can access consistently. For example, if you store them in Google Drive, you'll need a Google account to connect Skyvia to that storage location.
- Again, backup. Though marrying QuickBooks and Skyvia is an easy feat, things can go wrong when you're starting out. Secure your data with [Skyvia backup](https://skyvia.com/backup) or any preferred solution of your choice before importing anything.
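Skyvia handles the recurring schedule for you, but the "single location" idea is worth internalizing: if several CSVs with the same columns pile up in a drop folder, you can also consolidate them into one upload-ready file first. Here's a short standard-library Python sketch under the assumption that every file shares an identical header row (the folder and file names are made up for illustration):

```python
import csv
from pathlib import Path

def merge_csvs(folder, out_path):
    """Combine every *.csv in `folder` (all sharing one header row) into a single file."""
    header = None
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        for p in sorted(Path(folder).glob("*.csv")):
            with open(p, newline="", encoding="utf-8") as f:
                rows = list(csv.reader(f))
            if not rows:
                continue  # skip empty files
            if header is None:
                header = rows[0]
                writer.writerow(header)  # write the header exactly once
            elif rows[0] != header:
                raise ValueError(f"{p.name}: header differs from {header}")
            writer.writerows(rows[1:])  # append the data rows only
    return out_path

# Example: merge_csvs("shared-drive/quickbooks-drop", "combined.csv")
```

The header check is deliberate: mixing files with different column layouts is exactly the kind of inconsistency that derails an import later.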
The ABCs of Importing CSV to QuickBooks Using Skyvia

Let's make your CSV-to-QuickBooks integration smooth sailing. Here's how:

1. Make a New Connection to QB

Skyvia has connectors for both QuickBooks Desktop and QuickBooks Online. For CSV and QuickBooks to say "I do," Skyvia ties the knot, and this is the first part. To start making a QBO connection, click NEW -> Connection, then select QuickBooks Online.

Configuring the QBO connector is just as easy. You don't need to share your QuickBooks user ID and password with Skyvia: you simply log in to the QB platform, and it gives Skyvia an access token.

The QBD connection, on the other hand, adds a few more steps. Because Skyvia runs on the internet while your QuickBooks Desktop sits on your company computer, you need the Skyvia Agent to let Skyvia access it. For more information, [visit this tutorial page](https://docs.skyvia.com/connectors/cloud-sources/quickbooksdesktop_connections.html#connection).

2. Make a New Connection to Your CSV Files

The second part of the knot is creating a connection to where your CSV files are. You can put your CSVs online using Google Drive or your company's FTP folder: Skyvia can connect to these and many other storage services, as well as databases and data warehouses such as SQL Server, PostgreSQL, Oracle, Snowflake, Amazon Redshift, and Azure Synapse Analytics.

Let's say you want them on Google Drive. Here are the easy steps:

1. [Sign in to Skyvia](https://app.skyvia.com/). If you're already inside Skyvia, skip this step.
2. Make a new Google Drive connection. Click NEW from the upper menu, then choose Connection.
3. Type Google Drive into the search box, then click Google Drive.
4. Log in to Google. Google will then give an access token to Skyvia.
5. Test, save, and name your new connection – you'll need it for the next step.

As you can see, the process is almost the same as with the QuickBooks connection, but much simpler.
You will see a similar screen below once you finish the steps:

3. Create a Skyvia Import

Before doing the import, check the CSV file for red flags. Make sure the column sizes fit those in QuickBooks. Then, it's time to tie the knot: the Skyvia Import scenario will copy the CSV records to QuickBooks. Check out this animated step-by-step guide on setting up a new import integration below.

Set the source and target connections, then:

1. Click Add New to create an import task. This will open the task editor window.
2. Configure the Source settings on the Source Definition page. Since we're importing a CSV file from Google Drive, select the file from the CSV Path drop-down list.
3. Define the Target settings on the Target Definition page. Choose the object you want to import data into and select one of the available operations: INSERT, UPDATE, UPSERT, or DELETE.
4. Map the CSV columns to the QuickBooks Customer object. Note that only fields with the same data type can be mapped. Use Column mapping to match target fields to source fields without transformations. If you need to convert a source column's data type to match the target field type, use Expression mapping – for example, for the DisplayName field.
5. Finally, save the task and run the integration.

You can check out what's inside the CSV file – expect this to be in QuickBooks later. Then, it's time to run your new Import integration. Check it out below.

Finally, you can verify the result in QuickBooks. Here's what happened to the customer list: you all know these names, so comparing it with the CSV records earlier will be painless.

4. Schedule the Import

Skyvia can schedule your imports, so you can set up the import once and forget it – it will run on its own without your intervention. Simply click Schedule from the upper left of the Import integration, then set up when you want it to run. See a sample below.

That's it. Mark your import CSV to QuickBooks task as resolved. Great job!
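Skyvia's Column and Expression mapping is configured entirely in the UI, but conceptually it behaves like the sketch below: a plain dictionary for one-to-one column mapping, plus small expressions for computed fields. The source column names and the DisplayName concatenation here are illustrative assumptions, not Skyvia's or QuickBooks' exact schema:

```python
# Column mapping: source CSV column -> target QuickBooks field, copied as-is
COLUMN_MAP = {
    "First Name": "GivenName",
    "Last Name": "FamilyName",
    "Email": "PrimaryEmailAddr",
}

# Expression mapping: target fields computed from one or more source columns
EXPRESSIONS = {
    "DisplayName": lambda row: f"{row['First Name']} {row['Last Name']}".strip(),
}

def map_row(csv_row):
    """Turn one CSV record into a QuickBooks-shaped record."""
    target = {dst: csv_row[src] for src, dst in COLUMN_MAP.items()}
    target.update({dst: fn(csv_row) for dst, fn in EXPRESSIONS.items()})
    return target
```

The point of the two-tier design is that direct copies stay declarative and cheap to validate, while anything that needs a transformation (type conversion, concatenation, reformatting) is isolated in an expression you can test on its own.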
Why Is Importing CSV Files Essential for Businesses?

Now, picture yourself as a small business owner with a mountain of receipts, invoices, and expenses that need to be digitized. Instead of manually entering everything, you can save hours of typing with just a few clicks! And that's not all. Importing CSV files directly into QuickBooks brings the following perks:

- Seamless transition: ideal when upgrading from manual spreadsheets to accounting software.
- Organized data: QuickBooks automatically sorts your data into the right places – no fuss, no muss.
- Fewer errors: reduce typos and improve data accuracy by bypassing manual entry.
- Integrated historical data: import records for a clear, comprehensive view of your financial performance.
- Effortless setup: skip most of the system setup and get started faster.

How to Handle Errors During the Import Process?

Alright, things happen – even the best carts sometimes tip over. So, what should you do if you hit a snag during the import process?

- Rule out mapping issues. Incorrect column mapping is the first suspect during CSV imports. To keep things running smoothly, double-check that your CSV columns align perfectly with those in QuickBooks.
- Keep it consistent. Give your CSV file a thorough check, looking for unsupported characters, missing values, or duplicate entries – like a detective on the case!
- Your CSV format matters! Check your column headers, adjust data formats, and ensure all the required fields are filled in properly. A template or a well-formatted worksheet can save you from many common formatting headaches.
- If all that fails, take a cue from Captain Obvious – use manual entry to get around those especially pesky errors.

Conclusion

Using Methods 1 and 2, you can import CSV files directly into QuickBooks. But with Skyvia, you can set up the import once and let it run automatically for as long as you need.
Plus, Skyvia offers more than just the QuickBooks and CSV scenario – its vast [gallery](https://skyvia.com/gallery) of predefined integrations spans the most common cases of synchronizing cloud apps. From creating Mailchimp lists from Salesforce contacts to more complex scenarios, it is a jack-of-all-trades for your data.

Skyvia is a cloud data platform that lets you integrate any data in various ways. [Signing up](https://app.skyvia.com/register?) is free, and you can start right away. All you need is a working email – and, of course, your data. So, why not give it a try today?

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
This article will be your guide through the world of integrating Shopify with Mailchimp, packed with practical insights and tips. Get ready to amplify your business's tune and make your customers love you! Here's a heads-up on our discussion flow:

Table of Contents

- How Can Shopify and Mailchimp Work Together?
- What Types of Data Can You Integrate Shopify and Mailchimp With?
- The Advantages of Integrating Shopify and Mailchimp
- How to Integrate Shopify and Mailchimp with the Mailchimp Shopify App
- Cloud Solutions to Integrate Shopify and Mailchimp
- 12 Best Practices to Integrate Shopify and Mailchimp
- Final Thoughts

So, get ready, and let's begin.

How Can Shopify and Mailchimp Work Together?

Mailchimp is like a friendly postman for emails. It's a popular tool that helps you send emails to lots of people without breaking a sweat. You can design cool newsletters, track who's reading them, and jazz up your emails with spiffy graphics. Shopify, on the other hand, is like a bustling digital market square. It's where you set up shop online to sell your wares. From snazzy sneakers to handcrafted hats, Shopify's got your back. It helps you showcase your stuff, manage orders, and even handle payments with a smile.

Now, when these two pals join forces, it's like peanut butter and jelly making the perfect sandwich. Mailchimp and Shopify can shake hands through integration. This means your email list and customer info can have a chit-chat. They share details like what folks bought and when they did it. So, you can fire off awesome emails to your customers without missing a beat. It's like teamwork for boosting your business!

There are two rock-solid ways to make them work together. First, there's the Mailchimp app in the Shopify store. Then, there are cloud integration solutions. Today, we will discuss both so you can have buyer engagement in no time.

What Types of Data Can You Integrate Shopify and Mailchimp With?
Here\u2019s the lowdown on what data you can share: Customer Data . This includes who your customers are, what they\u2019ve bought, and where they live. It\u2019s like sharing your customer\u2019s favorite cake recipe with the next-door baker. Purchase History .\u202fThis is all about what items your customers have bought. It\u2019s like telling your friend about that amazing burger joint you tried last week. Product Info .\u202fThese are details about your products, like names, descriptions, and prices. It\u2019s like showing off your cool gadget collection to fellow tech enthusiasts. Order Status .\u202fThis is where you know if orders are still in the oven or out for delivery. It\u2019s like keeping tabs on your pizza order to know when it\u2019s arriving. In a nutshell, integrating MailChimp and Shopify is like having two besties who keep each other updated. They\u2019re secure pals, but you should be thoughtful about what you share and keep an eye on their fun conversation. Safety and teamwork for the win! The Advantages of Integrating Shopify and Mailchimp Check out the perks of this power duo: Supercharged Marketing Integrating MailChimp and Shopify is like adding rocket boosters to your marketing. You can send tailored emails to customers based on what they\u2019ve bought. It\u2019s like giving them a VIP shopping experience, making them feel special, and boosting your sales. Smoother Operations Think of it as having a well-oiled machine. When MailChimp and Shopify team up, your order and customer data dance harmoniously. This means fewer manual tasks and more time to focus on growing your business. Personalized Magic Imagine you\u2019re a magician tailoring tricks for each audience member. With integration, you can send emails that address customers by name and recommend products they\u2019re likely to love. It\u2019s like having a personal shopper for each customer. Newbies\u2019 Welcome Welcoming new customers is like greeting a guest at your home. 
Integration helps you send automated welcome emails to new Shopify customers, making them feel valued right from the start.

Win-Back Campaigns

People sometimes drift away, like a balloon escaping into the sky. The integration lets you spot inactive customers and create win-back campaigns. It's like sending a friendly reminder that says, "Hey, we miss you!"

Cart Recovery Charm

Sometimes customers abandon their carts like forgotten treasures. But fear not! Integration can send follow-up emails to remind them about their abandoned carts. It's like having a polite but persuasive salesperson bring them back to finish their shopping.

Segment Smarts

It's like sorting a deck of cards into different suits. Integration lets you create specific customer segments, such as location-based groups or those who purchased certain products. This way, you can target your emails precisely.

Insights Galore

It's like having a crystal ball for your business. Integration provides data on which emails work best and what products are a hit. This helps you fine-tune your strategies and make smarter decisions.

A/B Testing Advantage

It's like experimenting with different cake recipes to find the tastiest one. Integration lets you run A/B tests on different segments, testing subject lines, visuals, and content. You'll discover what resonates best with each group.

Growth Spark

Integration doesn't just make things smooth; it ignites growth. The more you engage customers with personalized emails, the more they'll stick around and spread the word. It's like planting seeds that turn into a lush garden of loyal customers.

Continuous Refinement

Like a sculptor perfecting their masterpiece, integration lets you continuously refine your targeting. As you gather more data and learn about your customers, you can adjust your segments and strategies for even better results.
In a nutshell, integrating Mailchimp and Shopify is like a match made in e-commerce heaven. It's a recipe for personalized marketing, seamless operations, and a boost in your business's star power. So, go ahead and let these two buddies work their magic! Let's start with the Mailchimp app for Shopify.

How to Integrate Shopify and Mailchimp with the Mailchimp Shopify App

If you're wondering what this app looks like in the Shopify store, take a look above. That's your first option to integrate Shopify with Mailchimp. But first, you need to install it.

Step 1: Install the Mailchimp App

Log in to Shopify, then go to the Shopify App Store using this address: https://apps.shopify.com/mailchimp. The page that will appear is the one above. Then, click the Install button to install the Mailchimp app as seen below. Follow the next prompts: read the privacy agreement, and when you're done, click the Install App button to continue. Check out the page for that below.

Step 2: Connect Your Mailchimp Account

After installation, you need to tell Shopify about Mailchimp by providing your Mailchimp credentials. You need to log in to Mailchimp to do this. It's like matching the two for their first date. When you're done, another page will appear so you can review your settings. After reviewing the info, click Sync Now. Check out a sample below.

Then, syncing proceeds. How long this takes depends on the size of the data in your Shopify store. Once it's done, scroll down the page to see the dashboard. You will see numbers about customers, products, and more. When these numbers appear, congratulations! You have shared your Shopify data with Mailchimp. See a sample below using a newly created store in Shopify with one product and one customer. At this point, your integration of the two is good to go.

Step 3: Go to Mailchimp

You can visit your Mailchimp account to confirm the syncing.
Under Integrations, you will see Shopify as one of the connected apps. See a sample below. Now that your Shopify customers are in Mailchimp, you can set up campaigns and automations. Below is a sample page about creating an automated welcome email for new customers.

Connecting your Shopify store to Mailchimp is only a piece of the puzzle. You need to monitor the performance of your automation, then review and adjust as you see fit. You will gain insights as your confidence with the tools grows and your customer base expands. So, to boost buyer love, improve your strategies by adding more automations in Mailchimp. There's more you can do in the Mailchimp app for Shopify and in Mailchimp itself. Explore each, and if you're stuck on something, you can always contact Mailchimp support.

Cloud Solutions to Integrate Shopify and Mailchimp

Cloud solutions to integrate Shopify and Mailchimp offer more flexibility in data mapping, transformation, sync scheduling, and more. It's like having power tools at your disposal instead of conventional ones. And the one we recommend is Skyvia.

Overview of Skyvia

[Skyvia](https://skyvia.com/) is like a wizard that magically connects different apps and databases. It's a cloud-based integration platform that makes data flow between your favorite apps, just like a river flowing smoothly. Skyvia can connect not just Shopify and Mailchimp but also other apps and databases – Zendesk for help desk support, for example, or Google Analytics for tracking website performance. For a full list of connectors, please [visit this link](https://skyvia.com/connectors/).

Advantages of Integrating Shopify and Mailchimp Using Skyvia

- Easy Setup: Skyvia is user-friendly, so you don't need to be a tech guru to set things up. It's like following a recipe – step by step.
- Automated Data Sync: it's like having a synchronized dance routine.
Skyvia keeps your customer data in Shopify and Mailchimp in perfect harmony, so you're always up to date.
- Flexible Mapping: Skyvia lets you decide what data goes where. It's like arranging puzzle pieces – you create the picture that makes sense for your business.
- Scheduled Sync: imagine an automated reminder that keeps you organized. Skyvia can schedule regular data syncs, so you don't have to lift a finger.
- Transformations: it's like having a translator for data. Skyvia can reshape info to fit the format you need, making integration smoother.
- Security Measures: Skyvia takes security seriously. It's like having a trusty lock on your digital treasure chest, keeping your data safe.
- No Coding Required: you don't need to be a coding whiz. Skyvia's user interface is like a friendly guide, making integration a breeze.
- Scalability: as your business grows, Skyvia grows with you. It's like a reliable partner that can handle more as your needs expand.

In a nutshell, integrating Mailchimp and Shopify with Skyvia is like having a smooth conveyor belt for your data. It brings together the power of both platforms without the hassle, keeping your information in sync and your business running like clockwork. While the Mailchimp app for Shopify provides a straightforward integration for basic needs, Skyvia offers a more comprehensive and versatile solution for businesses with more complex data integration and synchronization requirements.

How to Integrate Shopify and Mailchimp Using Skyvia

Integrating the two powerhouses in Skyvia is easy. Here are the steps:

Step 1: Create the Connections to Shopify and Mailchimp

You need two connections: one for Mailchimp and the other for Shopify. Whichever you create first does not matter. To create a Mailchimp connection, see below. Creating a Shopify connection is almost identical – see the configuration below. The rest of the steps remain the same.
Note that Skyvia does not store your username and password. In fact, both Shopify and Mailchimp are big on security: they provide access tokens to Skyvia, like secondary keys to a secured area of your house. So, rest assured your accounts are safe.

Step 2: Create Integrations for Shopify and Mailchimp

There are a variety of ways to let Shopify and Mailchimp talk to each other in Skyvia. There's the Import integration, like the one featured next.

Skyvia Import

One simple solution is using the Skyvia Import, like the one you see below. To set it up, follow these steps:

1. Click NEW. Under INTEGRATION, select Import.
2. Choose Data Source as the Source Type.
3. Choose the Shopify connection you made as your Source.
4. Choose the Mailchimp connection you made as your Target.
5. Add tasks to specify the information you want to import, like customers. Then, follow the prompts to map fields and specify whether you want to insert, update, or delete.
6. Create a schedule for the Import so you don't have to do this again and again. Check out the sample below: this one syncs Mailchimp and Shopify every day at 12:30 AM.

For more information about the Import integration, see the [official documentation here](https://docs.skyvia.com/data-integration/import/).

Skyvia Control Flow and Data Flow

A fun and advanced way to sync Shopify and Mailchimp is through Skyvia Control Flow. It allows you to run integration tasks in a specific order. With Skyvia Control Flow, you have a clean designer for your data pipeline, along with a set of components to organize your process flow.

One of the components in the Control Flow is the Data Flow. A Data Flow component lets you integrate one or more data sources while enabling advanced data transformations. The Control Flow defines which of the Data Flows runs first and which one follows.
Skyvia Control Flow and Data Flow Example to Integrate Shopify and Mailchimp

Below is a sample Control Flow that first backs up customer info from Mailchimp and then runs the Shopify and Mailchimp sync. It uses two Data Flow components: the backup Data Flow runs first, and the Data Flow for Shopify and Mailchimp runs next. The backup process dumps the customer info into a CSV file. Check out the Data Flow setup of the backup procedure below: After extracting the data from Mailchimp, it writes all the information to a CSV file in Google Drive. But a CSV file in Google Drive is not the only option: you can choose whatever target you wish, like a relational database. Meanwhile, here's an unfinished simple Data Flow setup to sync Shopify to Mailchimp. The Shopify customer list is the source, while Mailchimp is the target.

You can build a Control Flow in Skyvia without coding using the steps below:

1. Click NEW.
2. Under INTEGRATION, click Control Flow.
3. Add Actions and Data Flows to the Control Flow and configure them.
4. Finally, create a schedule like the one in the Import integration.

For more details, check out the official documentation for [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and [Data Flow](https://docs.skyvia.com/data-integration/data-flow/).

12 Best Practices to Integrate Shopify and Mailchimp

Here are some best practices to ensure a silky smooth and successful integration:

1. Plan Ahead. Like plotting a treasure hunt, plan what data you want to sync between Mailchimp and Shopify. Decide which customer segments to target with your email campaigns.
2. Keep Data Clean. Before integration, tidy up your data. Make sure customer information is accurate and consistent on both platforms. It's like prepping a canvas before painting.
3. Segment Strategically. Don't go overboard with segments. Keep them focused and relevant. Targeting smaller, well-defined groups is like delivering personalized messages that hit the bullseye.
4. Test in Stages. Like cooking a complex dish, start with a small batch. Test the integration with a subset of data to ensure it's working as expected before syncing everything.
5. Maintain Permissions. Only share what's necessary. Make sure you're complying with privacy regulations and that you have the right permissions to use customer data.
6. Monitor Regularly. Keep an eye on your integration. Check that data is syncing correctly and monitor email campaign performance. It's like tending a garden to ensure everything grows smoothly.
7. Use Automation Wisely. Automate where it makes sense, like sending welcome emails or abandoned cart reminders. But don't forget the human touch: some communications still require personalization.
8. Keep Platforms Updated. Stay up to date with changes and updates in both Mailchimp and Shopify. It's like maintaining your tools for optimal performance.
9. Mind Data Security. Ensure that any sensitive customer data is securely transferred between platforms. Skyvia and other integration tools provide encryption, but it's wise to double-check.
10. Monitor Results. Keep track of how your integration impacts your business. Monitor open rates, click-through rates, and conversion rates to understand what's working and what needs improvement.
11. Educate Your Team. If you're working with a team, make sure everyone understands the integration process. It's like making sure all the players know their roles on the field.
12. Adapt and Evolve. Just like a ship adjusting its sails, be ready to adapt your integration strategy based on results and changing business needs.

Remember, integrating Mailchimp and Shopify is about boosting customer love. Following these best practices will help you make the most of this dynamic partnership.

Final Thoughts

Making besties out of Shopify and Mailchimp is simple. You've seen two options for doing that here. One is using the Mailchimp app for Shopify.
And the other is using a more robust solution like Skyvia. You may feel a bit uneasy if this is your first time, but once the integration is in place, you will only need to monitor results and, if needed, refine the process. This is something you can do; others have done it, so you are not alone. It's free to start with Skyvia if you're considering integration now. Why not [try it today](https://id.skyvia.com/core/register) and see how you can make better integrations between Shopify and Mailchimp?

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
How to Integrate MySQL and Google BigQuery: A Comprehensive Guide

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) | March 22, 2025

If there were ever a classic in data integrations, MySQL to BigQuery would surely take the lead. And it's no surprise: combining the structured storage of MySQL with BigQuery's powerful analytics won't give businesses clairvoyant vision, but it does provide the data-driven edge needed to stay ahead of the competition. In this guide, we'll break down MySQL's and BigQuery's key capabilities and compare their multiple integration methods, both manual and automated. Read on to find out which one is the best fit for your needs!
Table of Contents

- MySQL Explained: An Overview and Key Benefits
- Google BigQuery: Key Features and Benefits
- What Can You Migrate from MySQL to BigQuery?
- Methods to Connect MySQL to BigQuery
- Method 1: Connect MySQL to BigQuery Manually
- Method 2: Automate MySQL to BigQuery Integration with Skyvia
- Methods' Comparison
- Challenges in Integrating MySQL with BigQuery
- Why Use Skyvia to Sync MySQL to BigQuery?
- Conclusion

MySQL Explained: An Overview and Key Benefits

MySQL is an open-source relational database management system owned by [Oracle](https://www.oracle.com/mysql/what-is-mysql/). It is fast, reliable, and free to use, which adds to its popularity. With SQL for data management and support for [ACID-compliant transactions](https://www.databricks.com/glossary/acid-transactions), MySQL is equally suitable for transactional and structured records. It's scalable and easily integrated with other systems, which makes it a popular choice for web services, e-commerce platforms, SaaS solutions, and business applications.

Key benefits:

- High performance & speed: optimized for fast read and write operations, making it ideal for online transaction processing (OLTP) workloads.
- Scalability: supports vertical and horizontal scaling to handle increasing traffic efficiently.
- Data integrity: ACID compliance makes MySQL second to none for financial systems where accuracy is critical.
- Multi-platform & cloud compatibility: runs on Windows, Linux, macOS, and major cloud platforms like AWS, Google Cloud, and Azure.
- Easy data warehouse (DWH) integration: MySQL is a common component of ETL (Extract, Transform, Load) pipelines, where it stores structured transactional data before it is moved to a DWH for analytics.
Google BigQuery: Key Features and Benefits

BigQuery is Google's contribution to the [data warehouse](https://skyvia.com/learn/what-is-data-data-warehouse) family: a serverless, scalable, fully managed cloud solution that is part of the Google Cloud Platform (GCP). Unlike traditional databases, BigQuery follows a decoupled architecture, where storage and compute are independent. This allows each resource to scale separately, resulting in better performance: since storage doesn't impact query processing, BigQuery can handle massive datasets without bottlenecks.

As a cloud data warehouse, BigQuery shares most common features with its counterparts from AWS (Amazon Redshift) and Azure (Azure Synapse Analytics), including:

- Handling massive datasets through distributed computing;
- Support for SQL-based querying;
- Cloud-native scalability;
- Integration with data pipelines;
- Enterprise-grade security.

That said, BigQuery boasts several unique features that set it apart from competitors:

- Serverless architecture: It was the first major cloud DWH to be fully serverless. There is no infrastructure management; Google handles everything automatically, including patches, updates, and auto-scaling. The [pricing](https://cloud.google.com/bigquery/pricing/) is based on a pay-per-query model, eliminating the need to maintain always-on servers.
- Real-time querying: Thanks to built-in streaming ingestion (the BigQuery Storage Write API and the legacy streaming API), BigQuery can ingest and analyze streaming data, providing instant insights and live dashboards. Real-time querying works out of the box; no additional services or external tools are required.
- ML support: With BigQuery ML, Google's native ML feature, users can train and deploy machine learning models directly inside BigQuery with simple SQL commands, with no need for complex ML frameworks like TensorFlow or PyTorch.

What Can You Migrate from MySQL to BigQuery?
Not all MySQL components can be directly transferred to BigQuery. When planning such an integration, it is important to evaluate each component of your database and plan the migration strategy accordingly. Below is a breakdown:

| MySQL component | Transferable | Comment |
| --- | --- | --- |
| Data rows | Yes | |
| Data types | Yes | Some data types require transformation; see Challenges in Integrating MySQL with BigQuery. |
| Table schemas | Yes | |
| Indexes | No | MySQL indexes are not applicable in BigQuery. |
| Stored procedures | No | Cannot be directly migrated; require re-implementation in BigQuery's SQL syntax. |
| User-defined functions | No | Must be rewritten in BigQuery's SQL syntax. |
| Triggers | No | BigQuery does not support triggers. |
| Views | Yes | |
| Transactional logic | No | Needs to be redesigned within BigQuery. |
| User permissions | No | Appropriate access controls should be set up in BigQuery. |

Methods to Connect MySQL to BigQuery

Connecting MySQL with BigQuery requires an [ETL pipeline](https://skyvia.com/learn/etl-vs-elt): an automated process that extracts data from MySQL, transforms it, and loads it into BigQuery for analytics. This integration is commonly practiced by businesses that seek to improve operational performance and gain advanced analytical insights. Its benefits include:

- Optimized performance for large datasets. Relational DBs are built for fast lookups and transactions, but they may stall when processing heavy analytical queries. BigQuery, on the other hand, is optimized for fast ingestion of large data streams, making it a top choice for real-time and batch pipelines.
- Consistency of transactional and analytical operations. Running analytics on a live transactional database inevitably impacts app performance for end users.
Keeping analytical workloads separate in a DWH ensures that transactional operations remain uninterrupted, maintaining smooth business processes. Below, we'll explore the most effective methods for integrating MySQL with BigQuery.

Method 1: Connect MySQL to BigQuery Manually

Dump & Load

This is the traditional approach to data transfers. It involves exporting records from MySQL, uploading them to Google Cloud Storage (GCS), and then importing them into BigQuery. Although manually effortful, this method is straightforward and reliable, and best suited for one-time data transfers or infrequent updates. Below is a step-by-step guide:

1. Set up the Google Cloud Platform (GCP) environment:
   - Create a [Google Cloud account](https://console.cloud.google.com/).
   - Enable the BigQuery service: within your GCP project, ensure that the BigQuery API is enabled.
   - Install the [Google Cloud SDK](https://cloud.google.com/sdk/docs/install), the toolkit for interacting with GCP services. After installation, run `gcloud init` to configure the SDK with your GCP account and set the default project.
2. Export data from MySQL. Use the `mysqldump` command to export MySQL databases or tables. To export a specific table:

```
mysqldump -u [username] -p[password] -h [hostname] [database] [table_name] > [table_name].sql
```

   Replace the placeholders in [ ] brackets with your MySQL credentials and the specific table you wish to export. Note that `mysqldump` outputs SQL statements by default, while BigQuery loads data files such as CSV; so either export the data as delimited files instead (e.g., with a `SELECT ... INTO OUTFILE ... FIELDS TERMINATED BY ','` query or `mysqldump --tab`) or convert the dump to CSV before uploading.
3. Upload data to GCS:
   - In the GCP Console, create a new bucket in GCS to hold your files.
   - Upload your files to the newly created bucket using the `gsutil` command-line tool:

```
gsutil cp [table_name].csv gs://your-bucket-name/
```

4. Create a new dataset in BigQuery. Using the `bq` utility, run the command:

```
bq mk [dataset_name]
```

5. Import data into BigQuery. Define the table schema first: BigQuery requires a schema definition for the data. Create a JSON file (schema.json) that describes the structure of your table.
For example:

```json
[
  {"name": "column1", "type": "STRING"},
  {"name": "column2", "type": "INTEGER"},
  {"name": "column3", "type": "FLOAT"}
]
```

Adjust the column names and types to match your records. Then use the `bq` tool to load data from GCS into BigQuery:

```
bq load --source_format=CSV [dataset_name].[table_name] gs://[your_bucket_name]/[table_name].csv schema.json
```

Ensure that the source_format matches the format of your file (e.g., CSV).

Using Google Cloud Data Fusion

[Cloud Data Fusion](https://cloud.google.com/data-fusion?hl=uk) is a fully managed service for building cloud-native ETL pipelines. With a convenient GUI and a drag-and-drop wizard for creating dataflows, it is beginner-friendly and accessible to all kinds of users. Steps to integrate MySQL with BigQuery:

1. Set up a Cloud Data Fusion instance:
   - In the Google Cloud Console, create a new Data Fusion instance.
   - Grant the instance the permissions necessary to access both MySQL and BigQuery.
2. Configure MySQL as a source:
   - Within Data Fusion, establish a connection to your MySQL database by providing the required credentials and connection details.
   - Test the connection to verify accessibility.
3. Design the ETL pipeline:
   - Use the visual interface to create a new pipeline.
   - Add a MySQL source component to read data from the desired tables.
   - Incorporate any necessary transformation logic to align formats and schemas.
   - Add a BigQuery sink component to load the transformed data into the target dataset.
4. Run and monitor:
   - Deploy the pipeline and initiate the transfer process.
   - Check the progress and set up alerts for any errors or issues.

Using Python Scripts for Data Replication

For those comfortable with programming, developing Python scripts is an excellent way to customize the data replication process.
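To make the approach concrete, here is a minimal, illustrative sketch of such a replication script. It assumes the mysql-connector-python, pandas, google-cloud-bigquery, and pyarrow packages are installed; the host, credentials, and table/column names (customers, created_at, is_active) are placeholders for this example, not values from the guide:

```python
# Minimal MySQL -> BigQuery replication sketch (illustrative only).
# Assumes: pip install mysql-connector-python pandas google-cloud-bigquery pyarrow
import pandas as pd


def transform_for_bigquery(df: pd.DataFrame) -> pd.DataFrame:
    """Align MySQL types with the BigQuery target schema."""
    out = df.copy()
    # MySQL DATETIME carries no time zone; BigQuery TIMESTAMP expects UTC.
    out["created_at"] = pd.to_datetime(out["created_at"], utc=True)
    # MySQL TINYINT(1) "booleans" arrive as 0/1 integers.
    out["is_active"] = out["is_active"].astype(bool)
    return out


def replicate() -> None:
    """Extract from MySQL, transform, and load into BigQuery."""
    # Imports are local so the transform above can be reused/tested offline.
    import mysql.connector
    from google.cloud import bigquery

    # 1. Extract (placeholder credentials).
    conn = mysql.connector.connect(
        host="localhost", user="etl_user", password="secret", database="shop"
    )
    df = pd.read_sql("SELECT id, created_at, is_active FROM customers", conn)
    conn.close()

    # 2-3. Transform and load; the client reads GOOGLE_APPLICATION_CREDENTIALS.
    client = bigquery.Client()
    job = client.load_table_from_dataframe(
        transform_for_bigquery(df), "my_project.my_dataset.customers"
    )
    job.result()  # block until the load job finishes
```

Such a script can then be scheduled with cron or Cloud Scheduler, as described in the steps below. Note that `load_table_from_dataframe` relies on pyarrow for serialization.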
This method allows for tailored transformations and scheduling to meet specific business needs. Steps to integrate MySQL with BigQuery:

1. Set up the development environment:
   - Ensure Python is installed on your system.
   - Install the necessary libraries, such as mysql-connector-python for MySQL interactions and google-cloud-bigquery for BigQuery operations.
2. Extract records from MySQL:
   - Establish a connection to the MySQL database using the MySQL connector.
   - Execute SQL queries to retrieve data from the desired tables.
3. Transform data using [pandas](https://pandas.pydata.org/) or [Spark](https://spark.apache.org/):
   - Load the extracted records into a DataFrame for transformation.
   - Perform the necessary cleaning and formatting to ensure compatibility with BigQuery's schema.
4. [Load data into BigQuery](https://skyvia.com/blog/how-to-load-data-into-bigquery/):
   - Authenticate with the Google Cloud API using service account credentials.
   - Use the BigQuery client library to load the transformed data into the target BigQuery table.
5. Automate the process:
   - Schedule the Python script to run at desired intervals using tools like cron (Linux) or Task Scheduler (Windows).
   - Alternatively, deploy the script with Google Cloud Functions and trigger it with Cloud Scheduler for a serverless automation solution.

Method 2: Automate MySQL to BigQuery Integration with Skyvia

While the manual methods discussed above provide greater control over exported data, they can be resource-intensive and require technical expertise. In contrast, the next method, integrating MySQL and BigQuery with Skyvia, is fast, straightforward, and fully automated.

Step 1. Create Connections

First, let's establish a connection to MySQL. [Sign in](https://app.skyvia.com/) to Skyvia or, if you don't have an account yet, create one for free. Click +Create New, select Connection, and choose MySQL.
Note: Skyvia supports two connection methods for MySQL: direct and with an agent. Use a direct connection if your MySQL server is accessible via the internet. If it is on a local computer or network, you'll need to install the [Skyvia Agent](https://skyvia.com/agent) application to make a secure connection.

For this case, we'll create a direct connection. Provide your MySQL credentials: server address, port, user ID, password, and database name. Click Create Connection.

Connecting Skyvia to BigQuery involves the same steps as with MySQL:

1. In Skyvia, go to Connections and click +Create New.
2. On the Select Connector page, choose BigQuery.
3. Select the preferred authentication method.
4. Enter your BigQuery credentials: the Project Id and DataSet Id to connect to. You can retrieve these in the [Google API console](https://console.cloud.google.com/).
5. Specify a Google Cloud Storage Bucket (optional unless you are planning bulk import and replication operations).
6. Click Create Connection.

Note: Skyvia supports two authentication methods for Google BigQuery: OAuth authentication (User Account) and Service Account authentication. With OAuth authentication, you sign in with your Google account without sharing your credentials with Skyvia; instead, Skyvia uses OAuth 2.0 to generate a secure token, which is bound to the connection. For Service Account authentication, you need to provide the Private Key JSON.

Step 2. Create an Import Integration

Once both connections are ready, let's implement a scenario of moving MySQL records to BigQuery.

1. In the top menu, click +Create New and select Import.
2. Set the corresponding source type. For the MySQL to BigQuery import, choose Data Source (database or cloud app).
3. Select the MySQL connection as the Source and the BigQuery connection as the Target.
4. Click Add new to create an import task.
5. On the Source Definition page, select a table to import data from.

Note: Each table must be imported as a separate task.
You can add as many tasks as needed. On the Target Definition page, select an object to import your data to and one of the available operations. Click Next step and proceed to mapping.

Step 3. Mapping

On the Mapping Definition page, map target columns (on the right) to source columns (on the left). To map a target column, click it in the table, select the corresponding source column, and choose the desired mapping type.

Note: If the source and target columns have different data types, use Expression Mapping to transform them so they fit the target data structure.

Click Save task and run the integration.

Step 4. Monitor

You can run the import task on a schedule or manually. To automate the integration, click Schedule and configure the timing. You can track its progress in the Monitor or Logs tabs. If errors occur, click the run results to review the failed records. When the run is completed, a copy of your MySQL data will be available in BigQuery.

Methods' Comparison

| Criteria | Dump & Load | Cloud Data Fusion | Python scripts | Skyvia |
| --- | --- | --- | --- | --- |
| Ease of use | Manual, requires SQL knowledge | User-friendly visual interface | Requires programming skills | No-code, user-friendly |
| Speed | Slow (manual process, batch mode) | Fast (optimized pipeline execution) | Varies (depends on script efficiency) | Fast (optimized cloud processing) |
| Cost | Low (only storage & query costs) | Higher (GCP service charges apply) | Medium (GCP usage + development time) | Affordable (subscription-based pricing) |
| Automation | No | Yes | Partial (requires custom scheduling) | Fully automated & scheduled |
| Best for | One-time transfers, small datasets | Enterprises needing scalable ETL | Custom transformations & developers | Automated, regular integrations |
| Pros | Simple, no extra tools required | Real-time integration support; fully managed; scalable; centralized management of data pipelines | Highly customizable; supports complex logic | No-code setup; automation; support for ETL, reverse ETL, and ELT pipelines |
| Cons | Time-intensive; lack of automation | Requires GCP knowledge; limited custom transformations; less control over infrastructure | Requires coding & maintenance | Requires a paid subscription to access advanced options |

Challenges in Integrating MySQL with BigQuery

While MySQL to BigQuery integration offers clear advantages in terms of analytics and scalability, the process itself comes with certain challenges and limitations that users should be aware of.

Schema differences & data type mismatches. MySQL and BigQuery handle data types differently, which can lead to schema mismatches during migrations. Some MySQL data types have no direct equivalent in BigQuery and require manual mapping.

| MySQL Data Type | BigQuery Equivalent | Potential Issue |
| --- | --- | --- |
| TINYINT(1) | BOOLEAN | Needs conversion |
| DATETIME | TIMESTAMP | Time zones may cause inconsistencies |
| DECIMAL | NUMERIC | Precision differences |
| TEXT/BLOB | STRING | Potential size limits |
| ENUM/SET | STRING | May require transformation |

Performance bottlenecks with large datasets.
Since MySQL is optimized for fast transactions, extracting large volumes of data from it can cause slowdowns and degraded performance.

Real-time syncing complexity. MySQL does not natively support real-time streaming to BigQuery. [Batch processing](https://skyvia.com/blog/batch-etl-processing/) is common, but real-time analytics requires additional tools.

ETL pipeline complexity & maintenance. ETL workflows require ongoing monitoring and updates. Schema changes in MySQL, such as adding or dropping columns or changing data types, can easily break them.

Why Use Skyvia to Sync MySQL to BigQuery?

[Skyvia](https://skyvia.com/) is a cloud data integration platform whose diverse functionality makes it a one-stop solution for all data operations. The benefits of using Skyvia include:

- Automation & scalability. Skyvia automates the entire ETL pipeline, handling large-scale routine tasks so you don't have to.
- Data transformation. The platform's advanced mapping options allow you to transform and restructure data before loading, resolving schema mismatches between MySQL and BigQuery.
- Versatile integration options. It goes beyond basic import. With Skyvia, you can replicate MySQL objects into BigQuery, perform complex transformations using conditional logic, query data directly, and even set up automated [backups](https://skyvia.com/backup).
- Ease of use. The platform's no-code interface makes data integration accessible to all users, from business analysts to engineers. And if you need more control, the [Query](https://skyvia.com/query) product lets you either use a drag-and-drop visual designer or write complex SQL queries manually.

Watch this video tutorial to find out more about how to integrate MySQL with BigQuery using Skyvia.

Conclusion

Although both MySQL and BigQuery are used for storing data, they are optimized for different needs.
Their integration creates a powerful solution, combining efficient transaction processing with scalable analytics. There are multiple integration options available, and choosing the right one depends on your technical expertise and business needs: if you prefer coding and CLI tools, a manual process may be the best fit; if you need a no-code interface, Skyvia is the ideal choice. Whatever method you choose, integrating MySQL with BigQuery is a strategic move that allows you to maximize data potential and drive better business decisions.

FAQ for MySQL and Google BigQuery

Is it possible to connect MySQL to BigQuery without coding?
Yes! You can use no-code [ETL tools](https://skyvia.com/blog/etl-tools/) like Skyvia, Fivetran, Hevo Data, or Stitch to automate the integration without writing code. These platforms provide pre-built connectors for seamless MySQL to BigQuery migration.

What are the advantages of using BigQuery instead of MySQL?
BigQuery is a serverless, highly scalable DWH optimized for big data analytics. Unlike MySQL, it supports real-time streaming, fast SQL-based querying on large datasets, built-in ML models, and pay-per-query pricing for cost efficiency.

Can I use Google Cloud SQL instead of MySQL for BigQuery integration?
Yes! Google Cloud SQL (managed MySQL) can be directly integrated with BigQuery using Cloud SQL federated queries or ETL tools like Dataflow or Cloud Data Fusion, enabling real-time or batch data transfers.

What is the easiest way to move MySQL data to BigQuery?
The easiest way is to use a fully automated tool like Skyvia, Hevo, or Fivetran. These tools handle all steps of the ETL process (extraction, transformation, and loading), ensuring seamless transfer with minimal effort.

Which ETL tools support MySQL to BigQuery migration?
Popular ETL tools for MySQL to BigQuery include Skyvia, Hevo Data, Fivetran, Stitch, Google Cloud Data Fusion, and Airbyte.
These tools automate data extraction, transformation, and loading, reducing manual effort.

[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) has years of experience in technical writing and specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.

How to Load Data into BigQuery: Step-by-Step Guide

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | February 12, 2025
Data warehousing is one of the most effective approaches to handling big data, so the market offers many commercial and open-source DWH tools for various business needs. Google BigQuery remains among the top solutions, as it allows users to store and process large data volumes with near-zero delays. Still, it's important to load data into BigQuery correctly to get the most value out of it. This article covers the key methods for transferring and uploading data into this data warehouse.

Table of contents

- What is Google BigQuery?
- Different BigQuery Data Load Types
- Advantages of BigQuery
- Disadvantages of BigQuery
- Methods for Loading Data into BigQuery
- Data Preparation for BigQuery Upload
- Method 1: Upload CSV File Data
- Method 2: Input Data from JSON Documents
- Method 3: Use Google Cloud Storage to Upload Data
- Method 4: Loading Data Using BigQuery Web UI (Cloud Console)
- Method 5: Loading Data Using the CLI
- Method 6: Loading Data Using REST API
- Method 7: Streaming Data Ingestion
- Method 8: Data Transfer Service
- Method 9: Loading Data Using Skyvia
- Conclusion

What is Google BigQuery?

[Google BigQuery](https://console.cloud.google.com/bigquery) is a serverless data warehouse platform that helps businesses scale easily and at an affordable cost. Since it's part of the global Google infrastructure, it offers extended functionality that goes beyond standard data warehousing options:

- Gemini: an AI-based assistant that facilitates the overall user experience with Google tools.
- Multiple data types support.
This DWH allows users to consolidate versatile data within a single workspace, including structured and unstructured types. It also supports data formats from legacy and open-source systems.

- Real-time analytics: Integrated Apache Flink and Apache Kafka engines enable users to build and run real-time streaming pipelines.
- Built-in machine learning (ML): Google BigQuery includes embedded ML models used to derive predictions and insights.
- Enterprise features: Cross-region disaster recovery, operational health monitoring, data backup, etc., are among the most popular options that facilitate day-to-day enterprise operations.

Different BigQuery Data Load Types

Given the universality of this data warehouse, there are different ways and methods of bringing data there:

- Importing data into BigQuery from files in the following formats: CSV, JSONL, ORC, [Avro](https://avro.apache.org/), and [Parquet](https://parquet.apache.org/), as well as [Datastore exports](https://cloud.google.com/bigquery/docs/loading-data-cloud-datastore) and [Firestore exports](https://cloud.google.com/bigquery/docs/loading-data-cloud-firestore).
- Importing datasets from Google Cloud Storage buckets.
- Integrating data from other Google systems (Google Ads Manager, Google Analytics, Google Drive, etc.).
- Getting datasets from external sources (Cloud Spanner, Apache Spark, cloud-based MySQL, cloud-based PostgreSQL, etc.).
- Executing SQL queries against data sources of your choice.
- Loading Python notebooks from Google Colaboratory.

Advantages of BigQuery

1. Simplified Management

BigQuery is a fully managed serverless platform that eliminates the need to manage, update, and optimize the underlying infrastructure. How is this possible? Google handles all those aspects and ensures excellent performance, availability, and scalability.

2.
Advanced Features BigQuery offers built-in options offering advanced functionality: Geospatial data types and functions (BigQuery GIS) Natural language processing (Data QNA) ML and AI integrations (BigQuery ML) Visualization tools (Google Data Studio). With all these functions, you can easily transform and analyze data without going through complex configuration or normalization processes. 3. Disaster Recovery BigQuery offers a range of protective mechanisms to keep data safe. Thanks to automated backups and recovery, you can promptly restore data upon request. It\u2019s also possible to compare historical data from previous backups if there is a need to. Moreover, [Google stores data in various physical data centers](https://www.google.com/about/datacenters/data-security/#:~:text=Rather%20than%20storing%20each%20user's,a%20single%20point%20of%20failure.) in dispersed locations, which allows organizations to easily meet data storage compliance requirements. Disadvantages of BigQuery 1. Need for Strong SQL Skills It\u2019s necessary to have extensive knowledge of SQL, including dialects for DDL and DML, to query data with BigQuery. Profound SQL skills are also required for query optimization (unoptimized queries can lead to excessive costs). 2. Limited Integrations Even though BigQuery connects to services within the Google Cloud Platform and other external data sources, it generally doesn\u2019t offer many connectors. Therefore, third-party data integration solutions are needed. One of them is Skyvia, a cloud-based data platform that allows users to connect BigQuery with pre-built [200+ data sources](https://skyvia.com/connectors) (databases, cloud applications, flat files, storage systems, etc.) in a few clicks and with no coding. This tool also offers API endpoint creation to connect to a variety of services, which might be a good option for engineers with a high level of technical expertise. 
Methods for Loading Data into BigQuery

Manual Data Loading

The most straightforward way to load data is to insert it manually. For instance, you can use DML operations to add data to a table with INSERT statements, either via the BigQuery Web UI or programmatically via a client library. It's also possible to add data to the DWH by simply uploading flat files of supported formats, which is simple and rather fast. This is an ideal option if you only need to perform such an operation occasionally. If you need to scale data loads, consider automating the process with the solutions below.

Automated Methods

Batch Data Loading

This method is useful for loading a dataset in a single operation. Google offers the BigQuery Data Transfer Service for batch loading from Google SaaS apps and third-party systems. Batch loading is also possible by extracting data from cloud storage or local files in a supported format. Besides, you can use Firestore and Datastore exports to import data in batches. ETL (Extract, Transform, and Load) tools are also popular for batch data ingestion, as they allow you to schedule one-time or recurring jobs.

Streaming Data

This approach is well-suited for real-time or continuous data flows. It enables you to stream data as individual records or batches directly into a BigQuery table. The streaming API or Google Dataflow with Apache Beam are commonly used for streaming pipelines.

Third-Party Solutions

Some third-party applications, such as Skyvia, can load data into BigQuery. Such solutions allow users to import data in batches as well as in streams, and all data pipeline configuration takes place in a user-friendly visual interface. A tangible advantage of Skyvia and similar solutions is that they keep all integrations in one place: you can configure data loading to and from BigQuery, involving multiple sources, as well as set up data transfers between other applications, all within the same dashboard. This makes it convenient for businesses to keep track of day-to-day data-related operations.

Data Preparation for BigQuery Upload

Each company has its own IT infrastructure, software toolkit, data volumes, etc. Therefore, pay attention to each of these factors before importing any data into the Google BigQuery data warehouse, and select the appropriate methods. For example, if you are going to upload files, check which formats are supported. If you need to batch load data from other sources, check the list of [supported data sources](https://cloud.google.com/bigquery/docs/dts-introduction#supported_data_sources). To stream data into BigQuery, make sure you have a subscription in [Pub/Sub](https://cloud.google.com/pubsub/docs/overview). Note that BigQuery also offers built-in [data preparation with Gemini](https://cloud.google.com/bigquery/docs/data-prep-get-suggestions) to preprocess data for specific purposes.

Choosing File Format

When loading data from a file, make sure it corresponds to one of the supported formats: ORC, CSV, JSON, Parquet, or Avro. Firestore exports and Datastore exports are also available.

Schema Identification Support

BigQuery can automatically detect the table schema for self-describing formats. For formats like JSON or CSV, users need to explicitly provide a schema or use the schema auto-detection feature.

Flat Data vs. Nested and Repeated Fields

Avro, JSON, ORC, and Parquet files, along with Firestore exports, support nested and repeated fields. If you need to build a hierarchical data structure inside a DWH, feel free to use any of these formats.

Embedded Newline

BigQuery requires JSON files to be newline-delimited, which means they need to contain a single record per line.

Encoding

BigQuery's primary encoding format is UTF-8, which supports both flat and nested or repeated data. Additionally, it supports ISO-8859-1 encoding for CSV files containing flat data.
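The newline-delimited JSON requirement above is easy to satisfy with the standard library. A minimal Python sketch (the `to_jsonl` helper is my own illustration; `user_id` values are plain integers here, since JSON numbers cannot carry leading zeros):

```python
import json

# Records matching the sample dataset used later in this article
records = [
    {"user_id": 1546, "first_name": "Barry", "last_name": "Stevens", "age": 21, "address": "82, Kings Road"},
    {"user_id": 2546, "first_name": "Harry", "last_name": "Potter", "age": 30, "address": "785/88B, Lakewood"},
]

def to_jsonl(rows):
    # One JSON object per line, no wrapping array -- the shape BigQuery expects
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in rows) + "\n"

jsonl_payload = to_jsonl(records)
```

Writing `jsonl_payload` to a file gives you a JSONL document ready for the upload methods below.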
Date and Time

When loading CSV or JSON files, make sure that DATE columns use the dash (-) separator and the YYYY-MM-DD (year-month-day) format. Check that TIMESTAMP column values use a dash (-) or slash (/) separator for the date portion, in either the YYYY-MM-DD or YYYY/MM/DD format, and that the hh:mm:ss (hour-minute-second) part uses a colon (:) separator.

Creating a Dataset

For some data loading methods, it's necessary to create a dataset first. In BigQuery, a dataset is a top-level object used to organize and control access to tables and views.

1. Navigate to the [Google BigQuery Console](https://console.cloud.google.com/bigquery?) and click the Create dataset option on the project.
2. Indicate the name and the data region, which corresponds to the physical location of the data.
3. To create a table, click the dataset and select the Create table option from the context menu or from the panel.

Choosing a Sample Data Set

Later in this article, we review practical examples of how to load data into BigQuery. For all the data loading operations, we use the sample dataset below.

| user_id | first_name | last_name | age | address |
|---------|------------|-----------|-----|---------|
| 001546 | Barry | Stevens | 21 | 82, Kings Road |
| 002546 | Harry | Potter | 30 | 785/88B, Lakewood |
| 020054 | Sam | Smith | 25 | 010, Mud Lane |
| 015778 | Kevin | Lee | 31 | 875, Melrose Avenue |
| 002336 | Matt | Harris | 27 | 99/110, Lake Drive |

Before loading production data into BigQuery, you might also consider experimenting with this sample dataset first.

Method 1: Upload CSV File Data

Uploading a CSV file from a local computer or cloud storage creates a new table in the selected dataset. In this example, we review the first case. We rely on the Web UI of the BigQuery console since it's a user-friendly and convenient option, especially for those new to BigQuery.
It allows users to easily upload files, create and manage BigQuery resources, run SQL queries, etc. The only downside of the Web UI is its slower performance compared to other available options. Now, let's look at the procedure for uploading a CSV file into BigQuery via the Web UI.

1. Click Add and select the Local file option.
2. Provide all the requested details in the window that appears:
   - Select the CSV file from your computer.
   - Select the dataset or create a new one.
   - Indicate the name of the new table created upon CSV file load.
   - Specify the schema or select the Automatic schema detection option.
   - Provide other details if needed.
3. Check all the details once again and click Create table.
4. Go to the newly created table and click Preview to observe the loaded data.

Method 2: Input Data from JSON Documents

1. Click Add and select the Local file option.
2. Provide all the requested details in the window that appears:
   - Select the JSONL file from your computer.
   - Select the dataset or create a new one.
   - Indicate the name of the new table that will be created upon JSONL file load.
   - Specify the schema or select the Automatic schema detection option.
   - Provide other details if needed.
3. Check all the provided details and click Create table.
4. Go to the newly created table and click Preview to observe the loaded data.

Method 3: Use Google Cloud Storage to Upload Data

1. Click Add and select the Google Cloud Storage option.
2. Click Browse and select the bucket from your Google Cloud Storage space. Then, select the files from that bucket. Note: You can only pick files of formats supported by BigQuery (ORC, CSV, JSON, Parquet, and Avro); other files will be grayed out.
3. Specify the dataset to store data in and indicate the table name for the data.
4. Check all the details once again and click Create table.
5. Go to the newly created table and click Preview to observe the loaded data.
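The Web UI methods above all start from a file. For experimentation, the sample dataset from this article can be generated locally with Python's standard csv module; a minimal sketch (the file name is illustrative, and user_id is written as text to preserve the leading zeros):

```python
import csv
import io

# Sample dataset from this article; user_id kept as a string to preserve leading zeros
rows = [
    ("user_id", "first_name", "last_name", "age", "address"),
    ("001546", "Barry", "Stevens", "21", "82, Kings Road"),
    ("002546", "Harry", "Potter", "30", "785/88B, Lakewood"),
    ("020054", "Sam", "Smith", "25", "010, Mud Lane"),
    ("015778", "Kevin", "Lee", "31", "875, Melrose Avenue"),
    ("002336", "Matt", "Harris", "27", "99/110, Lake Drive"),
]

buf = io.StringIO()
# QUOTE_MINIMAL quotes the address values because they contain commas
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)

with open("user_details.csv", "w", encoding="utf-8", newline="") as f:
    f.write(buf.getvalue())
```

The resulting `user_details.csv` can be uploaded through any of the three methods above.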
Method 4: Loading Data Using BigQuery Web UI (Cloud Console)

For this case, it's necessary to create a dataset first, before getting started. This operation was described in the step-by-step instructions earlier in this article. Then, create an empty table and define a schema. To load data via DML, run the following INSERT query within the BigQuery SQL workspace.

```sql
INSERT
  `test-applications-315905.test_dataset.user_details` (user_id,
    first_name,
    last_name,
    age,
    address)
VALUES
  (001546, 'Barry', 'Stevens', 21, '82, Kings Road'),
  (002546, 'Harry', 'Potter', 30, '785/88B, Lakewood'),
  (020054, 'Sam', 'Smith', 25, '010, Mud Lane'),
  (015778, 'Kevin', 'Lee', 31, '875, Melrose Avenue'),
  (002336, 'Matt', 'Harris', 27, '99/110, Lake Drive');
```

Method 5: Loading Data Using the CLI

The [bq command-line tool](https://cloud.google.com/bigquery/docs/reference/bq-cli-reference) is a Python-based CLI and a preferred method for processing large datasets. Users can create CLI commands and store them as scripts to simplify interactions with BigQuery. The scripted approach is particularly beneficial for dataset troubleshooting and monitoring. The only downside is that the CLI requires in-depth knowledge of the BigQuery platform and the underlying data structure.

Now, let's see how to use the bq CLI to load a JSON data set into BigQuery. As shown below, the command explicitly defines the table schema.

NOTE: Make sure that the JSON file is in newline-delimited format, also known as JSONL.

Add the JSON file details and provide the details needed to connect to the dataset.

```shell
bq load --source_format=NEWLINE_DELIMITED_JSON \
  test_dataset.user_details_json user_details.json \
  user_id:integer,first_name:string,last_name:string,age:integer,address:string
```

Query the "user_details_json" table to verify that data was loaded successfully.
```shell
bq show test-applications-315905:test_dataset.user_details_json
```

Method 6: Loading Data Using REST API

Another method for loading data into BigQuery is the [REST API](https://cloud.google.com/bigquery/docs/reference/rest). It's an ideal option for integrating BigQuery with your proprietary software, applications, or scripts. The best way to use the API is through the client libraries for different programming languages provided by Google. These libraries carry out the following functions:

- Directly communicate with the REST API.
- Handle all the low-level communications (authentication, for instance).
- Eliminate the need to create API calls from scratch.

NOTE: This method requires strong programming skills and a deep understanding of the client library functionality.

The following code is an example of loading data with the Python client library using a CSV file stored in a Google Cloud Storage bucket.

```python
from google.oauth2 import service_account
from google.cloud import bigquery

# Create Authentication Credentials
project_id = "test-applications-xxxxx"
table_id = f"{project_id}.test_dataset.user_details_python_csv"
gcp_credentials = service_account.Credentials.from_service_account_file('test-applications-xxxxx-74dxxxxx.json')

# Create BigQuery Client
bq_client = bigquery.Client(credentials=gcp_credentials)

# Create Table Schema
job_config = bigquery.LoadJobConfig(
    schema=[
        bigquery.SchemaField("user_id", "INTEGER"),
        bigquery.SchemaField("first_name", "STRING"),
        bigquery.SchemaField("last_name", "STRING"),
        bigquery.SchemaField("age", "INTEGER"),
        bigquery.SchemaField("address", "STRING"),
    ],
    skip_leading_rows=1,
    source_format=bigquery.SourceFormat.CSV,
)

# CSV File Location (Cloud Storage bucket; load jobs expect a gs:// URI)
uri = "gs://test_python_functions/user_details.csv"

# Create the Job and wait for it to complete
csv_load_job = bq_client.load_table_from_uri(
    uri, table_id, job_config=job_config
)
csv_load_job.result()
```

Now, a new table has been created within the dataset.

Method 7: Streaming Data Ingestion

This approach is suitable for continuous real-time data processing, inserting one record at a time. It uses the "tabledata.insertAll" API method. Streamed data is first written to the streaming buffer and then written to the actual table in columnar format. This code block adds a new record to the "user_details" table.

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Create Authentication Credentials
project_id = "test-applications-xxxxx"
table_id = f"{project_id}.test_dataset.user_details"
gcp_credentials = service_account.Credentials.from_service_account_file('test-applications-xxxxx-74dxxxxx.json')

# Create BigQuery Client
bq_client = bigquery.Client(credentials=gcp_credentials)

# Data to Insert
rows_to_insert = [
    {"user_id": 254475, "first_name": "Alice", "last_name": "Marie", "age": 32, "address": "45, Lex Drive"}
]

# API Call
bq_client.insert_rows_json(table_id, rows_to_insert)
```

Note that users cannot update or delete records while the data is in the streaming buffer. Due to that, streaming inserts are more suitable for data streams that do not require immediate alterations. For example, if you stream data to the "user_details" table and try to delete the row immediately, BigQuery throws an error since the data is still in the streaming buffer.

Method 8: Data Transfer Service

The fully managed [Data Transfer Service](https://cloud.google.com/bigquery-transfer/docs/introduction) ingests data from other Google SaaS applications (such as Google Campaign Manager, Ad Manager, and YouTube reports), external storage providers like AWS S3, and data warehouses like AWS Redshift or Teradata. This is one of the best options to tackle big data extraction and migration.
1. In the BigQuery Console, open the left menu and select Data transfers. You will be asked to enable the Data Transfer API to use this service if you haven't done that previously.
2. Select the data source of your interest from the drop-down list.
3. Provide all the requested details in the Create transfer window. For instance, for the YouTube Channel source, it's necessary to indicate its display name, repeat frequency, dataset, and table suffix.

Method 9: Loading Data Using Skyvia

[Skyvia](https://skyvia.com/connectors/google-bigquery) is a third-party integration solution that enables data loading from external sources into BigQuery. It can also transfer data to other data warehouses, databases, and cloud apps. To use Skyvia, you do not need any locally installed [software](https://skyvia.com/blog/best-data-pipeline-tools/) except for a web browser and a [registered account](https://app.skyvia.com/register?). Using Skyvia is simple, since it doesn't require any programming knowledge or coding skills. The platform also allows you to automate data transfer to BigQuery by setting schedules.

There are three steps to configure data transfer to BigQuery with Skyvia.

Step 1: Create a Connection to BigQuery in Skyvia

To create a connection, go to +Create New -> Connection and select Google BigQuery from the list of available data sources. In the BigQuery connector window, sign in via OAuth 2.0 and specify the other required parameters (Project ID, DataSet ID, Cloud Storage Bucket). Before loading data to BigQuery, follow the same procedure to create connections to the data sources you are interested in.

Step 2: Create an Import Task to Load Data

1. From the upper menu, go to +Create New -> Import.
2. Find the needed source in the Source drop-down list. It can be a CSV file from file storage or your local computer, a database like SQL Server, or a cloud app like BigCommerce, HubSpot, QuickBooks, etc.
3. Select BigQuery from the Target drop-down list.
4. Click Add task on the right panel to create the import logic. Select the source data first. Then, pick the target object in BigQuery to load data to and specify the DML operation type (INSERT, UPSERT, UPDATE, DELETE).
5. Configure the mapping settings between the source and BigQuery and save the task.
6. Repeat steps 4-5 to add tasks for other data objects.

Step 3: Schedule Your Package for Automatic Execution

If you need to execute imports automatically, without human intervention, consider the scheduling options. For that, click Schedule and specify the exact time at which a one-time data load needs to take place. Alternatively, you can set an interval at which the package will run regularly, loading data into BigQuery. Once the package is scheduled, click Run to start the data loading process. Check the Monitor tab to see the outcomes.

Conclusion

Google BigQuery offers multiple methods for data ingestion and interaction. In this article, we have discussed popular and easy-to-use data import techniques. In a real-world scenario, use the approach that meets your specific business case. Load latency, data change frequency, and reliability are some of the aspects that impact the selection of a data load [method](https://skyvia.com/blog/salesforce-quickbooks-integration/). In some cases, you might need to combine several techniques to optimize the overall data pipeline. In addition to the built-in options, third-party solutions like Skyvia enable users to automate data import. What's more, you can extract data from 200+ data sources in several clicks via a web interface.

FAQ for Loading Data into BigQuery

What are the different methods to load data into BigQuery?

There are several principal ways to import data into Google BigQuery:

- Uploading supported files from your local computer or cloud storage
- Using REST API
- Using Cloud Console
- Using Google CLI
- Streaming data ingestion
- Data Transfer Service
- Skyvia and other third-party [data integration tools](https://skyvia.com/blog/data-integration-tools/)

How can I ensure my CSV file is correctly formatted for import?

Before loading CSV files into BigQuery, check the following:

- No nested or repeated data in the CSV.
- Remove BOM characters.
- Follow the [DATE and TIMESTAMP formats](https://cloud.google.com/bigquery/quotas#load_jobs) when loading data from CSV files.
- Meet the CSV file size limits described in the [load jobs limits](https://cloud.google.com/bigquery/quotas#load_jobs).

What file formats does BigQuery support for data loading?

BigQuery supports the following file formats:

- CSV
- JSONL
- ORC
- [Avro](https://avro.apache.org/)
- [Parquet](https://parquet.apache.org/)
- Cloud Datastore Backup
- Iceberg
- Delta Lake

How does BigQuery ensure data security during the loading process?

BigQuery encrypts data at rest using AES-256 encryption. In transit, data is encrypted using TLS. Access controls enforced through IAM enable organizations to manage and control who can access and manipulate data within BigQuery.

Can I integrate data from multiple sources into BigQuery?

Yes. You can even import data from multiple sources simultaneously using Skyvia's [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) tool.
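The CSV checks from the FAQ above can be scripted before upload. A minimal standard-library sketch (the helper names and the signup_date column are my own illustration; the date check validates shape only, not calendar validity):

```python
import re

# BigQuery DATE values must use dashes: YYYY-MM-DD
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def strip_bom(text):
    # Remove a leading UTF-8 byte order mark if the CSV was saved with one
    return text.lstrip("\ufeff")

def valid_date(value):
    # Shape check only: dash separators in year-month-day order
    return bool(DATE_RE.match(value))

sample = "\ufeffuser_id,signup_date\n001546,2024-02-07\n"
cleaned = strip_bom(sample)
```

A slash-separated value such as "2024/02/07" fails the check, matching BigQuery's requirement that DATE columns use the dash separator.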
[Data Loader](https://skyvia.com/blog/category/data-loader/)

Salesforce Mass Record Deletion: Best Methods & Step-by-Step Guide

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) | March 7, 2025
Managing data in Salesforce inevitably requires keeping it clean, accurate, and up-to-date. Regular data cleanups involve removing duplicate records and clearing outdated data. The cleanup process may require deleting hundreds of records across multiple Salesforce objects. For example, you may need to mass delete reports or lead records in Salesforce. The main concern is finding the best way to do that. There are several methods to perform bulk delete operations in Salesforce: native tools, custom SQL or SOQL queries, and various third-party tools. In this article, we dive deep into the specifics of Salesforce data deletion, explore all the available mass delete methods, and provide step-by-step guides highlighting best practices.
Table of contents

- Understanding Salesforce Data Deletion Rules
- Ways to Mass Delete Records in Salesforce
- Methods Comparison
- How to Mass Delete Records in Salesforce (Step-by-Step Guide)
- Method 1: Salesforce's Mass Delete Wizard
- Method 2: Salesforce Data Loader
- Method 3: Using Workbench
- Method 4: Using Skyvia Query
- Method 5: Apex Code for Mass Deletion
- Method 6: Using Third-Party Apps
- How to Mass Delete Salesforce Records Using Skyvia Data Loader
- Bulk Delete Salesforce Records by CSV Using Skyvia Data Loader
- Batch Remove Salesforce Contacts by Filters Using Data Loader
- Best Practices for Mass Deletion in Salesforce
- Common Errors & How to Fix Them
- Conclusion

Understanding Salesforce Data Deletion Rules

Mass deleting Salesforce records is a risky action that can potentially lead to data loss, compliance issues, or unwanted business consequences. Before doing it, you should understand how bulk delete works in Salesforce.

Soft Delete vs. Hard Delete

Salesforce supports two types of delete operations:

- Soft Delete: When you delete records, they move to the Recycle Bin and can be restored. [The records remain in the Recycle Bin for 15 days](https://developer.salesforce.com/docs/atlas.en-us.salesforce_large_data_volumes_bp.meta/salesforce_large_data_volumes_bp/ldv_deployments_techniques_deleting_data.htm), after which they are deleted permanently. This type helps to avoid accidental deletions.
- Hard Delete: The records are permanently removed from Salesforce and become unrecoverable. Hard deletion bypasses the Recycle Bin and is typically used when data needs to be cleared immediately for compliance or storage reasons.

Permission & Profile Considerations

Before performing data cleanup, make sure you meet the requirements:

- Ensure the "Modify All Data" or "Delete" permissions for the relevant objects are granted.
- Enable API access if you use tools like Data Loader, Workbench, or Skyvia Query.
Ways to Mass Delete Records in Salesforce

Let's overview several options for Salesforce mass delete. They suit different needs and levels of technical expertise.

Mass Delete Wizard

Salesforce includes a built-in [mass delete records tool](https://help.salesforce.com/s/articleView?id=xcloud.admin_massdelete.htm&type=5) in the Setup menu. The tool enables the mass deletion of Salesforce accounts, contacts, leads, and cases. It doesn't support bulk deletion of custom objects and is [limited to 250 records per operation](https://help.salesforce.com/s/articleView?id=xcloud.admin_massdelete_notes.htm&type=5).

Pros:
- Easy to use
- No installation required
- Supports standard objects

Cons:
- Limited to 250 records per operation
- Does not support custom objects
- Lacks advanced filtering options

Best for: Users who need a simple, built-in solution to mass delete records in standard objects.

Salesforce Data Loader

[Data Loader](https://developer.salesforce.com/tools/data-loader) is a client application that enables bulk deletion using CSV files. Users export the records they want to delete, list their IDs in a CSV file, and then use Data Loader to process the deletions. This method requires installation and setup.

Pros:
- Supports both standard and custom objects
- A free tool provided by Salesforce

Cons:
- Requires manual file preparation
- Needs installation on a local machine

Best for: Advanced users who need to bulk delete large volumes of data, including custom objects.

SOQL or SQL Query

Another method of Salesforce mass delete uses [SOQL or SQL](https://skyvia.com/blog/soql-vs-sql-best-practices-to-query-salesforce-database/) queries. You can write a custom SOQL command in Workbench or use Skyvia Query to manage Salesforce data with SQL.

Workbench (SOQL)

[Workbench](https://workbench.developerforce.com/) is a web-based tool that allows users to write SOQL queries to find and delete records.
It requires a solid understanding of SOQL, which makes it a good fit for Salesforce administrators or developers.

Pros:
- Supports complex queries
- Enables customization
- Available online

Cons:
- Requires SOQL knowledge
- Not embedded in the Salesforce UI

Best for: Users who are comfortable with queries.

Skyvia Query (SQL)

[Skyvia Query](https://skyvia.com/sql-for-cloud-apps), on the other hand, allows users to execute SQL queries against Salesforce data. It supports deleting records in standard and custom objects and requires no installation. Skyvia Query offers a visual query builder for non-tech users: its drag-and-drop interface can help you generate complex queries even without deep SQL knowledge.

Pros:
- Supports complex queries
- Enables customization
- Cloud-based and user-friendly

Cons:
- May require basic SQL knowledge for complex queries
- Not embedded in the Salesforce UI

Best for: Users who want more flexibility and look for customization opportunities.

Apex Code for Mass Deletion

Another method to delete records in bulk suits those comfortable with coding. You can run [Apex scripts](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_intro.htm) in the Salesforce Developer Console. This method provides flexibility and customization options but requires development skills.

Pros:
- Highly customizable
- Can automate deletions

Cons:
- Requires developer expertise
- High potential for mistakes if not carefully implemented

Best for: Developers who need full control and automation over the deletion process.

Third-Party Apps & Automation Tools

There are a number of tools provided by [third parties](https://appexchange.salesforce.com/appxSearchKeywordResults?keywords=delete) that can solve data cleanup tasks. Besides the mass delete feature, these tools often include additional functions like data deduplication, synchronization, and backup.
Pros:
- More features than built-in Salesforce tools offer
- Supports advanced filtering and automation

Cons:
- Some tools require paid subscriptions
- Can have a learning curve

Best for: Businesses that need advanced deletion features and automation beyond Salesforce's native tools.

Methods Comparison

We compared all the methods in the table below for your convenience.

| Method | Ease of Use | Requires Coding | Features | Best For |
|---|---|---|---|---|
| Mass Delete Wizard | Easy | No | Built-in tool, limited to 250 records | Deleting a small number of standard object records |
| Salesforce Data Loader | Moderate | No | Supports both standard and custom objects | Deleting large volumes of data via CSV files |
| Workbench (SOQL) | Moderate | Yes | Supports complex filtering and bulk deletion | Users comfortable with queries |
| Skyvia Query (SQL) | Easy | No | Query builder | Users looking for flexibility and customization |
| Apex Code for Mass Deletion | Complex | Yes | Fully customizable, high automation potential | Developers who need full automation control |
| Third-Party Apps (e.g., Skyvia, Cloudingo, DemandTools) | Easy | No | Automation, deduplication, scheduled deletions | Businesses needing advanced features |

Let's look at some of them in detail. Below, you will find the step-by-step guides on mass delete in Salesforce.

How to Mass Delete Records in Salesforce (Step-by-Step Guide)

In this section, we demonstrate the bulk delete methods in step-by-step guides.

Method 1: Salesforce's Mass Delete Wizard

The easiest way to bulk delete Salesforce records is the Mass Delete Wizard, a built-in tool available in Salesforce Classic and Lightning. To mass delete Salesforce contacts, accounts, or other standard object records, follow the steps below:

1. Go to Setup.
2. Look for "Mass Delete Records" in the Quick Find box.
3. In the Mass Delete Records window, select whether to delete accounts, activities, contacts, cases, leads, products, or reports.
4. Indicate the criteria for the records to be deleted and click Search.
5. On the list of found records, select the needed records and click Delete.

This method is quite easy and fast. However, the maximum number of records to remove at once is 250, so it might be necessary to perform dozens of such operations to eliminate thousands of records. This is inconvenient and usually time-consuming, so using alternative services becomes inevitable in such cases.

Method 2: Salesforce Data Loader

This method is helpful in the following cases:

- To delete a large number of records.
- To bulk delete Salesforce records in objects not supported by the Mass Delete Wizard.
- To schedule regular data deletions.

[Data Loader supports other operations](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/) besides deletion. You can also use it to import data.

Prerequisites

Before using Data Loader, ensure that the API is enabled and that the appropriate user permissions are granted in Salesforce. If you perform a bulk hard delete, enable Bulk API Hard Delete for your organization.

Downloading and installing Data Loader

To start using the Salesforce Data Loader, perform the following steps.

1. Install Java Runtime Environment (JRE) version 17 or later for your operating system.
2. Download the most recent version of Data Loader at [https://developer.salesforce.com/tools/data-loader](https://developer.salesforce.com/tools/data-loader).
3. In the Data Loader folder, find and open the installation file.
4. Specify a folder to install the Data Loader.
5. Answer the prompts and select your preferences to complete the installation.

Exporting records with SOQL queries

To delete records with Data Loader, you must extract them from Salesforce first.

1. Open the Data Loader on your computer and click Export. To export archived activity records and soft-deleted records, click Export All instead.
2. Sign in with Salesforce.
3. Choose an object. If your object name isn't listed, select Show all objects to see all the available objects you can access.
4. Select the CSV file to export the data to. Choose an existing file or create a new one. If you choose an existing file, the export replaces its contents.
5. Create a SOQL query for the data export using only the record ID as the desired criterion. For example, the query below selects the IDs of contacts created during the last calendar year:

SELECT Id FROM Contact WHERE CreatedDate = LAST_YEAR

After the operation is complete, a confirmation window will display the export results. Click View Extraction to view the CSV file or OK to close.

Deleting records in bulk using CSV files

To mass delete records in Salesforce, do the following:

1. Launch the Data Loader and click Delete.
2. Sign in with Salesforce.
3. Choose the object to delete data from.
4. Select the CSV file with the list of records to be deleted.
5. Map the ID fields between the selected object and the CSV file.
6. Select the folder where to place the log files once the operation is complete.

Method 3: Using Workbench

[Workbench](https://workbench.developerforce.com/) is a web-based tool that helps Salesforce developers and administrators manage data. It supports deleting large data volumes from standard and custom Salesforce objects.

Prerequisites

Before using Workbench, ensure the API is enabled and that the appropriate user permissions are granted in Salesforce. If you perform a bulk hard delete, enable the Bulk API Hard Delete permission.

Exporting records with SOQL queries

You have to export the records to be deleted first. For this, do the following:

1. In [Workbench](https://workbench.developerforce.com/), select the environment and API version, and click Login with Salesforce.
2. Select an action to perform and select an object.
3. Specify fields and set filters if needed using the visual query wizard; in this case, Workbench will generate a SOQL query automatically. You can also write your own custom SOQL query. Note: the Id field is required for deletion.
4. Set View as to Bulk CSV and click Query.
5. Download the result file.
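Whichever tool produced it, the exported CSV is essentially a list of record IDs. If you later feed those IDs to a bulk deletion job yourself, they usually have to be split into batches. Below is a minimal Python sketch of that preparation step; the helper name `id_batches` and the 10,000-record default are our own illustration (check the batch size limits of your Salesforce API version before relying on a specific number):

```python
import csv
from itertools import islice

def id_batches(csv_path, batch_size=10_000, id_column="Id"):
    """Read record IDs from an exported CSV and yield them in batches.

    The 10,000 default reflects a commonly cited Bulk API per-batch
    maximum; verify it against your org's API version.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        ids = (row[id_column] for row in reader)
        while True:
            batch = list(islice(ids, batch_size))
            if not batch:
                break
            yield batch
```

Each yielded batch can then be submitted as one deletion job, which keeps a million-row export from being sent as a single oversized request.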
Deleting records in bulk using CSV files

Now you can delete the records:

1. Click Data in the top menu and select Delete.
2. Choose the file exported earlier.
3. Map the Id field in the Salesforce object to the Id column in the CSV file and click Map fields.
4. Confirm the mapping and select Confirm Delete.

Method 4: Using Skyvia Query

[Skyvia Query](https://skyvia.com/query) is an online SQL client and query builder for retrieving and managing cloud and relational data with SQL statements. It provides a visual Query Builder to compose queries via the UI and an SQL editor for users who are familiar with SQL.

Prerequisites

To run queries in Skyvia, you need an active Skyvia account and a valid [Salesforce](https://skyvia.com/connectors/salesforce) connection. Not registered yet? Go for it now.

Deleting Salesforce Records with an SQL Statement

To mass delete Salesforce records using an SQL command, do the following:

1. Go to Skyvia and click +Create New.
2. Find Query and choose SQL.
3. Select the existing Salesforce connection or create a new one. Once selected, all the available objects will be displayed on the left.
4. Type the mass delete SQL statement and click Execute.
5. Check the result preview and click Apply All if everything is right.

For example, the following command deletes contacts containing Test in their names:

DELETE FROM Contact WHERE Name LIKE '%Test%';

Method 5: Apex Code for Mass Deletion

[Apex](https://developer.salesforce.com/docs/atlas.en-us.254.0.apexcode.meta/apexcode/apex_dev_guide.htm) enables users to access the Salesforce back end to perform various data manipulation tasks and create custom applications. For mass deletion, Apex allows you to do simple one-time deletions or create a reusable delete script that you can run on a schedule. See below for instructions on how to run a simple one-time deletion.

Writing a simple Apex script for bulk deletion

To delete Salesforce records using Apex, do the following.
1. In Salesforce, click the gear icon and open Developer Console.
2. Click Debug and select Open Execute Anonymous Window.
3. Paste the script and click Execute.

For example, the script below deletes the contacts containing Test in their names:

List<Contact> contactsToDelete = [SELECT Id FROM Contact WHERE Name LIKE '%Test%'];
if (!contactsToDelete.isEmpty()) {
    delete contactsToDelete;
    System.debug(contactsToDelete.size() + ' contacts deleted.');
} else {
    System.debug('No contacts found with "Test" in their name.');
}

Method 6: Using Third-Party Apps

If you are not interested in built-in tools, coding, or writing [SOQL and SQL](https://skyvia.com/blog/soql-vs-sql-best-practices-to-query-salesforce-database/) queries, you can always use a third-party tool for Salesforce mass deletion. There are a number of such tools on the market. Below, we compare five top-rated ones.

- [DemandTools](https://www.g2.com/products/demandtools/reviews) is a data quality toolset for Salesforce CRM, offering functionalities such as deduplication, normalization, standardization, comparison, import, export, and mass delete.
- [Cloudingo](https://www.g2.com/products/cloudingo/reviews) is a [Salesforce data quality](https://skyvia.com/blog/enhance-salesforce-data-quality-and-cleaning-with-skyvia/) tool focusing on deduplication and data cleansing. It can detect duplicates, mass delete unwanted records, automate maintenance, and help visualize data.
- [Skyvia](https://www.g2.com/products/skyvia/reviews) is a cloud data platform that provides data integration, backup, and management solutions. It allows users to mass delete Salesforce records through a user-friendly interface and also provides diverse solutions for data manipulation and data orchestration.
- [Insycle](https://www.g2.com/products/insycle/reviews) is a data management platform that provides an intuitive interface for Salesforce users to manage, cleanse, and delete data in bulk.
It can mass delete Salesforce records, detect duplicates, automate data maintenance, and track changes for audits.

- [Dataloader.io](http://dataloader.io) is a cloud-based data loader for Salesforce, allowing users to import, export, update, and delete data in bulk.

| Tool | Ease of Use | Requires Coding | Best For | Additional Features |
| --- | --- | --- | --- | --- |
| DemandTools | Moderate | No | Data deduplication and mass deletion | Data standardization, import/export capabilities |
| Cloudingo | High | No | Data cleansing and deduplication | Automated maintenance, data quality dashboards |
| Skyvia | High | Optional (for SQL) | Mass delete operations and data integration | Data integration and automation, cloud backup |
| Insycle | High | No | Mass editing and deletion tasks | Supports standard and custom objects, advanced filtering |
| Dataloader.io | High | No | Bulk data operations | Scheduling capabilities, secure performance |

How to Mass Delete Salesforce Records Using Skyvia Data Loader

This method helps with deleting Salesforce records via CSV files. It's really useful for removing a huge number of records in standard and custom objects. Below, we show step-by-step instructions on how to do it with Skyvia's Import tool. In our example, we delete accounts from Salesforce. You can also mass delete tasks, leads, reports, or any other data using the same approach.

Prerequisites

To mass delete Salesforce records with Skyvia Import, you need an active Skyvia account, a valid [Salesforce](https://skyvia.com/connectors/salesforce) connection, and a CSV file with the IDs of the records to be deleted. Explore ways to [export data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) in the Skyvia blog. To keep your data clean with Skyvia, don't hesitate to get a two-week trial.

Bulk Delete Salesforce Records by CSV Using Skyvia Data Loader

1. Click +Create New in the top menu and select Import in the Integration column.
2. Set the source type to CSV upload manually.
To get a CSV file from a supported storage service, choose CSV from storage service instead.

3. Select the Salesforce connection as Target.
4. Click Add new to create an import task.
5. Upload your CSV file and proceed to the next step.
6. In the Target drop-down list, select the object where the records will be deleted. Then, select the Delete operation and proceed further.
7. In the last step of the task editor, map the source file columns to the target fields and save the task.
8. Save the integration and run it.
9. Check the integration results on the Monitor and Log tabs.

Batch Remove Salesforce Contacts by Filters Using Data Loader

If the records to delete can be selected from Salesforce itself, Skyvia offers an alternative and much simpler way to mass delete them. You just need to create an import integration and select the same Salesforce connection as both the source and the target. This method doesn't require CSV files; just do the following:

1. Create a new import integration from the menu.
2. Set the data source type to Data Source database or cloud app and select the same Salesforce connection as both Source and Target.
3. Add a new import task.
4. Select the source object in the drop-down list and set filters if needed. For example, we delete the contacts containing 'test' in their names.
5. Select the target object and choose the Delete operation.
6. Map the fields. In our case, they are mapped automatically.
7. Save the integration and run it. You can schedule it to run automatically at a specific time or within a specific period.

In the same way, you can mass delete Salesforce accounts, opportunities, or other records in standard or custom Salesforce objects.

Best Practices for Mass Deletion in Salesforce

When performing a bulk delete in Salesforce, you must consider the importance of this operation and be aware of the consequences.
Define the deletion objectives and consider security concerns. Plan the data deletion logic carefully and evaluate potential risks. We collected the best practices for mass record deletion:

- Backup First – Always create a full backup before deleting records to prevent unrecoverable data loss. You can try secure Skyvia [Backup](https://skyvia.com/backup).
- Check Relationships – Understand how deleting a record affects related objects and records.
- Use Filters Carefully – Double-check filters before executing a mass delete to avoid accidental deletions.
- Test in a Sandbox – Run deletions in a test environment to confirm the expected result.
- Communicate with Teams – Inform relevant teams before performing large-scale deletions to prevent workflow disruptions.
- Monitor Deletion Logs – Keep track of deleted records using audit logs or change tracking.

Common Errors & How to Fix Them

If something can go wrong, it will. Thus, we collected several potential problems you may face when performing a bulk delete in Salesforce, with hints on how to solve them.

| Error | Cause | Solution |
| --- | --- | --- |
| Insufficient permissions | User lacks Delete/Modify All Data rights | Check the profile [permissions](https://help.salesforce.com/s/articleView?id=platform.admin_userperms.htm&type=5) and update them if needed. |
| API access restricted | API disabled for a user | [Enable API access](https://help.salesforce.com/s/articleView?id=xcloud.branded_apps_commun_api_permset.htm&type=5) in Salesforce. |
| Record locking | Related records prevent deletion | Check the record relations before deleting. Delete related records first. |

Conclusion

Mass deletion of Salesforce records can be done in multiple ways, from built-in tools like the Mass Delete Wizard to third-party solutions or custom SOQL and SQL commands. While some methods require technical skills, Skyvia offers a more accessible and efficient way to manage bulk deletions with minimal effort.
Whether you need to clean up outdated data or remove unnecessary records, choosing the proper method will save time and help keep your Salesforce environment organized.

FAQ for Bulk Delete Records in Salesforce

Can I mass delete records in Salesforce without admin access?

No. Mass deletion requires specific permissions such as "Modify All Data" or "Delete" on the object. If you don't have the necessary permissions, you'll need to contact your Salesforce admin.

What happens to deleted records in Salesforce?

When you delete a record, it moves to the Recycle Bin (soft delete) and can be restored within 15 days. However, if you perform a hard delete, the records are permanently removed and cannot be recovered.

Can I automate mass deletions in Salesforce?

Yes. You can use third-party tools like Skyvia, Cloudingo, or DemandTools to schedule and automate mass deletions based on predefined rules.

Can I undo a mass delete in Salesforce?

Yes, if you performed a soft delete: you can recover your records from the Recycle Bin. In case of a hard delete, the records are unrecoverable.

[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)

Olena is a skilled writer with a unique blend of technical and FMCG industry expertise.
She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication to create engaging and accessible content. With a diverse and inclusive professional background, Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

How to Modernize Data Infrastructure: Exploring Benefits, Examples, and Strategies

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | February 23, 2024

You might think about a
technological upgrade when you hear the word 'modernization' in the context of your company data. That's right, but not completely. Modernizing data infrastructure is a strategic necessity for staying competitive. It empowers businesses to:

- Process and analyze data more efficiently.
- Gain actionable insights.
- Maintain compliance with regulatory standards.
- Drive innovation and a competitive edge.

In other words, you'll forget about data silos and be able to create a unified working ecosystem that offers advanced analytics possibilities and increased efficiency in business operations. In this article, we'll discover how data infrastructure modernization improves businesses and what advantages it may bring.

Table of Contents

- What is Data Infrastructure Modernization?
- Advantages of Data Modernization for Businesses
- Key Elements of Data Modernization
- How to Implement Modern Data Infrastructure
- The Role of Skyvia in Modernizing Data Infrastructure
- Real-life Examples of Data Infrastructure Modernization
  - Teesing Improved Data Orchestrating
  - NISO Automated Data Flows for Financial Operations
  - Cirrus Insight: Salesforce-QuickBooks Integration
- Summary

What is Data Infrastructure Modernization?

Data infrastructure modernization is the process of updating and enhancing an organization's data management systems, tools, and methods to better handle the volume, velocity, and variety of data in today's digital environment. This process makes data more accessible, valuable, usable, and ready for further analytics and decision-making, helping avoid the drawbacks of legacy systems. Legacy data management solutions and practices are challenging; to solve these problems, you'll need new strategies and actions, e.g.:

- Transition from siloed and inefficient legacy systems to more agile, scalable, and integrated data platforms.
- Adoption of cloud-based services, data lakes, advanced analytics tools, real-time data processing capabilities, and more robust data governance and security measures.

Let's see how [data integration](https://skyvia.com/blog/best-data-warehouse-tools/), cleansing, consolidation, transformation, etc., improve data quality and benefit companies in real life.

Advantages of Data Modernization for Businesses

Organizations must understand the advantages of data modernization to commit to such infrastructure improvements. It's like a house repair: you know it's necessary and will make your environment convenient and cozy, but the mere thought of the upheaval scares you. It was working, so why change things? The answer is that you'll feel the results immediately. Here are at least six benefits your company obtains with data infrastructure modernization.

Optimized Data Management

We live in the age of data analytics, where streamlined [data management](https://skyvia.com/blog/best-data-management-tools/) is not just beneficial; it's critical for any company aiming to stay competitive in a data-driven world. The first player in this game is cloud [data warehousing](https://skyvia.com/blog/best-data-warehouse-tools/). It helps consolidate data from various sources into a single repository, enhancing data consistency across the organization. In addition, the integration processes (cleaning, deduplicating, transforming data, etc.) provide higher data quality and reliability for analysis.

Better Decision Processes

Fast and accurate decisions allow businesses to grow and succeed. Advanced analytics and BI based on real-time data insights are beneficial here. Modern [data solutions](https://skyvia.com/blog/top-10-data-analysis-tools/) allow quick data analysis and help businesses stay ahead of trends through well-informed decisions.
Increased Efficiency in Business Operations

Outdated systems almost always harbor inconsistent data. Businesses need cloud computing and big data technologies to bypass this inconsistency. Cloud platforms allow users access from anywhere, creating a more flexible work environment. They also enable disaster recovery solutions, ensuring data is backed up and can be restored quickly, so companies can plan their business future and reduce downtime during unexpected events. The predictive analytics capabilities of big data help forecast trends, demand, and potential issues before they occur. This approach optimizes resource allocation, prevents downtime, and improves efficiency.

Stable Data Storage Solutions

If your business seeks data quality and accuracy, it needs data modernization. Using advanced data storage solutions means catering to modern enterprises' efficiency and scalability needs and enhancing the stability and reliability of data management systems. Such technologies ensure companies' data storage infrastructures are robust, secure, and capable of supporting current and future data demands.

Modifiable and Scalable Solutions

Data volumes grow every second in the modern world, and legacy systems aren't designed to meet this challenge. Modular data architecture allows businesses to quickly build data infrastructures that are more adaptable to changing technologies, data volumes, and business needs. It provides a sustainable framework that not only supports current data management requirements but also prepares companies for future growth and innovation. Adopting a modular approach to data architecture is increasingly becoming a strategic imperative in a data-driven world where flexibility and scalability mean success.
Advanced Compliance and Data Security Measures

In the modern business world, you're out of the competition without data governance and security that ensure data assets are managed responsibly, securely, and in compliance with legal and regulatory standards. While your data grows in volume, variety, and strategic importance, these practices protect your company and enhance its competitive edge by making data a safe, reliable, and accessible resource for all stakeholders.

Key Elements of Data Modernization

The next step of the data modernization journey is considering its key elements. Here is a list of the most important ones.

Data Consolidation

Data consolidation helps companies perform holistic analysis, gaining insights that might not be apparent from isolated datasets. Consolidating data reduces redundancy and simplifies the data infrastructure, making data management processes more efficient.

High-Quality Data Standards

Cleaning and standardizing data enhance its overall quality and usability. They ensure data is a valuable, trustworthy asset supporting advanced analytics, AI, and data-driven decision-making.

Data Storage Solutions

Modern data storage technology offers solutions that prioritize scalability, performance, cost-efficiency, and accessibility. Let's go through the list:

- Cloud Storage: Provides scalable, flexible, and cost-effective remote data access.
- Solid State Drives (SSDs): Offer faster data access speeds and improved reliability over traditional HDDs.
- Object Storage: Ideal for unstructured data; enhances scalability and data manageability with metadata tagging.
- Network Attached Storage (NAS): Enables efficient data sharing and collaboration through centralized network storage.
- Storage Area Network (SAN): Delivers high-speed, dedicated network storage access for data-intensive applications.
- Hyper-Converged Infrastructure (HCI): Simplifies data center management by integrating computing, storage, and networking.
- Data Lakes: Support big data analytics by storing vast raw data in its native format.
- Hybrid Storage Solutions: Combine storage types to balance performance, cost, and accessibility.

Cloud-Based Technology

Cloud computing fundamentally transforms how data is stored, processed, and analyzed in modern data infrastructure. It offers scalable, flexible, cost-effective solutions supporting data-driven companies' growing demands.

Business Data Insights

BI tools transform raw data into meaningful information, such as reports, dashboards, and visualizations, to make it easy to understand. This process allows companies to identify trends, patterns, and anomalies, facilitating data-driven decision-making. Key benefits of this approach are:

- Improved operational efficiency
- Enhanced customer understanding
- Increased competitive advantage
- Ability to predict market trends

Such solutions also support real-time analytics, helping businesses respond quickly to changing conditions.

Graphical Data Presentation

Modern data analytics requires transforming complex datasets into intuitive graphical representations, such as graphs and heat maps. This helps communicate insights, foster an organization's data-driven culture, and make it easier to understand patterns, trends, and outliers within large volumes of data. In today's data-rich environment, effective data visualization accelerates analysis, enhances comprehension, and supports the identification of actionable insights for better business strategies and outcomes.

Data Management Policies

It's critical to have data governance and security measures to ensure your company's data integrity, availability, and confidentiality.
Effective data governance frameworks define policies, standards, and procedures for managing data across its lifecycle, enhancing data quality and making it a reliable asset for decision-making. Strong security measures protect sensitive information from unauthorized access, breaches, and cyber threats, maintaining trust among stakeholders and customers. In other words, data governance and security are critical to regulatory compliance and risk management and protect an organization's reputation.

How to Implement Modern Data Infrastructure

If you wish your data management systems to be more scalable, flexible, and efficient, you must take a strategic approach to redesigning and upgrading them. You may start with assessing the current data infrastructure, identifying areas for improvement, and defining specific business needs and objectives. The table below describes each step in detail.

| Step | Definition |
| --- | --- |
| Set Objectives and Goals | Clarify objectives to see what to modernize, such as reducing data silos and improving security. |
| Analyze Current Situation | Revise what you currently have: identify existing data sources, types, and storage; evaluate data quality, accuracy, and completeness; review data governance practices and compliance with data regulations; analyze the data integration, warehousing, and analytics solutions you already use. |
| Determine Main Business Influencers | Identify key stakeholders and factors that drive, impact, or are significantly affected by modernization initiatives. |
| Create a Data Strategy | With your objectives, assessment, and business drivers in hand, create a working data strategy: data integration and management methods; a new data architecture scenario; KPIs for measuring success; security and compliance checking. |
| Order Initiatives by Priority | Prioritization usually weighs cost, complexity, and expected return on investment: identify and prioritize specific data modernization initiatives based on their potential impact and alignment with your data strategy. |
| Choose Appropriate Tools and Technologies | Select data infrastructure modernization tools that fit your organization's goals, existing infrastructure, and plans, such as [data integration platforms](https://skyvia.com/blog/data-integration-tools/), data lakes, data warehouses, data quality tools, analytics tools, etc. Ensure the chosen ones can scale with your requirements. |
| Data Management Oversight | Implement data governance policies and standards to ensure your data's accountability, quality, security, and compliance. Create a working data governance framework and clarify data ownership, user roles, and responsibilities. |
| Integrate and Migrate Data Systems | Migrate data from legacy systems into modern scenarios and data storage to raise data quality and consistency. |
| Secure Data and Ensure Compliance | All data must be protected and comply with data privacy and industry-specific standards such as GDPR, CCPA, HIPAA, ISO 27001, SOC 2, etc. |
| Analyze Data for Insights | Use analytics and BI on the modernized infrastructure to turn data into actionable insights. |

The links below might be helpful when selecting tools:

- [Top 10 Data Analysis Tools](https://skyvia.com/blog/top-data-analysis-tools/)
- [Top 10 Data Warehouse Tools](https://skyvia.com/blog/best-data-warehouse-tools/)
- [Selecting the best Data Movement Tool](https://skyvia.com/blog/top-data-movement-tools/)
- [Best CRM Data Enrichment Tools](https://skyvia.com/blog/best-crm-data-enrichment-tools/)
- [Top 6 Data Orchestration Tools](https://skyvia.com/blog/best-data-orchestration-tools/)
- [Most Popular Salesforce Reporting Tools](https://skyvia.com/blog/top-data-analysis-tools/)
The Role of Skyvia in Modernizing Data Infrastructure

Businesses often hesitate when choosing the right data infrastructure modernization tool. The app must be cloud-based, no-code, simple to use, and support a variety of connectors and business scenarios. And, of course, it has to keep a balance between functionality and price: if we're starting a big renovation, we need to see what we receive at the end. In this case, [Skyvia](https://skyvia.com/) is a good choice. The platform is cloud-based, [pretty simple to use](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), and supports [200+](https://skyvia.com/connectors/) sources, including cloud apps, databases, and storage services. It provides ETL, ELT, reverse ETL, data migration, one-way and bidirectional data sync, workflow automation, data sharing via REST API, and backups for cloud apps. The [pricing](https://skyvia.com/pricing/) is pay-as-you-go and includes a free plan to start with:

- Basic ($15/mo)
- Standard ($79/mo)
- Professional ($399/mo)

And, of course, Skyvia's Enterprise option is tailored to companies' unique scenarios to match their growth.

Real-life Examples of Data Infrastructure Modernization

The real-life use cases below show how modernizing data infrastructure with the Skyvia solution simplifies businesses' lives and improves decision-making.

Teesing Improved Data Orchestrating with Skyvia Data Integration

Background: Teesing is a worldwide supplier for the high-tech OEM market, developing and assembling turnkey sub-systems to control gas and liquids. They needed to export data from the website to CSV files and import it back. Creating a custom Python script for each data export-import was necessary but also non-productive, tedious, and time-consuming.
Another issue was that the CRM system could not import data automatically on a scheduled basis, making data transfers from the website to the CRM impossible without manual imports.

Solution: The company introduced automated data management in its quality control processes, using sensors and machine learning algorithms to detect defects in real time during manufacturing. The system automatically adjusts machinery or halts production if quality issues are detected.

Result: Data-related performance was enhanced, data integration was automated, human impact decreased, manual work was avoided, and time and costs were saved.

NISO Automated Data Flows for Financial Operations with Skyvia

Background: NISO is an outsourced CFO company that provides services to the MCA (merchant cash advance) industry, offering its customers outsourced finance and analytics departments. They searched for ways to connect different operational data sources within a cloud infrastructure. Specifically, the company needed to load data from MySQL to QuickBooks Online.

Solution: Skyvia has a QuickBooks connector, so the NISO team could connect the services directly in the cloud. Skyvia also covers many of their needs around SQL Server and MySQL and integrates with other products, which made it the right choice.

Result: With Skyvia's integration features, NISO gets a broader reach of customer information and can scale confidently.

Skyvia & Cirrus Insight: Salesforce-QuickBooks Integration

Background: Cirrus Insight is a CRM platform integrating Salesforce with third-party services, like Gmail and Microsoft's Office 365. They adopted a strategic approach to integrate Salesforce with QuickBooks using Skyvia, aiming to enhance productivity and reduce costs.
**Solution:** Skyvia enables Cirrus Insight's Salesforce-to-Salesforce and Stripe-to-Salesforce integrations, as well as the connection with QuickBooks.

**Result:** Operational efficiency improved, financial reporting was streamlined, and significant costs were saved.

## Summary

To stay competitive in today's fast-paced, data-driven business environment, companies have to update legacy data systems and processes to more contemporary, efficient, and scalable solutions. The tools they use depend on their needs and scenarios, but the key is balancing the solution's usability, simplicity, and pricing.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
# How to replicate DEAR Inventory Data to SQL Server

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), December 11, 2023

This is a guest post by Cloudscouts. Learn how to [replicate data](https://skyvia.com/blog/top-data-replication-tools/) from Cin7 Core Inventory (formerly DEAR Inventory) to SQL Server using the cloud-based data integration platform Skyvia. Discover the step-by-step process and the benefits of data integration.
Table of Contents

- Importance of Data Integration in Business
- Benefits of Integrating DEAR Inventory with SQL Server
- Replication of DEAR Inventory to SQL Server
- Prerequisites to set up the DEAR connection
- Prerequisites to set up the SQL Server connection
- Establish the connection
- Summary

## Importance of Data Integration in Business

[Data integration](https://skyvia.com/learn/what-is-data-integration) is crucial in today's rapidly evolving, dynamic business environment. When data is integrated, it streamlines processes and reduces manual data entry and reconciliation efforts. This improves operational efficiency, as employees can spend more time analyzing insights and less time managing disparate data sources.

Integrated data ensures consistency and accuracy in reporting and analytics. It's essential for generating reliable insights, tracking key performance indicators, and facilitating data-driven decision-making. Integrated data also serves as the foundation for effective business planning and predictive analytics: businesses can uncover trends, patterns, and actionable insights that contribute to future planning.

In short, data integration is pivotal in ensuring that businesses operate with a cohesive and well-informed approach. It supports functions ranging from day-to-day operations to strategic decision-making, ultimately contributing to increased competitiveness and long-term success.

## Benefits of Integrating DEAR Inventory with SQL Server

[Cin7 Core Inventory (formerly DEAR Inventory)](https://dearsystems.com/) is a cloud-based inventory and order management platform. Integrating it with SQL Server, a robust relational database management system, can enhance data management, reporting, and overall operational efficiency. Integration allows for a seamless data flow between DEAR Inventory and SQL Server.
It ensures that all relevant information, including inventory levels, order details, and customer data, is consolidated into a unified, structured database, and that data between DEAR Inventory and SQL Server is synchronized in real time. This timeliness provides accurate, up-to-date information, which is crucial for decision-making, especially in dynamic business environments.

[SQL Server](https://www.microsoft.com/en-us/sql-server/sql-server-downloads) has powerful querying and reporting capabilities that can be leveraged to generate custom reports and analytics based on DEAR Inventory data, giving businesses deeper insights into sales trends, inventory turnover, and other key performance indicators.

SQL Server's scalability features allow the integrated system to grow with the business. As data volumes increase, the platform can be scaled to accommodate larger datasets and a growing number of transactions. It also provides robust security features, including encryption and access controls; by integrating with SQL Server, DEAR Inventory data benefits from these measures, ensuring the confidentiality and integrity of sensitive inventory information.

Integrating DEAR Inventory with SQL Server also minimizes manual data entry, reducing the risk of errors associated with manual processes. Automating data transfer between systems enhances accuracy and operational efficiency.

In summary, integrating DEAR Inventory with SQL Server provides a powerful solution for organizations looking to optimize their inventory management processes. The benefits include improved data accuracy, enhanced reporting capabilities, real-time synchronization, and efficient, customized workflows that meet the unique needs of the business. In this blog post, we show how to replicate data from Cin7 Core Inventory (formerly DEAR Inventory) to SQL Server without complex coding or manual data entry, with the help of Skyvia.
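To make the reporting point concrete: once DEAR data is replicated into a relational database, KPI reports are plain SQL. A minimal sketch, using Python's built-in `sqlite3` as a stand-in for SQL Server; the `sale` table and its columns are illustrative only, not the real Cin7 Core schema:

```python
import sqlite3

# Hypothetical replicated table -- names are illustrative, not the DEAR schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sale (sku TEXT, qty INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO sale VALUES (?, ?, ?)",
    [("A-1", 10, 100.0), ("A-1", 5, 50.0), ("B-2", 2, 80.0)],
)

# A sales-trend style report: units and revenue per SKU.
rows = con.execute(
    "SELECT sku, SUM(qty) AS units, SUM(amount) AS revenue "
    "FROM sale GROUP BY sku ORDER BY revenue DESC"
).fetchall()
for sku, units, revenue in rows:
    print(sku, units, revenue)
```

The same `GROUP BY` query runs unchanged (modulo dialect details) against a SQL Server table populated by replication.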
## Replication of DEAR Inventory to SQL Server

For a large FMCG customer based in Canada, Cloudscouts implemented sales and operational planning and used Skyvia as part of the data integration. Skyvia was chosen over the alternatives because it meets the business requirements, especially the possibility for people with limited technical knowledge to work with it easily. The other benefits include:

- **User-friendly interface.** Skyvia offers a web-based interface, making it accessible to users with varying levels of technical expertise. This ease of use can expedite the implementation of data integration processes. Note: based on usability ratings, [Skyvia has been ranked first in G2's List of Top 20 Easiest to Use ETL Tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1).
- **Cloud-based platform.** Skyvia operates as a cloud-based platform, eliminating the need for users to manage on-premises hardware and infrastructure.
- **Scheduling capability.** The service enables users to create automated workflows that run data integration tasks at specified intervals. Automation improves efficiency and reduces the need for manual intervention.
- **Safety and security.** The platform implements security measures to protect data during integration, including data encryption in transit and at rest, access controls, and compliance with data protection standards.
- **Data backup.** Skyvia includes backup and restore functionality, allowing users to safeguard and recover their data in case of accidental deletions or system failures.
- **Cost-effectiveness.** The platform operates on a subscription-based model, allowing businesses to pay only for the features and resources they need.
- **Multiple connectors.** Skyvia provides [over 170 connectors](https://skyvia.com/connectors/) for popular cloud applications and databases, including Salesforce, Dynamics CRM, MySQL, SQL Server, and more.
## Prerequisites to set up the DEAR connection

You need the Account ID and API Key of the DEAR system:

1. Open [https://inventory.dearsystems.com/ExternalAPI](https://inventory.dearsystems.com/ExternalAPI) and navigate to Integration.
2. Click API.
3. Click the plus button.
4. Specify the integration name (for example, Dear Integration) and click Create.
5. Make a note of the Account ID and API Application Key.
6. Log in to Skyvia and enter the Account ID and API Key in the DEAR connection.

## Prerequisites to set up the SQL Server connection

- A user with access to the database.
- The IP address of the SQL Server.
- Enter the server, user/password, and database name in the SQL Server connection.

## Establish the connection

Navigate to Replication in the top menu, choose the DEAR connection under Source and the SQL Server connection under Target. Select the respective object (for example, Finished goods) and save. This replicates all the data fields from the Finished goods table of the DEAR system to SQL Server, creating a new table (the first time) in the SQL Server database. You can also choose only specific fields from the DEAR tables: edit the object and select the required fields to replicate.

Click Schedule. By entering the required parameters and saving them, you activate the schedule immediately or delay the operation. To make the schedule inactive without deleting it, click Disabled.

On the Monitor tab, you can check the results of the last five automation executions and the status of the current execution. The Log tab displays the details of all failed and succeeded jobs.

## Summary

Though integration between SQL Server and Cin7 Core Inventory (formerly DEAR Inventory) can be challenging, thanks to Skyvia, Cloudscouts' clients could effortlessly connect these two crucial systems, unlocking a world of possibilities for their business.
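As an aside for developers: the Account ID and API Application Key noted in the prerequisites are what the DEAR External API expects as request headers. A minimal sketch that only builds (does not send) such a request; the header names follow the DEAR External API convention as commonly documented, so verify them against the current API docs before relying on this:

```python
import urllib.request

def dear_request(path: str, account_id: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request to the Cin7 Core (DEAR) External API.

    Assumption: the API authenticates via the api-auth-accountid and
    api-auth-applicationkey headers; check the current docs to confirm.
    """
    return urllib.request.Request(
        "https://inventory.dearsystems.com/ExternalAPI/v2/" + path,
        headers={
            "api-auth-accountid": account_id,
            "api-auth-applicationkey": api_key,
            "Content-Type": "application/json",
        },
    )

# Placeholder credentials -- substitute the values noted in the prerequisites.
req = dear_request("product", "YOUR-ACCOUNT-ID", "YOUR-API-KEY")
print(req.full_url)
```

With Skyvia, of course, none of this code is needed: the same two values are simply pasted into the DEAR connection form.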
With Skyvia, you don't need to be a tech guru to achieve this level of integration, and it was a game-changer for Cloudscouts' clients. By applying Skyvia for data integration, users received:

- seamless data synchronization,
- improved data accuracy,
- reduced manual work,
- enhanced decision-making capabilities.

## About Cloudscouts

[Cloudscouts](https://www.cloudscouts.pro/) specializes in enterprise performance management, data analytics, and technology services. The people at Cloudscouts are passionate technologists and process specialists. Our in-depth knowledge of the subject area, coupled with decades of implementation experience, makes us one of the best in the consulting space.

[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/): With years of experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers.
# How to Implement a Data-Driven Customer Journey

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), December 22, 2023

When did you last buy something online or in a brick-and-mortar store? Which factors influenced your decision? What process did you go through before choosing? Answering these questions is the best way to put yourself in the shoes of any customer. Believe it or not, each user passes through the same stages before making a purchase; this process is called a customer journey. This article explores how data helps to investigate a customer journey in detail. It also explains how to design a data-driven customer journey and refine marketing campaigns.
Table of Contents

- What Is a Customer Journey?
- Tools for Customer Data Collection and Integration
- Types of Customer Journey Maps
- How to Create a Data-Driven Customer Journey Map
- Implementing the Data-Driven Approach
- Measuring Success and ROI
- Conclusion

## What Is a Customer Journey?

Centuries ago, people spent days traveling to central markets to buy even ordinary things, and the situation wasn't much better in the previous century. This is where the term customer journey originates: it's associated with traveling. Today, people don't even have to go outside to buy things. A smartphone in one hand and a cup of coffee in the other is how a modern customer journey starts.

Marketing experts define five principal stages of the customer journey:

- **Awareness.** The departure point, where a user sees a social media ad, a blog post, or a YouTube video and becomes aware of your brand's existence. Marketers use digital or print media channels to inform the world about their products and services.
- **Consideration.** Probably the most important stage. At this point, you must convince a customer that your product or service is the best among the alternatives. Personalized emails, short presentations, and how-to articles can reveal the value of your product.
- **Onboarding.** Once a user decides to make a purchase, they convert into a customer. During onboarding, a company presents its products via demo sessions or webinars.
- **Retention.** Customer retention is no less important than acquisition. Keep your clients engaged by sending promotional emails with exclusive offers and reasonable discounts.
- **Advocacy.** At this stage, your customers become your brand ambassadors, praising your product by word of mouth.

Using customer reactions and personal data at each stage helps marketers create suitable activities and select the proper communication channels.
## Tools for Customer Data Collection and Integration

Marketing theory states that people react to both external and internal stimuli when planning a purchase. External factors are the [4Ps (product, price, place, and promotion)](https://blog.hubspot.com/marketing/4-ps-of-marketing) designed individually by each company; political, economic, cultural, and social circumstances also impact customer decisions from outside. Internal factors, also known as the 'black box', comprise each person's values, motivations, and lifestyle.

Previously, it was hard to solve the riddle of the 'black box' and read between the lines to find out customer desires. Modern tools let marketers open that secret chamber and stop running in circles. They help design a data-driven journey for prospects and build long-term customer relationships. Such tools fall into the following categories:

- **CRM systems** store customer and prospect data.
- **Customer data platforms (CDPs)** contain data on all touchpoints and interactions with the company.
- **[Data integration tools](https://skyvia.com/blog/data-integration-tools/)** bring all the customer-related data stored in different sources into a single source of truth (data warehouse, CRM, etc.).
- **Analytics tools** derive data-driven insights based on behavioral, transactional, and identity customer data.

CRM systems and CDPs collect and store data on customer identity and online behavior. However, data scattered across platforms doesn't contribute to the design of a data-driven customer journey. One possible solution is to consolidate data from various sources in a data warehouse or CRM.

- **Data warehouse.** Companies often copy all data from marketing-related sources into a data warehouse. It then becomes a single source of truth for marketing data, which can be analyzed on top of the DWH.
- **CRM.** In some cases, a CRM becomes the single source of truth with all information about customers and marketing incentives. To bring that data from multiple locations into a CRM, use third-party [data integration](https://skyvia.com/data-integration/) tools.

Whether you decide to consolidate your data in a DWH or a CRM, [Skyvia](https://skyvia.com/) comes into play. Skyvia is a no-code cloud data platform for quick and easy solving of a wide set of data-related tasks. In the context of customer data consolidation, the following Skyvia scenarios are the most applicable:

- **[Import.](https://skyvia.com/data-integration/import)** Skyvia can ingest customer data from different marketing apps or databases ([Salesforce](https://skyvia.com/connectors/salesforce), [HubSpot](https://skyvia.com/connectors/hubspot), [Dynamics 365](https://skyvia.com/connectors/dynamics-crm), [NetSuite](https://skyvia.com/connectors/netsuite), [Oracle](https://skyvia.com/connectors/oracle), [MySQL](https://skyvia.com/connectors/mysql), [PostgreSQL](https://skyvia.com/connectors/postgresql), etc.) and send it to a CRM, making the CRM the place where all customer-related data is stored.
- **[Replication.](https://skyvia.com/data-integration/replication)** The system sends customer data from selected apps to a data warehouse (Google BigQuery, Amazon Redshift, Snowflake, Azure Synapse Analytics), where it can be used for analysis.
- **[Data Flow and Control Flow.](https://skyvia.com/data-integration/)** These scenarios are designed for building complex integrations with several sources and destinations, involving advanced data transformations.

Skyvia easily connects to popular marketing apps as well as databases, data warehouses, SaaS products, file storage services, and analytics tools; overall, Skyvia has 180+ connectors available. Start using Skyvia for free and select the [pricing plan](https://skyvia.com/pricing/) that matches your data operation capacities.
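To make the consolidation idea concrete, here is a toy sketch of merging customer records from two sources into one table, using Python's built-in `sqlite3` as a stand-in for a warehouse or CRM. All source names, fields, and data are hypothetical; a real pipeline would pull from the actual apps via a tool like Skyvia:

```python
import sqlite3

# Hypothetical exports from two sources (a CRM and a payment app).
crm_rows = [("alice@example.com", "Alice", "HubSpot")]
payment_rows = [("alice@example.com", "Alice", "Stripe"),
                ("bob@example.com", "Bob", "Stripe")]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE customer (email TEXT PRIMARY KEY, name TEXT, source TEXT)"
)
for row in crm_rows + payment_rows:
    # Upsert on email, so the same person seen in two sources
    # becomes a single consolidated record.
    con.execute(
        "INSERT INTO customer VALUES (?, ?, ?) "
        "ON CONFLICT(email) DO UPDATE SET "
        "source = source || ',' || excluded.source",
        row,
    )

count = con.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(count)  # two distinct customers, one of them present in both sources
```

Keying on a stable identifier (here, email) is what turns scattered per-app records into a single source of truth.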
## Types of Customer Journey Maps

Let's switch from theory to practice and see how to build customer journey maps using data. Before a regular tourist trip, you most likely look at the map before embarking on it. The same goes for customer journeys: you must design a map and plan an approximate route. Each map template spans the five stages (Awareness, Acquisition, Onboarding, Engagement, Advocacy) and is filled in by answering a set of guiding questions.

The most popular customer journey maps:

- **Current-state.** The most common map type, showing customers' current impressions from interacting with your brand: what customers feel, think, and do when they come across your company's products. Guiding questions: What do customers think? What are the customer's actions? What is the touchpoint? What needs to be changed? How to make the change?
- **Day-in-the-life.** Like the current-state map, it shows the customer's feelings and thoughts, and also depicts the motivations and pain points of the target audience, opening up opportunities for innovation and improvement. Guiding questions: What do customers think or feel? What are the customer's priorities? What are the customer's pain points? How do customers interact with the brand now? How could the product be utilized better now?
- **Future-state.** This map projects the feelings and actions of consumers in the future, and indicates what exactly will change and how it will differ from the current state. Guiding questions: What will customers think? How will customers act? What will be the touchpoint? How will this be different from the current state? Why will this alter the customer journey?
- **Service blueprint.** This diagram looks into customer experiences during the first three stages of the journey and uncovers the roots of their pain points.
Guiding questions for the service blueprint: What does a customer feel? Why does the customer have this feeling? How to communicate with the customer? What action to take in the background?

To answer all the questions above, you'll need data and a technological base for it.

## How to Create a Data-Driven Customer Journey Map

To craft a comprehensive customer journey map based on data, consider the following activities.

### Step 1. Collect Data

Use CRMs, websites, social media platforms, and other sources to gather customer data. They collect personal information and contact details of new visitors and existing customers. The cherry on top of data collection is customers' behavioral patterns, registered in Google Analytics for the website and in transactional databases. Understanding user behavior is key to developing a personalized approach and constructing a route on the map.

### Step 2. Prepare Data

Raw data coming from all those sources doesn't have much value initially. It's essential to sort, cleanse, and transform data before aggregating it in a data warehouse. Carry out these operations with data integration tools like Skyvia and [data warehouse platforms](https://skyvia.com/blog/best-data-warehouse-tools/) like Amazon Redshift or Google BigQuery.

### Step 3. Analyze Data

At this stage, analytics tools reveal the entry and exit points of existing customers and prospects, as well as their behavior on the website, social media, and other company resources.

### Step 4. Discover Touchpoints

Detailed analysis reveals which platforms your audience uses when interacting with your brand. Put this information on the customer journey map to determine which communication channels to invest in.

### Step 5. Interact with Customers

Once you know your consumers' choices and preferences, it's time to develop a personalized journey.
Depending on the case, you can invest in targeted ads, social media ads, email campaigns, pop-up notifications, YouTube channel development, etc., to boost conversions. With all this information at your disposal, filling in the customer journey map is a piece of cake.

## Implementing the Data-Driven Approach

To benefit fully from customer journey mapping, encourage your company to adopt a data-driven approach. This means trusting the results of data analytics and being ready to innovate. The fundamental steps to introduce and promote a data-driven culture within an organization:

- **Leadership.** Communicate with your team about the importance of analytical tools and their benefits. Ensure all team members share a vision of the company goals and the actions leading to them.
- **Training.** Plan training sessions for the team, explaining how to use the tools and derive value from data. Hire professionals for new roles to help drive innovation.
- **Tools.** Provide a detailed overview of the available software tools and cloud platforms that support the data-driven approach.
- **Processes.** Establish workflows for data collection, cleaning, and analysis.
- **Results.** Define metrics for evaluating the data-driven approach to building customer journey maps.

## Measuring Success and ROI

To make sure you're moving in the right direction, evaluating the impact of the data-driven customer journey on your business is essential. ROI is the most popular and well-known metric for that, and you probably already know how to calculate it. However, other important metrics also help measure the efficiency of marketing efforts.

### Impressions

This metric shows how often your content appears for users in search engines. A high impression rate is critically important at the awareness stage, so that potential customers can get acquainted with your brand.
### Time on page

This KPI reflects how website visitors interact with each page. If the time on a page is too low, the page may not be informative enough or may be poorly designed. In that case, a data-driven approach helps you optimize such pages.

### Conversion rate

This metric shows the share of page visitors who made a purchase, out of the total number of visitors.

### Retention rate

To discover your current retention rate, calculate the ratio of active customers to all customers you've ever had. If it's not high enough, invest in engagement: discounts on next orders, special offers, personalized packaging, and anything else a data-driven approach suggests.

### Customer satisfaction rate

According to Salesforce, [80% of customers](http://www.salesforce.com/resources/articles/customer-expectations/?_conv_v=vi%3A1*sc%3A5*cs%3A1700748894*fs%3A1700212065*pv%3A7*seg%3A%7B10031564.1%7D*ps%3A1700653495&_conv_s=si%3A5*sh%3A1700748894026-0.125870727195579*pv%3A1) consider their experience with a company to be as important as its products. A high CSAT is vital because it impacts brand reputation and determines customer loyalty.

## Conclusion

Whether people make conscious or spontaneous buying decisions, they always follow the same route: first they become aware of a product, then they decide whether to buy it and remain loyal to the brand. To make more people aware of your company and encourage them to purchase, design a customer journey map first. It reflects what customers want, think, do, and feel, and how they can contact the brand.

Marketers rely on a data-driven approach to craft a refined customer journey map. Start with a series of internal communications and the implementation of suitable CRM systems, email automation software, analytics programs, and data integration services. As the data integration tool is the intermediary between all other components of the toolkit, making the right choice matters.
Thus, consider that Skyvia does everything necessary to prepare your data for analysis. You can rest on your laurels and obtain data-driven insights for your business!

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
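The efficiency metrics discussed in the article (conversion, retention, ROI) reduce to simple ratios. A quick Python sketch with purely illustrative numbers:

```python
def conversion_rate(purchases: int, visitors: int) -> float:
    """Share of page visitors who made a purchase."""
    return purchases / visitors

def retention_rate(active_customers: int, all_customers_ever: int) -> float:
    """Ratio of currently active customers to all customers ever acquired."""
    return active_customers / all_customers_ever

def roi(total_return: float, cost: float) -> float:
    """Classic ROI: net return divided by the cost of the investment."""
    return (total_return - cost) / cost

# Illustrative values only.
print(conversion_rate(30, 1000))  # 0.03 -> a 3% conversion rate
print(retention_rate(400, 500))   # 0.8  -> 80% retention
print(roi(15000, 10000))          # 0.5  -> 50% return on investment
```

Tracking these as plain ratios over time is usually enough to see whether the data-driven changes to the journey map are paying off.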
# HubSpot and Stripe Integration: Improve Your Customer Interactions

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), January 17, 2024

Online payments rose in the early 2000s, but their boost became evident in 2020 with Covid-19. As any online purchase is associated with a certain customer or company, it makes sense to register purchases in the CRM system. Integrating payment systems with a CRM illuminates customer habits, automates transactions, and strengthens control over business finances. This approach improves customer loyalty and the overall brand image. In this article, we examine the most popular CRM and payment systems, HubSpot and Stripe. You'll discover effective methods for integrating these systems and their benefits.
Table of Contents

- Introduction to Stripe
- Introduction to HubSpot
- Benefits of HubSpot Stripe Integration
- Instructions for Integrating Stripe with HubSpot via Stripe Data Sync
- Automate Stripe HubSpot Integration with Skyvia
- Advantages of Skyvia Cloud Data Integration Tool
- Who Needs Stripe and HubSpot Integration
- Conclusion

## Introduction to Stripe

If you've ever made an online purchase, you've probably used Stripe without noticing it. Stripe is a global SaaS that combines payment gateway and payment processing functionality. In simple terms, it enables the transfer of money from a customer's bank account to a business bank account through a debit or credit card transaction.

Stripe is a strong fit for any e-commerce business:

- Supports 135+ currencies, which makes it easy to do business internationally.
- Imposes a small transaction fee.
- Provides superior security for payment transactions, as it's a certified PCI service provider.
- Supports popular global (Alipay, Apple Pay, Google Pay, Visa, Masterpass, American Express) and local (ACH, Bancontact, EPS, Giropay, iDEAL, etc.) payment types.
- Provides public API documentation for developers to integrate Stripe into websites and applications.

Although Stripe provides developer instruments for seamless integration with other tools, it still takes time and effort to put them into practice. This article presents easier and faster alternatives for integrating Stripe with HubSpot and other services.

## Introduction to HubSpot

Even specialists outside marketing have heard about HubSpot at least once. HubSpot is a fully featured CRM that can be used for free. Moreover, it's extremely user-friendly for marketing professionals and easily integrates with hundreds of different services, including Stripe.
Initially, HubSpot was designed for small and medium-sized businesses, but now it suits companies of any size because:

- The CRM is free regardless of the number of users, contacts/leads, and deals in progress.
- According to reviews, HubSpot’s usability is excellent, noticeably higher than that of its competitors.
- It offers customization for adding custom fields of any type.
- It integrates with websites to track visitor actions: which pages they view and how they land on your site.
- It integrates with email systems, making it possible to send emails right from the CRM.

## Benefits of HubSpot Stripe Integration

HubSpot connects easily to 200+ applications through native integrations or public APIs, and Stripe is among them. Let’s look specifically at [HubSpot and Stripe integration](https://skyvia.com/data-integration/integrate-hubspot-stripe) and its strengths to understand how it can benefit businesses.

### Automated Payment Activities

HubSpot and Stripe integration enables trackable quotes: by going to Sales -> Quote within HubSpot, you can create quotes that customers can pay directly, without switching to other online payment services or offline banking. Once a customer makes a payment, it appears in the Stripe account.

### Personalization of User Experience

Stripe and HubSpot integration works in both directions, so you can send information about purchased products from Stripe to HubSpot. This allows personalized communication with customers, such as offers based on purchase history and preferences. Two-way data load is possible with native integration tools as well as third-party services. One of those is Skyvia, a web-based SaaS data platform well suited for [Stripe HubSpot data sync](https://skyvia.com/data-integration/integrate-hubspot-stripe) in both directions.
It requires no pre-installations, is easy to use, and handles any data volume.

### Customer Management

With the HubSpot Stripe integration, you can generate a payment link for a subscription renewal right within the CRM. The system automatically bills the customer on the required payment date. At the same time, you can configure automated emails with information about the upcoming renewal, so customers understand why their card is charged on the indicated date; otherwise, they may choose to change or cancel the subscription. This step is essential for customer management and retention.

### Streamlined Customer Support

Users often can’t complete a payment because their payment method is incompatible with those of the online system. Luckily, Stripe supports 135+ currencies and all popular global and local payment types. Integrating the CRM with Stripe means customers from different geographical locations can pay easily, which has a positive impact on customer satisfaction.

## Instructions for Integrating Stripe with HubSpot via Stripe Data Sync

With the HubSpot Stripe integration, you can bill customers directly from their quotes and send all payments to the integrated Stripe account. To set up Stripe integration with HubSpot using the native method, proceed as follows:

1. In the HubSpot account, press the Marketplace icon and select App Marketplace.
2. Type Stripe in the search bar and select the Stripe Data Sync app.
3. Click Install App.
4. Click Connect to Stripe Data Sync.
5. Select an existing Stripe account and click Connect, or create a new account.
6. Enter the Stripe account credentials and proceed with the remaining steps. You’ll then be redirected to HubSpot with a confirmation message.

Once the connection is established, you can set up:

- Two-way synchronization between Stripe and HubSpot in real time.
- Field mapping.
The following objects are available for sync between HubSpot and Stripe:

- Contacts
- Invoices
- Products

### Limitations of the Stripe Data Sync App

Even though Stripe Data Sync is easy to install and configure, it doesn’t fully satisfy users’ needs. The app doesn’t provide payment processing functionality built into HubSpot, and it syncs only three objects, while users also look for ways to synchronize payments, quotes, and other important details. For the first gap, there is a Stripe Payment Processing app in the marketplace, which can be installed similarly to Data Sync; however, it isn’t free of charge and costs around $20/month. For the limited sync functionality, there are also plenty of workarounds. For instance, the specialized [data integration software Skyvia](https://skyvia.com/data-integration/) provides multiple integration scenarios for HubSpot and Stripe, so you can set up data exchange between the apps to your taste.

## Automate Stripe HubSpot Integration with Skyvia

Skyvia is a universal cloud SaaS for a number of data integration scenarios: ETL, ELT, Reverse ETL, data synchronization, and workflow automation. Skyvia’s Data Integration product offers the following tools suitable for [Stripe HubSpot integration](https://skyvia.com/data-integration/integrate-hubspot-stripe):

- [Import](https://skyvia.com/data-integration/import) implements the ETL scenario for loading data from Stripe to HubSpot and vice versa.
- [Synchronization](https://skyvia.com/data-integration/synchronization) performs bi-directional data sync between Stripe and HubSpot. It’s applicable when one of the systems is empty before the sync starts.
- Data Flow and Control Flow build complex integration scenarios involving advanced data transformations and load conditions.
- There is also the Export scenario for extracting data from the source into a CSV file, and the Replication scenario for loading data into a data warehouse.
## Integration Scenario in Skyvia

Let’s look at how Skyvia brings Stripe HubSpot integration to life. In the following example, we show how Skyvia copies the Orders objects from Stripe and loads them into HubSpot. Such an integration helps consolidate all data about the purchases associated with a particular customer in HubSpot. As a result, marketing teams get an excellent opportunity to analyze the clients’ preferred products.

1. In the Stripe account, click Developers and go to the API Keys tab. Copy the Secret Key from there.
2. [Connect Stripe](https://skyvia.com/connectors/stripe) to Skyvia by navigating to +New -> Connection in the main menu and selecting Stripe from the list of connectors. Insert the recently copied API key in the corresponding field and click Create Connection.
3. [Connect HubSpot](https://skyvia.com/connectors/hubspot) to Skyvia by navigating to +New -> Connection in the main navigation bar and selecting HubSpot from the list of connectors. Enter the HubSpot account credentials and click Create Connection.
4. Go to +New -> Import in the main navigation bar. Set Stripe as a source and HubSpot as a target.
5. Click Add new to specify data import task settings.
6. Select Orders from the list on the Source tab. Make sure to also select ChargeId and CustomerId below.
7. Select Deals on the target side.
8. Define the mapping settings to match the different data structures properly.
9. Click Schedule to set up regular data transfer from Stripe to HubSpot.
10. Click Create to save the integration.

NOTE: You can also import Invoices, Quotes, and all other available objects in a similar way.

## Advantages of the Skyvia Cloud Data Integration Tool

As you’ve noticed from the example above, the integration setup is rather simple with Skyvia. This is a core competitive advantage of Skyvia due to its user-friendly interface with a graphical wizard.
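Conceptually, the mapping step amounts to renaming Stripe order fields to HubSpot deal properties. A minimal sketch of that idea in Python, with hypothetical field names chosen for illustration (Skyvia configures this in its graphical wizard, not in code):

```python
# Hypothetical field names for illustration; real Stripe/HubSpot schemas differ.
STRIPE_TO_HUBSPOT = {
    "id": "stripe_order_id",
    "ChargeId": "stripe_charge_id",
    "CustomerId": "stripe_customer_id",
    "Amount": "amount",
    "Description": "dealname",
}

def map_order_to_deal(order):
    """Rename Stripe order fields to HubSpot deal properties, dropping the rest."""
    return {
        target: order[source]
        for source, target in STRIPE_TO_HUBSPOT.items()
        if source in order
    }

deal = map_order_to_deal(
    {"id": "or_1", "ChargeId": "ch_9", "CustomerId": "cus_3", "Amount": 4900}
)
# deal -> {"stripe_order_id": "or_1", "stripe_charge_id": "ch_9",
#          "stripe_customer_id": "cus_3", "amount": 4900}
```

Fields with matching names pass through unchanged, which mirrors how Skyvia auto-maps same-named fields and leaves the rest for manual mapping.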
It\u2019s [one of the top data integration tool s](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) on the market in terms of usability. However, this is not the only advantage of Skyvia as it has lots of other strengths: Technically-wise. As this service uses the code-free concept for data integration, it\u2019s suitable for non-technical specialists. At the same time, it has advanced functionality where developers can write their code for query execution and data extraction. Interactive. The platform connects to 180+ cloud apps, databases, and data warehouses in two clicks. Scalable. Skyvia is suitable for any data load capacity, so it\u2019s a perfect fit both for startups and large companies. Start using Skyvia for free and enjoy an ample set of features! You\u2019ll be able to upgrade at any time as your company grows. Who Needs Stripe and HubSpot Integration Three-fourths of all companies are currently using CRM on a daily basis, and this number continues to grow. While CRM systems enhance customer management, their integration with payment gateways can bring the customer experience to the next level. Such integration would be extremely beneficial for organizations operating in the following industries: E-\u0441ommerce Education Healthcare Hospitality As for the types of businesses, Stripe HubSpot integration boosts productivity for the following organizations: SaaS companies B2B and B2C companies SMB Non-profit organizations Such companies can fully experience the best of two worlds by bringing a CRM system with payment processing software. The most tangible of them is the automation of payments, better control over finance, strong security, and better customer satisfaction. Conclusion Each cloud app used in the digital environment is powerful itself, though the combination of tools makes miracles. Integrating Stripe with HubSpot makes the hidden gems of both services visible. 
Businesses take advantage of automated payments, enhanced user experience, and many other benefits. To make this integration work properly, the Skyvia cloud data platform is there to help. It sets up and automates data import and sync between Stripe and HubSpot in a few clicks. Skyvia can also handle many other data-related tasks, so you can use it to build data warehouses for advanced analytics or sync data between other tools. Moreover, Skyvia is highly scalable and offers plans for any business.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/) HubSpot and Dynamics 365 Integration Guide By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) April 7, 2025

Imagine a company that uses HubSpot for marketing and Dynamics 365 for sales. Both teams constantly exchange data. If they do it manually and sporadically, they will inevitably run into data inconsistency. A proper integration between the two CRMs can take away the pain, allowing businesses to:

- Sync customer data automatically: no more duplicate or outdated records.
- Automate lead handoffs between marketing and sales, boosting conversion rates.
- Get a comprehensive view of the customer journey, improving personalization and engagement.

Bringing the best of two worlds together, HubSpot and Dynamics 365, promises the moon and the stars to businesses.
In this article, we describe how to integrate these platforms using:

- The native HubSpot connector.
- Third-party integration tools like Skyvia.
- A custom API integration.

We also compare the methods and highlight best practices.

## Table of Contents

- What Is Microsoft Dynamics 365?
- What Is HubSpot?
- Why Integrate HubSpot and Microsoft Dynamics 365
- Integration Methods: Choosing the Right Approach
- Method 1: Native HubSpot Connector
- Method 2: Microsoft Dynamics HubSpot Integration with Skyvia
- Method 3: Custom API-Based Integration
- Best Practices for HubSpot and Dynamics 365 Integration
- Conclusion

## What Is Microsoft Dynamics 365?

[Microsoft Dynamics 365](https://dynamics.microsoft.com/en-gb/what-is-dynamics365/) is a collection of intelligent [CRM and ERP](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) business applications offered by Microsoft to help organizations manage operations such as sales, finance, customer service, and supply chain activities.

### Key Features

- AI-Driven Sales Insights: Artificial intelligence analyzes sales data, helping organizations forecast trends and identify potential opportunities. This predictive capability supports informed decision-making and strategic planning.
- Microsoft Ecosystem Integration: The platform works with familiar Microsoft Office applications, such as Excel, Word, and Outlook, by default. Users can extract, edit, import, or export data directly within the Microsoft environment.
- Business Event Notifications: It can notify external systems, Power Automate, and Azure messaging services about specific business events, so organizations can react quickly to critical activities and maintain efficient operations.

Dynamics 365’s compatibility with other Microsoft products ensures a smooth workflow for users already within the Microsoft environment.
Additionally, the platform supports integration with various third-party solutions, allowing businesses to tailor it to their needs.

## What Is HubSpot?

[HubSpot](https://www.hubspot.com/) is a universal platform that brings marketing, sales, customer service, operations, and content management tools together on a unified CRM system. This centralized approach ensures that all team members have access to consistent, up-to-date information for better collaboration and more personalized customer interactions. More than [248,000 users](https://www.hubspot.com/our-story) rely on HubSpot to automate marketing processes and manage client data.

### Key Features

- Marketing Automation: The platform provides tools to create, manage, and analyze marketing data across various channels. Users can create email campaigns, manage social media accounts, and track website interactions.
- Sales Pipeline Management: Users can track deals, monitor sales activities, and manage contacts.
- Customer Service Tools: It can manage customer inquiries, feedback, and support tickets.
- AI-Driven Insights: In 2024, HubSpot introduced [Breeze](https://www.hubspot.com/products/artificial-intelligence), an AI engine designed to automate tasks and enhance customer engagement.

HubSpot has a [marketplace](https://ecosystem.hubspot.com/marketplace/apps/featured) of integrations, connecting with over 1,800 services and applications, including e-commerce platforms, event management systems, analytics tools, and others.

## Why Integrate HubSpot and Microsoft Dynamics 365

Businesses using both platforms need HubSpot Dynamics integration to eliminate data inconsistency and obtain a holistic view of their customer base. Data integration between these tools also opens opportunities for revenue growth, aligned sales and marketing strategies, and customer engagement programs. Let’s look at the benefits in more detail:

Data Accuracy.
No more common mistakes when transferring information manually; you get more accurate and reliable customer records.

Advanced Reporting. Companies can assess marketing performance and adjust marketing strategies.

Improved Lead Conversion Rates. Better tracking and visibility of lead behavior help sales teams identify high-priority leads and personalize interactions, increasing conversion chances.

Two-Way Data Synchronization. Data exchange keeps customer information consistent, clearing up misunderstandings and miscommunication between teams.

Personalized Marketing Campaigns. Marketers get a complete overview of each customer’s activity and preferred means of communication.

Faster Customer Service Resolutions. With synchronized information available on time, support agents can resolve issues quickly.

## Integration Methods: Choosing the Right Approach

Deciding which method to choose is a big deal. It depends on many factors: business industry, size, infrastructure, processes, and data architecture. The size and composition of the team also matter. Even businesses operating in the same industry have different requirements and data integration needs, and there are several ways to meet them.

### Native HubSpot Connector

HubSpot provides a built-in connector specifically for Dynamics 365. It synchronizes standard objects like Contacts, Leads, Accounts, and Deals without external software.

Pros:

- Easy to set up and configure.
- Directly supported by HubSpot.
- Regular updates from HubSpot directly.

Cons:

- Syncs one object at a time.
- Limited flexibility for customizations.
- Does not support complex or custom integrations.
- Basic synchronization features (primarily standard objects).

Best suited for: Small-to-medium businesses with basic data synchronization requirements and standard workflows.
### Third-Party Integration Tools

Cloud-based integration platforms like [Skyvia](https://skyvia.com/) offer pre-built connectors that enable data synchronization, migration, and workflow automation.

Pros:

- Easy-to-use interface for non-technical users.
- Supports advanced features (custom objects, complex workflows).
- Offers automatic scheduling, error handling, and detailed logging.
- No coding required.

Cons:

- For now, Skyvia supports only [Dynamics 365 CRM](https://skyvia.com/connectors/dynamics-crm) and [Dynamics 365 Business Central](https://skyvia.com/connectors/business-central), not the full Dynamics 365 suite.
- Capabilities depend on the subscription.

Best suited for: Small-to-large businesses that value ease of use and functionality and need advanced integration use cases without coding or extensive IT resources.

### Custom API Integration

This method involves building an integration for a company’s specific requirements, using custom-developed code and the APIs provided by Dynamics 365 and HubSpot.

Pros:

- Flexible for individual business requirements.
- Gives full control over data flows and integration logic.
- Real-time sync.

Cons:

- High initial cost and significant ongoing maintenance.
- Requires experienced developers and IT support.
- Longer setup time and complexity.
- API rate limits.

Best suited for: Large enterprises or businesses with development teams in regulated industries that need highly customized integration solutions with specific workflows or compliance demands.

Let’s compare the methods by several criteria.

| Criteria | Native HubSpot Connector | Third-Party Tools (Skyvia) | Custom API Integration |
| --- | --- | --- | --- |
| Complexity | Low | Moderate | High |
| Setup Time | Quick | Quick | Long |
| Customization | Limited | High | High |
| Scalability | Limited | High | High |
| Technical Expertise | Minimal | Minimal | Extensive |
| Support | Direct from HubSpot | Third-party vendor | In-house / contracted |

Now, let’s look at each method in detail.
## Method 1: Native HubSpot Connector

You can download the connector from the HubSpot App Marketplace and install it in several steps.

STEP 1. Connect HubSpot and Dynamics 365

1. Find the Dynamics 365 Integration on the HubSpot marketplace and click Install App.
2. Enter your Microsoft Dynamics 365 subdomain and credentials. If the two apps are connected successfully, you’ll get the corresponding message.

STEP 2. Synchronize HubSpot and Dynamics

1. Click Set up Sync.
2. Choose the object to synchronize: Company, Product, Contact, Deal, Invoice, or Activity. The corresponding field in Microsoft Dynamics 365 is shown as well.

NOTE: You can select only one object at a time.

STEP 3. Configure HubSpot Dynamics Integration

1. Select the synchronization direction. If two-way sync is selected, you’ll also need to indicate which app gets priority to overwrite data.
2. Review data field mappings. To set custom field mappings or override the existing ones, you need an upgraded app version.
3. Review other synchronization settings and edit them if needed.
4. Click Save and Sync.

To synchronize other data between HubSpot and Dynamics 365, click Sync more data. To stop or pause data synchronization between the CRMs, select the needed option under the Actions menu of the sync object.

## Method 2: Microsoft Dynamics HubSpot Integration with Skyvia

[Dynamics 365 and HubSpot integration](https://skyvia.com/data-integration/integrate-dynamics-crm-hubspot) with Skyvia addresses the issues present in the native integration application. Skyvia synchronizes several data objects simultaneously without limitations and offers flexible customization options for data field mapping. [Skyvia](https://skyvia.com/) is a universal cloud-based platform capable of building [ELT, ETL](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) solutions.
In the case of [HubSpot Dynamics 365 integration](https://skyvia.com/learn/hubspot-integration-with-dynamics), the platform lets you get data from HubSpot, transform it, and import it into Microsoft Dynamics 365 CRM, or vice versa. Skyvia is a user-friendly tool that requires no coding; no desktop installations are needed, only a web browser. Skyvia also supports complex data integration scenarios with its designer tools, [Data Flow and Control Flow](https://docs.skyvia.com/data-integration/data-flow/), for compound [data pipelines](https://skyvia.com/learn/what-is-data-pipeline). To start integrating HubSpot and Dynamics, you need an active Skyvia account.

STEP 1. Connect to HubSpot

1. Click +Create New -> Connection.
2. Select HubSpot from the connectors list.
3. Sign in with HubSpot and save the connection.

STEP 2. Connect to Dynamics

1. Create a new connection from the menu.
2. Select Dynamics 365 from the list.

STEP 3. Create a New Import

1. Create a new Import from the menu.
2. Set HubSpot as a Source and Dynamics as a Target.
3. Create an integration task by clicking Add New.

STEP 4. Define an Integration Task

1. Select the source object or specify a custom command.
2. Select the target object and choose the action: Insert, Update, Upsert, or Delete.
3. Map the fields. HubSpot and Dynamics have different data structures; mapping adjusts source data to fit the target structure. Skyvia maps fields with the same name automatically and offers various mapping types, enabling complex transformations.
4. Save the task and the integration.
5. Run the integration manually or set a schedule to run it regularly.
6. Check the integration results on the Monitor or Logs tabs.

### Data Flow

More complicated data integration scenarios are also available with Skyvia’s help.
[Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) tools help design sophisticated data pipelines from HubSpot to Dynamics 365 and specify complex data transformations involving multiple targets. And all of that is done with no coding, in a visual designer. Say hello to more intelligent workflows! Sign up for Skyvia now, and let’s start the data integration journey together! Check how to implement other data integration scenarios in our video:

## Method 3: Custom API-Based Integration

If the approaches above don’t meet your business needs, there is one more way: you can build a custom API-based integration between HubSpot and Microsoft Dynamics 365. This method is helpful for building unique workflows, implementing business-specific goals, and solving very particular data integration problems. It removes the limitations present in the other methods and is best for large organizations with complex workflows and multi-departmental demands.

To build an API-based integration, generate API credentials for both platforms. See how to do it for HubSpot on the [HubSpot developer site](https://developers.hubspot.com/docs/api/overview). For Microsoft Dynamics 365, use the [Microsoft developer guide](https://learn.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/overview?view=op-9-1). Then you can build a custom application that performs the API calls. You can also use Microsoft Power Automate, a low-code platform that packages API calls into manageable workflows: set up a new flow, connect it to HubSpot and Dynamics using their API connectors, and configure your logic. Check out [Microsoft’s Power Automate](https://docs.microsoft.com/en-us/power-automate/) docs for more details.
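One possible shape for such a custom integration is a small script that pulls contacts from the HubSpot CRM v3 API and translates them into payloads for the Dynamics 365 Web API. The sketch below covers the HubSpot-side fetch and the field translation only; the token is a placeholder, paging, error handling, and the actual Dynamics write are omitted, and it is an illustration rather than a production design:

```python
import json
import urllib.request

HUBSPOT_TOKEN = "pat-..."  # placeholder; use your own private-app token

def fetch_hubspot_contacts(limit=10):
    """Pull one page of contacts from the HubSpot CRM v3 API (network required)."""
    req = urllib.request.Request(
        f"https://api.hubapi.com/crm/v3/objects/contacts?limit={limit}",
        headers={"Authorization": "Bearer " + HUBSPOT_TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]

def to_dynamics_contact(hs_contact):
    """Translate a HubSpot contact record into a Dynamics 365 contact payload.
    Target attribute names follow the standard Dynamics contact entity."""
    props = hs_contact.get("properties", {})
    return {
        "firstname": props.get("firstname"),
        "lastname": props.get("lastname"),
        "emailaddress1": props.get("email"),
    }
```

Everything beyond this translation, such as deduplication, retries against API rate limits, and scheduling, is exactly the work that a custom integration team has to own.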
## Best Practices for HubSpot and Dynamics 365 Integration

We collected several ideas that can make implementing your integration easier.

- Perform the initial assessment and planning. Benjamin Franklin once said, “By failing to prepare, you are preparing to fail.” Don’t neglect proper planning when it comes to data integration. List all the tasks the integration must solve. Identify the process flow and forecast expected and unexpected behaviors. Think about what may go wrong and how to fix it. Develop a roadmap with clear steps, timelines, and responsibilities, and make sure the plan aligns with the company’s strategic goals.
- Choose the right data integration method. Carefully evaluate the available methods and check whether a method aligns with your resources and goals. The chosen method must meet the integration’s goals and scope.
- Define the mapping. Data structure differences are the main pain point in data integration. Check the data structure peculiarities and define the specifics of your business case. Think about how the data structures can be adjusted to fit. Define unique identifiers, align custom fields in both platforms, and configure data transformation rules to match formatting (e.g., date formats, phone numbers).
- Perform testing. Once you have planned everything carefully, chosen the method, and completed data preparation, you can start implementing the integration. Take your time to test every scenario, each object, and all possible workflows.

For more insights, see our [Integration Strategies](https://skyvia.com/learn/data-integration-strategy) article.

## Conclusion

Data integration fills the gap between marketing and sales. It ensures data consistency, improves lead conversion, and provides a unified customer view. By exchanging key information across platforms, businesses eliminate manual errors, improve reporting, and create more personalized marketing and sales strategies.
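To make the point about transformation rules concrete, here is a small sketch of the kind of normalization such rules perform, converting a US-style date and a loosely formatted phone number into canonical forms. The target formats (ISO 8601 dates, E.164-style phone numbers with a default +1 country code) are assumptions for illustration:

```python
import re
from datetime import datetime

def normalize_date(us_date):
    """Convert MM/DD/YYYY to the ISO 8601 YYYY-MM-DD format."""
    return datetime.strptime(us_date, "%m/%d/%Y").date().isoformat()

def normalize_phone(raw, default_country="+1"):
    """Strip punctuation; prefix a default country code for 10-digit numbers."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:
        return default_country + digits
    return "+" + digits  # assume the country code is already present

print(normalize_date("04/07/2025"))        # 2025-04-07
print(normalize_phone("(555) 123-4567"))   # +15551234567
```

Agreeing on rules like these before the first sync prevents the classic failure mode where each platform stores the same value in a slightly different shape.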
Each of the listed methods has its advantages and disadvantages and solves different problems for different businesses. A successful integration requires proper planning and careful testing to ensure stable performance. Choose the right method, and don’t hesitate to start your integration journey. Can’t wait to take advantage of all those opportunities? Try Skyvia now!

## FAQ for HubSpot and Dynamics 365

**What data can I integrate between HubSpot and Dynamics 365?**
The native HubSpot connector lets you transfer standard object data. The Skyvia and custom API integration methods support both standard and custom objects.

**Is it possible to schedule the integration to run automatically?**
Yes. Third-party applications commonly offer scheduling options. For example, Skyvia supports [scheduling integrations for automatic execution](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html#:~:text=Skyvia%20allows%20you%20to%20set,operation%20to%20a%20later%20time.).

**Does the integration support two-way data synchronization?**
Yes. The native HubSpot connector and third-party tools support two-way synchronization, keeping customer data consistent across both platforms.

**Can I integrate HubSpot and Dynamics 365 with other cloud or on-premises applications and platforms?**
Yes. HubSpot offers over 1,800 integrations, and Dynamics 365 connects with Microsoft products and various third-party solutions. Third-party integration tools also let you involve other apps and platforms; for example, with Skyvia, you can build complex flows with multiple [connectors](https://skyvia.com/connectors).
[Olena Romanchuk](https://skyvia.com/blog/author/olenar/): Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication. With this diverse professional background, Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/) How to Export CSV from HubSpot (Contacts, Deals, Reports & More) By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) June 4, 2025

HubSpot export CSV giving you a headache? You log in, hit export… and half your data’s missing. Maybe the custom fields didn’t show up. Maybe the filters weren’t right. Whatever it is, exporting a simple [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) from [HubSpot](https://www.hubspot.com/) shouldn’t be this frustrating. This guide breaks it all down, step by step. Whether you’re backing up contacts, moving deals to another tool, or just pulling a report, we’ve got you covered.
You'll learn how to export the data you actually need, skip the common mistakes, and even automate the whole thing if you're tired of doing it over and over. Ready?

Table of Contents

- Can You Export CSV from HubSpot?
- Preparing to Export Data to CSV from HubSpot
- Manual CSV Export Options in HubSpot
- Limitations of CSV Export in HubSpot
- Automate CSV Export from HubSpot with Skyvia
- Other Ways to Export HubSpot Data to CSV
- Best Way to Export CSV from HubSpot: Side-by-Side Comparison
- Troubleshooting HubSpot Export CSV Issues
- Conclusion

## Can You Export CSV from HubSpot?

Well, of course! The fact that you're here tells us you want to know more. But first, let's break down what data you can pull out of HubSpot.

### Contacts

You can export contact records with all their associated details — name, email, phone, lifecycle stage, and even custom properties. Before exporting, you can apply filters to narrow it down (like just contacts from last month or those tagged under a certain campaign).

Example: Pull a CSV of all contacts from your "Newsletter Subscribers" list to import into a different email platform.

### Companies

Company records include details like company name, domain, phone, industry, and any custom fields you've added. This is useful for reporting or syncing B2B data with external systems.

Example: Export companies with annual revenue over $1M to hand over to your sales team for outreach.

### Deals

You can export deals from your pipelines, complete with deal names, stages, close dates, amounts, and owners. Filters let you export only active, won, or lost deals, or filter by pipeline.

Example: Export all closed-won deals in Q2 to calculate sales performance.

### Tickets

Tickets help track customer support issues. You can export them with ticket status, pipeline stage, owner, and associated contact or company.

Example: Export support tickets tagged "urgent" from last week to share with your CS manager.
### Products

If you're using HubSpot's product library, you can export all items, including names, prices, and SKUs. This is handy for e-commerce reporting or syncing with inventory tools.

Example: Export all products with a price above $500 to update a pricing spreadsheet.

### Custom Properties

HubSpot also lets you export property definitions — basically a list of your custom fields and their settings. This is useful for audits or preparing for [data migration](https://skyvia.com/learn/what-is-data-migration).

Example: Export custom property definitions to document your CRM setup before switching tools.

### Reports

You can export report data (table format only) from dashboards. Visuals like charts and graphs don't get exported to CSV. Still, you can export raw data from reports for deeper analysis in Excel or Sheets.

Example: Export a table report showing total deals by rep to analyze team performance outside HubSpot.

💡 Quick Tip: When exporting HubSpot data to CSV, always double-check your filters and columns before downloading. It saves you from missing key info later.

Before you hit that export button, let's make sure your data's clean, filtered, and ready to go. What should you check before exporting? Let's run through a quick checklist.

## Preparing to Export Data to CSV from HubSpot

Before you download anything, let's make sure your export goes smoothly. Bad exports usually come from missing fields, bad filters, or permission issues — all stuff you can avoid with a little prep. Think of this as your quick HubSpot export checklist before hitting "Export."

### Check Your Permissions

First things first — do you have permission to export data? HubSpot limits export access based on user roles. If you don't see the export button, that's probably why.

Fix it: Ask your HubSpot admin to update your role or export the data for you.
### Clean Up Your Data

It's tempting to export everything, but junk in means junk out. Go through your contacts, deals, or lists and clean up duplicates, fix missing values, and update outdated fields.

Why this matters: If you're preparing HubSpot data for export to another tool, bad data causes errors and wasted time.

### Select the Right Columns

HubSpot lets you pick which columns to include. Don't just go with the default. Make sure all the fields you need — especially custom properties — are selected.

Tip: You can customize columns in the view before exporting. HubSpot saves that view for next time.

### Apply Filters (If Needed)

Don't want the whole database? Use filters to narrow it down by deal stage, contact owner, lifecycle stage, date ranges — whatever makes sense. This is a must if you want your export to include only relevant data.

Example: Filter to export only contacts created in the last 30 days for a monthly marketing report.

### Choose the Right Format

HubSpot gives you options: XLS, CSV, or XLSX. For most tools and uploads, CSV is the safest choice — it's clean, simple, and supported almost everywhere. Unless you need formatting or special Excel functions, go with CSV.

💡 Quick Tip: If you're exporting for backup, grab all fields. But if you're sending data to another app, less is more — just pick what you need.

### Ready to Hit Export?

Now that your filters are in place, your fields are selected, and your data's squeaky clean… let's walk through how to export different data types in HubSpot — starting with contacts. Need to export just a list or certain deals? We'll cover all that next.

## Manual CSV Export Options in HubSpot

If you're not using integrations or automation tools, no worries — HubSpot still lets you export data manually with just a few clicks.
We'll walk through how to export contacts, deals, companies, tickets, lists, reports, and even custom properties.

### Export Contacts

Need a CSV file of your contacts? It's simple.

1. Go to CRM > Contacts.
2. Choose a view or apply filters if needed (like lifecycle stage = Lead).
3. Click Export.
4. Choose CSV as your File format.
5. Click Export — then check your email to download the CSV file.

Exporting HubSpot contacts with all your custom fields? Make sure those fields are visible in your view before exporting.

Example: Export all contacts who filled out a webinar form, including job title and signup date.

### Export Deals, Companies, and Tickets

You can also export other key CRM objects like deals, companies, and support tickets. Just follow the same general steps:

1. Go to the object you want (Deals, Companies, or Tickets).
2. Apply filters — like "Only open tickets" or "Deals in Q2."
3. Click Export.
4. Choose the CSV file format and go.
5. Check your email for the exported CSV.

Need to export deals from a specific pipeline? Use the pipeline filter first before exporting.

Example: Export all open deals from the Enterprise pipeline so you can prep for your next sales meeting.

### Export Lists

If you're working with a list — static or active — HubSpot lets you export those too. Lists are a way to segment and organize records based on specific criteria.

1. Go to CRM > Lists.
2. Select your list.
3. Click Export.
4. Choose CSV and hit export.

Active lists update automatically, so they're great for ongoing segmentation. Static lists are fixed snapshots, so if the data changes, the list stays the same.

Example: Export an active list of newsletter subscribers who opened an email in the last 7 days.

### Export Reports

You can download the raw data from most table-style reports in HubSpot — just not the visuals like pie charts or graphs. To export, start from your Dashboard.
1. Find the report you want.
2. From the report within the dashboard, open the menu in the top-right corner.
3. Click Export unsummarized data.
4. Choose the CSV file format and hit Export.
5. Check your email for the CSV file.

This is a quick way to get metrics out of HubSpot and into Excel or Google Sheets for deeper analysis.

Example: Export a deals-by-owner report in CSV format to track team performance offline.

### Export Custom Properties

You can also export your property definitions — helpful for audits, migrations, or just documenting your setup. Here's how:

1. Go to Settings > Properties.
2. Click Export all properties.
3. Choose CSV to download your fields, internal names, and settings, and hit Export.
4. Check your email to download the CSV.

Example: Export all custom contact properties before switching to a new CRM to keep your setup consistent.

💡 Quick Tip: What you see in your HubSpot view is what ends up in your CSV. So take a second to double-check your filters and selected columns before exporting.

Wondering why your export is missing fields, or why you can't export linked records like Contacts + Deals together? Let's look at the limitations of HubSpot CSV exports — and how to work around them.

## Limitations of CSV Export in HubSpot

Exporting CSVs from HubSpot is super handy — until you hit a wall. Some things just don't export the way you expect, and a few common HubSpot CSV export issues can trip you up if you're not ready. Let's go over the most important HubSpot export limitations so you don't waste time later.

### Max File Size

HubSpot has file size limits for exports, especially for large datasets. If your export is too big, it may fail, or it'll arrive broken into parts via a ZIP file.

Quick fix: Apply filters to break large exports into smaller chunks.

### Limited Columns

You can't export every field in one go, especially if your records have a lot of custom properties.
There's a cap on the number of columns that can be exported. In most cases, it's around 250 columns — anything more gets trimmed off.

Tip: Prioritize only the fields you need. You can always run a second export for the rest.

### Associated Records Aren't Included

If you're exporting contacts, you won't automatically get the deals or tickets linked to them — and vice versa. Each object (Contacts, Deals, Companies, etc.) must be exported separately. This is one of the most common HubSpot export limitations folks run into when trying to view everything in one spreadsheet.

Workaround: Use HubSpot's reports to create a joined view, then export the table if possible.

### Charts and Visuals Can't Be Exported

Reports with charts, graphs, or dashboards? You can't export those as CSV. Only the raw table data behind them can be exported.

Alternative: Take a screenshot or export the underlying table instead.

### CSV Format Quirks

Sometimes CSV exports show weird characters, missing line breaks, or misaligned data when opened in Excel or Google Sheets.

Fix it:

- Always open the file using UTF-8 encoding.
- Double-check column headers and delimiters (especially if your data includes commas).

💡 Quick Tip: If your HubSpot export isn't working or data seems missing, double-check your filters, view settings, and file size. Nine times out of ten, it's one of those.

Tired of repeating the same export steps every week? Let's look at how you can automate HubSpot CSV exports using a tool like Skyvia — so you can save time and skip the manual work.

## Automate CSV Export from HubSpot with Skyvia

Manually exporting the same data every week? That gets old fast. That's where Skyvia steps in — a no-code platform that lets you schedule automatic HubSpot exports to destinations like Google Drive, Dropbox, or FTP. No scripts. No stress.
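An aside before we get to automation: if you do touch exported files in code, the CSV format quirks described earlier mostly disappear when you use an explicit encoding and Python's `csv` module, which automatically quotes fields containing commas, quotes, or line breaks. A minimal, self-contained sketch (the file name and sample rows are made up for illustration):

```python
import csv

# Hypothetical data: one field contains a comma, quotes, and a line break,
# the classic causes of "broken" CSVs when opened in Excel.
rows = [
    ["First Name", "Last Name", "Notes"],
    ["Ana", "Núñez", 'Met at "INBOUND", follow up\nnext week'],
]

# utf-8-sig writes a byte-order mark, so desktop Excel detects UTF-8
# instead of mangling accented characters. newline='' lets the csv
# module manage line endings itself.
with open("contacts_check.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)  # tricky fields get quoted automatically

# Read it back the same way; every field survives intact.
with open("contacts_check.csv", newline="", encoding="utf-8-sig") as f:
    parsed = list(csv.reader(f))

assert parsed == rows
```

The same idea applies in reverse: when a HubSpot export looks garbled, re-opening it with an explicit UTF-8 encoding usually fixes it without touching the data.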
Skyvia supports filters and custom field selection, and even lets you map HubSpot fields to match your target file layout. So you can keep your reports up to date without lifting a finger.

### Step-by-Step: Set Up HubSpot Export CSV Integration in Skyvia

Here's how to get it done in a few simple steps.

1. Create a HubSpot Connection

- Log in to your [Skyvia account](https://skyvia.com/).
- Click +Create New > Connection.
- Search for and choose HubSpot from the list.
- Sign in with your HubSpot account and authorize access.
- Name your HubSpot connection and save it.

Boom — Skyvia can now access your CRM data.

2. Set Up Your Target (CSV Destination)

Decide where you want to send your exported CSV files:

- Google Drive
- Dropbox
- FTP/SFTP server
- Or other cloud storage

Create a connection to your target platform just like you did with HubSpot.

3. Create a New Export Integration

- Go to +Create New > Export.
- Choose your HubSpot connection as the source (the one you created earlier).
- Pick the object to export (like Contacts, Deals, or Companies).
- Apply filters (e.g., export only closed-won deals or contacts added in the last 30 days).
- Select your CSV target (like Dropbox or FTP), using the storage connection you created earlier.
- Click Save and set up a recurring schedule — daily, weekly, hourly… you choose.

And that's it — you're now exporting CSVs from HubSpot on autopilot.

💡 Quick Tip: Need multiple exports for different teams? Set up separate Skyvia tasks for each one — marketing gets contacts, sales gets deals, support gets tickets.

Prefer to handle things with scripts, or want even more control?
Let's check out other ways to export HubSpot data to CSV — including the full CRM export and the HubSpot API.

## Other Ways to Export HubSpot Data to CSV

Skyvia is great for no-code folks — but it's not the only way to get your data out of HubSpot. If you're an admin or a developer, there are other tools and built-in options that give you more control and flexibility. Let's walk through them.

### Full CRM Data Export (Admin Only)

If you're an admin and need everything — contacts, companies, deals, tickets, products — HubSpot offers a full CRM export. Here's how:

1. Go to Settings.
2. Scroll to Data Management > Import & Export.
3. Choose the object to export from the list.

This is perfect for backups, migrations, or closing down an account.

Example: Export all CRM data for backup and store it somewhere secure, like an SFTP server.

### Export via HubSpot API

Want full control over what, when, and how you export? Use the HubSpot API to write custom scripts. You can export anything you can access in the CRM — and push the CSV wherever you want, like an FTP server.
Here's a simple Python example that fetches contacts via the API, writes the data to a CSV, then uploads it to an FTP location:

```python
import requests
import csv
from ftplib import FTP

# Replace with your actual HubSpot private app token
HUBSPOT_TOKEN = 'your_private_app_token'

# Step 1: Get contacts from HubSpot
headers = {
    'Authorization': f'Bearer {HUBSPOT_TOKEN}',
    'Content-Type': 'application/json'
}
response = requests.get('https://api.hubapi.com/crm/v3/objects/contacts', headers=headers)
contacts = response.json().get('results', [])

# Step 2: Write contacts to CSV
csv_file = 'contacts_export.csv'
with open(csv_file, 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['First Name', 'Last Name', 'Email'])  # Add more if needed
    for c in contacts:
        props = c['properties']
        writer.writerow([props.get('firstname'), props.get('lastname'), props.get('email')])

# Step 3: Upload CSV to FTP
ftp = FTP('your.ftp.server')
ftp.login('ftp_user', 'ftp_password')
with open(csv_file, 'rb') as f:
    ftp.storbinary(f'STOR {csv_file}', f)
ftp.quit()
```

This is a flexible way to automate complex exports, especially for enterprise needs.

### Use Google Sheets or Excel Add-ins

If you're a light user and just want to pull data into a spreadsheet now and then, this one's easy. Use tools like the [Skyvia Excel Add-In](https://skyvia.com/excel-add-in) to bring your HubSpot data into Excel.

These add-ins connect HubSpot to Google Sheets or Excel. From there, you can tweak your data and export as CSV anytime.

Example: Use the Skyvia Excel Add-in to refresh a contact list in Microsoft Excel every morning — then export to CSV and share with your team.

💡 Quick Tip: Need one-time exports? Use the built-in tools. Need control? Go API. Need hands-free simplicity? Skyvia or Sheets add-ons are your best bet.
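One caveat with a script like the one above: HubSpot's CRM v3 list endpoints return results in pages (at most 100 records per request), with a cursor under `paging.next.after` in the JSON response, so a single GET only exports the first page. The paging logic can be isolated and tested without touching the network; in this sketch, `fetch_page` is a hypothetical stand-in for the `requests.get` call:

```python
# Sketch of cursor-based paging for HubSpot's CRM v3 list endpoints.
# Response shape assumed: {"results": [...], "paging": {"next": {"after": ...}}}.

def next_cursor(payload):
    """Return the 'after' cursor for the next page, or None on the last page."""
    return payload.get("paging", {}).get("next", {}).get("after")

def all_contacts(fetch_page):
    """Drain every page; fetch_page(after) returns one decoded JSON payload."""
    contacts, after = [], None
    while True:
        payload = fetch_page(after)
        contacts.extend(payload.get("results", []))
        after = next_cursor(payload)
        if after is None:
            return contacts

# Fake two-page API for demonstration (no network calls)
pages = {
    None: {"results": [{"id": "1"}, {"id": "2"}], "paging": {"next": {"after": "p2"}}},
    "p2": {"results": [{"id": "3"}]},  # no "paging" key means last page
}
rows = all_contacts(lambda after: pages[after])
print([r["id"] for r in rows])  # ['1', '2', '3']
```

In a real export, `fetch_page` would call `requests.get` on the contacts endpoint with `params={'limit': 100, 'after': after}` and the same Authorization header as before.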
Not sure which method is right for you? Let's compare HubSpot's native export, Skyvia, and other popular options side by side.

## Best Way to Export CSV from HubSpot: Side-by-Side Comparison

Still unsure which export method fits you best? Here's a quick side-by-side look at your main options:

| Method | Best For | Setup Time | Skill Level | Customizability |
| --- | --- | --- | --- | --- |
| Native HubSpot Export | Quick one-time exports (contacts, deals, reports) | None – ready to use | Beginner-friendly | Low – limited filters and fields |
| Skyvia Export Integration | Scheduled exports to cloud (Google Drive, FTP) | ~15–30 minutes | No-code users | Medium – supports filters, mapping |
| Full CRM Export | Backup or migration (admin-only) | 5–10 minutes | Admin only | Low – all or nothing |
| HubSpot API + Scripts | Developers who want full control | 30–60 minutes or more | Advanced (coding skills) | High – fully customizable |
| Google Sheets or Excel Add-ins | Marketers and light users | 10–15 minutes | Beginner to Intermediate | Medium – basic filter options |

💡 Quick Tip: If you just need a quick CSV for a report, stick with native export. But if you're doing it often — or need it hands-free — automation or scripts are totally worth it.

If something's missing in your CSV, the export failed, or the button's just… gone — don't worry. Let's walk through a few quick fixes for common HubSpot export issues.

## Troubleshooting HubSpot Export CSV Issues

Ran into trouble while trying to export data from HubSpot? You're not alone — a few common glitches can trip things up, but most have easy fixes. Let's break them down.

### Export Button Not Showing

You're looking everywhere, but the Export option just isn't there. Most of the time, this means your HubSpot user role doesn't have permission to export data. Only users with export access can see and use that button, and permissions are set by your HubSpot admin.
Example: Your marketing intern logs in to export a contact list but can't find the export button. Turns out they're not assigned export rights under their user role.

### Export Fails or Gets Stuck

You click export… and nothing happens. Or worse, the file just never arrives. This usually happens when:

- The file is too large
- There's a temporary issue on HubSpot's end
- You're exporting too many columns or rows at once

Fix: Try breaking up your export with filters or reducing the number of properties selected.

Example: You're exporting 50,000 contacts with 200+ properties, and the process hangs. You apply a "last 30 days" filter and reduce to 50 properties — the export works like a charm.

### Missing Fields or Incomplete Data

You run the export and — surprise! — your CSV is missing fields or showing empty values. The most common causes:

- Those fields weren't visible in your current table view
- You didn't select them in the export options
- The data doesn't exist for some records (e.g., not every contact has a job title)

Fix: Double-check your view and make sure all key properties are selected before export.

Example: You expect to see "Job Title" in the contact export, but it's blank for most rows. After checking, you realize only 30% of contacts ever had that field filled in.

### Wrong Characters or Broken Formatting

Sometimes your CSV looks messy when opened — weird symbols, broken line breaks, or data shoved into the wrong columns. This usually happens when:

- The CSV isn't opened with UTF-8 encoding
- Your data contains commas or line breaks that mess with formatting
- Excel auto-detects the wrong delimiter

Fix: Use "Import" instead of "Open" in Excel or Sheets, and pick the correct encoding and separator.

Example: A contact's note field includes commas and quotes, causing the data to spill into extra columns.
Opening the file with Excel's import tool fixes the formatting.

### Hitting HubSpot API Limits

If you're using a script or a third-party tool like Skyvia and your export suddenly fails or stalls, you might be bumping into HubSpot's API rate limits. Here's what you need to know:

- HubSpot's default limit is 250,000 API calls per day
- There's also a short-term burst limit: 100 calls every 10 seconds
- Hitting either limit can cause your export to slow down or stop completely

Fix:

- Space out your automated exports
- Use filtering to reduce the data pulled
- For heavy-duty exports, consider running them during off-peak hours

Example: You schedule three large Python exports at the same time — one for contacts, one for deals, one for tickets. Two run fine, but the third fails. Turns out you hit the burst limit. Spacing them 10 minutes apart solves the issue.

💡 Quick Tip: If your export fails, check both the data volume and your API usage. Most third-party tools show you how many calls you've used.

## Conclusion

Exporting data from HubSpot doesn't have to be a guessing game. You've got options — whether you just need a quick CSV, a weekly automated report, or a full backup for migration. Let's recap your choices:

- Use HubSpot's native export for simple, one-time exports
- Set up Skyvia to automate CSV exports on a schedule
- Try the full CRM export for admin-level backups
- Use the HubSpot API or tools like Coupler.io for custom workflows

No matter which method you go with, always double-check your filters, pick the right fields, and test a small export first.

### Ready to Take the Next Step?
Here are some helpful next reads:

- [How to Integrate HubSpot and Snowflake](https://skyvia.com/blog/hubspot-to-snowflake-data-analysis/)
- [How to Integrate HubSpot and NetSuite](https://skyvia.com/blog/hubspot-netsuite-integration/)
- [Integrate HubSpot and Shopify Ultimate Guide](https://skyvia.com/blog/shopify-and-hubspot-integration-ultimate-guide/)
- [Best Methods of Integrating HubSpot with Salesforce](https://skyvia.com/blog/hubspot-salesforce-integration/)
- [How to Connect HubSpot to MySQL In Various Ways](https://skyvia.com/blog/how-to-connect-hubspot-to-mysql/)

## F.A.Q. for HubSpot Export CSV

Can I schedule CSV exports from HubSpot?

Not directly in HubSpot — at least not in the free or basic plans. But you can automate CSV exports using tools like Skyvia or Coupler.io, or by writing scripts with the HubSpot API.

Is there a free way to export all HubSpot data?

Yes — if you're an admin, you can use the Full CRM Export in your HubSpot settings. It gives you a ZIP file with multiple CSVs, covering all your main CRM objects.

How do I export email data or activity logs?

You can't export detailed email interactions or activity logs in bulk through HubSpot's built-in export. For that, you'll need to use the HubSpot API or third-party connectors that support those object types.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/)

Software developer and project manager with a total of 20+ years of software development.
His most recent technology preferences include C#, the SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

# How to Integrate HubSpot and NetSuite: Step-by-Step Guide

By [Amanda Claymore](https://skyvia.com/blog/author/amandac/) · April 6, 2023 · [Data Integration](https://skyvia.com/blog/category/data-integration/)

In today's fast-paced business world, integrating software tools has become more important than ever. It enables businesses to streamline their operations, reduce errors, and make better decisions. In this article, we'll explore the benefits of integrating HubSpot with NetSuite and the various ways to connect these two platforms.
Table of Contents

- Connection via Pre-built Integrations
- Connection via Skyvia
- About HubSpot
- About NetSuite
- HubSpot and NetSuite Integration Benefits
- Conclusion

## Connection via Pre-built Integrations

One of the ways to connect HubSpot with NetSuite is through pre-built integrations. HubSpot offers an out-of-the-box integration in the [HubSpot App Marketplace](https://ecosystem.hubspot.com/marketplace/apps/finance/accounting/netsuite-226318) that allows you to sync your contacts, leads, and deals data with NetSuite. To set up this integration, you'll need a HubSpot Marketing Hub or HubSpot CRM account and a NetSuite account. Once you've connected the two accounts, you can start syncing data between them. You can also set up workflows to automate the syncing process, saving you time and effort.

Here is a step-by-step guide for connecting HubSpot with NetSuite via pre-built integrations:

1. Log in to your HubSpot account and click the gear icon in the top-right corner of the screen to access your settings.
2. In the left sidebar, select Integrations and then click Connect an app.
3. Search for NetSuite and click the NetSuite icon.
4. Click Connect and log in to your NetSuite account when prompted.
5. In the integration settings, select the data that you want to sync between HubSpot and NetSuite. This can include contacts, companies, deals, and more.
6. Configure your sync settings by choosing which records to sync and how often to sync them.
7. Map your NetSuite fields to HubSpot properties to ensure that data is accurately synced between the two platforms.
8. Once you have completed these steps, click Save to finalize the integration.

## Connection via Skyvia

If you're looking for a more customizable [HubSpot to NetSuite integration](https://skyvia.com/data-integration/integrate-hubspot-netsuite) solution, Skyvia is an excellent option. It is a cloud-based integration platform that enables you to connect HubSpot with NetSuite and many other software tools.
With Skyvia, you can map fields between the two systems, filter data, and set up custom workflows to automate the syncing process. Skyvia offers a user-friendly interface that makes it easy to set up and manage your integrations. You can monitor the status of your integrations and get alerts when errors occur, ensuring that your data is always up-to-date and accurate.

When it comes to integrations, Skyvia provides several useful products, such as [Import, Export, Synchronization, Replication](https://skyvia.com/data-integration/), and others. Let's check how to import data from NetSuite to HubSpot. For this, you need to create connections to NetSuite and HubSpot and an import package.

### Create a HubSpot Connection

To create a connection to HubSpot:

1. Go to New > Connection and choose HubSpot.
2. Click Sign In with HubSpot.
3. In the opened window, enter your HubSpot credentials and click Log in.
4. Choose your account and click Connect App.
5. Click Create Connection.

### Create a NetSuite Connection

To create a connection to NetSuite:

1. Go to New > Connection and choose NetSuite.
2. In the Authentication list, select either Basic or Token-Based.
3. Enter your [login credentials](https://docs.skyvia.com/connectors/cloud-sources/netsuite_v2_connections.html).
4. Specify your Account Time Zone.
5. Click Create Connection.

### Create and Run an Import Package

To create a package that imports NetSuite data to HubSpot, do the following:

1. Go to New > Import.
2. Choose Data Source as your source type.
3. Choose the NetSuite connection as Source and the HubSpot connection as Target.
4. Add a task to the integration by clicking Add New. You can add multiple tasks to each integration.
5. Select a NetSuite object you want to import, set preferred filters, and click Next Step.
6. Select a HubSpot object you want to import data to and an action that should be executed over this data. Once done, click Next Step.
7. Map the required fields and save your progress.
8. Click Create to create an Import package.
9. Click Run to execute it.
Use the [schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) to automate the import process.

Import is just one tool offered by the Data Integration product in Skyvia. Here is the list of available tools and their short descriptions:

- Import: Quickly imports data from various sources to your destination. This includes field mapping, filtering, and validation rules.
- Export: Exports data from your source system to a target destination in CSV format, with filtering criteria.
- Data Flow: Creates complex data transformations and manipulations between your source and target systems, including joining, filtering, aggregating, and pivoting data.
- Control Flow: Creates complex control structures and workflows for your integration tasks, including conditional statements, loops, error handling, and more.
- Synchronization: Keeps your source and target systems in sync by comparing and updating data between them.
- Replication: Replicates data to a database or data warehouse for future analysis.

## About HubSpot

[HubSpot](https://www.hubspot.com/) is an all-in-one marketing, sales, and customer service platform that enables businesses to attract, engage, and delight customers. It offers a wide range of tools, including lead capture forms, email marketing, social media management, and customer feedback surveys. HubSpot also provides robust reporting and analytics capabilities that enable businesses to measure the effectiveness of their marketing and sales efforts.

## About NetSuite

[NetSuite](https://www.netsuite.com/portal/home.shtml) is a cloud-based enterprise resource planning (ERP) system that provides businesses with a suite of tools for managing their operations. It offers modules for financial management, inventory management, order management, and manufacturing, among others.
NetSuite also provides reporting and analytics capabilities that enable businesses to gain insights into their operations and make data-driven decisions.

## HubSpot and NetSuite Integration Benefits

Integrating HubSpot with NetSuite offers a wide range of benefits for businesses, including:

- Improved data accuracy: By syncing data between HubSpot and NetSuite, you can ensure that your customer and prospect data is consistent and up-to-date.
- Increased efficiency: Integrating the two systems can save you time and effort by automating manual data entry tasks.
- Enhanced visibility: By integrating HubSpot with NetSuite, you can gain a complete view of your customer interactions, enabling you to make more informed decisions.
- Better lead management: With HubSpot's lead capture forms and NetSuite's lead management capabilities, you can track leads from acquisition to close, ensuring that no leads slip through the cracks.
- Improved customer service: By syncing customer data between the two systems, you can provide better, more personalized service to your customers.

## Conclusion

Integrating HubSpot and NetSuite can bring a range of benefits to businesses, including improved data management and streamlined workflows. Pre-built integrations can be a good option for basic tasks, but for businesses looking for a wider range of features, exploring alternative integration solutions such as Skyvia may be worthwhile. By leveraging the power of these integrations, businesses can gain a competitive edge in their industries and drive growth and success in the long term.
[Amanda Claymore](https://skyvia.com/blog/author/amandac/), Content Marketer

Best Methods of Integrating HubSpot with Salesforce
By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/), April 24, 2025

Sales and marketing collaboration looks like a usual process.
Both teams use separate platforms like HubSpot and Salesforce and constantly exchange data. However, if you look closer, you'll notice it's a Pandora's box: endless manual updates, inconsistent records, mismatched data, and reporting headaches. Sales end up working with partial information or chasing leads that have already gone cold. Sounds hopeless, doesn't it? Integrating Salesforce and HubSpot isn't just about syncing contact records; it's about aligning your growth engine. When both platforms work together, you have a complete picture: your teams work faster, your reports reflect reality, and there are fewer human mistakes. This article helps users prepare for HubSpot Salesforce integration and demonstrates how to set it up using native connectors and third-party tools.

Table of contents
- What is Salesforce?
- What is HubSpot?
- Why Integrate HubSpot and Salesforce
- How to Prepare for HubSpot and Salesforce Integration
- Best Methods for HubSpot Salesforce Integration
- How to Integrate Salesforce and HubSpot Using Native Connector
- How to Integrate Salesforce and HubSpot with Skyvia
- Best Practices for Salesforce HubSpot Integration
- Summary

What is Salesforce?
Salesforce is one of the most popular customer relationship management platforms. Support, sales, and marketing teams utilize it worldwide to better connect with partners, customers, and leads.

What is HubSpot?
HubSpot is also a big player in the market. It provides user-friendly tools for the marketing team to attract people, the sales team to engage those people, and the service team to delight clients so they attract even more customers.

Why Integrate HubSpot and Salesforce
Proper data sync is a way to fix broken handoffs, clean up messy process flows, and give teams an accurate view of what is really happening.
Below are the key reasons why businesses decide to integrate these platforms.

Less Manual Intervention
Integration automates data transfer between systems, so teams stay in sync without spreadsheets or double entry, saving time and avoiding human mistakes. For example, HubSpot's lead scoring can automatically update lead information in Salesforce. Similarly, activities logged in HubSpot, like form submissions and marketing emails, can be synced to Salesforce as tasks.

Data Alignment Between Teams
You get a much clearer picture of your leads, contacts, companies, and deals because the information is shared between HubSpot and Salesforce. This alignment means your marketing team in HubSpot and your sales team in Salesforce operate on the same information, reducing the chances of miscommunication and ensuring everyone has the most up-to-date details.

Improved Visibility
By integrating these platforms, marketing and sales teams gain better insight into the customer journey. Marketers in HubSpot can see which of their efforts lead to closed deals in Salesforce, helping them optimize their campaigns. Sales teams in Salesforce can view HubSpot's marketing activity for their leads and customers, providing valuable context for their outreach.

Comprehensive Reporting
Salesforce excels at tracking sales processes and outcomes, while HubSpot provides deep insights into marketing activities and engagement. By syncing data like contacts, deals (opportunities), accounts (companies), and activities (events/tasks), you can create reports that unite marketing efforts and sales results. For instance, you can analyze which marketing campaigns in HubSpot generate the most qualified leads that eventually convert into opportunities and closed deals in Salesforce.
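The campaign-to-revenue analysis described above is essentially a join between HubSpot campaign touches and Salesforce opportunities. A toy sketch in plain Python (the field names here are invented for illustration; real reports would run over synced records or warehouse tables):

```python
def closed_won_by_campaign(touches, opportunities):
    """Count closed-won opportunities per marketing campaign.

    touches: [{"contact_id", "campaign"}]      # hypothetical HubSpot export
    opportunities: [{"contact_id", "stage"}]   # hypothetical Salesforce export
    """
    # Last-touch attribution: remember which campaign touched each contact.
    campaign_of = {t["contact_id"]: t["campaign"] for t in touches}

    wins = {}
    for opp in opportunities:
        if opp["stage"] == "Closed Won":
            campaign = campaign_of.get(opp["contact_id"], "unattributed")
            wins[campaign] = wins.get(campaign, 0) + 1
    return wins
```

This only works if contact identities are consistent across both systems, which is exactly what the integration provides.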
Embedded Marketing Tools for Salesforce Teams
If your sales team is deeply embedded in Salesforce and resistant to switching systems, integrating with HubSpot lets your marketing team use a robust marketing automation platform without requiring the sales team to leave their familiar environment.

How to Prepare for HubSpot and Salesforce Integration
So, your company has finally decided to connect both platforms. Great, but what's next? The answer is preparation. Before you take practical implementation steps, you need a solid foundation. Here are five crucial steps that help set up a smooth integration:

1. Define integration goals and objectives. Think about what you want to achieve with this integration. What exact problems does it have to solve for your business, and what results do you expect from it?
2. Audit your data. Each minor inconsistency may lead to unexpected consequences. If you don't detect and clean up the mess, the integration will multiply it. To avoid syncing outdated contacts, mismatched fields, and duplicates across both systems, perform a data cleanup first. Evaluate data volumes to understand what capacity your business will require.
3. Plan data mapping and sync rules. Both platforms have completely different data structures. Before implementing your integration use case, think about how to match the data across platforms. Define the mapping for the key objects: contacts, companies, deals, and lifecycle stages. Agree on naming and structure, and align field types across both systems.
4. Outline the workflows. Evaluate the existing workflows across systems and plan which workflows you will automate. Check whether it is enough to integrate standard objects or whether you also need custom objects or fields. Document all the processes.
5. Choose the integration method. Now you can decide which method to use for Salesforce and HubSpot integration. This decision is not an easy one.
It depends on integration objectives, business size, data volume, customization requirements, and other factors. We explored and compared the existing methods below.

Best Methods for HubSpot Salesforce Integration
Data integration tasks vary from team to team. Companies may have completely different business goals, even when operating in the same industry. Thus, there is no standard approach; each company decides for itself. We compared several data integration solutions to help you choose for your business.

| Method | Ease of setup | Flexibility | Cost | Maintenance | Best for |
| --- | --- | --- | --- | --- | --- |
| Native connector | Easy | Low | Low | Low | Small/standard setups |
| Third-party tools | Easy | Medium | Low to medium | Low | Mid-size/advanced use cases |
| Custom API integration | Hard | Very high | High | High | Enterprises or unique business requirements |

Let's look at each method in detail below.

Native HubSpot Connector
HubSpot offers a built-in connector for Salesforce, available for all HubSpot plans except the free tier. It connects standard objects like contacts, companies, and deals. This method is suitable for getting started but can become restrictive for growing companies or those with non-standard data models.

Best for: automating simple marketing-to-sales handoffs in small to mid-sized teams with standard workflows that want fast integration with minimal setup.

Pros:
- Easy to set up.
- Supports bidirectional sync for standard objects.
- Built and supported directly by HubSpot.

Cons:
- Available only for Professional or Enterprise subscriptions.
- Limited customization: custom objects and fields are not supported.
- No advanced scheduling or sync logic.

Third-Party Integration Tools like Skyvia, Zapier, Workato, and Others
Although the native connector offers two-way integration, it only imports and synchronizes standard objects and is available for specific subscriptions only. Third-party solutions instead provide more capabilities for a broader range of data-related tasks.
This method involves [cloud-based data integration solutions](https://www.g2.com/search?utf8=%E2%9C%93&query=data+integration&filters%5Bstar_rating%5D%5B%5D=5) that connect Salesforce, HubSpot, and other tools with advanced data sync logic, transformations, and monitoring features. Third-party tools can connect multiple data sources, support complex workflows, and offer more customization than the native HubSpot connector. This option is ideal for teams that need more control than the native connector allows without going fully custom.

Best for: teams that need more control over integration rules for non-trivial data architectures and complex workflows.

Pros:
- Easy to set up.
- High flexibility with field mapping, filters, scheduling, and automation.
- Custom objects support.
- Monitoring features and error handling.
- No coding.

Cons:
- Additional cost compared to the native HubSpot connector.
- Available capabilities depend on the subscription.
- Requires more setup and configuration than the native connector.

Skyvia-specific bonus: Skyvia stands out for no-code configuration, support for bulk data sync, custom scheduling, and a strong balance of power and usability, making it a solid choice for teams that outgrow the native connector but don't want to build from scratch.

Custom API Integration
The alternative method is a fully custom integration built on the Salesforce and HubSpot APIs, often developed in-house or by an outsourced partner. It allows total control over implementation, logic, data flows, security, and error handling. This method suits complex data models that third-party tools can't handle.

Best for: enterprises with integration objectives that other methods cannot satisfy, with internal development teams or the budget to outsource to development partners.

Pros:
- Total control over the integration.
- Full customization capabilities.
- Can accommodate any logic, volume, and workflow.
- Flexible security and error handling.
Cons:
- High upfront cost (dev time, testing, maintenance).
- Requires ongoing internal ownership and maintenance.
- Implementation complexity.

How to Integrate Salesforce and HubSpot Using Native Connector
This method is available for users with Professional or Enterprise subscriptions on both platforms.

STEP 1: Install the Native HubSpot Connector for Salesforce
1. Go to HubSpot, click the marketplace button in the top right corner, and select Apps.
2. Type Salesforce in the search bar and select Salesforce.
3. On the Salesforce native connector page, click Install App and sign in with Salesforce.

STEP 2: Install HubSpot in Salesforce
1. Click Start the Salesforce package installation to begin. You'll be redirected to Salesforce.
2. Select Install for All Users -> Install.
3. Select Yes, grant access to these third-party websites, click Continue, and wait while Salesforce installs the HubSpot integration package.
4. After the package has been installed in Salesforce, get back to HubSpot and click Next.

NOTE: You can optionally install the HubSpot Visualforce module in Salesforce. This module shows a contact's likelihood to close and allows you to view and filter contact activity and enroll contacts in HubSpot workflows. Click Add HubSpot to Salesforce to install it, or skip this step.

STEP 3: Configure Integration Settings
1. Choose the preferred sync setup method. If you select Recommended setup, click Review settings. HubSpot will create mappings between HubSpot properties and Salesforce fields; if a Salesforce field has no matching HubSpot property, a new property will be created in HubSpot by an Unknown user.
2. If you select Advanced setup, click Next and manually set up your contact, activity, task, object, and property sync settings. Click Change to modify the sync settings.
3. When you're ready, click Finish setup and begin syncing. That's it.
Now, you can manually import your records between systems or create triggers that will launch the import process.

STEP 4: Import Data Between Salesforce and HubSpot
With the HubSpot Salesforce integration enabled, you can import Salesforce leads, contacts, accounts, opportunities, tasks, and campaigns into HubSpot.
1. Open HubSpot and go to Contacts > Contacts.
2. Click Import on the upper right and choose Start an Import.
3. Select Integrations > Salesforce records, then click Next.
4. Select the Salesforce object you want to import and click Begin import.
5. When the import is complete, it appears in the import table. Click View import errors to review any errors for your Salesforce import.

How to Integrate Salesforce and HubSpot with Skyvia
Let's look at an integration example using a third-party tool. Below, we show how to import Salesforce contacts to HubSpot using Skyvia in several simple steps.

[Skyvia](https://skyvia.com/) is a universal no-code cloud platform that helps non-tech users solve various data-related tasks without coding: [ETL](https://skyvia.com/learn/etl-pipeline-meaning), [ELT](https://skyvia.com/learn/what-is-elt), [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl), data migration, one-way and bi-directional data sync, workflow automation, real-time connectivity, and more. Skyvia connects [200+ cloud apps, databases, and storage services](https://skyvia.com/connectors). To implement your integration scenarios, you need an active Skyvia account. No account yet? Get a trial and start integrating now.

STEP 1: Create Connections to Salesforce and HubSpot
1. Click +Create New -> Connection and select HubSpot.
2. Click Sign in with HubSpot and use your credentials to log in.
3. To connect to Salesforce, click +Create New -> Connection and select Salesforce.
4. Sign in with Salesforce and save the connection.

STEP 2: Create Integration
Click +Create New -> Import.
Set the Source type to Data Source and choose Salesforce as Source and HubSpot as Target.

STEP 3: Create Import Task
1. Click Add New on the upper right to add an import task.
2. Select the source object you want to import. Here, you can select objects, set filters, or write a custom command to select specific data with advanced conditions.
3. Select the object you want to import data to and choose what action to perform.
4. Map the source object fields to the target object fields using the available [mapping](https://skyvia.com/learn/what-is-data-mapping) types. Skyvia offers several mapping types that help transform source data to fit the target data structure.
5. Save the task and the integration.

NOTE: Add more tasks to import multiple objects in a single integration.

STEP 4: Run the Integration and Check Results
Launch the integration manually by clicking the corresponding button, or set up a schedule to automate integration runs. The schedule can run the integration automatically on specific dates or weekdays, or every few hours or minutes. Check the integration results on the Monitor or Logs tabs. That's it: your data is now loaded from Salesforce to HubSpot. You can load more objects or integrate other supported data sources.

Advanced Use Cases
The example above shows a simple use case of importing a single object. However, with Skyvia, you can implement more advanced scenarios involving additional transformations and customization. This is possible with the designer-based tools [Data Flow and Control Flow](https://www.youtube.com/watch?v=U8Zbk03E58Q). For example, you can build custom data flows and tailor the integration to your business-specific requirements and architecture. Follow the link above to learn more about advanced Skyvia solutions.

Best Practices for Salesforce HubSpot Integration
We've consolidated several points that can smooth your company's data integration journey. These steps help reduce the risk of common issues.

Plan and prepare.
Take your time to prepare for the integration, and don't underestimate [building a strategy](https://skyvia.com/learn/data-integration-strategy). The more you plan and predict, the fewer problems await you later. Before jumping into the technical setup, define the objectives clearly, list all processes the integration should automate, and forecast potential issues. Plan the flow of data between HubSpot and Salesforce and identify possible bottlenecks.

Select the appropriate integration method.
Not every integration method suits your business needs. Take time to review the available options, whether built-in native connectors, flexible third-party tools like Skyvia, or custom API-based solutions. The right method should align with your business objectives and project specifics. Consider integration complexity, flexibility, budget, and available resources.

Test before going live.
Once your plan is ready, the method is chosen, and the [data mapping](https://skyvia.com/learn/what-is-data-mapping) is completed, proceed to testing. Carefully test all possible scenarios, objects, and workflows. Recreate actual business conditions and check the integration behavior in various cases. Good testing ensures you'll catch issues early and makes the implementation smoother for everyone involved.

Take care of maintenance.
Think about how to track the integration's performance and efficiency. You may need [backups](https://skyvia.com/backup) to avoid data loss in case of failures. You also have to plan how to maintain and monitor the integration. The amount of required resources partially depends on the chosen method: third-party tools usually offer built-in monitoring features and support services, while custom API development requires you to assign your own monitoring, maintenance, and support resources.

Arrange regular audits.
Everything changes fast, and technologies develop every day. Your integration must stay flexible.
Even if the integration went live smoothly, you must perform regular audits. Keep track of the latest updates and changes on both platforms and in your business processes. Check the integration for relevance and regularly monitor the data structures.

Summary
HubSpot Salesforce integration benefits businesses by:
- Reducing manual data entry.
- Aligning team efforts.
- Improving visibility into customer interactions.
- Producing more accurate reporting.
- Providing flexible process automation.
- Improving the customer experience.

If you're just starting or have a small to mid-sized team with standard workflows, native solutions or third-party tools may be the best fit. However, custom API integration might be worth considering if your business has complex needs or you want complete control over your data and processes. Each method offers different levels of flexibility, complexity, and cost. By carefully planning your integration, choosing the right approach, and conducting thorough testing, you can avoid common issues and succeed.

Ready to connect HubSpot and Salesforce? Choose the proper method for your business and take your sales and marketing alignment to the next level. If you're unsure which method is best for your needs, explore Skyvia for a no-code solution that balances ease of use with powerful customization. Get started today!

F.A.Q. for HubSpot Salesforce Integration

How do I set up the HubSpot Salesforce integration?
To set up the integration, go to the HubSpot App Marketplace, select the Salesforce connector, and sign in to Salesforce to install the HubSpot integration package. Grant third-party access, configure sync settings in HubSpot (e.g., contacts, companies, deals), and add the HubSpot Embed window in Salesforce for activity tracking. A Salesforce admin with API-enabled permissions is required.

Which Salesforce editions are compatible with the HubSpot integration?
The HubSpot Salesforce integration supports Salesforce editions with API access, including Enterprise, Unlimited, and Professional editions. It also works with Salesforce Government Cloud but does not support Salesforce Group Edition.

What are common sync errors in the HubSpot Salesforce integration, and how do I fix them?
Common sync errors include data formatting inconsistencies, mismatched field mappings, or insufficient API call limits. Check the Sync Health tab in HubSpot for error details, ensure field mappings align, and allocate sufficient API calls in Salesforce. Regular data cleaning and testing in a sandbox environment can prevent issues.

What permissions are required for the HubSpot Salesforce integration?
The integration user needs a Salesforce edition with API access, Account Access permissions in HubSpot, and specific Salesforce permissions such as API Enabled, Modify All on synced objects, and Download AppExchange Packages. For the HubSpot Embed window, additional permissions such as Modify Metadata are required.

[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)
Integrating HubSpot and Snowflake
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), July 19, 2023

When it comes to the data's place of residence, the data warehouse is the go-to option nowadays.
Recent research shows that [54% of organizations](https://www.trustradius.com/buyer-blog/data-warehouse-statistics) have already adopted data warehouses, making them the most widely used technological solution for data storage and analysis. Just as any brick-and-mortar warehouse needs a manager to receive goods from suppliers and store them properly, any data warehouse needs an orchestrator. This is where Snowflake comes in handy: it allows companies to create, fill, and sustain data warehouses. Snowflake receives digital goods (data from cloud apps and on-premises sources) and orchestrates them. This article focuses on HubSpot as a data source, one of the most popular marketing automation solutions, used by millions of companies worldwide. We describe [HubSpot Snowflake integration](https://skyvia.com/data-integration/integrate-hubspot-snowflake) in several steps and explain how Skyvia makes this process painless.

Table of Contents
- About HubSpot
- Key Features and Benefits of HubSpot
- About Snowflake
- Key Features and Benefits of Snowflake
- Benefits of Integrating HubSpot with Snowflake
- How to Connect HubSpot to Snowflake in Skyvia
- Loading HubSpot Data to Snowflake
- Data Analysis in Snowflake
- Other Sync Scenarios
- Conclusion

About HubSpot
[HubSpot](https://www.hubspot.com/) is an excellent CRM solution for marketing, sales, and customer service teams. This tool is popular because it's free to use and covers the necessary functions. Its crucial feature is marketing automation with a strong focus on improving customer satisfaction.

Key Features and Benefits of HubSpot
As a CRM, HubSpot lets users operate on customer data according to business needs. Let's look at the exact features HubSpot offers:

Contact management. Individual leads and company profiles can be created, including contact details and activity status.

Conversation management.
HubSpot provides chat and ticketing channels as a form of interaction with customers. Additionally, integration with the corporate Facebook Messenger is easily configurable.

Ad campaign analysis. It's possible to connect corporate Facebook, Instagram, LinkedIn, and Google Ads accounts to HubSpot. The system automatically analyzes how users interact with different ads and suggests how to improve the effectiveness of ad campaigns by aligning them with user interests.

Email automation and analysis. This makes up a great part of the whole marketing automation. HubSpot has mechanisms to create emails from templates, send them in bulk to chosen contacts, and analyze recipient engagement rates.

Sales process guidance. There is everything needed to keep track of sales deals: meeting scheduling, documents, and payments.

Advanced reporting. It's possible to create dashboards to get a clear overview of practically every marketing aspect and sales performance. The connection of HubSpot with Snowflake described later in this article introduces even more powerful reporting and analysis capabilities.

The features mentioned above are included in the free version of HubSpot. There are also extra functions in the Professional plans of the Marketing, Service, and Sales Hubs. Thus, marketing campaign design, sales forecasts, and feedback surveys are available at additional cost.

About Snowflake
[Snowflake](https://www.snowflake.com/) is a cloud-based solution designed to bring structured and semi-structured data to a unified platform. It accommodates data from cloud applications, IoT devices, and OLTP databases in centralized data storage. This greatly facilitates operational reporting, empowers scalability, and makes data analytics transparent.

Key Features and Benefits of Snowflake
As Snowflake is completely cloud-based, it integrates well with the largest cloud service providers: Azure, AWS, and Google Cloud.
Snowflake provides the following features for its customers:

Security. Role-based access control and multi-factor authentication guarantee a secure approach to data storage and access.

Time travel. Snowflake provides access to deleted or modified data at any point within the retention period. This is especially convenient when you need to restore data.

Scaling. As businesses might require various storage capacities at different points in time, on-demand scaling is available at corresponding prices.

Data sharing. Snowflake offers secure sharing of data, even with those who don't have an account on the platform. You can find more about this in the article on the [best ETL tools for Snowflake](https://skyvia.com/blog/snowflake-etl/).

Analytics. The service integrates well with BI and analytics tools to ensure seamless querying of big data for deriving valuable insights.

Benefits of Integrating HubSpot with Snowflake
Each of the above-mentioned tools performs excellently in its own right, but combining them in a HubSpot Snowflake integration reinforces the power of each. Here's a list of the most tangible benefits of combining the strengths of both solutions:
- Importing data from HubSpot to a centralized data warehouse reveals a wide range of analytical options. Snowflake prepares data and integrates it with analytical tools, helping businesses make balanced decisions without spending much time on it.
- A data warehouse can host data from various sources, so it's possible to transfer data not only from HubSpot but also from Mailchimp, Dynamics CRM, Salesforce, and other tools used by a company.
- HubSpot Snowflake data sharing doesn't take much time and effort thanks to advanced solutions, such as [Skyvia](https://skyvia.com/), that don't require any complex API configuration to transfer data from one platform to another.
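Loading CRM objects into a warehouse usually means flattening nested API responses into tabular rows first, which is part of what a replication tool does under the hood. A small Python sketch of that step; the nested `{"id", "properties": {...}}` shape mimics a generic CRM payload and is not the exact HubSpot API response format:

```python
def flatten_records(records, fields):
    """Turn nested {"id", "properties": {...}} records into flat rows
    suitable for bulk-loading into a warehouse table.

    Missing properties become None, so every row has the same columns.
    """
    rows = []
    for rec in records:
        props = rec.get("properties", {})
        rows.append({"id": rec["id"], **{f: props.get(f) for f in fields}})
    return rows
```

A uniform column set per row is what lets a loader map the batch onto a fixed warehouse table schema.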
Prerequisites
Before starting the data transfer from HubSpot to Snowflake, make sure the following preconditions are met:
- A [HubSpot account](https://app.hubspot.com/signup-hubspot/crm?)
- A [Snowflake account](https://signup.snowflake.com/)
- A [Skyvia account](https://app.skyvia.com/register)

How to Connect HubSpot to Snowflake in Skyvia
Instead of spending hours configuring APIs in a CLI environment, use [Skyvia](https://skyvia.com/) for flawless [data integration](https://skyvia.com/data-integration/) between the tools. It's a cloud-based platform accessed directly via a browser, with no extra software installations. The service suits any kind of business due to its scaling capabilities and pay-as-you-go pricing model that supports companies as they evolve. Skyvia performs different data integration scenarios (ELT, ETL, and Reverse ETL) and supports more than 180 applications and widely used databases and data warehouses. The service can also transfer data in the opposite direction, from Snowflake to HubSpot, according to the [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) workflow.

Skyvia offers the Replication solution to simply create a copy of HubSpot data in Snowflake. It also allows transferring data from HubSpot to Snowflake according to the [ETL scenario](https://skyvia.com/data-integration/import) with conventional data transformations using the Import tool. As real data integration scenarios usually turn out more complex than predicted, there's a need for advanced [data pipeline design](https://skyvia.com/blog/data-pipeline-architecture/). Skyvia has Data Flow and Control Flow components for compound data pipelines involving complex logic and various data sources. For instance, it's possible to send data from HubSpot and other cloud apps through multistage transformations into a DWH in Snowflake.
In this article, we focus on the basic data integration scenario using Skyvia's Import tool to move data from HubSpot to Snowflake. Before integrating data, however, we need to establish connections to both platforms.

Create HubSpot Connector

1. Log in to your Skyvia account.
2. Press the +NEW tab in the upper panel.
3. Click Connection and select HubSpot from the list.
4. Click Sign In with HubSpot.
5. Enter your HubSpot credentials and click Log in.
6. Select an account and click Choose Account.
7. Click Connect app to approve the access request.

Create Snowflake Connector

1. Press the +NEW tab in the upper panel.
2. Click Connection and select Snowflake from the list.
3. Enter your Snowflake domain name in the Domain field.
4. Enter your Snowflake username in the User field.
5. Enter your Snowflake account password in the Password field.
6. Indicate the database name in the Database field.
7. Enter the warehouse to transfer data to in the Warehouse field.
8. Click Create Connection.

There are also optional parameters in the [Snowflake connector](https://docs.skyvia.com/connectors/databases/snowflake_connections.html) setup window, such as Schema and Role, that can be specified if necessary.

Loading HubSpot Data to Snowflake

After the connections are established in Skyvia, it's time to start loading HubSpot data into Snowflake. Let's look at the most common scenarios for data transfer between these tools: data replication with the Replication tool and data integration with Skyvia's Import tool.

SCENARIO 1: CREATE REPLICATION PACKAGE (ELT)

When you need an exact copy of selected HubSpot data in a DWH, use the Replication solution, an easy-to-use ELT tool. Skyvia can automatically create tables in the DWH and then keep them up to date with incremental updates.
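Conceptually, incremental updates follow a watermark pattern: only records created or modified since the last run are copied again. A minimal Python sketch of the idea (the record fields and timestamps are invented for illustration; this is not Skyvia's implementation):

```python
from datetime import datetime, timezone

def incremental_sync(source_rows, last_sync):
    """Return only the rows created or modified after the previous run."""
    return [r for r in source_rows if r["updated_at"] > last_sync]

# Hypothetical HubSpot-like records with modification timestamps.
rows = [
    {"id": 1, "email": "a@example.com", "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "email": "b@example.com", "updated_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},
]
last_sync = datetime(2024, 2, 1, tzinfo=timezone.utc)

changed = incremental_sync(rows, last_sync)
print([r["id"] for r in changed])  # only record 2 needs replicating again
```

After each successful run, the watermark would be advanced to the time of that run, so unchanged records are never transferred twice.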
Let's take a detailed look at how to set up [HubSpot to Snowflake data replication](https://skyvia.com/data-integration/replicate-hubspot-to-snowflake) in Skyvia.

1. Press the +NEW tab in the upper panel.
2. Select Replication. NOTE: Replication with the Snowflake connector works only in bulk load mode.
3. Indicate HubSpot as a source and Snowflake as a target.
4. Select the Incremental Updates checkbox so that the system adds only new or updated records in subsequent replication runs.
5. Select the Create Tables checkbox if new database tables must be created for the replicated data.
6. Select the HubSpot data fields for replication.
7. Click Schedule to set the timing for refreshing the replicated copy and keeping it up to date.
8. Click Create to save the replication package.
9. To execute the replication package immediately, click Run. Otherwise, it's possible to run it later.

SCENARIO 2: CREATE IMPORT PACKAGE (REVERSE ETL)

Let's look at a sample use case: moving Deals data from a Snowflake-based DWH into HubSpot. This is done with the Data Import solution using the reverse ETL scenario to improve operational effectiveness.

1. Press the +NEW tab in the upper panel.
2. Select Import.
3. Indicate all the necessary parameters for the import package: Snowflake as a source, HubSpot as a target, options, and batch size.
4. Click Add new under Tasks to define parameters for data integration and mapping.
5. Select the parameter (in this case, the one corresponding to deals) from the list of Source fields. Click Next Step.
6. Select the corresponding data object on the HubSpot target side (in this case, Deals).
7. Indicate the data integration operation type (INSERT, UPDATE, or DELETE). Click Next Step. NOTE: Skyvia doesn't support the UPSERT import operation for the Snowflake connector.
8. Map target columns to source columns; see more details on mapping [here](https://docs.skyvia.com/data-integration/common-package-features/mapping/index.html).
Click Save to save the instructions for the import task, then click Create in the tab bar to save the import package. Click Schedule to set the timing of the package. To execute the import package immediately, click Run. Otherwise, you can run it later; the import package can be found under Objects -> Integrations.

Data Analysis in Snowflake

As Snowflake allows companies to accumulate very large amounts of data and scale on demand, it creates perfect preconditions for data analysis. Snowflake itself doesn't provide analytical modules but integrates well with BI tools such as Tableau, Looker, and many others. They analyze and visualize data with sophisticated graphs and charts, so businesses can retrieve detailed information on their operational performance and growth. Reports generated by HubSpot itself aren't exhaustive enough, so other analytical tools come in handy. Given the opportunities provided by Snowflake and its compatibility with tools for comprehensive data analysis, loading HubSpot data there allows businesses to track key marketing performance metrics, perform target audience segmentation, and analyze the customer lifecycle.

Other Sync Scenarios

Operations with data in Snowflake aren't limited to analytical and statistical reports. There are other ways the data can be used effectively to help businesses in their daily workflows and strategic planning.

Data Activation

The data accumulated in Snowflake from various sources (contact details, companies, customer service tickets, etc.) can be sent back to those sources with [reverse ETL](https://docs.skyvia.com/data-integration/import/). For instance, all contact details gathered in Snowflake tables can be transferred to HubSpot to enrich a company's global CRM.
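At its core, the data-activation flow above is a mapping of warehouse columns onto CRM properties. A rough Python sketch of that step, with made-up column and property names (not an actual Skyvia or HubSpot API call):

```python
# Map Snowflake-style column names onto HubSpot-style contact properties.
COLUMN_TO_PROPERTY = {
    "EMAIL_ADDRESS": "email",
    "FIRST_NAME": "firstname",
    "LAST_NAME": "lastname",
}

def to_crm_payload(warehouse_row):
    """Rename warehouse columns to CRM property names, dropping unmapped ones."""
    return {prop: warehouse_row[col]
            for col, prop in COLUMN_TO_PROPERTY.items()
            if col in warehouse_row}

row = {"EMAIL_ADDRESS": "jane@example.com", "FIRST_NAME": "Jane",
       "LAST_NAME": "Doe", "INTERNAL_SCORE": 0.87}
payload = to_crm_payload(row)
print(payload)  # internal-only columns stay in the warehouse
```

A reverse ETL tool applies this kind of renaming and filtering for every row before pushing records back into the CRM.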
Storage

A data warehouse can also serve as a backup storage location for business data. Skyvia helps organize such operations with the data replication scenario described above.

AI Models

Machine learning and artificial intelligence algorithms require large datasets for training. Once those models are trained, they can be used for predictive analysis or error detection, which helps facilitate business operations.

Conclusion

The Snowflake data warehouse makes a solid foundation for data analysis, widely used across industries. It allows companies to be aware of their current performance and decide what to do about it: keep going or change strategic development plans. Data from HubSpot informs companies about their target audience's behavioral patterns. Customer purchasing habits, sales deal information, and other valuable insights come as a result of the HubSpot to Snowflake integration with the help of Skyvia. Just create an integration package for data transfer from HubSpot to Snowflake and set the data mapping and scheduling parameters in Skyvia. With [Skyvia solutions](https://skyvia.com/solutions/), businesses can bring data from dozens of other sources to Snowflake and benefit from it to drive go-to-market success.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration.
" }, { "url": "https://skyvia.com/blog/import-csv-file-to-sql-server/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Loader](https://skyvia.com/blog/category/data-loader/) 3 Easy Ways to Import CSV File to SQL Server By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) March 23, 2025 Are you looking for ways to import your CSV data into SQL Server? Whether you're integrating data from different platforms or updating your databases, importing CSV files is a common yet crucial task. In this guide, we explore the reasons why businesses integrate CSV and SQL.
We will walk you through three methods to insert CSV data into SQL Server:

- Using the BULK INSERT command
- SQL Server Management Studio's Import Wizard
- Automated cloud-based [ETL tools](https://skyvia.com/blog/etl-tools/) like Skyvia

By the end of this article, you'll be equipped to choose the method that best fits your needs, ensuring a successful data import process.

Table of contents

- Why Import CSV to SQL Server
- How to Import CSV File in SQL Server (3 Easy Ways)
- Method 1: Using BULK INSERT for CSV to SQL Import
- Method 2: Using SQL Server Management Studio Import CSV Tools
- Method 3: Using ETL Tools – Skyvia Cloud Solution
- Alternative Methods for Importing CSV Files to SQL Server
- Conclusion

Why Import CSV to SQL Server

If you've ever worked with data, you know how messy things can get when dealing with different formats and systems. Imagine you exchange data between two separate systems with different structures and architectures. How do you implement that if one uses a [NoSQL](https://cloud.google.com/discover/what-is-nosql?hl=en) database (as in many of [PayPal's Go-based services](https://go.dev/solutions/paypal)) and the other uses [SQL Server](https://www.microsoft.com/en-us/sql-server)? Just export a CSV file from the NoSQL database and import that CSV file into SQL Server. The comma-separated values (CSV) format has been the easiest and fastest solution to this problem [since 1972](https://en.wikipedia.org/wiki/Comma-separated_values). While CSV is a simple and universal format, SQL Server provides a structured, scalable, and efficient way to store, manage, and analyze data. It is rated 4.5 out of 5 in [Gartner Peer Insights](https://www.gartner.com/reviews/market/cloud-database-management-systems/vendor/microsoft/product/microsoft-sql-server) and 4.4 out of 5 on [G2 Crowd](https://www.g2.com/products/microsoft-sql-server/reviews).
Moreover, SQL Server stays in the [top 3](https://db-engines.com/en/ranking) among hundreds of competitors.

CSV to SQL Import Benefits

Let's consider why importing CSV files to SQL Server is so useful:

- Centralize Data – store scattered data from multiple sources in one secure location.
- Improve Data Integrity – SQL Server helps enforce rules, validate data, and prevent duplicates.
- Enable Advanced Analysis – run SQL queries, create reports, and integrate data into business intelligence tools.
- Automate Workflows – set up scheduled imports for regular updates and synchronization.
- Enhance Performance – SQL Server processes large datasets much faster than spreadsheets or flat files.

Everyday Use Cases for CSV to SQL Import

What's the point of these benefits without real-life applications? Let's look at CSV to SQL import in practice. Companies across various industries import CSV files to SQL Server for different reasons. Some typical scenarios include:

- Finance & Banking – importing transaction records, customer data, or payment logs from CSV files provided by banks or payment processors.
- E-commerce & Retail – migrating product catalogs, customer orders, or inventory lists from CSV exports of online stores.
- Healthcare – transferring patient records, appointment schedules, or medical test results into SQL databases for better management.
- HR & Payroll – loading employee details, attendance logs, or payroll reports from external HR software.
- Marketing & CRM – importing leads, campaign results, or customer feedback from marketing tools into SQL Server for analysis.
- Custom Software Development – using SQL Server as an application backend and importing CSV files to update databases dynamically.

Importing CSV to SQL Server is worth the effort. The next step is to decide how to do it. Below, we review the best methods to import CSV files to SQL Server.
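Part of CSV's staying power is that practically any language can read it out of the box, which is exactly what the methods below rely on. For example, with Python's standard csv module (the sample rows here are invented):

```python
import csv
import io

# A tiny in-memory CSV, standing in for an export from any system.
raw = "id,lastname,firstname\n1,Chase,Jon\n2,Mori,Naoko\n"

# DictReader uses the header row as keys, one dict per record.
reader = csv.DictReader(io.StringIO(raw))
records = list(reader)
print(records[0]["lastname"])  # Chase
```

The same few lines work whether the file came from a NoSQL export, a spreadsheet, or a payment processor, which is why CSV remains the lowest common denominator for data exchange.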
How to Import CSV File in SQL Server (3 Easy Ways)

There are several ways to upload a CSV file to an MSSQL table:

- BULK INSERT is an SQL command that imports files into a database table. It loads large datasets quickly and supports various file formats, including CSV to MSSQL. Use it if you are comfortable with coding.
- SQL Server Management Studio Import CSV Tools. This is Microsoft's native wizard for SQL Server upload from CSV. It is great for small to medium datasets and doesn't require coding. You have to install [SSMS](https://learn.microsoft.com/en-us/ssms/download-sql-server-management-studio-ssms) to use it.
- Third-party ETL Tools. Though there are various [data integration tools](https://skyvia.com/blog/data-integration-tools/) on the market, we demonstrate this method with [Skyvia Cloud Solution](https://skyvia.com/), a no-code platform for solving various data-related tasks. If you don't want to install any software or write complex SQL commands, you can use Skyvia's [Import](https://skyvia.com/data-integration/import) to load CSV data into SQL Server.

We compared all these methods in the table below.

| Method | Complexity | Performance | Best For | Requires SQL? |
| --- | --- | --- | --- | --- |
| BULK INSERT | High | High | Large datasets | Yes |
| SSMS Wizard | Medium | Moderate | Small-to-medium datasets | No |
| ETL Tools | Easy | High | Automated workflows | No |

Let's look at each method of CSV to SQL import in detail.

Prerequisites

Before loading a CSV file into a SQL Server database, you need a sample CSV and a target table in SQL Server:

- Download a copy of the actor.csv file from [here](https://drive.google.com/file/d/146Q-cHEE3RQ_4WV-z_FGnb9hMgsGWSE1/view). Remember where you saved it.
- Create a sample target table in your database. Use any database administration tool, like [SSMS](https://learn.microsoft.com/en-us/ssms/download-sql-server-management-studio-ssms) or [DbForge](https://www.devart.com/dbforge/sql/studio/).
You can use the command below or add the table manually.

```sql
CREATE TABLE Actors (
    id INT PRIMARY KEY IDENTITY(1,1),
    lastname VARCHAR(20) NOT NULL,
    firstname VARCHAR(20) NOT NULL,
    middlename VARCHAR(20) NULL,
    suffix VARCHAR(3) NULL
);
```

The target table in SQL Server has the same structure as the sample file. In this tutorial, we use our sample server and database names. Replace them with your values.

Method 1: Using BULK INSERT for CSV to SQL Import

[BULK INSERT](https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15) is an SQL command that imports files into a database table. You can run it from SQL Server Management Studio or any other SQL Server tool.

- Clear the records prior to the bulk load from CSV. The TRUNCATE TABLE command deletes all the records in the target table.
- Bulk insert data from the CSV file into the SQL table by running the BULK INSERT command. The location of the CSV should follow the rules of the [Universal Naming Convention (UNC)](https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-dtyp/62e862f4-2a51-452e-8eeb-dc4ff5ee33cc).
- Specify the file format. FORMAT='CSV' tells SQL Server what file format it's dealing with.
- Tell SQL Server what row to start from. FIRSTROW = 2 because the first row contains the column names. If there are no column names in the file, set FIRSTROW to 1.

For a bulk insert of CSV to SQL Server, run the command below.

```sql
-- truncate the table first
TRUNCATE TABLE dbo.Actors;
GO

-- import the file
BULK INSERT dbo.Actors
FROM 'C:\Documents\Skyvia\csv-to-mssql\actor.csv'
WITH
(
    FORMAT='CSV',
    FIRSTROW=2
)
GO
```

As a result, you imported data from CSV to the MSSQL table.

Pros:

- It is much faster than using a GUI interface.
- No need to parse the data; BULK INSERT does it for you.
- Scheduling of execution is possible via SQL Server Agent.

Cons:

- If the target column uses a data type too small for the data, an error will occur.
- You cannot take a CSV from cloud storage like Google Drive or OneDrive.
- Allows only SQL Server as the target database.
- Requires careful file preparation.
- Requires a technical person to code, run, and monitor.

Best For

One-time import of large datasets by users who are familiar with SQL syntax.

Method 2: Using SQL Server Management Studio Import CSV Tools

Another useful tool is the [SQL Server Management Studio Import wizard](https://learn.microsoft.com/en-us/sql/integration-services/import-export-data/import-and-export-data-with-the-sql-server-import-and-export-wizard?view=sql-server-ver16).

Step 1. Select the Target Database

In Object Explorer, expand the Databases folder and select the target database. Right-click the database and select Tasks -> Import Data.

Step 2. Define the Source

Click Next on the welcome screen and select Flat File Source. Then select the source file: click Browse and specify the file location.

Step 3. Configure File Options

SSMS can detect the correct data types for each column. But if the automatic detection of data types and sizes is off, define them manually. Pro tip: whenever you move data from one platform to another, it's best to match the correct types and sizes to avoid errors and headaches. Click Advanced on the left, then select a column and set its type and size.

Step 4. Define the Destination

To specify SQL Server as the target for import, do the following:

- Select Microsoft OLE DB Provider for SQL Server.
- Specify the SQL Server name and enter the necessary credentials.
- Select the Database name and click Next.
- Specify the database table.
- Click Edit Mappings to check that the columns from the source match the target and set the mapping options.

Note: If you attempted to import using BULK INSERT earlier, there is data present in the target table.
To avoid duplicates or errors, delete the rows in the target table or recreate the table.

Optionally, save to an [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) package or run immediately. An SSIS package allows you to schedule the import to run at regular intervals. In our case, we just run it immediately. Click Next to see a summary of your settings, then click Finish to run the import. That's it for importing CSV to SQL Server using Import Data in SSMS.

Pros:

- No coding is required.
- If column mappings match the source and target, it just works.
- Allows many data sources and destinations, not just SQL Server.
- Saving to the SSIS catalog and scheduling are possible, though limited to what was defined in the wizard.

Cons:

- If you don't have the specifications of the column types and sizes in the CSV file, column mapping is cumbersome.
- No way to get the CSV from Google Drive, OneDrive, or similar cloud storage.

Best for

This method is most helpful for one-time loads with no coding. It is great for importing files with few columns and a small data volume.

Method 3: Using ETL Tools – Skyvia Cloud Solution

This method uses a no-code cloud ETL tool to [import the CSV file to SQL Server](https://skyvia.com/data-integration/sql-server-csv-file-import-and-export). Unlike the two methods above, this approach enables automatic CSV to SQL import from [cloud](https://skyvia.com/connectors#storage) storage like Google Drive, FTP, Amazon S3, Dropbox, and others. In this section, we show how to use Skyvia to import a CSV file from Google Drive to SQL Server.

Prerequisites

- Account. You need an active Skyvia account. You can [register](https://app.skyvia.com/register?) in Skyvia right now and get a free 2-week trial for any available subscription plan.
- Connections. To import CSV from Google Drive to SQL Server, you need connections to both.

Follow the steps below to import CSV to SQL Server using Skyvia Import.

Step 1.
Create SQL Server Connection

1. Create a new Agent from the menu (optional). If your SQL Server is protected from external connections, you need an [Agent application](https://skyvia.com/agent) to allow Skyvia to connect to the remote SQL Server. Follow the download and installation instructions on the page, then run the installed Skyvia Agent application.
2. Create a new connection and select SQL Server.
3. Select Agent under Connection Mode and pick the agent you created earlier.
4. Enter the server name, credentials, and database name.

Step 2. Create Google Drive Connection

1. Create a new connection from the menu and select Google Drive.
2. Sign in with Google and save the connection.

Step 3. Create the Skyvia Integration to Import CSV File to SQL Server

Create a new import from the menu. Select your Google Drive connection as a source and your SQL Server connection as a target.

Step 4. Create Import Task

1. Click Add new in the integration editor to create a new task.
2. Select the actor.csv file in Google Drive. Skyvia automatically detects the columns in the file.
3. Make the Text Qualifier blank and set the Code Page to Western European Windows (1252). Finally, set the id column to DT_I4 (Integer).
4. Select the operation to perform. Skyvia allows you to insert, update, or delete data. Let's select the Insert operation.
5. On the mapping definition tab, map the source file columns to the target table columns. Mapping allows you to transform your source data to fit the target data structure. Skyvia maps columns with the same names automatically. You can change the automatic mapping and map other fields using the available mapping types: Column, Constant, Expression, and Lookup.
6. Save the task and the integration. You can add multiple tasks to import several CSV files at once.

Step 5. Run the Integration and Check Results

You can run the integration manually or schedule it for automatic runs. Track the integration results on the Monitor or Logs tabs.
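The four mapping types (Column, Constant, Expression, Lookup) can be pictured as small per-row transformation rules. A loose Python sketch of the idea, not Skyvia's actual engine (the field names and lookup table are invented):

```python
# Hypothetical lookup source, e.g. a reference table of country codes.
LOOKUP = {"US": "United States", "DE": "Germany"}

def map_row(src):
    """Apply one mapping rule per target column."""
    return {
        "lastname": src["lastname"],                          # Column: copy as-is
        "source": "csv-import",                               # Constant: fixed value
        "fullname": f'{src["firstname"]} {src["lastname"]}',  # Expression: computed
        "country": LOOKUP.get(src["country_code"], ""),       # Lookup: resolved
    }

row = {"firstname": "Ada", "lastname": "Byron", "country_code": "US"}
print(map_row(row))
```

In Skyvia this logic is configured visually on the mapping tab rather than written as code, but each target column ends up bound to exactly one such rule.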
Pros:

- Skyvia supports various data sources, including cloud apps, storage services, and databases.
- You can schedule unattended integration executions.
- Free plan and a 2-week trial.
- No need to install development tools.
- Rated [4.8 in G2's Best ETL Tools](https://www.g2.com/products/skyvia/reviews) and [4.8 in Gartner Peer Insights](https://www.gartner.com/reviews/market/data-and-analytics-others/vendor/devart/product/skyvia).

Cons:

- Sometimes, queueing can take longer than the actual run.
- Advanced transformations are available only on paid plans.

Best for

Long-term integrations where you need to perform regular imports from different files. It is also helpful for scenarios involving more than one target and complicated transformations.

Alternative Methods for Importing CSV Files to SQL Server

There are also several alternatives to the methods above that you can use to import data from CSV to an MS SQL table. We outline them below.

Using OPENROWSET for CSV to SQL Import

If you prefer working with SQL queries, the [OPENROWSET](https://learn.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver16) function is a flexible way to import CSV data without involving external tools. It allows you to treat a CSV file as a table and query it directly within SQL Server.

Best For:

- One-time imports where you don't want to set up complex configurations.
- Ad-hoc queries on CSV data before inserting it into a table.

Using PowerShell for CSV to SQL Import

PowerShell is a scripting tool that enables automation of CSV imports into SQL Server. Using [Import-CSV](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/import-csv?view=powershell-7.5) and [Write-SqlTableData](https://learn.microsoft.com/en-us/powershell/module/sqlserver/write-sqltabledata?view=sqlserver-ps), you can create scheduled jobs for automatic data updates.

Best For:

- Automating recurring imports.
- Handling CSV data transformations before inserting the data into SQL Server.
- IT administrators who prefer scripting solutions.

Automating Imports via SQL Server Integration Services (SSIS)

[SQL Server Integration Services (SSIS)](https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) is a robust [ETL](https://skyvia.com/learn/what-is-etl) (Extract, Transform, Load) tool that can automate CSV-to-SQL imports.

Best For:

- Enterprises managing frequent or high-volume CSV imports.
- Complex data processing needs, such as transformation and validation before inserting data.

Conclusion

Importing CSV files into SQL Server is a critical task for businesses looking to centralize data, improve accessibility, and enhance performance. Throughout this guide, we explored three primary methods for achieving this:

- BULK INSERT – ideal for developers and database administrators who prefer a fast, code-based solution for large datasets.
- SQL Server Management Studio (SSMS) Import Wizard – an excellent choice for users who prefer a GUI-based approach for small to medium-sized data imports.
- ETL Tools like Skyvia – the best option for automating recurring imports, integrating cloud data, and managing more complex workflows.

Each method has its strengths, and the right choice depends on your business needs, data volume, and technical expertise.

Explore More

Looking for more SQL Server tips and best practices? Check out these articles:

- [How to Import CSV to MSSQL table using SSMS](https://www.devart.com/dbforge/sql/data-pump/)
- [SQL Server Data Warehouse: Easy and Practical Guide](https://skyvia.com/blog/sql-server-data-warehouse-the-easy-and-practical-guide/)
- [Best ETL Tools for Microsoft SQL Server for 2025: A Deep Review](https://skyvia.com/blog/sql-server-etl-tools/)

FAQ for Import CSV File to SQL Server

What is the best way to import a large CSV file into SQL Server?

It depends on your business needs and use case.
Use BULK INSERT for a fast one-time import. Use the SSMS Import tool for a one-time import with no coding. Use Skyvia to schedule recurring imports or to implement more complex scenarios.

Can I automate CSV imports into SQL Server?

Yes. SQL Server Integration Services (SSIS) or cloud-based ETL tools like Skyvia allow you to schedule and automate imports.

Is there a way to import CSV files from Google Drive or OneDrive directly?

Yes. Cloud-based no-code ETL tools like Skyvia can import CSV files directly from cloud storage, eliminating the need for local file downloads.

Can I import multiple CSV files into SQL Server at once?

Yes. With Skyvia, you can automate the process and import multiple files into SQL Server in a single integration.

[Olena Romanchuk](https://skyvia.com/blog/author/olenar/) is a skilled writer with a blend of technical and FMCG industry expertise.
" }, { "url": "https://skyvia.com/blog/importing-data-into-salesforce/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Loader](https://skyvia.com/blog/category/data-loader/) How to import data into Salesforce By [Amanda Claymore](https://skyvia.com/blog/author/amandac/) March 28, 2023 Salesforce is a popular cloud-based customer relationship management (CRM) platform that allows businesses to manage their sales, marketing, and customer service processes. Importing data into Salesforce is an essential task for businesses that want to take full advantage of the platform's features. In this article, we'll explore the main methods of importing data into Salesforce and how to choose the right one for your needs.
Table of Contents

- Why Data Import to Salesforce is Important
- Import Data into Salesforce Using Data Import Wizard
- Import Data into Salesforce Using Dataloader.io
- Import Data into Salesforce Using Skyvia
- Common Challenges
- Conclusion

Why Data Import to Salesforce is Important

Importing data into Salesforce allows businesses to centralize their customer data, making it easier to manage and analyze. This data can include customer names, contact details, purchase history, and other relevant information. With accurate and up-to-date data, businesses can improve their customer relationships, make better-informed decisions, and drive growth.

Import Data into Salesforce Using Data Import Wizard

The [Data Import Wizard](https://help.salesforce.com/s/articleView?id=sf.data_import_wizard.htm&type=5) is a native Salesforce tool that imports data from spreadsheets or CSV files. It is ideal for small to medium-sized data imports, as it can handle up to 50,000 records at a time. The Data Import Wizard is easy to use, with a step-by-step process that guides users through the import.

To import data into Salesforce using the Data Import Wizard, do the following:

1. Log in to your Salesforce account and navigate to the Data Import Wizard by clicking the gear icon and selecting Setup.
2. Under Data Management, select Data Import Wizard and click Launch Wizard.
3. Select the type of data you want to import and click Next.
4. Choose the file you want to import by clicking Choose File and selecting the file from your computer.
5. Map the fields in your file to the fields in Salesforce by selecting the appropriate options from the drop-down menus. You can also create new fields if needed.
6. Set the import options, such as how to handle duplicates, and click Next.
7. Review the import summary and click Start Import.

The Salesforce Data Import Wizard enables you to quickly and easily import data from external sources into your Salesforce org.
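Because the wizard caps a single run at 50,000 records, larger datasets have to be split into batches before import. A simple Python sketch of that pre-processing step (the 50,000 figure is the wizard's documented limit; the row data is illustrative):

```python
def batch(records, size=50_000):
    """Split a list of records into wizard-sized chunks."""
    return [records[i:i + size] for i in range(0, len(records), size)]

rows = list(range(120_000))      # stand-in for parsed CSV rows
chunks = batch(rows)
print([len(c) for c in chunks])  # [50000, 50000, 20000]
```

Each chunk would then be saved as its own CSV file and imported through the wizard in a separate run.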
However, as with all the other methods, it has its pros and cons. Let's check those below.
Pros:
- Easy to use: The Data Import Wizard has a user-friendly interface that makes it easy for even non-technical users to import data into Salesforce.
- Multiple import options: The Data Import Wizard supports a variety of import options, including CSV, XLS, and XML files.
- Pre-built templates: The Data Import Wizard comes with pre-built templates that allow you to quickly import common data types, such as leads, contacts, and accounts.
- Data validation: The Data Import Wizard performs validation checks on the imported data to ensure that it meets Salesforce's data requirements.
Cons:
- Limited customization: The Data Import Wizard has limited customization options. You cannot, for example, map fields from your source file to custom fields in Salesforce.
- Limited error handling: The Data Import Wizard can only handle a limited number of errors. If you encounter too many errors during the import process, you may need to fix them manually.
- Limited data volume: The Data Import Wizard can only import up to 50,000 records at a time. If you need to import more data than that, you will need to use a different tool or method.
- Limited automation: The Data Import Wizard is not designed to automate the import process. If you need to import data on a regular basis, you will need to set up a custom solution using Salesforce APIs or a third-party integration tool.
Overall, the Salesforce Data Import Wizard is a useful tool for importing data into Salesforce, especially for users who need to import data infrequently or in small batches. However, if you need more advanced customization, error handling, or automation options, you may need to use a different tool or method.
Import Data into Salesforce Using Dataloader.io
[Dataloader.io](http://dataloader.io/) is a third-party data import tool that is ideal for larger data imports.
It can handle up to 5 million records at a time and supports various file formats, including CSV, Excel, and XML. Dataloader.io is a cloud-based tool, which means that users can access it from anywhere with an internet connection. It also offers advanced features such as scheduling and automation, making it a popular choice for businesses with complex data import needs.
To import data into Salesforce using Dataloader.io, do the following:
1. Open the Dataloader.io website and log in to your account.
2. Click Import on the left-hand menu.
3. Choose your source file by either selecting it from your computer or importing it from a cloud storage service like Dropbox or Google Drive.
4. Select the destination Salesforce object that you want to import data into.
5. Map the fields in your source file to the corresponding fields in Salesforce.
6. Define your import settings, such as the batch size and error handling options.
7. Click Import Now to start the data import process.
8. Monitor the progress of your import by checking the status updates provided by Dataloader.io.
Here are some of the pros and cons of using Dataloader.io for importing data into Salesforce.
Pros:
- Advanced customization: Dataloader.io offers advanced customization options that allow you to map fields from your source file to custom fields in Salesforce and perform complex data transformations.
- Bulk data import: Dataloader.io can import large volumes of data at once, allowing you to quickly and easily move data into Salesforce.
- Error handling: Dataloader.io can handle a large number of errors during the import process and allows you to easily fix errors and retry the import.
- Automation: Dataloader.io allows you to automate the data import process, schedule regular imports, and even integrate with other systems.
Cons:
- Complexity: Dataloader.io has a steeper learning curve than the Salesforce Data Import Wizard and may require some technical expertise to set up and configure.
- Cost: Dataloader.io is a paid tool, and the cost may be prohibitive for smaller organizations or individuals.
- Security: Dataloader.io requires access to your Salesforce org, which may raise security concerns for some organizations.
- Third-party dependency: Using Dataloader.io means relying on a third-party tool, which may introduce additional risk if the tool is not maintained or updated regularly.
Overall, Dataloader.io is a powerful tool that offers advanced customization, error handling, and automation options for importing data into Salesforce. However, its complexity and cost may be a barrier for some users, and organizations should carefully consider the security implications of letting a third-party tool access their Salesforce data.
Import Data into Salesforce Using Skyvia
Skyvia is another cloud-based data integration tool that allows businesses to import data into Salesforce quickly and easily. It offers a simple drag-and-drop interface and supports various file formats, including Excel, CSV, and XML. Skyvia also supports data synchronization, which means that businesses can keep their Salesforce data up to date automatically. With its user-friendly interface and advanced features, Skyvia is an excellent choice for businesses of all sizes.
To import data into Salesforce using Skyvia, do the following:
- Sign in to your [Skyvia account](https://skyvia.com/).
- Once you're signed in, click New > Connection to create a connection to your Salesforce account. Follow the prompts to enter your Salesforce login credentials and authorize Skyvia.
- Once you've established a connection to Salesforce, go to New > Import.
- Select Salesforce as your Target and your file, cloud app, or file storage as the Source.
- Map the fields in your source file to the corresponding fields in Salesforce.
- Define additional import settings.
- Run the import package.
- Once the import is complete, review any error logs to identify and correct any issues.
Here are some of Skyvia's pros and cons.
Pros:
- Skyvia offers a user-friendly interface that is easy to navigate and use.
- Skyvia supports a variety of data sources beyond Salesforce, including other CRMs, databases, and cloud services.
- Skyvia allows for automated and scheduled data imports, which can save time and effort for users who need to regularly update their Salesforce data.
- Skyvia offers robust data mapping and transformation capabilities, allowing users to customize the way their data is imported into Salesforce.
- Skyvia provides detailed logs and error reporting, which can help users identify and address any issues with their data imports.
Cons:
- Skyvia's free plan has some limitations on the amount of data that can be imported per month, which may not be sufficient for larger organizations with substantial data import needs.
- Some users may prefer more control over the data import process than Skyvia provides.
- Skyvia's pricing plans can be relatively expensive compared to other data import tools, especially for users with larger data volumes.
Common Challenges
Importing data into Salesforce can be challenging, especially for businesses that are new to the platform. Some common challenges include data formatting issues, data mapping errors, and duplicate records. To avoid these challenges, it's important to prepare your data carefully before importing it, ensure that your data mappings are accurate, and use data validation tools to identify and eliminate duplicates.
Conclusion
Importing data into Salesforce is an essential task for businesses that want to take full advantage of the platform's features. There are several methods available for importing data, each with its own strengths and weaknesses.
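The duplicate-record challenge noted above can often be caught before any import runs, whichever tool you choose. A minimal sketch of pre-import deduplication keyed on email (the column name and sample data are assumptions for illustration):

```python
def dedupe_records(rows: list[dict], key: str = "Email") -> list[dict]:
    """Keep the first record for each key value; later duplicates are dropped.
    Rows with an empty key are always kept, since they can't be compared."""
    seen: set[str] = set()
    unique = []
    for row in rows:
        k = (row.get(key) or "").strip().lower()  # normalize before comparing
        if k and k in seen:
            continue  # duplicate by email - skip it
        seen.add(k)
        unique.append(row)
    return unique

rows = [
    {"Name": "Ada", "Email": "ada@example.com"},
    {"Name": "Ada L.", "Email": "ADA@example.com"},  # duplicate by email
    {"Name": "Grace", "Email": "grace@example.com"},
]
print(len(dedupe_records(rows)))  # 2
```

Running a pass like this over the source file keeps "how to handle duplicates" from becoming a manual cleanup step after the import.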
While the Data Import Wizard is a popular free-of-charge method, it has its limitations in terms of usability. So if you want to improve your Salesforce data import experience, take a look at some third-party solutions." }, { "url": "https://skyvia.com/blog/jira-salesforce-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) How to Integrate Jira with Salesforce: Ultimate Guide for 2025 By [Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/) March 13, 2025
If your team relies on Jira for issue tracking and Salesforce as a CRM platform, integrating them is a game-changer for collaboration and efficiency. With a sync between Jira and Salesforce, teams can automate workflows and share data seamlessly:
- Close the gap between sales and other teams (engineering, product, marketing) for a 360-degree view of customer engagement.
- Plan and track progress on the big picture across multiple teams.
- Reduce manual updates through automatic, real-time synchronization.
- Address and resolve customer issues faster.
In this guide, we'll cover the most effective ways to integrate Jira with Salesforce, outlining the common benefits and showing how to implement a two-way sync that delivers real business value.
Table of contents: What is Jira · What is Salesforce · Integration Benefits for Jira and Salesforce · Jira and Salesforce Integration Options · Method 1. Native Integration · Method 2. Marketplace Connectors · Method 3. Integration Using APIs · Method 4. Third-party Integration Platforms · Conclusion
What is Jira
[Jira](https://skyvia.com/connectors/jira) is a well-known project management tool used by software development, DevOps, product management, and test management teams. It's widely used in agile projects in companies of all sizes. With Jira's powerful dashboards, it's easy to stay on track with business goals, follow the most critical tasks, and keep on top of bugs and issues.
Jira key features
- Real-time data updates
- Integrated bug-tracking system
- Integration with 3,000+ services and applications
- Support for the Scrum, Kanban, and Scrumban (a hybrid of Kanban and Scrum) methodologies
- Customization of the software suite to fit a specific team's needs.
Jira additional resources
- [YouTube playlist: Jira software demos](https://www.youtube.com/playlist?list=PLaD4FvsFdarR9RNlvUfee_iJ6WKRsRJn4)
- [Documentation](https://confluence.atlassian.com/jirasoftwarecloud/jira-software-documentation-764477791.html)
- [Jira Community](https://community.atlassian.com/t5/Jira/ct-p/jira)
- [Atlassian University: training and certification](https://www.atlassian.com/university)
What is Salesforce
[Salesforce](https://skyvia.com/connectors/salesforce) (SF) is a Customer Relationship Management (CRM) platform that helps companies maintain and manage customer relationships and interactions. It stores data about accounts, contacts, opportunities, leads, and sales, which managers can use to make better decisions. In addition to providing cloud-based tools, Salesforce offers data analytics and IoT solutions. Collecting customer data and turning it into valuable insights makes Salesforce one of the most beneficial services in any company's tech stack.
Salesforce key features
- Customer relationship management. Provides access to customer data, the details of past and present interactions, and any updates that could affect future communications.
- Dashboard functionality. Builds interactive dashboards to analyze how sales and customer support issues impact business and get comprehensive company performance data.
- Managing opportunities. Salesforce offers a detailed view of customers, their history, buying patterns, etc., providing powerful insights.
- Email integration. Synchronizes calendars and schedules with email applications such as Microsoft Outlook and Gmail.
Salesforce additional resources
- [Help and documentation](https://help.salesforce.com/s/)
- [Salesforce videos](https://www.salesforce.com/resources/videos/#!page=1)
- [Salesforce podcasts](https://www.salesforce.com/resources/podcasts/)
Integration Benefits for Jira and Salesforce
The complexity of modern businesses is constantly growing, and traditional approaches to working on projects fail to keep up. It's more important than ever to have tools that automate the sharing of critical business information between teams and reduce costly errors. What benefits does Jira to Salesforce integration bring to your business?
- Improved collaboration: Teams can communicate and collaborate bi-directionally, automatically, and in real time, without leaving the platform they're currently working in.
- Controlled data sharing: Each team can define what data from different systems is shared or received.
- Faster response to customer needs: Developers can quickly address new user requests, customer issues, and feedback.
- Deeper business insights: Merging Jira and SF data allows for advanced reporting, helping teams gain a more comprehensive understanding of business operations.
- Elimination of manual work: Automating data synchronization removes the need for manual data entry, reducing duplication and inconsistencies.
- Better customer service: Support teams can track the progress of Jira tasks directly in Salesforce, streamlining customer service and case resolution.
- Scalability and flexibility: As the business grows, automated integrations allow systems to scale without requiring additional manual effort.
Jira and Salesforce Integration Options
Without a seamless connection, managing projects across these two platforms can be difficult. Teams risk miscommunication, duplicated work, and inefficiencies. Fortunately, they have multiple integration options, from built-in solutions to custom APIs. Let's break down the top four approaches and their benefits.
Method 1: Native Integration
Jira and Salesforce offer built-in integration with limited customization. It works best for basic use cases but may lack flexibility for complex workflows.
Method 2: Third-Party Marketplace Connectors
Marketplace connectors are pre-built solutions available in Atlassian Marketplace or SF AppExchange. They provide an easy setup with minimal technical effort but may have limitations in customization and pricing.
Method 3: Integration Using APIs
APIs allow direct, custom integration between Jira and Salesforce. This approach is highly flexible, enabling tailored workflows, but it requires development expertise and ongoing maintenance.
Method 4: Third-Party Integration Platforms
Platforms like Skyvia or Zapier provide a no-code or low-code way to connect Jira and Salesforce. They offer more customization than native options and are easier to set up than APIs, making them a balanced solution for many businesses.
Comparative Table of Integration Methods

| Method | Ease of Setup | Customization | Scalability | Maintenance | Best For |
| --- | --- | --- | --- | --- | --- |
| Native Integration | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | Basic use cases with minimal setup |
| Marketplace Connectors | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | Pre-built solutions with moderate customization |
| Integration Using APIs | ⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐ | Full control and tailored workflows |
| Third-Party Integration Platforms | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | A balance of ease, customization, and scalability |

Method 1. Native Integration
Salesforce and Jira offer a [built-in integration](https://support.atlassian.com/jira-service-management-cloud/docs/integrate-with-salesforce-service-cloud/) between Salesforce Service Cloud and Jira Service Management.
Once a case is created in SF Service Cloud, Jira Service Management generates an alert, and any updates to the case status in Salesforce are automatically synchronized with Jira. This enables case routing for support and development collaboration. To configure the connection, add a Salesforce Service Cloud integration in Jira Service Management, then configure the integration in SF Service Cloud. For broader integration needs, such as connecting SF Sales Cloud to Jira Software, consider other methods like third-party connectors, APIs, or integration platforms.
To use this method, follow two easy steps:
1. Configure the SF Service Cloud – Jira Service Management integration.
2. Set up case routing to trigger Jira alerts and automatic status updates.
Best for
- Organizations using SF Service Cloud and Jira Service Management for case resolution.
- Basic case routing and status syncing between platforms.
Prerequisites
- You must be using SF Service Cloud and Jira Service Management (this doesn't work with Sales Cloud or Jira Software).
- Admin permissions in both Salesforce and Jira are required to configure the integration.
Pros
- Built-in functionality without additional costs.
- Automated case synchronization between platforms.
- No coding is required for setup.
Cons
- Limited to Service Cloud & Jira Service Management (not compatible with Sales Cloud or Jira Software).
- Basic functionality only; no advanced customization or field mapping.
Method 2. Marketplace Connectors
Third-party marketplace connectors are pre-built integration apps available in platforms like [Atlassian Marketplace](https://marketplace.atlassian.com/) and Salesforce [AppExchange](https://appexchange.salesforce.com/appxSearchKeywordResults?keywords=jira). These connectors are designed to link Jira and Salesforce seamlessly without complex coding or API configurations:
- Atlassian Marketplace solutions prioritize Jira-centric workflows for issue tracking and project management.
- SF AppExchange apps focus more on CRM data synchronization and enhancing customer relationship insights.
While third-party integration tools (covered in Method 4) provide a broader range of applications, marketplace connectors focus exclusively on Salesforce and Jira. They offer minimal configuration but have limited customization features. Some of the most highly rated connectors include:
- Appfire (formerly ServiceRocket) – a widely used solution for syncing Jira issues with Salesforce records.
- Peeklogic – offers bi-directional synchronization with extensive customization options.
- Goldfinger – provides automation features for streamlined workflows.
- zAgile – focuses on deep data visibility and security for enterprise use.
When selecting a Jira-Salesforce connector, consider these features:
- Bi-directional sync – ensure real-time data updates.
- Field mapping – match and transfer different data fields, including custom ones.
- Automation – schedule or build data flows to eliminate manual updates.
- Security & compliance – protect sensitive data with user permissions and encryption.
Best for
- A quick, pre-built integration between Salesforce and Jira.
- Automated data sync without custom API configurations.
- Real-time collaboration between support and development teams.
Prerequisites
- Admin access to the Jira Cloud instance and a technical or billing contact for it.
Appfire stands out as a popular choice in the marketplace. In the following section, we will guide you through its installation process, key benefits, and potential drawbacks.
Step-by-step guide
Step 1. Install the Appfire connector in Jira
1. Open the [Atlassian Marketplace](https://marketplace.atlassian.com/) and search for [Connector for Salesforce & Jira](https://marketplace.atlassian.com/apps/1214214/connector-for-salesforce-jira?hosting=cloud&tab=overview) by Appfire.
2. Click Try it free to start downloading the app.
Note.
For the installation to continue, you must log in to your Atlassian account. After that, the SF connector will appear at the top of the Apps menu.
Step 2. Install the Appfire connector in Salesforce
Once you've installed the connector for Jira, you need to install the Appfire connector package in Salesforce.
1. Open the [AppExchange market](https://appexchange.salesforce.com/appxSearchKeywordResults?keywords=jira) and search for the [Appfire Jira connector](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3000000E7xufEAB).
2. Click the Get It Now button and log in to your SF account.
3. Choose where to install the app (the Production Org or the Sandbox Org). Then, confirm the installation details, accept the terms and conditions, and click Confirm and Install.
4. Choose the security type of the installation and finish the process by clicking the Install button.
Note. If you choose "Install for Admins Only," consider these limitations:
- Only SF Admins can be used as the integration user in the steps for setting up a connection.
- Only SF Admins can view the content of the Lightning components, even when they are added to Salesforce Object layouts.
5. Proceed with the installation prompts. After installation, go to Installed Packages to connect with the server. You can find it under Platform Tools > Apps > Packaging.
6. In the Quick Find, type remote site settings to open the setup page and click the New Remote Site button. Fill in the following details:
- Remote Site Name: Jira
- Remote Site URL: https://sfjc.integration.appfire.app
7. Click Save to finish the installation of the Salesforce package.
Step 3. Set up the Jira to Salesforce connection
1. In Jira, select the Salesforce application from the Apps dropdown menu at the top.
2. Go to the Connections section, click the +Add Connection button, type the connection name, and add a new connection.
3. Allow access to your SF account.
4. Select which objects from Salesforce should be imported and finish authorizing the connection.
Step 4. Set up the Salesforce to Jira integration
1. Start from the Installed Packages page, find the Jira for Salesforce package, and click Configure.
2. Paste the API token for your Jira instance. You can find the API token in Jira on the page with Salesforce connections, under the three-dots menu.
3. Copy the API Access token and enter it in the Salesforce dialog window.
Now you've successfully connected Salesforce to the Jira instance. For more details, check the Appfire [Installation Guide](https://appfire.atlassian.net/wiki/spaces/CSFJIRA/pages/470754511/Installation+guide).
Pros
- Out-of-the-box solution – doesn't need coding or customization
- Detailed documentation and an excellent support team
- Triggers automatic updates and workflows between platforms
Cons
- Requires installation on both Salesforce and Jira
- Limited customization for complex workflows
- A paid license is needed
Method 3. Integration Using APIs
With the [Jira APIs](https://developer.atlassian.com/server/jira/platform/jira-rest-api-examples/), you can fetch and synchronize data with multiple applications. To use the APIs, you need to develop an SF [Apex Class](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_classes_defining.htm) and use triggers to monitor and make updates in Jira directly from Salesforce.
Best for
- Businesses with the technical resources to build custom integrations with advanced automation.
Prerequisites
- Technical skills and admin-level access to the Salesforce account.
- Knowledge of REST APIs.
Step-by-step guide
Step 1. Enabling Server Calls
To call an external server, you have to write code in an Apex class in Salesforce. Log in to your Salesforce account, then select the security control option. After that, add a new remote site. To create the remote site, enter your Jira URL and name.
Note.
You need to have admin-level access to your Salesforce account.
Step 2. Getting Jira Issue Keys
You need the Jira issue key to use the APIs. This key is used to create an outbound map and match it with text fields. Create a text field from the setup option, customize it, and select the fields option. Add a custom field and name it JIRA_key. Once you've created the text field, add an outbound map from your key to the text field.
Step 3. Setting Up the Salesforce Apex Class
Once you've created an outbound map, write the code for your Salesforce Apex class. To do this, go to the app setup option, click Develop, and select the Apex Classes option. Write the integration code specifying your parameters and allowing the Salesforce Apex class to connect with your Jira instance.
Note. To write the code, refer to the official Salesforce documentation on [Apex REST code](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_rest_code_sample_basic.htm).
Step 4. Setting Up Salesforce Triggers
Salesforce Apex triggers can be configured to automate Jira issue creation as well as to perform custom actions before or after changes to Salesforce records, such as insertions, updates, or deletions. Create and add a trigger to the Apex class you wrote earlier. Every time a new case is added in Salesforce, it will automatically be synced to Jira using the REST APIs.
Note. You can learn more about Salesforce triggers in the [Salesforce developer docs](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_triggers.htm).
Pros
- Flexibility – allows any data-related operation and integration to be managed according to various needs.
- Advanced automation – enables complex workflows and real-time data synchronization.
- Multi-system connectivity – can integrate SF and Jira with other platforms.
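At its core, the Apex callout described in Steps 1–4 is an authenticated POST to Jira's REST API issue endpoint. A minimal sketch of that same request outside Apex, here in Python; the site URL, credentials, and project key are placeholder assumptions, not values from the article:

```python
import base64
import json
import urllib.request

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder Jira Cloud URL
EMAIL, API_TOKEN = "user@example.com", "your-api-token"  # placeholder credentials

def build_issue_payload(project_key: str, summary: str, description: str) -> dict:
    """Shape a Salesforce case as a Jira issue (REST API v2 'fields' object)."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }

def create_issue_request(payload: dict) -> urllib.request.Request:
    """Prepare the POST /rest/api/2/issue request with basic auth."""
    auth = base64.b64encode(f"{EMAIL}:{API_TOKEN}".encode()).decode()
    return urllib.request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {auth}", "Content-Type": "application/json"},
        method="POST",
    )

# req = create_issue_request(build_issue_payload("SUP", "Case 00123: login fails", "Synced from Salesforce"))
# urllib.request.urlopen(req)  # would perform the actual call against a live instance
```

The Apex trigger from Step 4 would fire this same request body from within Salesforce whenever a new case record is inserted.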
Cons
- Difficult to set up and requires administrator permissions.
- Needs programming skills and a Developer edition in Salesforce.
- Has limitations on user rights, the number of daily data updates, etc.
- Ongoing maintenance – custom-built solutions require continuous monitoring and updates.
Method 4. Third-party Integration Platforms
Third-party integration platforms, such as Zapier, Workato, Tray.io, and Skyvia, can connect Jira and SF using prebuilt connectors and workflows. They balance simplicity and flexibility and eliminate the need for development. Among the various tools on the market, Skyvia stands out for its powerful features and ease of use.
The [Skyvia](https://skyvia.com/platform) data platform offers a no-code way to integrate Salesforce and Jira data, and it doesn't require specific skills. Three easy steps set up the cloud integration and establish connections with the two services in Skyvia:
1. Set up a connection with Jira.
2. Set up a connection with Salesforce.
3. Set up data import between these two platforms.
Best for
- Businesses needing a balanced approach with more scalability and customization, without extensive development effort.
Prerequisites
- To use Skyvia, you need to create a free account, which provides access to its core functionality.
Step-by-step guide
Step 1. Jira connection
Skyvia can connect to both [Jira Cloud](https://skyvia.com/connectors/jira) and [Jira Server](https://skyvia.com/connectors/jira-service-management). In this guide, we will show the cloud scenario.
1. [Sign in](https://app.skyvia.com/login) to Skyvia and click +Create New in the top menu. Then, click the Connection button in the menu on the left.
2. On the opened Select Connector page, select the Jira service.
3. The default name of a new connection is Untitled; click it to rename the connection.
4. Provide Skyvia access to your Atlassian account by accepting the privacy policies. Then, click Create Connection and Save Connection.
Note.
Skyvia doesn't support custom fields in the default Jira connector. However, if you need to work with custom fields in Jira, you may contact our support team at support@skyvia.com to request custom field additions. You can also check the [Skyvia documentation](https://docs.skyvia.com/connectors/cloud-sources/jira_connections.html) for more information.
Step 2. Salesforce connection
The same actions as for the Jira connection are required for Salesforce.
1. Click the Connection button, open the Select Connector page, and choose the Salesforce service.
2. Change the default name from Untitled by clicking on it.
3. Fill in the required information:
- Environment: choose the environment type of the Salesforce org to export data from.
- Authentication: choose the authentication method for connecting to Salesforce.
4. Depending on the chosen authentication type, either click the Sign in with Salesforce button or specify your Salesforce account e-mail, password, and security token.
5. Create and save the connection by clicking the corresponding buttons.
Note. For more information, check the [connecting to Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) Skyvia documentation.
Step 3. Jira and [Salesforce data import](https://skyvia.com/blog/importing-data-into-salesforce/)
1. To sync Jira and Salesforce, click +Create New in the top menu. In the Integration column, click Import.
2. Rename the import for quick identification later.
3. Select the Data Source database or cloud app source type.
4. Select your Salesforce connection as the Source and your Jira connection as the Target.
5. On the right, click Add task to create the Import task. You can use either the [Simple](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html#simple-task-editor-mode) or [Advanced](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html#advanced-task-editor-mode) mode in the Task Editor.
6. Define the Source and Target and configure the Mapping. For this, select the source table or object from the Source list, the object to import data to from the Target list, and the type of operation for the task: Insert, Update, Delete, or Upsert.
7. On the Mapping Definition page, configure the mapping of target columns to source columns, expressions, constants, lookups, etc. Columns with the same names in the source and target are mapped automatically.
Note. For more information, read the detailed instructions on [creating the Import task](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html) and configuring [the mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/).
8. After completing the mapping, save the created task and the Import.
Create another Import following the steps above to move data in the opposite direction, from Jira to Salesforce. You can run the import manually or [set a schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) to start it automatically. The common practice is to schedule runs at different times to avoid data overlapping.
Various business use cases need different integration scenarios, and Skyvia provides options for all of them. In addition to data import, [Skyvia Integration](https://skyvia.com/data-integration) includes:
- Synchronization for bi-directional data sync,
- More advanced Data Flow and Control Flow.
For creating complicated data flows with multiple sources, multistage transformations, and error handling, these tools are the best options.
Pros
- No coding required – easily set up integrations without technical expertise
- Flexible data mapping – Skyvia allows you to map data with the help of an output schema, functions, properties, and variables
- Automated workflows – schedule data updates and ensure seamless synchronization
- Detailed logging and email notifications.
Cons

- Feature limitations on the free plan: advanced automation features require a paid subscription.

You can find out more about Skyvia on the [Atlassian Marketplace](https://marketplace.atlassian.com/apps/1219290/skyvia-data-integration-for-jira?tab=overview&hosting=cloud).

Conclusion

As the Salesforce CRM platform and the Jira project management service are both powerful tools, data integration between them helps team members work effectively. Since these are top-rated and commonly used services, this article described four methods to integrate Salesforce and Jira. Native integration offers a simple solution for basic use cases. Third-party marketplace connectors provide easy setup for automated data sync but limited customization. API integration allows for customized solutions, though it requires technical expertise. For tasks that demand more flexibility without programming, third-party integration platforms such as Skyvia provide a well-rounded solution with pre-built workflows and connectors.

FAQ for Jira and Salesforce

Why should businesses integrate Jira with Salesforce?
Jira integration with Salesforce improves collaboration, automates workflows, and keeps data in sync. It helps teams track customer issues to improve development processes and decision-making.

What factors should be considered when choosing an integration tool?
- Automation needs
- Customization options
- Ease of setup
- Security
- Pricing
- Support for real-time two-way sync

What are common use cases for Jira Salesforce integration?
- Syncing support tickets with development tasks
- Tracking feature requests
- Automating status updates
- Generating cross-platform reports

Are there any security concerns when syncing Jira and Salesforce?
Yes. Consider data access controls, API security, compliance with industry regulations, and user authentication to prevent unauthorized access.
[Data Loader](https://skyvia.com/blog/category/data-loader/) How to Load Data Into Snowflake: 4 Fastest Methods Explained By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) February 26, 2025
Snowflake is one of the leading data warehousing solutions on the market today. Being a cloud-native solution that relies on a single elastic performance engine makes it fast and scalable. What's more, it can handle concurrent workloads and high volumes of data. To take full advantage of Snowflake, you need to be aware of all the capabilities it offers. In this article, we explore five principal methods to put data into Snowflake, explain which use cases each approach suits, and which pitfalls it conceals.

Table of Contents

- More Details about Snowflake
- Data Ingestion Techniques in Snowflake
- Method 1: Data Loading to Snowflake Using Web Interface
- Method 2: Data Ingestion Using the SnowSQL Client
- Method 3: Getting Data Into Snowflake Using Snowpipe
- Method 4: Bulk Loading with SQL Commands
- Method 5: Using Skyvia for Loading Data to Snowflake
  - Load Data from CSV to Snowflake with Skyvia
  - Load Data from SQL Server to Snowflake with Skyvia
  - Load Data from Amazon S3 to Snowflake with Skyvia
  - Load Data from Oracle to Snowflake with Skyvia
  - Load Data from Microsoft Azure Blob to Snowflake with Skyvia
- Conclusion

More Details about Snowflake

[Snowflake](https://www.snowflake.com/en/) is a cloud-based data warehousing tool using a three-layered architecture:

- Database storage layer.
- Computing layer.
- Cloud services layer.

This DWH has gained popularity due to its universality and support of the following:

- File formats: CSV, JSON, and Parquet.
- Data types: structured and semi-structured.
- Load methods: Snowpipe, SnowSQL client, the COPY command, etc.
At the moment, [10,000+ organizations use Snowflake](https://www.snowflake.com/en/company/overview/about-snowflake/) daily for their data-related tasks. The platform is popular not only as a DWH but also as a powerful analytics engine that allows users to extract value from data and use it for business development. To help you decide whether this platform suits your needs, let's review its main advantages and pitfalls.

Advantages

- The cloud-first approach boosts cost optimization, so you only pay for the storage and computing time used.
- Scalability assures resource allocation according to the current workloads and performance benchmarks.
- Security and compliance help to protect your information of any volume and type.
- Data caching improves query performance and facilitates hardware upgrades.
- The cloud-agnostic nature of Snowflake enables it to run in public clouds such as Google, Microsoft, and Amazon.

Pitfalls

- Loading times can sometimes take longer than expected.
- Spending becomes unpredictable if your business experiences operational spikes.
- Unstructured data support still needs enhancement.

Data Ingestion Techniques in Snowflake

It's possible to send data to Snowflake in several different ways. When deciding on the most suitable option, consider your specific use case and the dataset volumes to be operated on. Here, we explore several ingestion methods along with their benefits and drawbacks to help you select the most suitable one:

- Web interface
- SnowSQL client
- Snowpipe
- SQL queries
- Skyvia

Before diving deeper into each data ingestion technique, let's have a quick look at each of them.

| Method | Short description | Advantages | Drawbacks |
| --- | --- | --- | --- |
| Web interface | Extraction of information from your preferred sources via the Snowflake-native graphical UI. | Simple, user-friendly approach suitable even for non-techs. | Limits on the number and size of files to be uploaded. |
| SnowSQL client | Bulk data extraction (both structured and unstructured) from any delimited text file. | Powerful transformation and data manipulation options. | Real-time information exchange isn't guaranteed. |
| Snowpipe | Information retrieval from stages and loading into the DWH with Snowpipe's units called pipes. | Supports high-frequency and streaming data. | Prone to insufficient-permission and formatting errors. |
| SQL queries | Bulk data extraction using SQL commands. | Loads large data amounts in a short time. | Requires efficient use of the virtual warehouse. |
| Skyvia | Universal cloud data platform that can integrate Snowflake with 200+ other sources. | Various integration scenarios; no-code interface; budget-friendly; powerful data management options. | 24-hour data refresh on the free tier; steep learning curve for advanced features. |

Method 1: Data Loading to Snowflake Using Web Interface

Manual ingestion via the web interface is the simplest method for uploading data into Snowflake. It's done with the help of the Snowflake web UI (Classic Console or the newer Snowsight). With the manual [approach](https://skyvia.com/blog/export-data-from-salesforce-to-excel/), you can use the following:

- Files in CSV and TSV formats.
- Semi-structured data from JSON, Avro, ORC, Parquet, and XML file formats.

Classic Console accepts data from these locations:

- Local computer.
- An existing stage.
- An existing cloud storage location on Snowflake, Amazon S3, Google Cloud Storage, or Microsoft Azure.

Step 1. Select Tables

NOTE: First of all, click the drop-down menu under your login name and select Switch Role > ACCOUNTADMIN. Check whether you have the USAGE privilege for the database and table you are going to use. Verify that you also have the CREATE STAGE and CREATE FILE FORMAT privileges on the selected database schema.
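The privilege checks from the note above can also be expressed in SQL; here is a minimal sketch, assuming hypothetical database, schema, and role names (`mydb`, `mydb.public`, `loader_role`) that are not part of the article:

```sql
-- Hypothetical names, for illustration only.
USE ROLE ACCOUNTADMIN;

-- Inspect what the role can already do.
SHOW GRANTS TO ROLE loader_role;

-- Grant the privileges the Load Data wizard relies on.
GRANT USAGE ON DATABASE mydb TO ROLE loader_role;
GRANT USAGE ON SCHEMA mydb.public TO ROLE loader_role;
GRANT CREATE STAGE ON SCHEMA mydb.public TO ROLE loader_role;
GRANT CREATE FILE FORMAT ON SCHEMA mydb.public TO ROLE loader_role;
```

Running such grants once per role saves non-admin users from hitting privilege errors later in the wizard.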
In your Snowflake account, use the Classic Console to select the specific database from Databases. Click Tables to select the appropriate table. On the Table details page, click the Load Table button.

Step 2. Select Files

In the Load Data wizard that opens, select the warehouse from the drop-down list. It includes each warehouse on which you have the USAGE privilege. Then use one of the following options:

- Click Load files from your computer and browse directories on your local machine. Select the required files and click Open.
- Click Load files from external stage and choose a cloud storage location like Amazon S3, Microsoft Azure, or Google Cloud Platform.

Step 3. Finish Process

Set the error handling options. Click the Load button to let Snowflake ingest the data and put it into the selected table.

Advantages

This approach is simple yet efficient.

Pitfalls

At the same time, manual operation comes with several limitations:

- The maximum number of files to process is 250 at a time.
- The maximum size of a file mustn't exceed 250 MB.

Aspects of cost and efficiency

This approach doesn't impose any additional costs since it is part of the Snowflake web interface itself.

Skills Needed

Uploading data into Snowflake with the web interface is the most user-friendly [method](https://skyvia.com/blog/salesforce-to-salesforce-integration/). In general, you only need to be acquainted with the Snowflake UI and the overall structure of this DWH. Deep technical knowledge is not required.

Method 2: Data Ingestion Using the SnowSQL Client

It's possible to bulk load data with SQL queries using SnowSQL from any delimited text file, including comma-delimited CSV files. Additionally, this tool allows users to bulk upload semi-structured data from JSON, Avro, Parquet, or ORC files. Compared to the manual method, this one suits more advanced users.
First of all, [download and install SnowSQL](https://docs.snowflake.com/en/user-guide/snowsql) on your computer. It's supported on all major operating systems, including Windows, macOS, and Linux. To put data into Snowflake with this tool, proceed as follows:

Step 1. Prepare Data

Ensure that the file comes in a [compatible format](https://docs.snowflake.com/en/user-guide/data-load-prepare) (CSV, JSON, Parquet, etc.). [Prepare](https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare) the data before sending it into Snowflake, applying the necessary transformations or cleaning beforehand.

Step 2. Create a Stage

In Snowflake, a stage is a temporary location for file storage. It's a necessary intermediary step before data is transferred into a table. By default, there is an internal stage associated with each table for file hosting. It's also possible to create custom stages or named locations to obtain granular control.

Step 3. Get Data

Execute the [PUT command](https://docs.snowflake.com/sql-reference/sql/put) to upload data from local or cloud storage into the selected stage. Use the COPY command to transfer it into the table. Example:

```sql
CREATE OR REPLACE TABLE mytable (
  id VARCHAR,
  registration_date VARCHAR,
  first_name VARCHAR,
  last_name VARCHAR);

COPY INTO mytable (
  id,
  registration_date,
  first_name,
  last_name
) FROM (
  SELECT
    $1:ID,
    $1:CustomerDetails::OBJECT:RegistrationDate::VARCHAR,
    $1:CustomerDetails::OBJECT:FirstName::VARCHAR,
    $1:CustomerDetails::OBJECT:LastName::VARCHAR
  FROM @mystage
);
```

Advantages

One of the most significant advantages of the SnowSQL client is its COPY feature. It's particularly beneficial when you need to load data into a Snowflake table since:

- It supports many options for transformation and manipulation, including reordering and indenting columns.
- The [COPY INTO command](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location) enables data package download from files already available in cloud storage or data copying from a named stage.

Pitfalls

This method supports large file formats but cannot guarantee real-time information exchange. Note that Snowflake's [STREAM command](https://docs.snowflake.com/en/sql-reference/sql/create-stream) supports continuous loading if you need such an option. It allows you to receive real-time data from external systems.

Aspects of Cost and Efficiency

The SnowSQL client is free to download from the official website. Since it supports large information volumes, its performance can sometimes degrade because of that.

Skills Needed

To use this solution, you need a proficient understanding and practical command of SQL.

Method 3: Getting Data Into Snowflake Using Snowpipe

Another way to load data into Snowflake is to use the [Snowpipe](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro) tool. It can extract information from a stage and then put it into the DWH. Snowpipe has pipes as its basic units, and each of them is a first-class object containing Snowflake's [COPY](https://docs.snowflake.com/user-guide/data-load-local-file-system) instructions.

Snowpipe Configuration

Make sure you fully comprehend the entire architecture and processes, since a Snowpipe object doesn't run in an isolated environment: the storage integration, stage, and file format need to be defined in advance. To create and configure a Snowpipe object, follow the steps below:

1. Configure a separate database and schema for holding the source data.
2. Create a table for storing data.
3. Specify the needed file format.
4. Create a Snowpipe object. Example:

```sql
CREATE OR REPLACE PIPE mypipe
  AUTO_INGEST = TRUE AS
  COPY INTO snowpipe_landing_table
  FROM @my_s3_stage/snowpipe/
  FILE_FORMAT = (FORMAT_NAME = 'csv_file_format');
```

5. Configure cloud storage event notifications.
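The pipe example above assumes that its stage, file format, and landing table already exist. A minimal sketch of those prerequisite objects might look as follows; the bucket URL, storage integration name, and table columns are hypothetical, not taken from the article:

```sql
-- All names besides those reused from the pipe example are hypothetical.
CREATE OR REPLACE FILE FORMAT csv_file_format
  TYPE = csv
  SKIP_HEADER = 1;

CREATE OR REPLACE STAGE my_s3_stage
  URL = 's3://my-bucket/'                           -- hypothetical bucket
  STORAGE_INTEGRATION = my_s3_integration           -- created beforehand by an admin
  FILE_FORMAT = (FORMAT_NAME = 'csv_file_format');

CREATE OR REPLACE TABLE snowpipe_landing_table (
  id VARCHAR,
  payload VARCHAR                                   -- illustrative columns only
);
```

Defining these objects first lets the CREATE PIPE statement compile, since the pipe's COPY definition is validated against them.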
If you have created a Snowpipe object with the property AUTO_INGEST = TRUE, Snowflake automatically assigns a notification channel to it. This step-by-step procedure enables users to extract information from files in micro-batches, which keeps the DWH locations always up to date.

Advantages

Snowpipe is ideal if you work with high-frequency or streaming data. With continuous upload, latency is lower, as new data is transferred as soon as it becomes available.

Pitfalls

- Incomplete or incorrect formatting. As a workaround, use the VALIDATE command right after the COPY INTO command execution to retrieve the list of errors.
- Insufficient permissions or credentials. To avoid this, run the pipe creation statement as a user whose role has the OWNERSHIP privilege, and ensure that the role is granted the necessary access rights.
- Network connectivity and latency issues during transfer.

Aspects of Cost and Efficiency

The most significant factors that impact Snowpipe's cost are the amount of data and the file size. When sending data with the COPY INTO command, you pay for each second the virtual warehouse is active. Therefore, the loading process with your own virtual warehouse can be the most cost-effective option.

Skills Needed

You will need access to your cloud provider to create the event notifications. All other operations can be done with SQL commands, so solid experience with SQL is highly desired.

Method 4: Bulk Loading with SQL Commands

Before extracting data from files, first consider preparing and organizing them. The first thing to do is to arrange the files in the right order, corresponding to their upload sequence into Snowflake. Note that the correct organization of files determines how long Snowflake will take to scan the cloud storage.

Step 1. Staging Files

Define the File Format object.
It's recommended to define the File Format as a standalone object, which enables it to be reused for multiple data operations.

```sql
CREATE OR REPLACE FILE FORMAT my_csv
  TYPE = csv
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1
  NULL_IF = '';
```

Step 2. Configure Stage

The Stage object describes where the files for ingestion are located and how to access them. There are internal and external Stages, but the latter is highly recommended for bulk loading operations, since most of Snowflake's data already exists in cloud storage.

Step 3. Load Data

The COPY command is used for both loading and unloading information in Snowflake. It's important to indicate all the related parameters to run it correctly. Note that the COPY command also requires an active virtual warehouse to run and ingest the files. To use the virtual warehouse efficiently, organize your files in batches (recommended size 100 MB-250 MB). Also, consider using the smallest possible warehouse size that fits your SLA.

```sql
COPY INTO mytable
FROM @my_s3_stage;
```

Advantages

The main benefit of this method is that you can bulk load data in a short time.

Pitfalls

You need to use the virtual warehouse efficiently in order not to pay extra for unused resources.

Aspects of cost and efficiency

Note that serverless tasks are billed per second at roughly $0.00125/sec: (1 credit/hour × 1.5 × $3/credit) / 3600 seconds = $0.00125. The smallest warehouse charges one credit per hour, though it's billed for a minimum of 60 seconds.

Skills needed

For this method, strong SQL knowledge is required. You should also understand the overall architecture of Snowflake and how external cloud resources connect to it.

Method 5: Using Skyvia for Loading Data to Snowflake

[Skyvia](https://skyvia.com/) is a cloud-based, no-code ETL, ELT, and Reverse ETL platform friendly to non-technical users. Data loading and processing here are automated and highly secured.
Using the Skyvia Cloud Data Platform, you can upload data into the Snowflake data warehouse from [200+ ready-made data connectors](https://skyvia.com/connectors/), including SQL Server, Amazon S3, Oracle, and Azure Blob, and vice versa. Skyvia is a flexible solution: you can select the data-related scenarios that suit your business case best.

Extra Capabilities

Skyvia is also popular among IT engineers in tech-related spheres. Apart from the Data Integration product, the tool comprises other solutions:

- [Automation](https://skyvia.com/automation) for streamlining repetitive tasks and workflows.
- [Backup](https://skyvia.com/backup) for SaaS backups and restorations.
- [Query](https://skyvia.com/query) for data retrieval from various sources with SQL commands.
- [Connect](https://skyvia.com/connect) for creating OData and SQL endpoints to extract information from your preferred sources.

Aspects of cost and efficiency

Skyvia is a universal solution suitable for any kind of company in any industry. It offers five different [pricing plans](https://skyvia.com/pricing), from a free tier with a limited data volume to complex enterprise plans with all possible features included. Skyvia doesn't have any hidden costs, as each plan has a predefined amount of data to load and process, and you can always monitor the usage patterns in your account.

Skills Needed

Skyvia's significant benefit is that it doesn't require profound technical knowledge. You can load data into Snowflake without [coding](https://skyvia.com/blog/connect-salesforce-to-sql-server/) skills or programming experience. Compared to other methods that require SQL knowledge, Skyvia can be a real lifesaver for non-technical users and business leaders.
Load Data from CSV to Snowflake with Skyvia

How to load [data from CSV to Snowflake](https://skyvia.com/data-integration/snowflake-csv-file-import-and-export) with the Skyvia solution?

Step 1

First, sign in to your account and select the [data integration package](https://docs.skyvia.com/data-integration/import/configuring-import.html) using the Import or Data Flow tool.

Note: If you wish to retrieve data from file storage, [select](https://docs.skyvia.com/data-integration/data-flow/components.html) the From storage service option under the CSV source type and choose or create a file storage connection.

Step 2

Set up [mapping and transformation](https://docs.skyvia.com/common-platform-features/mapping-editor.html). Under the source section, select the source database or cloud app and choose the connection from the drop-down list.

Step 3

Select the CSV file for loading and the table you want to import. Here, you can also combine information from multiple sources and send it to various destinations simultaneously.

Step 4

Run the [data integration process](https://skyvia.com/data-integration/) to write information into multiple temporary CSV files and upload it into Amazon S3 or Azure. You can use the scheduling option here to save time and costs.

Step 5

Finally, the data is imported into your Snowflake DWH.

Load Data from SQL Server to Snowflake with Skyvia

With the Skyvia cloud solution, you can also transfer data from [SQL Server to Snowflake](https://skyvia.com/data-integration/integrate-sql-server-snowflake) in three simple steps.

Step 1

Add the [SQL Server](https://skyvia.com/connectors/sql-server) and [Snowflake](https://skyvia.com/connectors/snowflake) services to Skyvia using the no-code built-in connectors. For that, select + Create New -> Connection from the main menu, find SQL Server in the list, and provide the required parameters (domain, database, user name, and password). Then perform the same procedure for Snowflake if it's not connected yet.
Step 2

Create the data integration scenario using one of the available Skyvia tools. For that, click + Create New in the main menu and choose the preferred tool under the Data Integration column (Import, Synchronization, or Data Flow).

Step 3

The last step is mapping configuration and data transfer execution.

Load Data from Amazon S3 to Snowflake with Skyvia

With Skyvia, you can manage and automate your import and export of data between [Snowflake and Amazon S3](https://skyvia.com/data-integration/integrate-snowflake-amazon-s3) on a schedule by following the step-by-step instructions below.

Step 1

Configure the [Amazon S3](https://skyvia.com/connectors/amazon-s3) and [Snowflake](https://skyvia.com/connectors/snowflake) connectors in Skyvia if they haven't been added yet. You can set up these connections by specifying the required details. Then, go to + Create New in the main menu and select the preferred integration scenario.

Step 2

In the window that appears, select Amazon S3 and Snowflake as the source and target objects, respectively.

Step 3

After selecting the source and target objects, map the source and target fields. This step is necessary to ensure an accurate transfer from Amazon [S3 to Snowflake](https://skyvia.com/blog/snowflake-amazon-s3-integration/) with all the required formatting and transformation. The [solution](https://skyvia.com/data-integration/integrate-amazon-rds-snowflake) also allows scheduling data transfers so that data is loaded into Snowflake automatically and regularly.

Load Data from Oracle to Snowflake with Skyvia

How to easily upload your data from Oracle to Snowflake with the [Skyvia cloud solution](https://skyvia.com/data-integration/integrate-oracle-snowflake)? Just follow the steps in the tutorial below.

Step 1

Set up the [Oracle](https://skyvia.com/connectors/oracle) and [Snowflake](https://skyvia.com/connectors/snowflake) connectors in Skyvia first if this is not yet done.
When configuring Oracle, provide the database server hostname or IP address, the service name, and the port. For Snowflake, specify the requested credentials to establish the connection with the DWH.

Step 2

Select the integration scenario of your choice from the main menu. In the Data Integration column, opt for Import, Synchronization, Data Flow, or another tool of your choosing. Afterward, select Oracle and Snowflake under the appropriate Source and Target sections of the page.

Step 3

Configure the mapping and transformation settings in Skyvia to ensure proper data alignment with the target Snowflake schema. You can load Oracle data directly to/from major cloud and on-premise sources or synchronize it in both directions. Import packages also allow you to load information to a cloud app or database in one direction.

Step 4

Finally, schedule and execute the data transfer process.

Load Data from Microsoft Azure Blob to Snowflake with Skyvia

To easily load your data from Microsoft Azure Blob to Snowflake using the [Skyvia](https://skyvia.com/data-integration/integrate-sql-azure-snowflake) platform, follow the three simple [setup](https://skyvia.com/blog/salesforce-quickbooks-integration/) steps.

Step 1

Connect the [Microsoft Azure Blob](https://skyvia.com/connectors/sql-azure) and [Snowflake](https://skyvia.com/connectors/snowflake) sources in the Skyvia interface.

Step 2

Once the systems are connected successfully, select the source and destination for the data transfer and the integration scenario (Import, Synchronization, or Data Flow).

Step 3

The last step includes configuring the transformation options and determining the mapping rules to match the source and destination data structures efficiently.

Conclusion

To recap, there are multiple methods for transferring data from your preferred sources to Snowflake. However, the Skyvia platform stands out among these approaches.
It offers the simplest three-step solution that saves you time, resources, and effort. What's more, Skyvia suits both business users with no technical knowledge and developers with extensive coding experience. Skyvia also offers a set of advantages for data loading into Snowflake and other data warehouses:

- Automation
- Safety & security
- Accurate data mapping & transformation
- Scheduling capability
- Timely data refresh

To discover more about Skyvia and its capabilities, feel free to [schedule a demo](https://skyvia.com/schedule-demo).

FAQ for Load Data Into Snowflake

What file formats does Snowflake support for data loading?
Snowflake supports multiple file formats for loading data, including CSV, JSON, Avro, ORC, Parquet, and XML.

How can I handle errors encountered during data loading into Snowflake?
Since each data loading method has its own flow, the errors differ from case to case. A good option is to check the troubleshooting page for the selected method (for instance, [Data Loading with Snowpipe Troubleshooting](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-ts)).

What is the difference between bulk loading and continuous data loading in Snowflake?
Bulk loading is also known as batch upload into Snowflake: data is gathered into chunks and sent to a DWH or a database at certain intervals. Meanwhile, with continuous loading, data arrives in Snowflake as soon as it appears in the source application.

What are the best practices for loading large datasets into Snowflake?
The most effective method to upload large data amounts into Snowflake is the SnowSQL client, described in this article as Method 2. This tool is particularly dedicated to handling big data.
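The bulk-versus-continuous distinction from the FAQ can be sketched with two SQL fragments; the object names (`mytable`, `my_stage`, `my_csv`, `my_pipe`) are hypothetical:

```sql
-- Bulk (batch) loading: executed on demand or on a schedule,
-- requires an active virtual warehouse.
COPY INTO mytable
FROM @my_stage
FILE_FORMAT = (FORMAT_NAME = 'my_csv');

-- Continuous loading: a pipe ingests new files as they land in the stage.
CREATE OR REPLACE PIPE my_pipe AUTO_INGEST = TRUE AS
  COPY INTO mytable
  FROM @my_stage
  FILE_FORMAT = (FORMAT_NAME = 'my_csv');
```

The same COPY definition sits at the core of both; the difference is whether you trigger it yourself or let Snowpipe react to new files.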
[Data Integration](https://skyvia.com/blog/category/data-integration/) Magento QuickBooks Integration: Best Methods to Sync Your Data By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) March 12, 2025
Imagine that you need to manually insert order information and payment details from your e-commerce website into the accounting system. It's a tedious process that requires a lot of time and attention. Due to human factors, some data might be entered mistakenly, leading to a sequence of errors and inconsistencies. Since many businesses use the Magento platform as the foundation of their online store and QuickBooks as their base accounting system, this article discusses the integration between these two systems. In particular, we'll see how to optimize and automate the Magento to QuickBooks integration using native, third-party, and custom solutions.

Table of Contents

- Why Integrate Magento with QuickBooks?
- Key Benefits of Magento QuickBooks Integration
- Challenges Without Integration
- Who Needs This Integration?
- Best Methods to Connect Magento and QuickBooks
  - Method 1: Using QuickBooks Connector
  - Method 2: Using Store Manager for Magento
  - Method 3: Using Cloud Integration by Skyvia
    - Magento QuickBooks Import Scenario
    - QuickBooks CSV Data Import & Export Automation
    - Magento QuickBooks Bidirectional Synchronization Scenario
    - Data Replication from Both Data Sources to the DWH Scenario
  - Method 4: Custom API Integration (For Advanced Users)
- Choosing the Right Integration Method
- Conclusion

Why Integrate Magento with QuickBooks?

[Magento](https://about.magento.com/Magento-Commerce.html) is a global e-commerce platform that allows users to create and customize online stores. Its diversity of features helps businesses solve almost any e-commerce-related task or challenge. Having an online store requires additional management effort.
For instance, information about inventory, sales orders, and payments needs to be synchronized with the other internal systems an organization uses. As many companies prefer [QuickBooks](https://quickbooks.intuit.com/global) as the accounting tool for administering their operations, including those related to e-commerce stores, we will explore how to integrate Magento with QuickBooks.

## Key Benefits of Magento QuickBooks Integration

Let's review the possible Magento and QuickBooks integration scenarios and the potential benefits each may offer businesses.

| Scenario | Benefits |
| --- | --- |
| Creating a single source of truth | Since companies usually orchestrate data from multiple systems, a single data source established through integration provides businesses with trusted data and prevents loss or redundancy of information. |
| Synchronization of two data sources | Simultaneous data updates in both systems (Magento and QuickBooks) keep all organizational departments on the same page and minimize ambiguity, contradiction, and loss of information. |
| Forecasting and trend prediction using AI and ML | Accumulating the available raw data enables AI/ML algorithms to derive valuable insights. Businesses can quickly use the replicated data in a DWH with embedded analytics tools or external BI tools of choice to create reports and dashboards. |

## Challenges Without Integration

Many organizations still run Magento and QuickBooks as standalone systems. However, such companies face plenty of inconveniences and challenges by not bringing the two systems together. Here are some common ones:

- **Human error.** Entering data on sales and payments manually is an error-prone task, which may lead to inconsistencies in the company's financial statements.
- **Time wasting.** With non-automated data integration, employees need to manually copy all data from Magento to QuickBooks and vice versa.
Executing such tasks often requires a lot of time and additional human resources.
- **Taxation challenges.** Tracking sales taxes isn't easy in any case, but it becomes even more exhausting when e-commerce operations take place in various locations around the world. Accountants need to monitor sales and calculate taxes based on the specific country to make sure everything complies with the local regulations.

## Who Needs This Integration?

Obviously, the QuickBooks Magento integration is for those who use both services daily. Small businesses will take advantage of such a connection to streamline their daily workflows, while manufacturers and distributors will benefit from more effective inventory and stock management.

## Best Methods to Connect Magento and QuickBooks

Each organization is unique, even among companies in the same industry. Every entity decides on its own data stack, technological ecosystem, data volumes, update frequency, and other factors. Consequently, data integration needs differ from one business to another. The market offers a variety of options for Magento and QuickBooks integration. Some of them don't require any technical background, while others demand programming skills. The approaches also differ in pricing and applicable use cases. The comparison table below briefly introduces each integration method along with its advantages, pitfalls, and pricing.

| Method | Description | Pros | Cons | Pricing |
| --- | --- | --- | --- | --- |
| QuickBooks Connector (OneSaas) | An additional plugin is installed, and the integration is configured via the terminal. | Easy to install. No maintenance required. | Additional costs for extra features. Limited customization and integration options. | [Pricing](https://quickbooks.intuit.com/global/pricing/#) starts at $1 per month; three plans with a free trial. |
| Store Manager for Magento | A downloadable desktop tool with an add-on for QuickBooks integration. | Visual wizard. A set of several export and import scenarios. | Only Products can be imported from QuickBooks to Magento. Web development skills are required. | Flexible pricing depending on the available features, company size, infrastructure, security, and other parameters. |
| Skyvia | A universal cloud data platform that connects Magento and QuickBooks with no coding. | Visual wizard. Easy to use, no technical background needed. Various integration scenarios from basic to complex data flows. | Lack of video tutorials. More sophisticated features require additional training. | Volume-based and feature-based [pricing](https://skyvia.com/pricing/). The freemium model allows you to start using Skyvia for free. |
| Custom APIs | Proprietary APIs allow you to create completely new workflows, data fields, and objects. | High degree of customization. Full control over the integration workflow. | Strong programming and technical skills are required. Regular updates are needed. | Depends on the underlying infrastructure pricing and engineers' salary rates. |

## Method 1: Using QuickBooks Connector

### Overview & Features

This approach to integrating QuickBooks and Magento is based on Intuit's in-house connector, OneSaas, which can integrate the accounting system with other popular e-commerce platforms. Using the OneSaas connector requires an additional plugin installation and configuration via the terminal. However, the method is simple and intuitive enough that setting everything up takes minutes.

Note: The OneSaas connector supports Magento 2.0 and later.

Here are some of the core options QuickBooks Connector (OneSaas) offers:

- Manual and automated synchronization of Magento order information with QuickBooks.
- The possibility to select orders with various statuses (Pending, Processing, and Complete) for synchronization.
- Export of sales receipts from Magento orders to QuickBooks with a description of the payment method. The QuickBooks sales receipt number matches the Magento order/invoice number.
- Synchronization of customer details (name, email, billing address) and product information (name, quantity, rate, etc.).
- Sending tax details from Magento to QuickBooks.
- Downloadable products sent to QuickBooks are saved as non-inventory products.

### Step-by-step Setup Guide

1. Download the [QuickBooks Connector Magento 2.0 plugin](https://onesaas.intuit.com/cdn/spokes/magento@latest/latest/magentoconnect.zip).
2. Extract the contents of the .zip archive and copy all the files to the Magento installation directory.
3. Run the following commands to install the plugin:

   ```shell
   php bin/magento setup:upgrade
   php bin/magento setup:di:compile
   php bin/magento setup:static-content:deploy
   ```

4. Log in to Magento and select System -> QuickBooks Connector -> Integration to get the QuickBooks Connector API key.
5. Log in to your QuickBooks Online account and navigate to the Apps tab.
6. Find the [Magento Connector](https://quickbooks.intuit.com/app/apps/appdetails/quickbooks-online-with-magento/) via the search bar and install it. At the end of the setup, click Connect.
7. Once redirected to the Connections tab in QuickBooks Connector, click Connect to Magento.
8. In the window that appears, paste the Magento API key provided upon plugin installation. Then click Connect to Magento.
9. Configure the QuickBooks Magento integration by selecting when and how to synchronize data between the systems.

Check the [integration guide](https://support.onesaas.com/hc/en-us/articles/360019839972-Magento-and-QuickBooks-Online-Integration-Guide) for more details.
### Pros and Cons

Based on user reviews on TrustRadius, Crunchbase, and similar platforms, we have outlined the following benefits and limitations of QuickBooks Connector.

| Pros | Cons |
| --- | --- |
| Synchronization of Order, Product, and Inventory items. Native QuickBooks connector. | Only one-way sync (from Magento to QuickBooks). Considerably slow performance. Issues with historical data integration. |

## Method 2: Using Store Manager for Magento

### Overview & Features

First, let's look at what Store Manager for Magento is and how it can facilitate the integration of e-commerce stores with QuickBooks. The tool helps manage your catalog and sales on the Magento 2, Adobe Commerce, and Open Source platforms. Magento Store Manager provides multiple options for advanced sorting, bulk editing, filtering, task automation, ChatGPT integration for writing product descriptions, and more.

When it comes to connecting Magento and QuickBooks, the tool provides the following features that could fulfill the integration requirements:

- **Inventory management.** Update products using suppliers' files and sync data with accounting systems and other sales channels.
- **Customer and order management.** Export orders and customers to different file formats manually or on a schedule.

### How to Install and Configure the QuickBooks Add-on

Store Manager enables users to eliminate manual data entry and automate the integration-related processes. It helps you export Magento data, including customers, products, etc., to QuickBooks and import data from the accounting system back to the e-commerce store.

1. Download [Store Manager on a free trial](https://www.mag-manager.com/free-download/) and use the [Quick Start Guide](https://store-manager-for-magento-doc.emagicone.com/quick-start-guide) to install the tool.
2. Open the tool and navigate to the Addons tab.
3. Click the QuickBooks Integration icon to get started.
4. To start exporting and importing data between QuickBooks and Magento, use the step-by-step wizard provided with the add-on, choosing the appropriate data integration scenario:
   - Export Magento 2 Products to QuickBooks
   - Export Magento 2 Customers to QuickBooks
   - Export Magento 2 Orders to QuickBooks
   - Import Products from QuickBooks to Magento

NOTE: In demo mode, the add-on allows you to export five items for free just to try how it works.

#### Exporting Products

When exporting Magento 2 Products to QuickBooks, you can select the items before launching the add-on, or use filters and choose the products during the Preview Data Export step. Then connect to your QuickBooks accounting tool. The available export options depend on your QuickBooks software (Desktop or Online). For Desktop, the parent configurable product with its SKU and quantity is exported without the variations. For Online, the add-on exports parent configurable products with their SKUs and quantities as well as the variations with theirs.

With the add-on, you can create new items, modify existing ones, or do both when exporting products to QuickBooks. Keep in mind that you have to specify one of the product export options:

- Inventory part
- Non-inventory part
- Service
- Other charge

NOTE: Make sure you use a dot (.) as the decimal separator. Otherwise, you will encounter an error, and the process won't succeed.

#### Exporting Customers

When sending Customer data from Magento 2 to QuickBooks, the export wizard allows adding a new buyer, updating client data, or performing both operations at once.

#### Exporting Orders

When sending Orders from Magento 2 to QuickBooks, export all relevant order data. Remember that you may only create, not update, orders. If the customer doesn't exist, it won't be possible to export the order, so you will have to associate the order with the appropriate client first.
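The decimal-separator requirement from the note above can also be enforced programmatically before an export runs. A minimal sketch in Python (the function name and sample values are illustrative, not part of Store Manager):

```python
def normalize_price(value: str) -> str:
    """Rewrite a price string to use a dot as the decimal separator.

    A value like "19,99" would make the QuickBooks export fail, so a
    single comma with no dot is treated as a decimal separator and
    replaced. The result is then validated by parsing it as a number.
    """
    value = value.strip()
    if "," in value and "." not in value:
        value = value.replace(",", ".")
    float(value)  # raises ValueError if the value is still not numeric
    return value

print(normalize_price("19,99"))    # -> 19.99
print(normalize_price("1200.50"))  # -> 1200.50
```

Running such a check over the source data before launching the export wizard avoids the mid-export failure the note warns about.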
#### Importing Products

When extracting Product information from QuickBooks to Magento with the add-on, it's possible to:

- Specify the import method (Create or Modify), product type, and field(s) to update.
- Provide the default category name for the import when adding new products to Magento. If the specified category doesn't exist in Store Manager, it will be created automatically during the data import.

### Pros and Cons

After analyzing user reviews on Trustpilot and G2, we have outlined several advantages and limitations of the Store Manager for Magento to QuickBooks integration.

| Pros | Cons |
| --- | --- |
| Seamless integration of Orders, Customers, and Products from an e-commerce store with other data systems. Advanced filtering options to retrieve only the necessary data. | Only Product items can be imported from QuickBooks to Magento 2. Requires fundamental technical knowledge for installation and use. |

## Method 3: Using Cloud Integration by Skyvia

[Skyvia](https://skyvia.com/) is an easy-to-use universal data integration tool that could be an excellent option for connecting Magento and QuickBooks. It seamlessly aggregates data with [200+ pre-made connectors](https://skyvia.com/connectors) to popular cloud apps, databases, and flat files. The platform supports various data-related processes, such as:

- ETL, ELT, and reverse ETL
- Data migration
- One-way and bi-directional data sync
- Workflow automation
- Backup for cloud apps
- Data sharing via REST API, etc.

Skyvia offers a number of tools for different integration scenarios:

- [Import](https://skyvia.com/data-integration/import) loads data from one DB, cloud app, or CSV file to another DB or cloud app using the ETL or reverse ETL scenarios.
- [Export](https://skyvia.com/data-integration/export) extracts data from a cloud app or DB and saves it locally or in cloud storage as a CSV file.
- [Synchronization](https://skyvia.com/data-integration/synchronization) connects two sources, like DBs and cloud apps, and synchronizes data in both directions.
- [Replication](https://skyvia.com/data-integration/replication) creates an exact copy of cloud app data in a DB, using the ELT scenario and keeping it up to date automatically.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) enable users to create complex data pipelines with the appropriate sources and conditions.

In this article, we'll review the data import and replication scenarios for [Magento and QuickBooks Online](https://skyvia.com/data-integration/integrate-quickbooks-magento) integration, based on real business cases.

### Magento QuickBooks Import Scenario

Businesses often face data inconsistencies between different systems. For instance, when a new customer places an order on Magento, their personal information and purchase details should also be added to QuickBooks. Instead of adding the data manually, the Skyvia integration solution can be used.

[Skyvia Import](https://skyvia.com/data-integration/import) is a powerful ETL tool that integrates data between various systems (with different data schemas and structures). It helps improve data accuracy, eliminate manual data entry, and boost workflow efficiency. Using the real-world example below, we show how to import client details from Magento to QuickBooks with the Skyvia Import tool.

Note: Before starting the Import scenario, connect the data sources first. Check the documentation on how to connect [Magento](https://docs.skyvia.com/connectors/cloud-sources/magento_connections.html) and [QuickBooks Online](https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html) with Skyvia.

1. Click +Create New in the top menu.
2. In the Integration column, click Import.
3. Under Source Type, click Data Source and select Magento from the Connection dropdown list.
4. In the Target field, choose QuickBooks from the Connection dropdown list.
5. Go to the Task Editor and click Add new.
6. Select the source table, set filters, and choose the object for filtering (in this case, CustomerAddresses_BillingAddress). Then click Next step.
7. Select the target table from the dropdown in the same way and click Next step to proceed.
8. In the Mapping Definition tab, set mapping by categories and search type, and filter columns as mapped, unmapped, not required, or valid. Columns with the same names are mapped automatically; the others must be mapped manually. Click the red text "This column must be mapped" to handle each remaining column.
9. Click Save once you have finished the mapping settings.
10. Start the integration manually by clicking Run in the upper right corner of the screen, or automate it with the Schedule feature.

### QuickBooks CSV Data Import & Export Automation

With Skyvia, you can also quickly load data into QuickBooks from an external source or CSV files. For instance, when your vendors upload invoices to file storage, you need to add this data to your accounting system; an integration scenario with Skyvia Import can automate the process. Conversely, when you need to send bills to your vendors or customers, the Export tool can automatically extract and transfer your accounting data to the desired location.

### Magento QuickBooks Bidirectional Synchronization Scenario

On some occasions, businesses need to keep several systems aligned in terms of the data they contain. For instance, transaction data may need to be the same in both Magento and QuickBooks. To bring this scenario to life, bidirectional data synchronization is a suitable choice.
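To make the alignment problem concrete, here is a toy Python sketch of the basic decision any bidirectional sync has to make: which records exist only on one side and must be copied the other way. The record keys and order shapes are invented for illustration; a tool like Skyvia handles this internally.

```python
def two_way_diff(magento: dict, quickbooks: dict):
    """Compare two record sets keyed by a shared order ID and return
    (ids missing in QuickBooks, ids missing in Magento)."""
    to_quickbooks = sorted(set(magento) - set(quickbooks))
    to_magento = sorted(set(quickbooks) - set(magento))
    return to_quickbooks, to_magento

# Hypothetical order totals keyed by order number.
magento = {"ORD-1": 49.98, "ORD-2": 15.00}
quickbooks = {"ORD-1": 49.98, "ORD-3": 7.50}

print(two_way_diff(magento, quickbooks))  # -> (['ORD-2'], ['ORD-3'])
```

A real sync also has to handle records that exist on both sides but differ, which is where conflict-resolution rules and the duplicate-prevention behavior described below come in.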
Skyvia offers several options for implementing data sync; the most appropriate one depends on the particular business scenario and the company's needs.

**Option 1. Import Tool**

You may use the Import component described above to synchronize data in two directions. In that case, create two separate packages, one for each integration flow (1: from QuickBooks to Magento; 2: from Magento to QuickBooks).

**Option 2. Synchronization Tool**

The bi-directional [synchronization scenario](https://skyvia.com/data-integration/synchronization) is a good choice if one of the apps is still empty. Skyvia copies all data from Magento to QuickBooks (or vice versa) on the first run, preventing record duplication in the apps.

To set up and execute data sync with the Synchronization tool, do the following:

1. Click +Create New in the top menu.
2. In the Integration column, click Synchronization.
3. Under Source, select Magento.
4. Under Target, select QuickBooks.
5. Go to the Task Editor and click Add new.
6. Select the source table, set filters, and choose the object for filtering (in this case, CustomerAddresses_BillingAddress). Then click Next step.
7. Select the target table from the dropdown in the same way and click Next step to proceed.
8. In the Mapping Definition tab, set mapping by categories and search type, and filter columns as mapped, unmapped, not required, or valid.
9. Click Save once you have finished the mapping settings.
10. Start the integration manually by clicking Run in the upper right corner of the screen, or automate it with the Schedule feature.

### Data Replication from Both Data Sources to the DWH Scenario

Businesses need comprehensive and reliable analytics to support stable growth and informed decision-making.
Since analyzing data from each system separately doesn't usually yield the desired results, data from all corporate systems should be gathered in one place, typically a data warehouse or database. To aggregate information from QuickBooks, Magento, and other systems, [Skyvia's Replication](https://skyvia.com/data-integration/replication) can be used. It creates a copy of cloud app data in the selected cloud DWH or DB server and keeps it up to date. Compared to the Import scenario described above, its configuration is more straightforward because there's no need for custom mapping between differently structured source and target data. Replication can also automatically create the tables for the transferred data in the target database.

Let's observe how it works with another real-life example, where QuickBooks customer data and Magento customer and product data need to be replicated to a DWH.

1. Click +Create New in the upper screen menu.
2. Select the Replication component.
3. Under Source Type, click Data Source and select QuickBooks or Magento from the Connection dropdown list.
4. In the Target field, choose the DWH you need from the Connection dropdown list.
5. To replicate only new or updated records, check the Incremental Updates checkbox.
6. To create new DB tables for the replicated data, select the Create Tables checkbox.
7. Go to the Task Editor to select objects for replication.
8. Save the changes.
9. Start the integration manually by clicking Run in the upper right corner of the screen, or automate it with the Schedule feature in the top left menu. Click Save when you have finished the scheduling settings.

Feel free to review the video with detailed step-by-step instructions on how to transfer data to a DWH.

### Pros and Cons

As with the other methods, here is a list of the core advantages and notable limitations of the Skyvia platform.
| Pros | Cons |
| --- | --- |
| A wide variety of integration scenarios. No-code approach to data migration. Support for 200+ connectors, including cloud and on-premises systems. No local installations: only a web browser and a Skyvia account are required to start. | Complex data transformations and mapping might require additional training. Near-real-time data sync is not available under the free tier. |

## Method 4: Custom API Integration (For Advanced Users)

If a high degree of customization is preferable when connecting Magento and QuickBooks, APIs are recommended. Both platforms have their own APIs that enable data exchange between the source and target systems and serve as a base for creating custom application interfaces.

### When to Choose a Custom API Solution

As you have probably noticed, most of the methods reviewed above offer the same set of fields for integration. Creating a custom API might come in handy if you need to synchronize non-standard objects and fields. It is also helpful if you'd like to set up custom monitoring dashboards and specific data sync intervals.

### How Custom APIs Work

Custom API development enables crafting unique solutions tailored to specific business requirements. Proprietary APIs allow you to create entirely new workflows, data fields, and objects, and they provide the flexibility to change and adjust your integration processes whenever required. When developing new APIs, consider using the [QuickBooks-related APIs](https://developer.intuit.com/app/developer/qbo/docs/develop) and [Magento 2 REST APIs](https://doc.magentochina.org/swagger/) as a base. These APIs offer a set of predefined functions that developers can use to execute requests and receive responses over HTTP.
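To give a flavor of the custom-API approach, the sketch below reshapes a Magento-style order into a minimal QuickBooks SalesReceipt body. It is a simplified illustration: the field names loosely follow the two public APIs, but authentication, the HTTP calls themselves, and error handling are omitted, and the sample order is invented.

```python
def to_sales_receipt(order: dict) -> dict:
    """Map a (simplified) Magento order onto a minimal SalesReceipt
    payload that a custom integration could send to QuickBooks."""
    return {
        "DocNumber": order["increment_id"],
        "TotalAmt": order["grand_total"],
        "Line": [
            {
                "DetailType": "SalesItemLineDetail",
                "Amount": item["row_total"],
                "Description": item["name"],
            }
            for item in order["items"]
        ],
    }

# Invented example order, shaped like a trimmed-down Magento response.
order = {
    "increment_id": "000000123",
    "grand_total": 49.98,
    "items": [{"name": "T-Shirt", "row_total": 49.98}],
}

receipt = to_sales_receipt(order)
print(receipt["DocNumber"], receipt["TotalAmt"])  # -> 000000123 49.98
```

In a real integration, a transformation like this would sit between reading an order from the Magento REST API and writing the receipt to the QuickBooks Online API, with OAuth handling on both sides.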
Note that developing a custom API makes sense when you have a proprietary application with which you want to integrate data from your accounting application and e-commerce store.

### Pros and Cons

Like any other method for integrating Magento and QuickBooks, custom API development has its advantages and limitations.

| Pros | Cons |
| --- | --- |
| High degree of customizability to meet specific requirements for synchronization times and data structures. Full control over the integration workflow. | Solid technical expertise and a strong understanding of programming languages are required. Regular updates are needed. |

## Choosing the Right Integration Method

With such a variety of methods for Magento QuickBooks integration, making a choice can be perplexing. A little spoiler: you can use several methods if you want to; everything depends on your available resources and current business needs. To help you choose the proper integration method for your particular case, here are the core factors to consider:

- **Ease of setup.** Analyze your team members' technical expertise and decide whether they are comfortable with programming, complex infrastructure configurations, and other advanced operations. If so, you can consider creating a custom API for Magento to QuickBooks integration. For your information, Skyvia also offers tools targeted at tech specialists to ensure a customized integration setup.
- **Data to sync.** Decide which information needs to be exchanged between the two systems and think about the preferable direction of data transfer. Based on these factors, evaluate the available connection methods and the options they provide. For instance, if you need to send Customer data from QuickBooks to Magento, the Store Manager for Magento add-on won't be suitable, since it can only transfer Product data from the accounting system to the e-commerce store.
- **Frequency of sync.**
Decide whether you need near-real-time data transfer in one or both directions, or whether a daily sync would be fine for you.
- **Pricing.** If several solutions meet the selection criteria above, decide which best fits your budget. Note that all the methods described in this article have flexible pricing depending on the features included, data volume, and other factors.

### Comparison of Methods

To further facilitate your decision-making, we have prepared a comparison table evaluating each method's pricing, ease of setup, real-time sync capabilities, and business suitability.

| Integration Method | Pricing | Ease of Setup | Real-Time Sync | Best for |
| --- | --- | --- | --- | --- |
| QuickBooks Connector (OneSaas) | Starts at $1/month | Easy | ✅ | Small businesses |
| Store Manager for Magento | Flexible pricing that depends on multiple parameters (features included, number of users, etc.) | Moderate | ✅ | Medium-sized businesses |
| Skyvia | Free tier; paid plans start from $79/month | Easy | ✅ | All businesses |
| Custom APIs | Custom | Advanced | ✅ | Large enterprises |

## Conclusion

In this article, we have reviewed four methods to integrate data between Magento and QuickBooks in detail:

- QuickBooks Connector (OneSaas)
- Store Manager for Magento
- Skyvia
- Custom APIs

We have also provided a detailed comparison of these approaches and advice on selecting the most appropriate method for your business. Among all these integration methods, we'd like to highlight Skyvia and its universal approach to working with data. This solution could suit any kind of business, since it handles various data volumes and offers both simple and complex data flows. What's more, Skyvia can be used for free, so new users can explore how the system works.

## F.A.Q.

**What is the best way to sync Magento with QuickBooks?**

This question has no exact answer, since each company chooses the best integration method based on its current integration requirements and business processes.
To help you select the best method, this article includes a comparison table covering the four methods.

**Can I integrate QuickBooks Desktop with Magento 2?**

Sure, you can integrate these two systems using Skyvia. The tool has pre-built connectors to both QuickBooks Desktop and Magento 2 and allows setting up various integration scenarios between them through a GUI, without coding.

**Does QuickBooks Online support real-time sync with Magento?**

Yes, you can synchronize data between QuickBooks Online and Magento in real time. All the methods reviewed in this article support real-time or near-real-time integration between these two systems.

**What happens if there's a sync failure?**

In this case, the data is not aligned between the systems. Check the operation logs to see which type of error occurred. Once it's fixed, restart the synchronization process.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Magento Integration with Salesforce: A Guide to Easy Connection

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), September 27, 2023

A famous saying states that four eyes see better than two, meaning that two people can find a better solution together. The same applies to Salesforce Magento integration: each system is robust on its own, and bringing them together may open new horizons. Companies using Salesforce report that they make decisions [38% faster](https://www.solunus.com/post/interesting-salesforce-statistics-and-facts) than before.
Even as one of the most popular CRM systems in the world, Salesforce on its own can't satisfy all of a company's needs. Combined with a content management system (CMS) like Magento, it takes customer relationship management and satisfaction rates to new heights. And this is where [Magento to Salesforce integration](https://skyvia.com/data-integration/integrate-salesforce-magento) enters the game. In this article, we focus on how to bring Magento and Salesforce together, reviewing the advantages and pitfalls of each. We also explain how businesses can benefit from Magento 2 Salesforce integration in the long run.

Table of Contents

- Salesforce CRM: An Overview
- Magento CMS: An Overview
- The Advantages of Salesforce & Magento Integration
- Magento Integration with Salesforce by Skyvia
- Integrate Magento 2 to Salesforce using the Free Native Connector
- Paid Extension for Magento Salesforce Integration
- How to Choose the Right Integration Method for Business
- Summary

## Salesforce CRM: An Overview

[Salesforce](https://www.salesforce.com/) is a global customer relationship management (CRM) system used by thousands of companies. The platform significantly optimizes the corporate sales process. It has all the necessary functions for improving lead conversion, tracking sales, managing e-commerce websites, providing sales analytics, and so on. In general, Salesforce is among the best CRM solutions for large enterprises and rapidly growing companies: it offers many functions and can be customized to address specific business objectives.

## Magento CMS: An Overview

Having spent over 15 years on the international market, [Magento](https://business.adobe.com/products/magento/magento-commerce.html) has gained enormous popularity as a CMS thanks to its customizability for e-commerce. Like WordPress, it's an open-source system with a stable base of contributors.
However, things changed with the arrival of Magento 2: it offers a free solution called Magento Open Source and a paid e-commerce platform named Magento Commerce. Nowadays, Magento is highly popular among large businesses that build online stores, and Magento integration with Salesforce reveals new opportunities for the businesses designing and sustaining those e-commerce stores.

The Benefits of Magento

The extreme popularity of Magento CMS owes much to its immense customizability. However, that's not the only thing behind the system's success in the market. Here are the most significant benefits an e-commerce business can expect from this platform:

1. Keeping up with the times. Both the Open Source and Commerce variants of Magento roll out updates regularly. They contain bug fixes, new features, and improvements for compatibility with new web standards.
2. Compliance with SEO. The platform gives users complete control over their metadata, uses an advanced XML sitemap, adapts to multiple website types, etc. All of this matters a lot for SEO and, thus, for e-store visibility on Google.
3. Alignment with responsive design trends. Multiple studies have revealed that the majority of web users browse on mobile devices far more often than on PCs, so website developers aim to optimize their e-commerce stores for mobile. Magento offers everything necessary to implement responsive design with HTML5 and CSS3 and make a website look great on any device type, even the newest models.
4. Availability of add-ons. The Magento marketplace contains thousands of extensions for refining an e-store. Such add-ons help improve security, increase conversions, implement payment systems, add social media tags, and much more.

Differences between Magento and Magento 2

Despite sharing practically the same name, Magento and Magento 2 differ significantly.
Moreover, Magento 1 has reached its EOL (end of life), which means it is no longer supported. That's why it's highly recommended to upgrade to Magento 2 before [integrating it with Salesforce](https://skyvia.com/data-integration/integrate-salesforce-magento). Now, let's look at the significant differences between the two versions of the CMS.

- Architecture. Magento 2 lets users customize their e-commerce websites more than its predecessor, thanks to a major upgrade of the system architecture: Redis for database caching, Symfony for refined control over content, Varnish for better website speed, and Composer for eliminating extension conflicts.
- Security. Regular updates address the most recent web security standards for both Magento 2 Open Source and Commerce projects. Meanwhile, the initial version of the CMS is no longer supported, so it receives no updates, which may lead to security issues.
- Speed. The second generation of the CMS supports extensions for boosting website speed and performance.
- Mobile-friendliness. Magento 2 offers multiple mechanisms for making a website adapt to any device type, which tends to greatly improve customer experience and satisfaction rates.
- Administration. Magento 2 offers a much more intuitive admin panel. It has a user-friendly dashboard with order information, income tax, searched keywords, and best-selling products, and admins can customize this dashboard.
- Checkout. Magento 1 had 6 steps in the order checkout process, while Magento 2 has only 2. The newer version also offers more payment methods and options, which results in faster checkout and better conversion rates for e-commerce businesses.
Given that Magento 2 outperforms its predecessor in nearly every aspect, it's highly recommended to upgrade to this latest version of the CMS before integrating data with Salesforce or any other SaaS platform, app, or database.

The Advantages of Salesforce & Magento Integration

Let's have a look at the benefits Magento Salesforce integration can grant:

- Improved administration of customer profiles. Integration establishes a single source of truth, a central repository with all customer data. This is usually done on the Salesforce side by importing Customer fields from Magento. Sales managers and marketers value this option because it helps refine email campaigns and advance marketing strategies.
- Inventory management. Magento Salesforce integration is a foundation for central inventory management. Similar to creating a unified customer base, a single source of truth can be established for stock management by importing product, invoice, and order data from one app to another. This option is of high interest to sales managers, accountants, and financial departments.
- Analytics and reports. Combining the two data sources ensures refined analytical results, including financial analysis, KPIs, and forecasting. It greatly helps to evaluate a company's current performance and make predictions for its future.
- Increased sales. Managers and business owners obtain better control over sales when loading sales info from an e-commerce store to Salesforce. Making data holistic has considerable potential to increase the overall sales rate.

Magento Integration with Salesforce by Skyvia

[Skyvia](https://skyvia.com/) is a universal SaaS (Software as a Service) data platform designed for a comprehensive set of data-related tasks without coding. The platform is cloud-based and can be accessed online using just a web browser.
Skyvia has a wide feature set for implementing [Magento and Salesforce integration](https://skyvia.com/data-integration/integrate-salesforce-magento#:~:text=With%20Skyvia%20you%20can%20integrate,Skyvia%20offers%20powerful%20data%20synchronization.) scenarios:

- The [Import](https://docs.skyvia.com/data-integration/import) component loads data from Magento to Salesforce or vice versa, applying data transformations via powerful mapping settings. It covers ETL and reverse ETL scenarios.
- [Replication](https://docs.skyvia.com/data-integration/replication) creates a copy of Magento or Salesforce data and updates it automatically, covering the ELT scenario.
- [Synchronization](https://docs.skyvia.com/data-integration/synchronization) implements one- or bi-directional synchronization between Magento and Salesforce. Note that one of the data sources should preferably be empty to avoid data duplicates.
- The [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) builder allows creating complex integrations involving more than two data sources, advanced data mapping, and multistage transformations. Together with [Control Flow](https://docs.skyvia.com/data-integration/control-flow/), which adds logic and runs data flows according to specified conditions, it's possible to design compound ETL data pipelines.

Now, let's look at examples of how to carry out [Magento Salesforce integration](https://skyvia.com/data-integration/integrate-salesforce-magento#:~:text=With%20Skyvia%20you%20can%20integrate,Skyvia%20offers%20powerful%20data%20synchronization.) using Skyvia.

NOTE: Regardless of the business case, the following preparation steps are necessary:

- Establish a connection with Magento in Skyvia (if not added yet).
See detailed instructions [here](https://skyvia.com/connectors/magento).
- Establish a connection with Salesforce in Skyvia (if not added yet). See detailed instructions [here](https://skyvia.com/connectors/salesforce).

Business Case 1. Creating a Single Source of Truth in Salesforce CRM

Integrating Magento with Salesforce is necessary when companies want to establish a single source of truth in a CRM. Setting up such a centralized repository helps businesses to:

- Avoid redundancy, ambiguity, and inconsistency in client data thanks to a single customer base.
- Keep inventory information up to date owing to a centralized system with all the information about company products.
- Improve the quality of financial analysis and reporting.
- Enhance sales management and business profits.

In the example below, we provide an integration scenario where Salesforce becomes a centralized system for inventory management by transferring product data from Magento. Note that a similar procedure can be performed for Customer fields to create a single client database, for instance, or for other data, depending on your business needs and priorities.

1. Click +NEW in the top menu.
2. In the Integration column, click Import.
3. Under Source Type, click Data Source and select Magento from the Connection drop-down list.
4. Under Target, select Salesforce from the Connection drop-down list.
5. Click Add New to open the Task Editor settings window.
6. In the Task Editor window, select Products (or any other object of your interest) from the Source drop-down list. Click Next Step.
7. In the Target drop-down list, select Product2 (or any other object corresponding to the source). Click Next step.
8. On the Mapping Definition tab, check whether all required columns are mapped.
9. Click Schedule to set the timing for integration.
10. Click Create in the tab bar to preserve the import task, then click Save.
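For teams that prefer scripting over a UI flow, a comparable product upsert into Salesforce can be sketched directly against the Salesforce REST API. This is only an illustrative sketch, not Skyvia's mechanism: the custom external-ID field `Magento_SKU__c`, the API version, and the field mapping are assumptions you would adapt to your own org.

```python
# Sketch: upserting Magento product rows into Salesforce Product2 via the
# REST API (stdlib only). Magento_SKU__c is an ASSUMED custom external-ID
# field; the API version and field mapping are illustrative, not Skyvia's.
import json
import urllib.request

API_VERSION = "v57.0"

def magento_to_product2(product):
    """Map a Magento product dict onto Salesforce Product2 fields."""
    return {
        "Name": product["name"],
        "ProductCode": product["sku"],
        "Description": product.get("description", ""),
    }

def upsert_product(instance_url, token, product):
    """Upsert one record, keyed on the assumed Magento_SKU__c external ID."""
    url = (f"{instance_url}/services/data/{API_VERSION}"
           f"/sobjects/Product2/Magento_SKU__c/{product['sku']}")
    body = json.dumps(magento_to_product2(product)).encode()
    req = urllib.request.Request(url, data=body, method="PATCH", headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 = created, 204 = updated

if __name__ == "__main__":
    upsert_product("https://yourinstance.my.salesforce.com", "<token>",
                   {"name": "Mug", "sku": "MUG-001"})
```

Upserting on an external ID, rather than inserting, is what keeps repeated runs from creating duplicate products, which is the same deduplication concern the UI-based import addresses with mapping.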
To execute the import package immediately, click Run. Otherwise, it's possible to transfer data from Magento to Salesforce later: the import package can be found under Objects -> Integrations.

Business Case 2. Single Source of Truth: Data Warehousing

Data can travel not only between the two sources but also to a data warehouse. Skyvia supports Amazon Redshift, Snowflake, Google BigQuery, and other DWH providers, so businesses can consolidate data from various sources in a DWH and make it a single source of truth for:

- Analyzing all available data with BI tools connected directly to the DWH.
- Obtaining better control over stock management by examining the company's sales.
- Activating data by sending it back to applications from the DWH with the reverse ETL scenario.
- Designing trainable predictive models using ML/AI to determine the factors influencing key performance indicators.

The example below focuses on Magento to Google BigQuery replication, though the same can be done for replicating Salesforce data to a DWH. In such a case, the data warehouse becomes a single source of truth. To load data to a DWH, follow the instructions on creating a replication scenario in Skyvia.

1. Click +NEW in the top menu.
2. In the Integration column, click Replication.
3. Under Source Type, click Data Source and select Magento from the Connection drop-down list.
4. Under Target, select a DWH (in this case, Google BigQuery) from the Connection drop-down list.
5. Select the Incremental Updates checkbox so the system only adds new or updated records in further replications.
6. Select the Create Tables checkbox if new database tables must be created for replicated data.
7. Select the Magento data fields for replication.
8. Click Schedule to set the timing for the replication task.
9. Click Create in the tab bar to preserve the replication task, then click Save.
10. To execute the replication task immediately, click Run.
Otherwise, it's possible to transfer data from Magento or another source to a DWH of your choice later: the replication task can be found under Objects -> Integrations.

The Positive Impact of Using Skyvia

Companies take a lot of weight off their shoulders by performing Magento integration with Salesforce using a cloud-based application such as Skyvia. Skyvia saves lots of time because everything is quick to set up, and data can be transferred in the needed direction. Along with that, the service has the following pros:

- Provides robust data mapping and transformation capabilities.
- Offers automated scheduling for data integration packages.
- Supports 160+ sources, including cloud apps, databases, and data warehouses.
- Is easy to navigate owing to a user-friendly interface.
- Provides various pricing plans for companies of any size.

How to Integrate Magento 2 to Salesforce using Free Native Connector

For Salesforce customers who want to integrate their orgs with a CMS, there is a native connector, [eShopSync for Magento](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000EXmiSUAT&tab=d), available on AppExchange. It grants real-time data synchronization from a Magento e-commerce store to a Salesforce business account. It's possible to sync the following entities: Customers (Accounts and Contacts in Salesforce), Categories, Products, Orders, and Contact Us (Leads in Salesforce).

To set up Salesforce Magento 2 integration with the native connector, take the following steps:

1. Download and unzip [eShopSync for Magento](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000EXmiSUAT&tab=d).
2. Open the root folder where the Magento setup is installed and connect to the Magento backend through FTP.
3. Go to the SRC folder of the downloaded connector and move the APP folder inside it to the Magento root folder.
4. Run the following commands in the Magento root folder:

```shell
php bin/magento setup:upgrade
php bin/magento setup:di:compile
php bin/magento setup:static-content:deploy
```

5. Go to the Magento admin panel and select System > Cache Management to clear the cache.
6. In the Magento admin panel, go to Stores > Configuration > Salesforce Connector.
7. Select Production in the Connect field.
8. Log into your Salesforce org with your credentials.
9. Enter the license key for validation.
10. Set all the required parameters in the Configuration section and click Save Config.
11. Map data fields for smooth integration.

The Restrictions of the Free Native Connector

The native connector is free to use and has multiple advantages. At the same time, it may also bring certain inconveniences:

- eShopSync for Magento isn't easy to install and set up. Configuration involves a command line interface, which can be a real pain for non-technical users.
- Only a limited number of entities (Customers, Categories, Products, Orders, and Contact Us) can be synchronized between the systems. This is limiting when companies need to merge larger data sets between the two apps.
- Support for eShopSync is limited, which might be critical in case of bugs or other system malfunctions.

Paid Extension for Magento Salesforce Integration

There are also many add-on applications developed by third parties interested in promoting Magento integration with Salesforce. Below are the most popular paid options for bringing the two services together:

- [Salesforce CRM Integration by Magenest](https://store.magenest.com/magento-2/salesforce-crm-integration.html). Built by an official Salesforce partner, Magenest's extension grants seamless integration of the Product, Order, Campaign, Account, Lead, Contact, and Opportunity objects to Salesforce.
- [Salesforce CRM Integration for Magento 2 by Mageplaza](https://www.mageplaza.com/magento-2-salesforce/). This extension ensures auto-synchronization of Customer and Catalog Price Rule data to Salesforce CRM in real time. It also synchronizes order and product data, providing flexibility for field mapping, scheduling, and condition rules.
- [Salesforce Connectors for Magento 2 by Webkul](https://store.webkul.com/magento2-salesforce-integration.html). Users can work with the Product, Order, Campaign, Account, Lead, Contact, and Opportunity objects to improve sales and customer service.

How to Choose the Right Integration Method for Your Business

Overall, there are three possible ways to integrate Magento with Salesforce:

- Using the Skyvia cloud platform
- Using the native connector eShopSync
- Using paid extensions developed by third parties

Each of these methods may be more or less applicable to a company depending on its needs, preferences, and budget. So let's have a look at a comparison table for the integration options based on the chosen criteria.

| Parameter | Skyvia | Native Connector | Paid Extensions |
| --- | --- | --- | --- |
| Ease of use and setup | Connecting sources and setting up integration takes several minutes. | Takes some time to set up everything. Involves coding with a command line interface (CLI). | Depends on the type of extension and the third-party provider. |
| Data fields for integration | Works with multiple data fields. | Customers, Categories, Products, Orders, Contact Us | Customers, Categories, Products, Orders, Contact Us |
| Pricing | Free and paid plans. | Free. | Depends on the provider. |

Based on the analysis above, it makes sense to consider Skyvia. It's convenient to use and offers a variety of pricing plans with different data transfer amounts and feature sets. Meanwhile, the native connector, though free, is rather demanding in setup and imposes other limitations on integration.
Paid extensions partially resolve those issues, though they can be rather costly, with prices starting at $349.

Summary

Salesforce and Magento integration can be an essential step for businesses eager to power up their e-commerce potential. It enhances sales management, inventory control, customer satisfaction, and other key business metrics. To bring the Magento and Salesforce services together, one can implement either a native or third-party extension or use the Skyvia cloud platform.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
Magento to Shopify Data Migration: Fast & Secure Guide for 2025

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | April 25, 2025

Constant business optimization is one of the keys to maintaining a competitive advantage in the market and attracting new customers. As a part of such optimization, some businesses decide to move their online stores from one platform to another, one that offers more opportunities. Very often, they choose Shopify as the base e-commerce platform due to its simplicity in setup and use, transparent management, multi-language support, and other notable benefits. If you currently use Magento as the e-commerce provider and want to switch to Shopify, you have landed on the right page.
In this article, we explain how to make such a migration smooth using one of three methods. We also describe typical use cases for each method, helping you make the right decision.

Table of Contents

- Understanding Magento and Shopify
  - What is Magento?
  - What is Shopify?
- Comparing Magento and Shopify in eCommerce
- Why Convert Your Magento Store to Shopify
- Preparing Data
- How to Migrate to Shopify from Magento
  - Method 1: Manual Export/Import
  - Method 2: Custom Scripts/Development
  - Method 3: Automated Migration Tools/Platforms
- Common Data Migration Pitfalls & Mitigation Strategies
- Conclusion

Understanding Magento and Shopify

What is Magento?

NOTE: Magento was acquired by Adobe in 2018, so it's now also known under the Adobe Commerce brand. For simplicity, we will use Magento throughout this article.

Magento is a global e-commerce platform that allows merchants to build and customize online stores. It provides a comprehensive set of services for digital commerce, including marketing, SEO, product management, and other tools. What's more, the platform provides access to diverse extensions on the Magento marketplace that let you easily integrate third-party apps into your store. Magento's ample feature set enables users to solve almost any e-commerce task or challenge. The platform also supports multiple payment gateways, currencies, and languages.

What is Shopify?

Shopify is a cloud-based e-commerce platform that enables companies to create and manage their online stores. The service is user-friendly, which makes it accessible to business owners with any level of technical skill. Shopify helps companies sell their products in multiple locations around the globe, from brick-and-mortar to web-based shops. It also dramatically simplifies the distribution of goods across various channels, including social media and online marketplaces.

Comparing Magento and Shopify in eCommerce

| Criteria | Shopify | Magento |
| --- | --- | --- |
| Pricing | Starting from $24 per month (paid annually). A trial is available. | Pricing depends on the available features, company size, infrastructure, security, and other parameters. Contact Adobe directly to calculate the cost for your business. A free plan and trial are available. |
| Ease of use | Easy-to-use, beginner-friendly visual editor. | You need web development skills to customize your store. |
| Templates and design | 13 free and 233 paid ready-made themes, with prices from $100 to $500 per theme. All themes are adapted for mobile and tablet devices. | Free themes are available. Paid theme prices range from $29 to $499. Custom themes can cost up to $25,000 per theme. |
| Transaction fee | Free for Shopify Payments; 0.5% to 2% for external gateways. | Contact Adobe Commerce directly for details. |
| Marketing features | Multilingual site support; email campaigns via third-party app integration. | Marketing activities are carried out using marketplace extensions and coding. |
| Extensions | 3200+ apps on the Shopify App Store. | 3600+ native integrations available via Adobe Commerce Marketplace. |
| Customer support | 24/7 support for all plans. | Email or help desk, chat, frequently asked questions (FAQs), forums, and a knowledge base. |

Why Convert Your Magento Store to Shopify

Magento is one of the best e-commerce platforms nowadays, with [120,000+ live websites](https://magecomp.com/blog/magento-statistics/) on the Internet as of October 2024. It offers a high degree of flexibility and scalability, which enables businesses to customize websites to meet their specific needs. At the same time, the platform requires coding proficiency to benefit fully from its potential. What's more, the prices for Magento are usually higher than average, which may not be affordable for some businesses. Those looking to reduce spending on their e-commerce store may find that switching from Magento to Shopify is a good solution.
Shopify offers many of the features Magento does, though it's much more user-friendly and affordable, which makes it attractive even for small businesses. In fact, Shopify is one of the easiest ways to set up your business presence online and promote its growth over time. The platform also provides thousands of partner third-party apps and extensions, opening nearly unlimited integration and customization opportunities. If you decide to migrate your online store from Magento to Shopify, there are three possible options, which are discussed in this article. One of them, the most user-friendly and fastest, is to use the Skyvia data integration tool.

Preparing Data

Before the actual migration of your e-commerce store takes place, you need to develop a step-by-step plan for it. One of the aspects to consider concerns the data that can be extracted into a CSV file or transferred from Magento to the Shopify store directly.

Which Data Can Be Migrated?

Below is the list of items that can be retrieved from Magento:

- Product details: name, price, SKU, description, weight, manufacturer, product image, tags, meta title, meta description, URL.
- Order details: date, status, product info, shipping address, customer name and email.
- Manufacturer information: name.
- Customer data: first name, last name, email, phone, billing address, shipping address, newsletter subscription status.
- Coupon information: code, date.
- Reviews data: title, date, user name, rating.

Information You Can't Move

Due to the differences in the databases and technologies of the two e-commerce platforms, not all data can be transferred from Magento to Shopify. Below is the list of items that can't be migrated directly but only with the help of third-party partner integration apps or manual recreation:
- Product collections
- Product categories
- Blog and page content
- User passwords
- Discount codes

How to Migrate to Shopify from Magento

In this article, we introduce three scenarios for moving data from one online store to another. Each of them utilizes a different set of instruments, and the variety of items that can be exported also differs from one approach to another.

- Method 1: Manual Export/Import
- Method 2: Custom Scripts/Development
- Method 3: Automated Migration Tools/Platforms

Note that these methods differ from each other and can't be considered equivalent. While the manual method could be suitable in one use case, API integration would absolutely be the right choice in another. Therefore, we have prepared a comparison table where you can briefly explore the main characteristics of each mode and decide which of them is likely to meet your integration needs.

| | Manual Migration | API Integration | Skyvia |
| --- | --- | --- | --- |
| Best for | SMBs that want full control over the migration process | Teams with technical expertise | Any company |
| Skill level | Low | High | Low |
| Objects available for migration | Advanced Pricing, Products, Customer Finances, Customer Main File, Customer Addresses, and Stock Sources | All objects | All objects |
| Transformation opportunities | You can modify Magento files manually to make the data valid for Shopify. | Transformation logic is embedded in the script code. | Built-in mapping feature. |
| Time needed | Time-consuming | Time-consuming | Fast |
| Pros | Simple to implement; accessible to everyone; additional Shopify extensions are available for process simplification | A high degree of customization; support for various Magento and Shopify platform versions; all data can be exported | No-code approach; support for 200 data sources; comprehensive documentation; basic and advanced integration scenarios |
| Cons | Error-prone; a limited number of items can be exported; requires much time | Programming skills required; suitable for companies with dedicated IT teams; expensive | No phone support; advanced features come only under the paid plan |
| Price | Free; if you decide to use Shopify extensions, additional costs may apply | The cost depends on the developers' work | Free plan available; paid plans start at $79/month |

Method 1: Manual Export/Import

Direct migration is a manual or half-automated data transfer process from Magento to Shopify. First of all, plan each step carefully in advance, considering even minor details. Another vital thing to do before the migration starts is to register on Shopify and perform the initial setup of your online store. It's also highly recommended that you explore how this e-commerce platform works and what its specifics are.

Best for

This method is a good choice for those who want to fully manage and control the migration process. In reality, the manual approach is suitable for small and mid-size businesses rather than large organizations. For the latter, it may not be very safe, since big corporations usually have enormous amounts of data whose migration takes plenty of time.

Step-by-Step Guide to Manually Migrating from Magento to Shopify

STEP 1: Backup your Magento Store

We strongly recommend backing up all your data in Magento in advance to ensure its integrity and safety.
For that, take the following steps:

1. Log in to your Magento admin panel and click SYSTEM -> Backups.
2. Select the backup type, enter the backup name, and enable maintenance mode if needed.
   - System Backup saves the entire system, including your source code and the database.
   - Database and Media Backup saves the content of the database and media folder. Store themes aren't included in this backup.
   - Database Backup saves the store database only.

STEP 2: Export Magento Data

NOTE: Magento allows users to export the following entities: Advanced Pricing, Products, Customer Finances, Customer Main File, Customer Addresses, and Stock Sources. Only one entity can be exported at a time, each to a separate file.

To get your Magento data, proceed with the following steps:

1. Go to the Magento admin panel and click SYSTEM -> Export.
2. Select the Entity Type and Export File Format. Then, enable the Fields Enclosure option if needed.
3. Set the entity attributes if needed. You can exclude fields or filter the data set for export.
4. Click Continue to start the data export. The operation runs in the background.
5. Download the CSV file once the operation is completed.

STEP 3: Import Data to Shopify

Shopify supports several ways to import data. You can manually upload data using CSV files or use the data integration apps available on the Shopify marketplace. It's also possible to take advantage of API integration if there is enough technical expertise on the team. The choice of method depends on the amount of data and developer resources.

Let's have a look at importing product items from a CSV file into Shopify.

1. Download [a sample CSV file](https://help.shopify.com/csv/product_template.csv) that matches Shopify standards.
2. Adjust your current CSV file with products to match the template's style. Don't exclude any columns or change their names.
3. Follow other tips in the [video](https://www.youtube.com/watch?v=y_x0L_Oj4Zk) to prepare your CSV file for import.
4. Go to the Admin panel in your Shopify store and select the Products tab.
5. Click Import in the upper-right corner of the screen and browse your computer to upload the needed CSV file.
6. Specify whether you'd like to overwrite products with an existing handle by checking the corresponding option.
7. Click Upload and Continue.
8. Preview the products and click Import products.

For more details on how to import other items, feel free to refer to the [Shopify Help Center](https://help.shopify.com/en).

NOTE: Only Products, Customers, and Inventory data can be put into Shopify from CSV files.

Additional Options

Apart from the CSV data import method, there are other approaches considered manual. Those include API integration and third-party extensions for data migration.

Shopify Migration Apps

As an alternative to the relatively limited CSV import, Shopify offers a variety of [data migration](https://apps.shopify.com/search?utf8=%E2%9C%93&q=migration) apps. Unlike CSV import, such applications support data beyond just Products, Customers, and Inventory. They allow you to import historical orders, gift cards, certificates, store credits, and other items from Magento to Shopify. Migration apps vary in supported features, capabilities, and price. There are several completely free options on the Shopify Marketplace, while other extensions cost hundreds of dollars.

Shopify API

This method provides the largest number of data exchange features between the e-commerce platforms. With the API, you can import virtually any data from Magento to Shopify. Note that this approach requires proficiency in programming languages and a deep understanding of technical concepts.

Pros

- Data import from CSV files costs $0.
- Manual integration from Magento to Shopify is simple to implement.
- A large variety of additional apps and extensions for data migration are available on the Shopify Marketplace.

Cons

- The manual approach is error-prone.
- CSV file import supports only Customers, Products, and Inventory.
- Shopify migration apps come at an extra cost.

Method 2: Custom Scripts/Development

As mentioned above, one of the ways to transfer data from Magento to Shopify involves programming. In this case, developers write code blocks known as scripts to fetch data from your Magento store and send it to your Shopify shop. To fully exploit this method, a comprehensive understanding of the Magento architecture, its database schemas, PHP, and the Shopify API (REST/GraphQL) is required. Even though custom scripts are rather challenging to implement, they offer strong customization potential. They also let you migrate data from Magento to Shopify that can't be extracted with other integration methods.

Best for

This option is generally suitable for businesses with highly unique, complex migration requirements that cannot be met by existing automated tools. As a high-investment approach, API integration is typically reserved for specific edge cases in large enterprises with dedicated internal teams, rather than being a standard approach for most SMBs or mid-market businesses.

Pros

- Offers a high degree of customization.
- Allows businesses to move all available data from Magento to Shopify.
- Supports various versions of the Magento and Shopify platforms.

Cons

- Requires technical expertise, PHP knowledge, and an understanding of the Shopify API.
- Not suitable for most small and mid-size businesses.
- High implementation costs.

Method 3: Automated Migration Tools/Platforms

Another way of migrating Magento data to Shopify is to use Skyvia, a powerful no-code universal data platform. It removes the pain and trouble of connecting platforms with different data structures.
It offers diverse products for various data-related tasks:

- [Data Integration](https://skyvia.com/blog/data-integration-tools/)
- Backup
- Query
- Connect
- Automation

To perform data migration from one online store to another, the Data Integration set of tools is the most appropriate. You can also use the Backup solution to create a copy of your Magento store before transferring data to Shopify.

Best for

Skyvia is used by thousands of businesses, from individual entrepreneurs to large enterprises, for their daily data integration operations. In a nutshell, the product suits any company operating in Healthcare, Education, Manufacturing, Software Development, and other industries.

Step-by-Step Guide to Migrating from Magento to Shopify with Skyvia

Before moving data from Magento to Shopify with Skyvia, take a few preliminary steps:

- Set up your Shopify store.
- Create an active Skyvia account.
- [Create connections](https://docs.skyvia.com/connections/) in Skyvia for [Magento](https://docs.skyvia.com/connectors/cloud-sources/magento_connections.html#establishing-connection) and [Shopify](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html#establishing-connection).

STEP 1: Create the Integration

To start your [Magento to Shopify migration](https://skyvia.com/data-integration/integrate-shopify-magento) with Skyvia, proceed with the following steps:

1. Click +Create New -> Import.
2. Select Data Source as the source type.
3. Select the existing Magento connection or create a new one.
4. Select the existing Shopify connection or create a new one under the Target settings.
5. Enable other [Import settings](https://docs.skyvia.com/data-integration/import/configuring-import.html#import-settings) if needed.
6. Click Add new on the right to create an integration task.

STEP 2: Add Integration Task

In the window that appears, you can configure the integration task. In the Source Definition tab, select the Magento object you want to migrate.
NOTE: Skyvia allows you to set filters if you need to limit the data selection.

Select the target Shopify object and the operation to perform in the Target Definition tab. Magento and Shopify have different data structures, so you have to map the fields in the Mapping Definition tab. Skyvia supports several [mapping types](https://docs.skyvia.com/data-integration/common-package-features/mapping/#mapping-types), enabling the transformation of source data so that it is valid for the target.

STEP 3: Run the Integration and Check the Results

When the mapping is ready, save the task. You can add multiple tasks to one integration, one task per object. After preparing the tasks, save the integration and launch it. When the run is completed, you can check its results on the Monitor and Log tabs of the Integration Editor, which show the number of successfully imported and failed records. You can open any run and see the error message for each failed record, if needed.

Pros

- No-code approach that makes Skyvia suitable for both technical and business users.
- Support for [200+ data connectors](https://skyvia.com/connectors), which lets you not only migrate data from Magento to Shopify but also build various integration scenarios.
- Comprehensive documentation and built-in tips that simplify the data migration process.
- Support for both simple and complex integration scenarios.

Cons

- No support over the phone.
- Advanced features are available only under the paid plans.

Common Data Migration Pitfalls & Mitigation Strategies

In theory, moving data from Magento to Shopify seems doable; it just takes some time and resources. If the integration goes smoothly and seamlessly, you're one of the lucky ones. In practice, any method comes with minor pitfalls that don't impact the overall result of the integration but may create some inconveniences in the process.
Therefore, we present some of the known issues associated with Magento to Shopify migration.

| № | Issue | Description | Mitigation |
|---|---|---|---|
| 1 | Data corruption | Data on the destination system appears different from that on the source one. | Create a backup; test the migration; map data correctly |
| 2 | Custom Magento fields handling | There may be no equivalent fields on the destination system that match the Magento custom fields. | Define a strategy for mapping custom fields to the available fields on the destination store |
| 3 | SEO impact | Moving an online store from one platform to another may affect the website's positions on search engines. | Copy meta titles and descriptions correctly |
| 4 | Downtime | The migration process may be interrupted by system downtime. | Use tools that ensure data integrity during transfer to handle downtimes |

Conclusion

If you need to switch from Magento to Shopify as your principal online store platform, you can do it in several ways:

- Migrate your data manually by exporting Magento entities to separate CSV files and importing them into Shopify manually or via Shopify extensions.
- Use APIs to move data.
- Perform a convenient, no-code Magento to Shopify migration with Skyvia.

The manual method is relatively easy but time-consuming, and it is limited in the number of exportable objects and in transformation opportunities. API integration is the most complex and expensive, though it offers the highest level of customization. Meanwhile, the Skyvia data integration method is the simplest and fastest, allowing you to migrate any objects with their relations and transform data to avoid data structure mismatches. What's more, Skyvia offers a free tier to start, so you can see it in action!

F.A.Q. for Magento to Shopify

What essential data can typically be migrated from Magento to Shopify?

The availability of data that can be extracted from Magento depends on the chosen method.
For instance, the manual approach lets users fetch only Products, Customers, and Inventory, while the API-based method and Skyvia can retrieve any kind of data.

Can I migrate customer passwords from Magento to Shopify?

Account passwords can't be moved from one platform to another. Users will need to recreate their accounts with the same or new credentials. Alternatively, you may consider third-party partner integration apps.

How long does the actual data migration process take?

The duration of the migration depends on the selected method. It might take weeks with custom scripts and the API, or several hours with Skyvia.

What are the primary cost factors involved specifically in data migration?

The costs associated with data migration depend on the tools used, the time needed, and the specialists involved.

What happens to data from my custom Magento extensions or specific custom fields?

It's possible to transfer custom Magento fields to Shopify. To do so, create an explicit mapping strategy that matches custom Magento fields to existing Shopify fields.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) — With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration.
With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/) — Marketo Salesforce Integration: The Ultimate Step-by-Step Guide

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), May 14, 2025

Sales and marketing alignment isn't just a trendy catchphrase. It's what separates businesses that grow from those that just get by. When your teams are in sync, you see faster deals, higher conversions, and revenue numbers you actually want to brag about. But when systems don't talk to each other, chaos creeps in: lost leads, insufficient data, missed opportunities. Enter Salesforce and Marketo Engage, two powerhouse platforms built to drive serious results.
Salesforce is your go-to for managing relationships, tracking deals, and making smart moves based on real data. Marketo is the marketing brain behind personalized campaigns, lead nurturing, and customer engagement across every channel. On their own, they're impressive. Together, they're unstoppable. But here's the catch: if you don't integrate them properly, you leave money (and insights) on the table. Data gets trapped, workflows break down, and customers get a disconnected experience. That's why this guide covers:

- Why hooking up Salesforce and Marketo is a no-brainer for real growth
- The huge upsides, from better collaboration to way sharper analytics
- How to set it up (and avoid common facepalm moments)
- How top no-code data platforms, like Skyvia, can make integration smooth and fast

Ready to ditch the silos and start working smarter? Let's make it happen.

Table of Contents

- What is Adobe Marketo Engage?
- What is Salesforce?
- Benefits of Integrating Marketo with Salesforce
- Preparing for Data Integration
- How to Integrate Salesforce and Marketo
- Native Integration vs. Third-Party Platforms: A Quick Comparison
- Method 1. Native Marketo and Salesforce Integration
- Method 2. Third-Party Integration Platforms
- Setting up a Skyvia Import Scenario for Data Integration
- Setting up a Skyvia Data Flow Scenario for Data Integration
- Summary

What is Adobe Marketo Engage?

[Marketo](https://skyvia.com/connectors/marketo) is Adobe's cloud lead management and marketing solution. On top of that, it offers analytics automation features, customer acquisition cost and advertising budget calculation, price analytics tools, and tools for cross-channel engagement (A/B testing and dynamic content across multiple channels).
Since the platform was designed to target qualified leads and find new opportunities to attract traffic to websites (e.g., analytics for dynamic look-alike audiences), it has earned its place as one of the most popular marketing platforms. But that's just the beginning. Today, the system leans heavily on AI to automate audience segmentation, lead scoring, predictive analytics, dynamic content generation, and real-time campaign tuning. The result? Faster launches, sharper targeting, and marketing that feels a whole lot more personal (and powerful).

Key features

- AI-driven audience segmentation and personalization for tailored campaigns.
- Visual campaign building and automation across multiple digital channels, enabling personalized engagement at scale without heavy IT involvement.
- Advanced analytics and performance insights, including predictive analytics and campaign ROI measurement.
- Integration with Adobe Experience Platform (AEP) for unified data management and deeper personalization.
- Cross-channel engagement tools, such as A/B testing, dynamic content, and interactive webinars.
- Seamless data integrations with other platforms and flexible data management.
- Multiple certification programs, including Adobe Certified Professional and Business Practitioner Expert, valid for two years with free online renewal exams.

What is Salesforce?

[Salesforce](https://skyvia.com/connectors/salesforce) is the most popular CRM on the market, providing customizable CRM functionality based on a company's demands. It's powerful and easy to scale, which makes it ideal for companies of all sizes. The service has a comprehensive feature set that offers an extensive view of all customer interactions. In addition, it offers a full marketing cloud that brings all marketing channels, such as email, mobile, social media, and the web, together in one place. And Salesforce keeps raising the bar.
Marketing Cloud connects all the primary marketing channels, like email, mobile, social, and web, so that users can create seamless, cross-channel campaigns without the usual headaches. Salesforce AppExchange helps you tap into thousands of third-party apps and integrations, customizing your CRM setup to fit just about any business need.

Key features

- Salesforce AppExchange. Thousands of apps and integrations, both proprietary and open-source, to customize the CRM exactly the way users need it.
- Opportunity Management. A real-time, 360-degree view of every deal on the table: track stages, products, stakeholders, and updates in one place.
- Account Management. The full story behind every customer. Salesforce tracks emails, chats, calls, and more, giving companies a complete history that helps build stronger, more personal relationships.
- User-Friendly Interface. The Lightning UI brings a sleek, drag-and-drop experience to day-to-day work. Automate processes, build custom workflows, and manage tasks without coding.
- AI-Driven Automation. With Einstein GPT and AI-powered insights baked right in, Salesforce delivers next-level personalization, smarter predictions, and automated content generation, helping teams work faster and close deals smarter.
- Advanced Security and Compliance. Salesforce's latest updates double down on data protection. Sensitive information stays safe and audit-ready with stronger encryption, tighter security controls, and built-in compliance features.
- Unified Data Cloud. Salesforce Data Cloud pulls all customer data into a single source of truth, powering real-time reporting, sharper insights, and better decisions without the usual data chaos.
- Slack Integration. Real-time collaboration just got easier. Deep Slack integration lets teams chat, share files, and trigger workflows, all without leaving Salesforce.
Benefits of Integrating Marketo with Salesforce

The purpose of a CRM is to track and manage all kinds of leads and customers, while marketing automation tools handle automated marketing campaigns, measurement, and retargeting of customers who left the website. The Marketo Salesforce connector can be a huge time-saver for sales and marketing managers, since the two systems track different tasks for different purposes.

Note: Among the most popular objects for synchronization are Leads, Users, Accounts, Contacts, and Opportunities.

Key advantages include:

1. Seamless Lead Management and Handoff. No more lost leads or awkward handoffs. With integration in place, leads nurtured in Marketo flow straight into Salesforce when ready for sales. In other words, marketing hands off better-qualified leads, sales teams get the full context they need, and deals move forward faster.
2. Enhanced Data Synchronization and Accuracy. Forget about double-entry headaches or wondering which system has the correct info. Integrating Salesforce and Marketo automatically syncs leads, contacts, accounts, and campaign data. That means fewer manual updates, fewer errors, and way more trust in your data.
3. Improved Sales and Marketing Alignment. When everyone's working with the same data, collaboration becomes second nature. Sales and marketing can view goals, share insights, and deliver consistent messaging at every customer journey stage.
4. Powerful Lead Scoring and Prioritization. Marketo's lead scoring doesn't just sit in a silo. It feeds directly into Salesforce, giving sales reps a clear view of prospects' buying readiness. Tools like Marketo Sales Insight make it easy for sales teams to spot hot leads, prioritize outreach, and close deals faster.
5. Comprehensive Reporting and ROI Tracking. Want to know which marketing campaigns drive revenue?
The integration lets organizations connect Marketo's campaign data with Salesforce's sales outcomes, giving you a full-funnel view of performance. Real ROI tracking means smarter budget decisions and better proof of marketing's impact.

6. Better Personalization and Customer Experience. When sales and marketing share a complete view of the customer, personalization becomes natural. Use combined data to craft more targeted campaigns, deliver more relevant content, and create experiences that feel tailored because they are.

Overall, integrating Marketo with Salesforce CRM helps companies close the sales loop and lets marketers measure whether their strategies are improving the customer experience and driving revenue.

Preparing for Data Integration

Before setting up a Marketo to Salesforce synchronization, it's essential to understand how the data schema is structured within each tool. Think of it like remodeling your kitchen: you wouldn't start tearing down walls without knowing where the pipes and wires run. The same goes for the data. The key challenge? Every Salesforce and Marketo setup is unique. Both platforms are flexible, which is great for customization but tricky for integration. Each instance can have custom objects, fields, naming conventions, and workflows that differ wildly from business to business. That's why, before any sync begins, you need a clear picture of how data is structured on both sides.

Start by auditing:

- What objects and fields exist in Salesforce and Marketo?
- Which fields need to be synced (and how often)?
- What data formats are you working with?
- Are there any naming conflicts or mismatches?

Once you've mapped out fields and relationships, take the time to test.

Pro tip: Use a Salesforce Sandbox environment before going live. It's a safe, isolated testing environment that mirrors your actual Salesforce org but without the risk of messing up live data.
You can simulate how the integration behaves, test data mappings, and catch issues early before they impact the team or customers.

Bonus prep tips:

- Back up your data before making changes, just in case.
- Involve marketing and sales ops early; they'll catch things others might miss.
- Define ownership of the integration (Who's monitoring it? Who fixes errors?).
- Document everything; your future self will thank you.

Doing this groundwork sets you up for a much smoother integration. It's the difference between confidently launching a sync and scrambling to fix a broken pipeline under pressure.

How to Integrate Salesforce and Marketo

Once the data is cleaned up and field mappings are in place, it's time to connect Salesforce and Marketo. There are two main ways to go about it, each with its own strengths and style.

1. Native Integration. Marketo's built-in connector for Salesforce allows the two platforms to talk directly. It's tightly integrated and can sync leads, contacts, campaigns, and more in near real time. It's a good option, especially if the setup is standard. But it's not exactly plug-and-play: you'll need to manage user permissions, carefully configure the sync, and monitor things to ensure nothing breaks as the systems evolve.

2. Third-Party Integration Platforms. If you're looking for more flexibility and less technical hassle, third-party tools like Skyvia are a great choice. These no-code platforms make designing, automating, and monitoring your syncs easy. They handle custom fields, support more complex workflows, and help avoid some of the limitations of the native connector. Plus, such solutions are easy to use, so you can spend less time fixing things and more time using insights strategically.

Native Integration vs. Third-Party Platforms: A Quick Comparison

| Criterion | Native Marketo-Salesforce Integration | Third-Party Platforms |
|---|---|---|
| Ease of Setup | Moderate. Requires some technical setup and understanding of both platforms. | Easier. No-code/low-code setup with user-friendly interfaces. |
| Flexibility | Limited to standard use cases and mappings. | Highly flexible. Supports complex workflows and custom scenarios. |
| Customization | Basic field mappings and sync options. | Advanced mapping, transformations, filtering, and scheduling. |
| Maintenance Needs | Requires ongoing monitoring and manual adjustments when the schema changes. | Automated error handling, alerts, and easier reconfiguration. |
| Cost | Included with the Marketo subscription (but may require more internal resources to manage). | Additional cost depending on the tool, but saves time and reduces technical strain. |
| Best For | Teams with strong in-house technical support and standard integration needs. | Teams looking for speed, flexibility, lower maintenance, and non-technical setup options. |
| Scalability | Good for smaller to mid-sized sync needs. | Great for growing businesses with expanding data sources and complex processes. |

In the next sections, we'll break down both methods to help you find the approach that best fits your team, data, and business goals.

Method 1. Native Marketo and Salesforce Integration

If you want a direct, out-of-the-box connection between Marketo and Salesforce, the native integration is the first option to consider. Adobe offers a built-in connector that enables these two platforms to sync key records, giving both marketing and sales teams a more connected view of leads and customers.

What Data Syncs Natively

Here's a quick breakdown of what's supported out of the box.

| Object Type | Sync Direction | Notes |
|---|---|---|
| Leads | Bidirectional | Auto-created in Marketo from Salesforce and vice versa. |
| Contacts | Bidirectional | Commonly synced with linked accounts. |
| Accounts | Salesforce → Marketo | Marketo cannot push updates to Salesforce accounts. |
| Campaigns | Bidirectional | Syncs status, membership, and updates. |
| Opportunities | Salesforce → Marketo | Used for revenue attribution in Marketo. |
| Activities | Marketo → Salesforce | Sales alerts, email opens, clicks, and form submissions. |
| Users | Salesforce → Marketo | Typically for assigning lead ownership. |
| Custom Objects | Optional (read-only in Marketo) | Requires API config; syncs from Salesforce to Marketo. |

Best for

- Companies already deep in the Adobe/Salesforce ecosystem.
- Standard marketing/sales processes with limited need for customization.
- Teams with in-house Salesforce admins or technical support.
- Organizations prioritizing real-time lead sync and visibility.

Step-by-Step Guide: Marketo Sales Insight App (AppExchange)

Marketo offers native integration with Salesforce via its [Marketo Sales Insight](https://appexchange.salesforce.com/listingDetail?listingId=a0N30000001SVZmEAO) app, which can be downloaded from the Salesforce AppExchange. To install it:

1. Log in with your Salesforce credentials.
2. Choose the installation option.
3. Verify the installation by checking that "Marketo Sales Insight" appears under Installed Packages.
4. In Salesforce, assign permissions and visibility settings for Marketo users and ensure the sync user has API access.
5. In Marketo, go to the Admin panel and configure the Salesforce Integration settings using your Salesforce credentials.
6. Check the object sync status for leads, contacts, opportunities, campaigns, etc., to confirm a proper connection.

Note that if you need a custom field to appear in both Salesforce and Marketo, create it in Salesforce first, then sync it into Marketo: Salesforce takes the lead on the data schema in this integration. Want detailed steps? [Check out the full installation guide here](https://www.sfapps.info/marketo-salesforce-integration-guide/).
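The steps above ask you to ensure the sync user has API access, and the native sync also consumes Salesforce API calls against the org's daily quota. One way an admin might verify both is Salesforce's REST `limits` resource, which reports `DailyApiRequests` among other limits. The sketch below only builds the request URL; the instance URL, API version, and OAuth token are placeholder assumptions:

```python
# Sketch: checking the Salesforce org's API limits (which the Marketo
# native sync draws from). Instance URL and token are hypothetical.

def limits_url(instance_url: str, api_version: str = "v59.0") -> str:
    """URL of the Salesforce REST resource that reports org limits,
    including DailyApiRequests consumed by integrations."""
    return f"{instance_url}/services/data/{api_version}/limits/"

# Usage (requires a valid OAuth access token for the sync user):
#   resp = requests.get(limits_url("https://myorg.my.salesforce.com"),
#                       headers={"Authorization": "Bearer <token>"})
#   print(resp.json()["DailyApiRequests"])
```

If the request succeeds as the sync user, API access is enabled; the returned counters also show how close the org is to throttling.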
Native Integration Documentation (Manual Setup)

If you need more control, like syncing custom fields, handling custom objects, or setting up complex lead scoring rules, you'll want to go beyond the basic MSI app. Marketo's native Salesforce integration offers greater flexibility, but it requires a few more setup steps.

Step 1: Prepare Your Salesforce Org

1. Log in to Salesforce, click the Setup gear icon, and navigate to Object Manager → Lead → Fields & Relationships.
2. Create custom fields like Score, Acquisition Program, and Acquisition Date.
3. Set field types, configure field-level security, and update the page layouts for the Lead and Contact objects.
4. Map your custom fields to ensure they transfer during lead-to-contact conversions.

Tip: Custom formula fields can also be created to automate lead scoring and tracking between systems.

Step 2: Create a Salesforce User for Marketo

1. Go to Setup → Profiles and clone the Standard User profile (you can name it something like Marketo-Salesforce Sync).
2. Edit permissions to allow API access, editing events/tasks, and managing templates/documents.
3. Under Object Permissions, give Read, Create, Edit, and Delete access to key objects like Accounts, Leads, Contacts, Opportunities, and Campaigns.
4. Save the profile, then create a new user with this custom profile.

Step 3: Generate a [Salesforce Security](https://skyvia.com/blog/salesforce-security-best-practices/) Token

1. Log in with your new Marketo user credentials.
2. Click your avatar → Settings → My Personal Information → Reset My Security Token.
3. A new token will be emailed to you. Keep it handy for the Marketo connection.

Step 4: Connect Salesforce and Marketo

1. Log in to your Marketo instance.
2. Navigate to Admin → CRM → Sync with Salesforce.
3. Enter your Salesforce username, password, and security token.
4. Review the warnings and click Confirm Credentials.
5. Follow the setup prompts to kick off the Salesforce sync.
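The security token from Step 3 matters because Salesforce API logins from outside a trusted network expect it alongside the password: the classic SOAP login takes the token appended directly to the password, while client libraries usually accept it as a separate argument. A small illustrative sketch (credentials are hypothetical; simple-salesforce is a third-party library shown only as an example):

```python
# Sketch: how the security token from Step 3 is used when a script
# authenticates as the Marketo sync user (credentials are hypothetical).

def soap_login_password(password: str, security_token: str) -> str:
    """Salesforce's SOAP login expects the security token appended
    directly to the password when connecting from an untrusted IP."""
    return password + security_token

# With the third-party simple-salesforce library, the token is passed
# separately and the library handles the combination for you:
#   from simple_salesforce import Salesforce
#   sf = Salesforce(username="marketo.sync@example.com",
#                   password="s3cret",
#                   security_token="<token from the reset email>")
```

This is also why resetting the token (Step 3) breaks any existing API connections: the old password-plus-token combination stops being valid, so the Marketo connection must be updated with the new token.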
Pros

- Built-in and supported by both Adobe and Salesforce.
- Real-time sync for leads and activities.
- Seamless visibility into lead engagement for sales reps via MSI.
- Direct campaign and status syncing between systems.
- No extra cost if you're already using both platforms.

Cons

- Requires hands-on configuration and monitoring.
- Limited flexibility with custom workflows and objects.
- The Salesforce schema has priority: custom fields must originate there.
- Salesforce API limits can throttle the sync if exceeded.
- No automatic re-sync if data is missed during downtime.
- More technical to manage compared to third-party tools.

Method 2. Third-Party Integration Platforms

Native integration works well for many use cases, but it isn't one-size-fits-all. For businesses running on more than just Salesforce and Marketo that need advanced data transformation, or that want a faster, more flexible setup without digging into code, the native connector can start to feel limiting. That's where third-party integration platforms help, and [Skyvia](https://skyvia.com/) is one of the standout options.

Skyvia is a cloud-based platform that connects both cloud and on-premises applications, enabling users to build ETL, ELT, and reverse ETL pipelines. It provides a broad set of capabilities for data integration, synchronization, and querying, making it easy to extract value from your data with minimal technical effort. The service meets the essential needs of a business in today's digital world:

- Easy to set up, integrate, and maintain.
- Scalable and flexible (capable of meeting any load, with flexible pricing).

With Skyvia, you can also build and manage complex integration scenarios with many sources by applying the [Data Flow and Control Flow](https://skyvia.com/data-integration/) features. Thanks to its automation features, Skyvia delivers fully automated, trouble-free data processes.
Best for

- Teams that need more customization or flexibility than the native connector allows.
- Companies using more than just Marketo and Salesforce (e.g., adding HubSpot, Snowflake, Google Sheets, etc.).
- Non-technical teams looking for a no-code solution.
- Businesses that want better monitoring, automation, and error handling.
- Organizations managing large data volumes or custom field mappings.

Preparing for Data Integration Using Skyvia

How do you easily and quickly configure automated data integration (as a periodic update) between Marketo and Salesforce with the help of Skyvia? Since the platform is very flexible and has many capabilities, we demonstrate two possible scenarios for data integration. Before implementing either of them, create connections between Salesforce and Skyvia as well as between Marketo and Skyvia:

1. Sign in to Skyvia and click + Create NEW in the top menu.
2. Click Connection in the left menu.
3. Select the service you want to connect to on the opened page: Salesforce or Marketo.
4. Sign in to the chosen service and enter the credentials for your account.
5. Click the Create Connection button.

Note: Create a [Skyvia](https://app.skyvia.com/login) account if you don't have one yet. A free trial version of the tool is also available there.

In the screenshot below, you can see an example of a Salesforce connection. Optional but smart: test in a Salesforce Sandbox first, especially for large or high-impact syncs.

Note: The default name for a new connection is always Untitled, so don't forget to rename it to prevent confusion. More details on setting up [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Marketo](https://docs.skyvia.com/connectors/cloud-sources/marketo_connections.html) connections can be found in the Skyvia documentation.
Setting up Skyvia Import scenario for data integration

This [scenario](https://docs.skyvia.com/data-integration/import) is designed for data integration between two data sources. In this case, we import Leads and Opportunities from Marketo to Salesforce and back.

1. Click +Create NEW in the top menu and select Import in the Integration category. The opened window shows detailed instructions on the right and a workspace with Source and Target data flows on the left.
2. Select the data connections (those created in the step above). Define the Source Type and apply additional options if needed.
3. Use the Add new button on the right to create a new data import task.
4. Specify Source settings. Select a source file, cloud app object, or database table/view to import data from, and, if necessary, configure data filtering. Alternatively, you can select the Execute Command or Execute Query action to query data (in advanced mode).
5. Specify Target settings. Select the target object (table) or multiple objects (tables) to import data to and the operation to apply when loading data. Configure mapping of target columns to source columns, expressions, constants, lookups, etc.
6. Before saving your Import integration, you can also schedule it for automatic execution.
7. After completing all the Import integration settings, click the Create button.

Now you can run this package and get your data integrated. Note: certain objects, like Opportunity and Company in Marketo, cannot be retrieved through bulk list endpoints. Instead, they must be fetched individually using specific external IDs, such as externalOpportunityId, externalCompanyId, or marketoGUID.

Setting up Skyvia Data Flow scenario for data integration

This flexible scenario is created for complex data integrations. As an easy-to-use, no-code tool, Data Flow helps manage all integration and transformation requirements in a single unified location.
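Returning to the note above about Marketo’s Opportunity and Company objects: fetching them by external ID boils down to a per-record (or small-batch) REST call rather than a bulk list request. A minimal sketch of how such a request could be assembled, assuming Marketo’s REST `opportunities.json` endpoint with `filterType`/`filterValues` parameters; the instance URL and IDs are placeholders, and this is not Skyvia’s internal implementation:

```python
def build_opportunity_request(instance_url, external_ids):
    """Build the URL and query parameters for fetching Marketo opportunities
    by their external IDs, since bulk list endpoints don't cover them."""
    return (
        f"{instance_url}/rest/v1/opportunities.json",
        {
            "filterType": "externalOpportunityId",
            "filterValues": ",".join(external_ids),  # comma-separated ID list
        },
    )

# Placeholder instance URL and IDs for illustration only.
url, params = build_opportunity_request(
    "https://123-ABC-456.mktorest.com",
    ["opp-0001", "opp-0002"],
)
```

The same pattern applies to companies via `externalCompanyId`; an integration tool simply loops this lookup over the IDs it has collected.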
Here, we demonstrate how to synchronize the Leads and Opportunities tables between Marketo and Salesforce. Note: when you open Data Flow for the first time, it offers a Fast Guide to the main features, which is very useful for newcomers.

1. Create parameters and variables that describe the various objects within the data integration. The parameters New1Date and New2Date transfer the date the data was last updated. Create two variables to record successful and failed uploads, as Data Flow doesn’t automatically count successfully loaded rows.
2. Select the pre-built data sources: Salesforce and Marketo.
3. Configure the Source and Target connectors accordingly. You can make a query or execute a command with both connectors. In our example, we run the command within the Source connector. Note: we can check whether the data source is running with a test.
4. Select Maxdate (an internal variable that stores the maximum date for further data updates).
5. Set up the Target connector settings: select the Insert operation and work with the Leads table. In the Mapping Editor, we see the list of properties/attributes that can be merged.
6. Set up the Result settings, specifying what we count as the number of successful and unsuccessful rows.
7. Save and run the integration package.

Pros

- No-code platform, perfect for marketers, ops, and analysts.
- User-friendly interface.
- [200+](https://skyvia.com/connectors/) connectors for databases, cloud apps, data warehouses, and more.
- Drag-and-drop mapping and transformation.
- Advanced error handling with email alerts and logs.
- Easy to schedule and automate data syncs.
- Supports both simple and complex scenarios (including multi-step workflows).
- Cloud-native, no local software or infrastructure required.
- Affordable pricing with transparent plans.

Cons

- For real-time sync, scheduling intervals may not be as immediate as native sync (though near real-time is possible).
- Some complex transformations may still require understanding data logic.
- Additional subscription cost (though often less than maintaining a custom-built integration).

Summary

Integrating Marketo and Salesforce is a strategic move that brings your sales and marketing teams into sync. It provides faster lead handoffs, cleaner data, sharper targeting, and a better overall customer experience. But great results don’t happen by accident: a successful integration takes solid planning, careful mapping, the right setup, and a commitment to ongoing maintenance. The native connector is a good starting point if your needs are simple and in-house technical support is strong. But if you’re looking for something more flexible, easier to manage, and ready to grow with your business, Skyvia changes the game. It takes the heavy lifting out of integration and gives you the tools to automate, customize, and scale your data strategy without writing code or calling in a dev team. It fits those just starting their integration journey as well as those looking to level up.

F.A.Q. for Marketo to Salesforce Integration

How long does Marketo-Salesforce integration take?
Initial setup can take a few hours to a couple of days, depending on your data structure, field mapping, and whether you’re using the native connector or a third-party platform like Skyvia.

What data is synced between Marketo and Salesforce?
Standard sync includes leads, contacts, campaigns, and activities. Some data, like opportunities and accounts, sync only one way (Salesforce → Marketo). Custom fields and objects can be synced with the right setup.

Do I need technical skills to integrate Marketo and Salesforce?
With the native integration, some technical knowledge is helpful. Tools like Skyvia offer a no-code setup, making integration accessible for non-technical users while supporting advanced features.

Can I customize the data synced?
Yes.
You can choose which fields to sync, map custom fields, and even apply filters or transformation rules, especially with third-party tools like Skyvia that offer more flexibility than the native connector.

How often does Marketo sync with Salesforce?
Marketo syncs with Salesforce every 5 minutes by default, but sync time can vary based on system load and API limits. For better control, third-party tools like Skyvia let you schedule syncs at custom intervals.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
We\u2019ll also cover automation with third-party solutions like Skyvia, which allow bulk updates via CSV, filters, and SQL queries. Table of contents Why Mass Update Is Important in Salesforce Which Method to Use to Mass Edit Records in Salesforce Method 1. Mass Update Using Salesforce List Views Method 2. Mass Updating with Salesforce Data Loader Method 3. Automating Updates with Salesforce Import Wizard Method 4. Mass Update Records with Salesforce Reports Method 5. Bulk Updates via Apex Method 6. Third-party solutions for Salesforce Mass Updates Step-by-step Guide to Salesforce Mass Update with Skyvia Mass Update Using CSV Mass Update Using Filters Mass Update Using SQL Best Practices for Automating Mass Updates in Salesforce Conclusion Why Mass Update Is Important in Salesforce In Salesforce, mass updating records is a daily routine task. You might need to adjust lead statuses, assign new opportunity owners, or make other bulk changes. Here are some common use cases: Sales and marketing alignment : keeping lead statuses, campaign responses, and opportunity stages consistent to synchronize sales and marketing efforts. Operational efficiency : accelerating routine administrative tasks, such as updating record ownership or changing statuses. Data hygiene and compliance : correcting outdated information and standardizing format to maintain data accuracy and regulatory compliance. Business process changes : supporting [workflows](https://skyvia.com/blog/best-data-pipeline-tools/) during organizational changes, such as company restructuring or sales territory adjustments. Strategic decision-making : updating fields relevant to forecasting, reporting, and analytics to provide leadership roles with actual data. Now, let\u2019s imagine a marketing team just finished a highly successful campaign, and you need to edit thousands of leads to reflect their new status. Mass updates in Salesforce can save hours of manual work. 
But they come with certain risks: a single mistake can amplify errors that could take days to correct. Here are potential challenges that may accompany the process:

- Data integrity: unreliable updates can spread errors across many records, leading to inaccurate reports and poor decision-making.
- System performance: bulk record changes can slow down workflows, trigger automation rules, and impact overall productivity.
- Audit trails: when many records are updated at once, it can be hard to trace who made a change, which complicates compliance and audit processes.
- User permissions: not all users have the same level of access. If permissions are set incorrectly, unauthorized changes could compromise data security.
- Backup: Salesforce doesn’t have a built-in undo option for mass updates, so if something goes wrong, it can be challenging to revert the changes. Always back up your data before making bulk updates.

Which Method to Use to Mass Edit Records in Salesforce

Bulk editing records in Salesforce can skyrocket efficiency, but the best strategy depends on your unique requirements. Salesforce offers built-in tools that are effective for simple use cases, such as List Views, Data Loader, and Import Wizard. For complex scenarios, third-party [solutions](https://skyvia.com/blog/export-data-from-salesforce-to-excel/) and custom automation based on Apex can do the job. In this guide, we walk you through six different methods, both built-in and external:

- Method 1: Salesforce List Views
- Method 2: Salesforce Data Loader
- Method 3: Salesforce Import Wizard
- Method 4: Salesforce Reports
- Method 5: Apex
- Method 6: Third-Party Solutions

Understanding their strengths and limitations will help you choose the approach that best suits your scale and data management plan.
| Method | Features | Automation Level | Technical Expertise | Best For |
| --- | --- | --- | --- | --- |
| List Views | Inline editing; quick updates without opening records; limited to certain field types | Manual | Beginner | Quick UI-based updates |
| Data Loader | Bulk import, update, and export via CSV; handles large datasets; requires data formatting and manual file handling | Semi-automated | Intermediate | Large datasets (up to 150M records) |
| Import Wizard | Built-in tool; easy to use but limited to specific objects; cannot delete records | Manual | Beginner | In-browser bulk edits (up to 50K records) |
| Reports | Bulk updates from report results; logs changes; limited field and data type support | Manual | Beginner | Filtered updates with an audit trail |
| Apex | Advanced automation; supports large datasets and complex logic; requires Salesforce development expertise | Fully automated | Advanced (coding required) | Advanced automation and bulk updates |
| Third-party Tools | Advanced data manipulation and automation; more user-friendly and scalable; may require additional costs | Fully automated | Beginner to Intermediate | High customization, flexibility, and performance |

Method 1. Mass Update Using Salesforce List Views

Changes can be made directly within Salesforce’s List View interface. Built-in features allow changing values in text fields, picklists, and dates for multiple records simultaneously. This capability eliminates the need to open each record individually, saving significant time and improving productivity. Users can find and select the necessary records in the view and change values by clicking on the relevant field. Before saving, Salesforce lets users review and confirm changes to ensure data accuracy. This helps prevent data errors and discrepancies, leading to better quality. Additionally, it’s possible to customize list views to display the specific fields users need to edit.
See more on [list views](https://help.salesforce.com/s/articleView?id=xcloud.basics_understanding_list_views_lex.htm&type=5) in the Salesforce documentation.

Pros

- Quick and easy updates without extra tools
- No need to open individual records
- Built-in filters help target specific records

Cons

- Limited to certain field types
- Cannot update records across multiple objects

Best for: users who need a simple way to make quick edits directly within Salesforce.

Method 2. Mass Updating with Salesforce Data Loader

SF Data Loader is a client application for bulk data import and export. It is a native solution that works with CSV files and can update records in Salesforce via its user interface or from the command line. Data Loader is used when working with large data sets of up to 150 million records, with a file size limit of 150 MB. It is free and can perform such operations as Insert, Update, Upsert, Delete, Hard Delete, Export, and Export All. However, it requires data to be formatted before loading into Salesforce, offers static operations with limited flexibility, and can experience bugs and timeouts. To avoid timeouts, users can lower the default batch size to process fewer records at a time. To mass edit existing objects using the Salesforce API, the 15-character, case-sensitive record ID needs to be converted to an 18-character, case-insensitive version to ensure the correct records are updated. Although Data Loader has limitations, it can be useful for simple, one-off jobs.
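The 15-to-18 character ID conversion mentioned above follows a well-known checksum scheme: the 15-character ID is split into three 5-character chunks, each uppercase letter sets one bit of a 5-bit value, and that value indexes into a fixed lookup table to produce one suffix character per chunk. A small sketch of the published algorithm (not tied to any particular tool):

```python
def to_18_char_id(sfid15: str) -> str:
    """Convert a 15-character case-sensitive Salesforce ID to its
    18-character case-insensitive form via the standard checksum suffix."""
    assert len(sfid15) == 15
    table = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for chunk_start in (0, 5, 10):
        value = 0
        for bit, ch in enumerate(sfid15[chunk_start:chunk_start + 5]):
            if ch.isupper():          # each uppercase letter sets one bit
                value |= 1 << bit
        suffix += table[value]
    return sfid15 + suffix
```

Because the suffix encodes exactly which positions were uppercase, the 18-character form survives case-insensitive handling (e.g., in Excel) without mixing up records.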
Check out these articles for in-depth insights on SF Data Loader:

- [9 Best Free and Paid Salesforce Data Loaders](https://skyvia.com/blog/salesforce-best-data-loaders/)
- [Salesforce Data Migration Best Practices](https://skyvia.com/blog/salesforce-data-migration-best-practices/)
- [Choosing the Right Salesforce Data Tool](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/)

Pros

- Handles large volumes of data
- Supports various formats (CSV, Excel)
- Allows for advanced field mapping and error handling
- Can perform updates for multiple objects

Cons

- Requires installation and setup
- More complex interface compared to native tools
- Occasional bugs, timeouts, and crashes

Best for: Salesforce admins dealing with large datasets who need a free tool for basic data updates.

Method 3. Automating Updates with Salesforce Import Wizard

The [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard is a built-in tool to mass update records, allowing you to import up to 50,000 records at once. With it, businesses can bulk edit accounts and contacts, leads, solutions, campaign members, and custom objects. To update existing records, your file must contain a unique identifier to match with existing records. However, this solution does not allow you to import Cases and Opportunities or to delete unwanted records, so users must look for alternative solutions in those scenarios.

Pros

- Imports up to 50,000 records at one time
- Updates contacts, accounts, and leads
- In-browser tool, no additional installation required

Cons

- Does not allow importing certain objects like cases and opportunities
- Users are unable to export data

Best for: users needing to update contacts, accounts, and leads in Salesforce in-browser, without technical expertise.

Method 4. Mass Update Records with Salesforce Reports

This method provides a workaround for making changes when building SF reports.
Users can filter records based on various criteria, such as record type, owner, status, or any other field values. Once the report is generated and the records are identified, users can initiate the update process directly from the results. The feature supports a limited set of columns and data types. SF Reports enable users to make changes across multiple records quickly, without needing to navigate through individual records. Salesforce logs who made changes, when they occurred, and what changed in each record. This audit trail ensures transparency and accountability in the data management process. For more details, visit the [Salesforce Help Center](https://help.salesforce.com/s/articleView?id=analytics.reports_inline_editing.htm&type=5).

Pros

- Quick updates directly from report results
- Provides an audit trail of changes made
- Can filter records based on criteria like record type, owner, or status

Cons

- Not automatic; users need to define filters and generate the report
- Supports a limited set of columns and data types
- Not ideal for large volumes of data

Best for: power users needing to update multiple records quickly from Salesforce reports while maintaining an audit trail.

Method 5. Bulk Updates via Apex

[SF Apex](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_dev_guide.htm) is a programming language that allows users to build custom functionality and automate complex business processes within the platform. When it comes to mass updating records, it can adapt to specific business requirements.
Here is a code example that updates a contact and its related account using two statements:

```apex
try {
    // Query for the contact associated with an account.
    Contact queriedContact = [SELECT Account.Name
                              FROM Contact
                              WHERE FirstName = 'Contact_Name' AND LastName = 'Contact_LastName'
                              LIMIT 1];

    // Update the contact's email.
    queriedContact.Email = 'New_Email';

    // Update the related account's industry.
    queriedContact.Account.Industry = 'New_AccountIndustry';

    // Make two separate DML calls:
    // 1. Update the contact's email.
    update queriedContact;
    // 2. Update the related account's Industry field.
    update queriedContact.Account;
} catch (Exception e) {
    System.debug('An unexpected error has occurred: ' + e.getMessage());
}
```

Apex provides users with the flexibility to implement complex logic and manipulate data in ways that may not be achievable with standard functionality alone. Find more information in the [Apex Developer Guide](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_dev_guide.htm).

Pros

- Supports complex business logic and specific requirements
- Automates bulk updates without manual effort
- Handles large data volumes with optimized [batch processing](https://skyvia.com/blog/batch-etl-processing/)
- Enables integration with other systems via custom code

Cons

- Requires coding knowledge and expertise
- Higher complexity and maintenance overhead

Best for: organizations requiring highly customized and tailored automation rules to meet specific business requirements.

Method 6. Third-party Solutions for Salesforce Mass Updates

[Third-party tools](https://skyvia.com/blog/salesforce-best-data-loaders/) for Salesforce offer additional functionality, better performance, and flexibility beyond the built-in features.
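The batch processing that makes Apex (and bulk APIs generally) efficient for large volumes rests on a simple idea: split the record set into fixed-size chunks and process each chunk in one call instead of one call per record. A language-agnostic illustration in plain Python (the chunk size of 200 mirrors a common Salesforce batch default, but is illustrative, not a hard rule):

```python
def batches(records, size=200):
    """Split a record list into fixed-size chunks, as bulk APIs and batch
    Apex process records in groups rather than one by one."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# 450 records split into chunks of at most 200.
chunks = list(batches(list(range(450)), size=200))
```

Lowering the chunk size is also the standard remedy for timeouts: fewer records per call means each call finishes faster, at the cost of more calls overall.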
Examples of these tools include DemandTools, Weflow, and Skyvia, which provide advanced features for data cleansing and pipeline management. However, it’s important to carefully evaluate the pros and cons of each tool to ensure it meets the organization’s needs and requirements.

Pros

- Extended functionality beyond standard SF features
- User-friendly interfaces for non-technical users
- Enhanced performance, especially for large volumes of data
- Integration capabilities with other systems and applications
- Customization options to tailor the mass update process to specific needs

Cons

- The cost associated with purchasing and maintaining third-party tools
- Reliance on external vendors for support and updates

Best for: organizations looking for more flexibility, advanced features, and high performance for large data updates.

You can find more information about third-party tools in these blog articles:

- [How to import data into Salesforce](https://skyvia.com/blog/importing-data-into-salesforce/)
- [Most Popular Salesforce Reporting Tools](https://skyvia.com/blog/top-salesforce-reporting-tools/)
- [Salesforce middleware integration tools](https://skyvia.com/blog/salesforce-integration-tools/)
- [9 Best ETL Tools for Salesforce](https://skyvia.com/blog/9-best-etl-tools-for-salesforce/)
- [Salesforce Data Migration Best Practices](https://skyvia.com/blog/salesforce-data-migration-best-practices/)

Step-by-step Guide to Salesforce Mass Update with Skyvia

In the following sections, we focus on the Skyvia solution, a cloud-based platform that offers versatile options for mass updating Salesforce records. Users can either upload a CSV file or use filters or [SQL](https://skyvia.com/blog/connect-salesforce-to-sql-server/) to select records directly. Besides, Skyvia provides a [Backup](https://skyvia.com/blog/salesforce-backup-solutions-and-tools/) solution to keep your data safe.
Mass Update Using CSV

When it comes to updating data in bulk, CSV files are the most popular method. No wonder this approach is so widely used by those who have hundreds or thousands of records and need to update them as soon as possible. Below, we explain how to mass edit SF records with Skyvia’s import operation. As an example, we mass update leads in Salesforce. Assume we have leads whose employer company has changed, and we would like to change the company in bulk. We add extra lookup keys to map the FirstName and LastName columns for reliability.

1. Register a free [Skyvia account](https://app.skyvia.com/register).
2. Create a Salesforce connection by clicking +Create New > Connection. Select Salesforce among the connectors. Sign in to your Salesforce production, sandbox, or custom environment. The detailed procedure is described in the [How to Import Data into Salesforce](https://skyvia.com/tutorials/how-to-import-data-to-salesforce) tutorial.
3. Create a new import in Skyvia by clicking +Create New > Import.
4. In the opened window, check that the source type, CSV upload manually, is selected. Then, select the Salesforce connection as a target.
5. When the source and the target have been selected, add a task by clicking the Add new link on the right.
6. When the task editor opens, upload the required CSV file. The columns from the CSV will be displayed on the right of the task editor. Pay attention to the CSV Separator parameter: it should be selected correctly so the columns display as a table.
7. When everything looks fine, click Next step to proceed with target settings.
8. In the Target drop-down list, select the Lead object. Then, select the Update operation type and go further.
9. On the Mapping Definition tab, map the source to target columns. The columns with matching names are mapped automatically. To match columns with different names, use Column or Target Lookup.
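The CSV uploaded in the steps above is just a header row plus one row per lead. A minimal sketch of preparing such a file; the column names FirstName, LastName, and Company follow the example above, while the file name and lead data are made up for illustration:

```python
import csv

# Hypothetical leads whose employer company changed.
rows = [
    {"FirstName": "Jane", "LastName": "Doe", "Company": "NewCo Inc."},
    {"FirstName": "John", "LastName": "Smith", "Company": "NewCo Inc."},
]

with open("leads_update.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["FirstName", "LastName", "Company"])
    writer.writeheader()       # the import tool reads column names from this row
    writer.writerows(rows)
```

Keeping the header names aligned with the target field names is what lets automatic column mapping do most of the work.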
Skyvia offers several mapping options, so you can split data or use complex expressions and lookups. You can find more information on the [Skyvia Documentation Portal](https://docs.skyvia.com/data-integration/common-package-features/mapping/). In our CSV file, we don’t have IDs, which is why we select the Target Lookup mapping. Then we select Lead in the Lookup Object drop-down list and Id in the Result Column list. Under the Lookup Key Column, in the first drop-down list, we choose Company. In the second drop-down list, we select Constant and enter the company name we want to replace in Salesforce. To add another Lookup Key Column, click +Add Lookup Key. In the first drop-down list, we select FirstName. In the second drop-down list, we keep Column as it is. In the third drop-down list, we select FirstName. We click +Add Lookup Key again and repeat the same action, but with LastName. Adding extra lookup keys helps you better find the required leads and replace an old employer company with a new one. After finishing with the Id column, we proceed with the Company column. We select Constant and enter the company name that should be inserted instead of the old one. Click Save to complete your task. Create and run the import by clicking the corresponding buttons. You can check your job status on the Monitor tab. Reviewing the result CSV helps you check the updated records. As you can see, everything is quite simple and easy to configure. In the same way, you can mass update any Salesforce fields or objects you need.

Mass Update Using Filters

To update only the needed data set, you can use filters in [Skyvia](https://skyvia.com/data-integration/salesforce-data-loader). This is the right solution if the records can be retrieved from SF directly. As an example, we will mass update a contact owner in Salesforce. Create a new import in Skyvia as described above.
1. When the import editor opens, click Data Source database or cloud app and select the Salesforce connection as a source. Then, select the same connection as a target.
2. When both connections are selected, proceed with adding a task. After clicking the Add new link, you are taken to the task editor window.
3. On the Source Definition tab of the task editor window, select Contact in the Source drop-down list.
4. Apply filters to select records for the update. In the Filter section, click the +Condition button on the right and set the condition according to which contacts will be updated. We select Owner with a certain email. In Skyvia, you can also add multiple filters, which can be combined into groups. Each group can consist of several filters and/or subgroups united with a logical operator AND or OR. Find out more about it in the [Filter Settings](https://docs.skyvia.com/data-integration/common-package-features/filter-settings.html) topic.
5. When everything is ready on the Source Definition tab, click Next step.
6. On the Target Definition tab, select the same Contact object in the Target drop-down list. Then, select Update as an operation type and go further.
7. On the Mapping Definition tab, map source and target columns. In our case, we only need to map the Id and OwnerId columns. The Id column should be mapped through column mapping; it will be used to search for a record to update. The OwnerId column should be mapped through target lookup. The Target Lookup mapping allows getting the ID directly from target tables by other row-identifying fields, such as emails or names.
8. When it is done, click Save. Then, click Create and run the import. On the Monitor tab, you can check the results by clicking the Run ID line.

Mass Update Using SQL

Another way to edit Salesforce data in bulk is through [Skyvia](https://skyvia.com/query) Query using SQL. Skyvia Query lets you preview changes before applying them, so the operation is safe.
Skyvia Query can be used by both experienced SQL users and SQL beginners. Besides UPDATE statements, it also supports [SQL SELECT, INSERT, and DELETE](https://docs.skyvia.com/supported-sql-for-cloud-sources/) for cloud sources. You can find more in the [Skyvia documentation](https://docs.skyvia.com/query/). Below, we describe in simple steps how to mass update Salesforce opportunity owners.

1. Create a query in the standard way by clicking +Create NEW and selecting Query Builder. The query editor opens.
2. To query data, first [create a connection to Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) by clicking the +New connection link (in case you have not created it yet), or select the already created Salesforce connection from the drop-down list on the left.
3. To update an opportunity owner, you need to know the owner ID. If you do not remember it, you can easily extract this ID from the User table, filtering User by email.
4. To query data from a table, simply drag this table from the Salesforce object list to the Result Fields pane. In our example, we drag User to the Result Fields pane.
5. Next, to filter data by email, we click User and drag the User.Email column to the Filters pane. Then we click User.Email in the Filters pane and configure the filter in the Details pane on the right side of the query.
6. Finally, we check our query and click Execute to run it. The query result field displays a record with the User ID. Copy the ID to use in your second query.
7. Now, we need to create another query to edit the OwnerId. To create the second query, click the + button on the query page tab bar. A new query will be created.
8. To switch to the necessary view, click the corresponding button on the right side of the query toolbar; in our case, it is the SQL view button.
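In SQL view, the owner-ID lookup from the steps above amounts to a single SELECT against the User table; the email address here is a placeholder:

```sql
SELECT Id
FROM User
WHERE Email = 'owner@example.com'
```

The Id value this returns is what gets pasted into the UPDATE statement in the second query.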
We enter the UPDATE statement in the code editor:

```sql
UPDATE Opportunity
SET OwnerId = '005A0000001gx22BBG'
WHERE OwnerId = '005A0000001gx22IAA'
```

When you have finished your second query, click Execute to run it and mass update Salesforce opportunities (the opportunity owner, to be exact).

Best Practices for Automating Mass Updates in Salesforce

The volume of data is constantly increasing, and if businesses continue to rely on outdated manual processes, they risk falling behind their competitors. This makes automation essential for fueling business growth. A solid [automation strategy](https://www.gartner.com/en/articles/process-automation) is key to boosting productivity and making smarter decisions. Two key aspects of automating the process are:

- Scheduling updates using SF features or third-party tools.
- Implementing [best practices](https://skyvia.com/blog/salesforce-to-salesforce-integration/) when setting up automation.

While Salesforce provides built-in tools for mass updates, third-party solutions offer greater flexibility, customization, and advanced features to keep your data accurate with less effort. Scheduling enables businesses to automatically update information on a daily, weekly, or monthly basis. To schedule updates according to particular criteria or triggers, Salesforce offers built-in functionality like [Flow Builder](https://help.salesforce.com/s/articleView?id=platform.automate_about.htm&type=5). Alternatively, third-party software like Xapex or Skyvia provides scheduling and more features for complex integration scenarios.

Follow these best practices to ensure efficient update execution:

- Define the objectives and scope of the automation process, including the specific records and fields to be updated.
- Develop a data governance strategy to maintain data quality and integrity throughout the automation process.
- Test the automation workflow in a sandbox environment before deploying it to production to identify any potential issues.
- Back up data before making any changes.
- Monitor the automation process and make adjustments as needed.
- Implement proper error- and exception-handling mechanisms to address issues that may arise during the process.
- Document the automation workflow, including the criteria, actions, and scheduling parameters, for easier troubleshooting and maintenance.

Conclusion

Keeping your Salesforce records up to date is important for clean data and smooth operations. The right method depends on your needs, whether it’s a minor adjustment, a large-scale update, or a highly customized change. For simple updates, List Views and the Data Import Wizard do the job. If you’re dealing with thousands of records or unsupported objects, Data Loader is a solid choice. Need full customization? Apex lets you build tailored solutions, but it requires coding skills. Third-party tools bring more power and flexibility, and Skyvia makes mass updates seamless. With CSV uploads, filter-based updates, and SQL queries, you can update records with control and precision. Plus, you can schedule updates and use Skyvia Query for even more customizable bulk changes, all with expert support ready to help.

FAQ for Mass Update Records in Salesforce

Can I mass update records in Salesforce?

Yes, you can update multiple records at once using List Views, Salesforce Data Loader, or third-party tools like [Skyvia](https://app.skyvia.com/). The method you choose depends on the number of records and the complexity of the update.

How many records can I edit in Salesforce at once?

Salesforce allows up to 200 records to be edited at once via [List View](https://help.salesforce.com/s/articleView?id=xcloud.customviews_edit_inline_listview_lex.htm&type=5). For larger updates, you can use Data Loader or external tools that support bulk operations.

Can I mass delete data in Salesforce?
Yes, you can mass delete cases, solutions, accounts, contacts, leads, products, and activities using the [Mass Delete Wizard](https://help.salesforce.com/s/articleView?id=xcloud.admin_massdelete.htm&type=5). For more complex scenarios, consider Data Loader or third-party tools. Always back up your data, as deleted records may not be recoverable.

How to mass update leads in Salesforce?

Leads can be updated in bulk using List Views for quick edits, [Salesforce Data Loader](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/data_loader_intro.htm) for large datasets, or third-party tools for automation. Filtering by lead status or source helps narrow down the records and keeps updates accurate.

How to change record ownership for multiple Salesforce records?

Use the [Mass Transfer](https://help.salesforce.com/s/articleView?id=platform.admin_transfer.htm&type=5) tool to reassign owners for leads, accounts, and custom objects. For complex ownership updates across multiple objects, Salesforce Data Loader or third-party tools like Skyvia provide more flexibility.

Can I update multiple contacts at once in Salesforce?

Yes, you can update contacts in bulk using List Views for small changes or Data Loader for large-scale modifications. To modify related data, consider using the Apex programming language or automation tools like Skyvia.

How to mass update opportunities in Salesforce?

Opportunities can be updated through List Views for stage or status changes, Data Loader for large updates, or third-party tools for advanced filtering and automation. Ensure updates align with your sales pipeline rules.
[Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/)

[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

The Modern Data Stack: Revolutionizing Data Architecture in 2025

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)

October 20, 2022
Companies must be able to measure their performance and develop new strategies to keep up with the competitive business environment. However, significant shifts in digital analytics over the last ten years have created new challenges that shape how data works today. The first is the rise of cloud technology, and the second is the massive scale of data applied globally. That’s how the concept of the modern data stack appeared. This article focuses on the meaning of a modern data stack and how to build one amid the broad product offerings in the data analytics space. Here’s what we cover in this article:

Table of contents

- What is a Modern Data Stack?
- Modern Data Stack Architecture
- Advantages of Using a Modern Data Stack
- Modern vs. Traditional Data Stack
- Modern Data Stack Tools in 2025
- Build a Modern Data Stack in 30 minutes
- Examples of Applying the Modern Data Stack
- Key takeaways

What is a Modern Data Stack?

Any business has two main goals: earn more and spend less. Having implemented a modern data stack, a business of any size can make data-driven decisions to reach its goals and evolve faster than competitors who still run their business operations in Excel.

Dmitry Alasania, Head of Product Growth at Skyvia

When talking about the modern data stack in a business context, it includes the technologies that help companies use data for decision-making.
It\u2019s basically the same thing data engineers were already doing, but now using new, cloud-based technologies to create applications that ingest massive amounts of data, run massive data analytics and use those results to generate insights that have never been possible. As such, the definition of a modern data stack cannot be clearly stated since every business tries to adapt modern technologies to their requirements. However, there are definite features of the modern data stack that identify it: It\u2019s cloud-based , requires very little maintenance, is easy to install, and can scale quickly with little effort. It can be used by small and medium-sized data teams , as it has a lot of out-of-the-box functionality and doesn\u2019t rely on the number of data professionals. It offers a lot of integration opportunities for creating a comprehensive data ecosystem. Overall, the modern data stack centerpiece is about democratizing data usage: making data more accessible, covering different dimensions of business, improving analytics capabilities, and simplifying the infrastructure. Modern Data Stack Architecture Being an all-in-one [data integration solution](https://skyvia.com/blog/data-integration-tools/) , Skyvia is a Swiss knife for building a strong data architecture without the need to hire a set of tools with different UI-s, functionalities, and billings that need to be managed by a dedicated team. Dmitry Alasania, Head of Product Growth at Skyvia Over the past five years, the amount of data processed has increased so much (up to 2025, global data creation is projected to grow to [more than 180 zettabytes](https://www.statista.com/statistics/871513/worldwide-data-created/) ) that it has become practically unmanageable for small data teams, as they have to work with the growing number of different and disparate data sources. 
The age of tribal knowledge is over: simply asking a colleague what kind of data is stored in a dataset is no longer an option. Employees now need tools to manage and process data at scale, from operational analytics and monitoring to data visualization and high-speed accessibility. Accordingly, the modern data stack architecture must meet these requirements. The commonly agreed categories of MDS architecture are:

- Data ingestion. Usually, data is collected from first- and third-party sources and used to build a single source of truth.
- Data storage. For storing data, it’s better to have two options: a data lake to keep historical data and a data warehouse for interpretation and processing.
- Data transformation. Data is transformed (from aligning field formats across data sources to complex data validation), cleaned, etc.
- Business intelligence. Services and platforms used for reporting, analytics, and visualization.

However, this is an incomplete version that doesn’t fully serve the needs of product and growth teams. A mature MDS should also include:

- Data science. For extracting insights from [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/).
- Reverse ETL. For activating data by sending it back to business applications from a DWH.
- Data orchestration. For organizing large amounts of data, which is critical when the modern data stack has just been established.
- Data quality and governance. These become the number one priority once the system is up and running and accumulates a large amount of data.

To summarize, a modern data stack architecture must be designed to make the organization’s work with big data efficient: extract insights from the data and then act upon them in new ways.
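To make the four core categories concrete, here is a minimal, self-contained sketch in plain Python. The records, field names, and the in-memory list standing in for a warehouse are all hypothetical; it illustrates the flow of ingestion, transformation, storage, and a trivial BI aggregate, not any particular product:

```python
def ingest():
    """Data ingestion: collect raw records from 1st- and 3rd-party sources."""
    return [
        {"email": "ANNA@EXAMPLE.COM", "amount": "120.50", "source": "crm"},
        {"email": "bob@example.com", "amount": "80", "source": "billing"},
    ]

def transform(records):
    """Data transformation: align field formats and cast types."""
    return [
        {
            "email": r["email"].lower(),            # unify email casing
            "amount": round(float(r["amount"]), 2), # string -> numeric
            "source": r["source"],
        }
        for r in records
    ]

def store(warehouse, records):
    """Data storage: load the cleaned records into the warehouse."""
    warehouse.extend(records)

def report(warehouse):
    """Business intelligence: a trivial aggregate for a dashboard."""
    return {"total_revenue": sum(r["amount"] for r in warehouse)}

warehouse = []
store(warehouse, transform(ingest()))
print(report(warehouse))  # {'total_revenue': 200.5}
```

In a real stack, each function is a separate product (an ingestion tool, a cloud warehouse, a transformation layer, a BI tool), which is exactly the modularity described above.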
Advantages of Using a Modern Data Stack

Switching to a modern data stack might take some time and effort, but the result pays off: the MDS brings substantial advantages in the long run, some of which we explore below.

- Optimized Costs. As a modern data stack is deployed in the cloud, there is no need to invest in maintaining on-premises hardware. Spending on IT and engineering teams decreases as everything is set up faster and more easily.
- Flexible Scalability. Cloud-based solutions ensure a high degree of scalability, so businesses allocate storage and computing power according to their needs. Getting more or fewer cloud resources is possible at any time to match the current workload.
- High Customizability. The modern data stack keeps modularity in mind, ensuring compatibility between its components. This means businesses are free to pick tools for data intelligence, data transformation, and other processes without worrying about how to combine them.
- Superior Data Governance. At any stage of the data lifecycle, a modern data stack ensures superior data governance.
- Real-Time Data Processing. An MDS may comprise technologies that support real-time data processing and analytics.
- Big Data Support. Owing to the distributed architecture of the cloud solutions included in an MDS, storing and processing large amounts of data is no longer rocket science.

Modern vs. Traditional Data Stack

If the advantages of the MDS don’t seem convincing enough, let’s look at how it differs from the traditional data stack. Such a comparison helps businesses decide whether they really need it or whether the traditional one is good enough for them.
| Traditional Data Stack | Modern Data Stack |
| --- | --- |
| Hosted on-premises | Hosted in the cloud |
| Has a coupled structure | Has a modular structure |
| Complex setup requiring large IT teams | Less time spent on technical configuration |
| Requires a serious technical background | Suitable for users without an extensive technical background |
| Contains traditional RDBMS | Works with RDBMS as well as big data and unstructured data |

Modern Data Stack Tools in 2025

Every year, the number of tools on the data/AI landscape grows rapidly as market leaders and a new generation of data startups enrich their product offerings. As seen on [Matt Turck’s](https://mattturck.com/data2021/) annual map, there are myriads of products in each category, so how can you choose the best tools for your modern data stack? A full-resolution version of the landscape image is available [here](http://46eybw2v1nh52oe80d3bi91u-wpengine.netdna-ssl.com/wp-content/uploads/2021/12/2021-MAD-Landscape-v3.pdf). It takes a lot of planning to get a clean and neat data stack, as every tool and application should be flexible and work harmoniously with the others. Companies are embracing microservices and REST APIs that break the entire data architecture into more manageable pieces, allowing businesses to choose the right tools for separate problems. Each component should be self-reliant enough to be swapped out independently of the others. It shouldn’t take a Ph.D. to understand what a component does, so you can replace it with one that works better for you, saving time and money. While data architectures vary by company, there are still core components used in almost all modern data stacks.

Data Ingestion

Ensuring that all the teams in a company use the same data and operate within a single source of truth is one of the essential features. Data ingestion is the process of taking data from one place and moving it to another to make it available for further manipulation and analysis.
Popular options: [Stitch](http://stitchdata.com), [Fivetran](https://www.fivetran.com/), [Skyvia](https://skyvia.com/platform/).

Data Warehouse

Data volumes continue to grow, and, as a result, companies mainly orchestrate various tools and frameworks based on relational databases. Meanwhile, NoSQL databases are ideal for storing unstructured data and, with some effort, can be deployed in hybrid environments. However, these databases lack compatibility with most tools used in such environments. As a result, companies are shifting to cloud [data warehouse solutions](https://skyvia.com/blog/best-data-warehouse-tools/) to overcome these limitations.

Popular options: [Azure](https://azure.microsoft.com/en-us/), [BigQuery](https://cloud.google.com/bigquery), [Snowflake](https://www.snowflake.com/en/), [Redshift](https://aws.amazon.com/redshift/?nc1=h_ls).

ETL Data Transformation

Data transformation is about changing data from one structure or format to another. These steps are crucial in data integration since they prepare data for further analysis, visualization, and reporting. It’s done using extract, transform, and load [(ETL) techniques](https://skyvia.com/blog/elt-vs-etl/).

Popular options: [dbt](https://www.getdbt.com/), [Skyvia](https://skyvia.com/platform/), [Improvado](https://improvado.io/).

Reverse ETL

Reverse [ETL tools](https://skyvia.com/blog/etl-tools/) construct a data pipeline where a DWH is the source and a business app is the target. They extract data from a data warehouse, transform it to match the format of the destination platform, and load it there. That way, operational teams obtain the data they need right in the business applications of interest and can work with it from there.
Popular options: [Polytomic](https://www.polytomic.com/), [Skyvia](https://skyvia.com/platform/), [Census](https://www.getcensus.com/).

Data Orchestration

[Data orchestration tools](https://skyvia.com/blog/best-data-orchestration-tools/) automate data collection from various sources by configuring multiple data streams. They also consolidate data in the target platform for further use in analysis or forecasting.

Popular options: [Prefect](https://www.prefect.io/), [Skyvia](https://skyvia.com/platform/), [Apache Airflow](https://airflow.apache.org/).

Business Intelligence

Business intelligence tools are in charge of analyzing data and presenting it to users in an easy-to-understand manner. They transform data into visual elements such as charts, graphs, and tables. Almost all business intelligence tools help non-technical users understand and analyze data without any programming knowledge.

Popular options: [Tableau](https://www.tableau.com/), [Power BI](https://powerbi.microsoft.com/en-au/), [Google Data Studio](https://datastudio.google.com/).

Data Science

Similar to BI tools, data science applications analyze data and extract valuable insights from it. What makes them different from BI solutions is that data science tools predict and forecast the future of the business based on already existing data.

Popular options: [Apache Spark](https://spark.apache.org/), [Project Jupyter](https://jupyter.org/), [SAS](https://www.sas.com/).
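Of the categories above, reverse ETL is the least intuitive, so here is a minimal sketch of its extract-transform-push shape in plain Python. The warehouse rows, the CRM field names, and the `push_to_crm` function are all hypothetical stand-ins, not a real vendor API:

```python
# Hypothetical rows already sitting in the data warehouse.
warehouse_rows = [
    {"account_id": 1, "email": "anna@example.com", "ltv": 1200.0},
    {"account_id": 2, "email": "bob@example.com", "ltv": 450.0},
]

def to_crm_payload(row):
    """Transform a warehouse row into the destination app's field names."""
    return {"Email": row["email"], "Lifetime_Value__c": row["ltv"]}

def push_to_crm(payloads):
    """Stand-in for loading records into a business app via its API."""
    return [{"status": "updated", **p} for p in payloads]

results = push_to_crm([to_crm_payload(r) for r in warehouse_rows])
print(results)
```

A reverse ETL tool does exactly this on a schedule, with connector catalogs and error handling in place of the stand-in function.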
While choosing modern data stack tools, consider the following points to ensure that your stack is future-proof:

- Services should be flexible enough to adapt quickly to new platforms, tools, or technologies.
- The product is capable of scaling with large volumes of data.
- The customer support team responds quickly to customer inquiries and issues.
- Governed data should be accessible to all solutions, consumable by every tool that uses it, and not disrupted when one tool is swapped for another.
- Pricing plans are reasonable and flexible and can be changed according to business goals.
- No special qualifications are needed to get started with the services.

Build a Modern Data Stack in 30 minutes

Advanced data tooling has historically been a costly endeavor. But thanks to the latest cloud computing innovations, it’s easier than ever to set up a data stack and use it to power real-time analytics solutions. Regardless of your business size, it’s possible to implement a cloud-based warehouse with data from multiple sources connected to an analytics/business intelligence (BI) platform that’s up and running fast. The foundation of your modern data stack is built on three pillars: a cloud-based warehouse, data pipelines, and an analytics platform.

Step 1. Choose a cloud-based data warehouse

If you want to store and process data efficiently, you need a cloud-based warehouse, the foundation of a modern data stack. With cloud-based solutions, businesses have lower data infrastructure and management costs, as well as the ability to seamlessly access data from any device. As the need to manage data becomes more complex, it’s only natural that organizations spend more time trying to manage this environment. Today, cloud-based services can handle performance and scaling in a variety of ways, through an administrative interface, with few demands on the engineering team.

What to consider?
- Easy to set up and maintain
- Fit with the existing technology stack
- Uses plain SQL
- A strong ecosystem
- Cost and cost control

Popular options: [Azure](https://azure.microsoft.com/en-us/), [BigQuery](https://cloud.google.com/bigquery), [Snowflake](https://www.snowflake.com/en/), [Redshift](https://aws.amazon.com/redshift/?nc1=h_ls).

Step 2. Integrate data from all of your applications

You should find a data pipeline management tool that provides scalable data loading options, can capture data from various systems (e.g., CRM, billing systems, inbound marketing platforms, etc.), and can store diverse data streams in the data warehouse of choice.

What to consider?

- Connects to all the data sources the business needs
- Reliability
- Data extraction at the frequency the business needs
- Uses plain SQL
- Cost and cost control

Since Skyvia meets all these requirements, let’s see how it can help businesses of all sizes cover their needs while creating a modern data stack. [Skyvia](https://skyvia.com/) is an all-in-one, cloud-based platform for data integration that performs different integration techniques and supports a broad set of integrations, including over 130 cloud applications and the most widely used databases and cloud data warehouses. It’s flexible enough to support a wide range of data integration scenarios: cloud application data import and export (ETL, ELT, [Reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/)), database import and export, and one-directional and bi-directional synchronization. Skyvia offers solutions for businesses of all sizes: its pay-as-you-go pricing and scaling capabilities support businesses as they evolve. You can start using Skyvia as a startup with just a few gigabytes of data, and it can be further adapted as the team collects terabytes of data. Whether yours is a small, medium-sized, or larger business, Skyvia has a solution for you.
Sometimes, more often than we all would like, real-life integration scenarios are complex and require a more flexible tool. Skyvia offers Data Flow and Control Flow features for such complicated business cases as:

- Complex, multistage transformations.
- Running integrations in a specific order.
- Obtaining data from one data source, enriching it with data from another, and finally loading it into a third one.
- Performing pre- and post-integration tasks.

Popular options: [Skyvia](https://skyvia.com/platform/), [Stitch](https://stitchdata.com/), and other [best ETL tools](https://skyvia.com/blog/etl-tools/).

Step 3. Analyze the data

Now that the data is available (extracted, loaded, and transformed), you need a tool to analyze it. While choosing the proper tool, keep in mind that self-service analytics and business intelligence (BI) tools let business users work directly with data and gain insight. The difference between showing and seeing data cannot be overstated, as visual presentation is a real superpower. In a nutshell, dashboards are as useful for managers as control panels are for drivers. Today’s BI tools help non-technical users explore data without needing to know SQL. This frees business users from depending on developers and analysts and encourages everyone to explore and learn from data.

What to consider?

- Ease of use and scalability
- Vendor ecosystem
- Cost
- Delivers results quickly
- Uses plain SQL
- Easy to share

Popular options: [Tableau](https://www.tableau.com/), [Power BI](https://powerbi.microsoft.com/en-au/), [Looker](https://www.looker.com/), etc.

Examples of Applying the Modern Data Stack

Thanks to the modularity of the modern data stack, it allows for a wide range of tool combinations. Here are just a few examples of such combinations across industries:

Analytics team using Skyvia + SQL Server + Power BI.
In this case, Skyvia is used for data ingestion, transformation, and loading into SQL Server. As SQL Server is a Microsoft product, it coordinates perfectly with Power BI for analytics and reporting.

Product growth team using Snowflake + Skyvia + HubSpot.

In this case, Snowflake accumulates data from multiple data sources (Microsoft Dynamics, Shopify, Dropbox). After the data is transformed into a unified format, Skyvia applies reverse ETL to move it to HubSpot. Marketing and product growth teams get a unified view of the customer base, which provides insights into customer behavior and preferences, allowing them to customize marketing and ad campaigns, improve retention rates, and so on.

The contents of a data stack may vary across companies depending on the industry. Some businesses prefer a data warehouse (Snowflake, Amazon Redshift, Google BigQuery), while others need a data lake (Amazon S3) for storage. Depending on the data flow, some companies might select ETL tools (Skyvia, Talend) for data ingestion and transformation, while others might require real-time [data processing tools](https://skyvia.com/blog/best-data-processing-tools/) (Apache Kafka). Those who need deep analysis of incoming streams benefit from data intelligence and data science applications (Looker, Tableau).

Key takeaways

As data itself turns into a product, there is a shift in how companies run their data functions. The world of business has never been as digital or cloud-focused as it is today, meaning that people’s desire to work with data is finally matched by their ability to do so. Moreover, the human dimension has also changed a lot, as there are now more and more data-literate people. Obviously, these shifts in the digital landscape open up extensive opportunities for businesses. Can the modern data stack become a lingua franca for how data is transformed, accessed, and queried? Is it bending the curve?
With modern data architecture, companies can quickly build a real-time data stack and get it up and running. Common reasons for adopting it include:

- Reducing engineering time spent on maintaining data analytics services.
- Outgrowing manual reporting across the business.
- Better integration of data from multiple systems.
- High costs of the existing analytics platform.

The modern data stack gives businesses a bias for action. Creating one enables organizations to devote more time to analyzing their data and less time to engineering their data processing pipelines. Since inaccurate or out-of-date data can cause companies to miss opportunities, waste money, or incur unnecessary risk, we recommend using a flexible, feature-rich platform like Skyvia. It’s flexible and robust enough to answer complex business questions.

[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Salesforce NetSuite Integration: The Complete 2025 Guide

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

May 16, 2025

Customer relationship management (CRM) and enterprise resource planning (ERP) tools are two necessities that companies use to make their business productive. Though Salesforce and NetSuite both offer CRM-like features, they serve different purposes from a global point of view. Salesforce CRM, for instance, is a perfect choice for efficient customer management and high satisfaction rates. NetSuite, in turn, is a stable ERP system that allows organizations to effectively manage their daily operations and resources.
At some point, organizations might need Salesforce to NetSuite integration to enhance the effectiveness of their business operations and improve collaboration between finance and sales teams. This article provides a step-by-step guide that is applicable to any workflow. It also includes a range of tools to connect NetSuite and Salesforce, along with a real-world example of an integration implementation.

Table of contents

- Why Integrate Salesforce with NetSuite?
- Benefits of Integrating NetSuite with Salesforce
- Integration Methods: Choose What Works Best
- Comparison of Integration Methods: Which One to Select
- Step-by-Step Guide to Integrating Salesforce with NetSuite
- Common Use Cases for Salesforce NetSuite Integration
- Top Tools for Salesforce and NetSuite Integration
  - Skyvia
  - Breadwinner
  - Celigo (Integrator.io)
  - Dell Boomi
  - MuleSoft
- Comparison of Integration Tools
- Example: Integrating Salesforce and NetSuite with Skyvia
  - Option 1. Data Integration (Import Scenario)
  - Option 2. Data Integration (Data Flow Scenario)
  - Option 3. Automation (Trigger-Based Integration)
- Challenges of the Salesforce and NetSuite Integration and How to Overcome Them
- Conclusion: Build a Smarter Workflow with Salesforce NetSuite Integration

Why Integrate Salesforce with NetSuite?

The connection between Salesforce and NetSuite is only a drop in the ocean of similar cases where a CRM system needs to be synchronized with ERP software. Many companies face difficulties bringing together a customer management tool and an enterprise planning system, since their purposes and structures differ. Still, organizations need to connect these services, since such a fusion grants numerous benefits. In this section, we cover a number of reasons to integrate Salesforce and NetSuite:

- Align all business operations across departments.
- Provide a comprehensive view of each customer to grant them a personalized experience.
- Avoid duplicate data entries and records across the organization.
- Eliminate delays in revenue processing and budgeting.
- Enhance collaboration between the Sales and Finance departments.
- Improve the accuracy of reporting.

Benefits of Integrating NetSuite with Salesforce

Once integrated, Salesforce and NetSuite enable a seamless data flow between front- and back-office operations. In this section, we cover some positive outcomes businesses can expect from the integration.

Up-to-Date Data Across Systems

Businesses handle vast amounts of data from sales, finance, marketing, support, and other departments. However, this data needs to be gathered and managed properly to allow organizations to run effectively. For instance, changes in Salesforce data or invoices should be reflected in the NetSuite accounting software and vice versa. It’s possible to update databases using different methods. Traditional data entry techniques, for example, are time-consuming and inefficient, since any mistake made during data processing may prevent a company from receiving its payments on time. Thus, companies prefer NetSuite Salesforce integration as a cost-effective practice to keep their data up to date.

360°-View of the Customer Profile

No customer wants to wait long for the requested information. NetSuite Salesforce integration offers the chance to reduce the time needed to connect with clients who use your products or services. Once a customer request is submitted, your support team no longer has to check multiple systems to understand the situation. Instead, they can check the claim details in the single source of truth created by the integration. That way, the resolution time of the request decreases, and the customer satisfaction rate improves.

Sales and Finance Team Coordination

It’s essential to have strong sales and finance teams aligned around the same data and vision.
With Salesforce and NetSuite working as one, both teams share a single source of information, which eliminates miscommunication and gives them a shared, up-to-date view of key business data (products, customers, orders, payments, etc.). Both sales and finance teams can spend more time growing the business and less time reconciling data.

Streamlined Reporting and Forecasting
With the Salesforce and NetSuite integration, it's easy to create reports and dashboards that provide insight into current business processes. It also makes it possible to accumulate historical data and use it as a basis for more sophisticated forecasting. Such results can contribute significantly to creating and refining business strategy.

Enhanced Productivity
The Salesforce NetSuite integration automates multiple cross-departmental processes and reduces manual input. As a result, the load of repetitive tasks diminishes, which improves productivity and frees time for more creative work.

Integration Methods: Choose What Works Best
Given the benefits that integrated Salesforce and NetSuite offer, companies look for solutions to connect the two systems. As demand drives supply, there are dozens of methods and tools available for integrating [CRM and ERP](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) platforms. Here, we explore some effective methods of integrating Salesforce and NetSuite:
- Native Connectors
- Third-Party Integration Platforms
- Manual Data Transfer
- Custom API Integration

Native Connectors

SuiteTalk
NetSuite provides a SOAP-based web services platform, [SuiteTalk](https://www.netsuite.com.au/portal/au/developers/resources/suitetalk.shtml), that facilitates integration of NetSuite with other services. It works well with both modern tools and legacy systems.
Here are some key features of SuiteTalk:
- Interaction with any platform or programming language that supports the SOAP standard.
- A role-based authentication model similar to that of NetSuite itself.
- An error-handling architecture with clear warning messages.

SuiteCloud
[SuiteCloud](https://www.netsuite.com/portal/platform.shtml) is a platform that allows companies to extend the capabilities of the core NetSuite product. It provides a set of solutions for configuring and customizing the ERP system to specific business needs. SuiteCloud contains many components, but the ones below are the most frequently used for integration purposes:
- SuiteScript allows users to customize and automate business processes programmatically using scripts.
- SuiteFlow enables businesses to create event-based workflows to streamline daily operations.

NetSuite Connector
[NetSuite Connector](https://www.netsuite.com/portal/products/connectors.shtml) is a native solution that lets users quickly map data between NetSuite and e-commerce platforms, logistics applications, payment systems, and many others. It automates data integration between the ERP system and the selected third-party tools, keeping information synchronized between organizational apps and improving the quality and precision of business operations.

Third-Party Integration Platforms

iPaaS Solutions
iPaaS stands for [integration platform as a service](https://skyvia.com/blog/best-ipaas-solutions/); such platforms can connect NetSuite to other tools with a no-code or low-code approach and can transform data on the fly before syncing it. In general, iPaaS platforms offer fast setup, error handling, and templates that considerably simplify the integration process.
Some popular examples of [iPaaS solutions](https://skyvia.com/blog/best-ipaas-solutions/) are:
- Celigo
- Dell Boomi
- MuleSoft
- Skyvia

Salesforce-Native Apps
Connecting NetSuite to Salesforce is also possible with extensions on the CRM side: multiple apps can be installed directly in Salesforce via AppExchange. Some popular Salesforce-native apps for integration with NetSuite include:
- [Breadwinner](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3A00000G0opvUAB) provides real-time synchronization of financial data between the systems.
- [Peeklogic](https://appexchange.salesforce.com/appxListingDetail?listingId=e76b980e-c76a-4d1c-8bf6-89cf8397bf7f) offers object management and tracking, data synchronization, and workflow automation.

Manual Data Transfer
In certain cases, manual integration can also work for data exchange between Salesforce and NetSuite. It involves exporting a CSV file from one app and importing it into the other. The process is extremely time-consuming and error-prone, but it can be a reasonable choice when only a one-time data transfer is needed.

Custom API Integration
This approach suits teams with technical expertise and programming skills. It involves using the NetSuite and Salesforce APIs. Note that the NetSuite REST API services are available only to users who have enabled these features in their accounts and whose user role has the required permissions. Check the documentation of the [NetSuite REST API Browser](https://system.netsuite.com/help/helpcenter/en_US/APIs/REST_API_Browser/record/v1/2022.2/index.html) for the latest updates. In a nutshell, the custom API integration process includes the following steps:
- Adding a RESTlet script in NetSuite, with a separate JS file created for each object.
- Creating authentication rules in Salesforce.
- Starting data transfer between NetSuite and Salesforce.

Though the data sync can be performed in real time, [batch processing](https://skyvia.com/blog/batch-etl-processing/) is often the better option.

Comparison of Integration Methods: Which One to Select
Having explored the most popular NetSuite Salesforce integration approaches, let's compare them. The table below summarizes their main characteristics to help you spot the differences and see which method fits your business situation.

| Method | Real-time sync | Coding required | Setup difficulty | Best for | Key limitations |
|---|---|---|---|---|---|
| SuiteTalk | ✅ | ✅ | Medium | Customized workflows | High development effort, slower implementation |
| SuiteCloud tools | ✅ | ✅ | Medium | Business process automation and customization | High development effort, slower implementation |
| Native Connector | ✅ | ❌ | Low | Standard use cases, fast deployment | Limited flexibility, early in rollout |
| iPaaS platforms | ✅ | ❌ | Low | Scalable flows, automation | Depends on vendor capabilities |
| Salesforce-native apps | ✅ | ❌ | Low | Financial visibility, sales-finance sync | Narrow focus, usually finance-centric |
| Manual method | ❌ | ❌ | Low | One-time data transfer for legacy systems | Error-prone, unscalable |
| Custom API | ✅ | ✅ | High | Proprietary logic, total control over integration | Time-consuming, costly, requires maintenance |

Step-by-Step Guide to Integrating Salesforce with NetSuite
Before you start moving data between your Salesforce CRM and NetSuite, let's review the general integration steps. This is a tool-agnostic approach that applies to any integration workflow you select.

Step 1: Define Your Integration Goals
At this initial stage, define three core aspects:
- The frequency of data transfer (real-time or scheduled batch loading).
- What needs to be transferred (Opportunities, Invoices, Accounts, and others).
- The direction of data flow (one-way or bi-directional synchronization).

Step 2: Map Core Data Entities
Depending on the data entities chosen in the previous step, think about how to map them. For instance, when syncing customer records or orders, ensure that key fields (names, IDs, dates, etc.) are correctly aligned. Validate data types and formats on both sides, and apply transformations where needed to ensure compatibility.

Step 3: Choose the Right Integration Method
At this step, refer to the integration methods described in this article. Prefer an approach that lets you send the chosen objects with all the required settings. Also weigh the following factors:
- Technical expertise of the team
- Customization requirements
- Budget
- Urgency of the integration implementation

Step 4: Connect Salesforce and NetSuite Securely
Establish a connection between the CRM and ERP systems before the actual data transfer takes place. With iPaaS platforms, the whole process is semi-automated and takes several minutes at most. With a programmatic setup, it takes somewhat longer to establish secure access using OAuth authentication and secure credentials, and to configure user permissions accordingly.

Step 5: Build Your Workflows
At this point, implement the process designed in the first step. With the preparations from steps 2-4 in place, it's time to build the pipeline connecting Salesforce and NetSuite. When using no-code tools, you create the logic by choosing the source objects for transfer, applying filters, defining mapping, configuring the target fields, and so on. When writing custom scripts, make sure to handle all the defined integration points – data extraction, transformation, error handling, and loading – using the appropriate APIs and logic for both systems.
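To make the custom-script path above more concrete, here is a minimal Python sketch of the extract-transform-load steps. It is an illustration only: the host names, tokens, and target field names (tranDate, memo, total, externalId) are assumptions, not an official mapping; the Salesforce REST query resource and the NetSuite REST record API are real, but you should verify paths and payload shapes against each vendor's documentation.

```python
# Minimal sketch of a custom Salesforce -> NetSuite sync (extract, transform, load).
# Hosts, tokens, and target field names are illustrative assumptions.
import json
import urllib.parse
import urllib.request

SF_INSTANCE = "https://example.my.salesforce.com"       # hypothetical host
NS_BASE = "https://example.suitetalk.api.netsuite.com"  # hypothetical host

def extract_opportunities(token):
    """Extract closed-won opportunities via the Salesforce REST query API."""
    soql = ("SELECT Id, Name, Amount, CloseDate FROM Opportunity "
            "WHERE StageName = 'Closed Won'")
    url = (f"{SF_INSTANCE}/services/data/v58.0/query?"
           + urllib.parse.urlencode({"q": soql}))
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["records"]

def map_opportunity(opp):
    """Transform one Salesforce opportunity into a NetSuite-style payload.
    Target field names here are assumptions chosen for illustration."""
    return {
        "tranDate": opp["CloseDate"],
        "memo": opp["Name"],
        "total": opp.get("Amount") or 0.0,
        "externalId": opp["Id"],  # carry the SF Id to prevent duplicates
    }

def load_sales_order(token, payload):
    """Load one record through the NetSuite REST record API."""
    req = urllib.request.Request(
        f"{NS_BASE}/services/rest/record/v1/salesOrder",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()

def run_sync(sf_token, ns_token):
    """Extract -> transform -> load: the integration points named in Step 5."""
    for opp in extract_opportunities(sf_token):
        load_sales_order(ns_token, map_opportunity(opp))
```

In a production script, you would obtain the two bearer tokens via OAuth (Step 4) and add the error handling and logging discussed in Step 7.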
Step 6: Test the Integration End-to-End
It's highly recommended to test your integration flow before deploying it to the production environment. Use non-sensitive data or test datasets for this purpose; that way, you can check whether the mapping is performed correctly and validate the error-handling logic.

Step 7: Monitor, Maintain, and Optimize
Once the test flow works correctly, deploy the pipeline to production. Use the monitoring and logging mechanisms provided by your integration platform, or implement custom monitoring scripts when working directly with the APIs.

Common Use Cases for Salesforce NetSuite Integration
With so many objects in Salesforce and fields in NetSuite, identifying the right data to sync can be overwhelming. Here are some of the most popular examples of data synchronization between the two systems:
- Synchronizing Salesforce opportunities to NetSuite sales orders.
- Sending information on NetSuite invoices and payment status back into Salesforce.
- Creating NetSuite customers once new accounts are created in Salesforce.
- Changing Salesforce sales status based on payment activities.

Top Tools for Salesforce and NetSuite Integration
As mentioned previously, using third-party tools is one way to connect CRM and ERP. These solutions usually don't require programming skills, include a GUI for easy integration and flow building, and provide monitoring boards for sync status checks. In this section, we cover some of the most popular and efficient tools for Salesforce to NetSuite integration.

Skyvia
[Skyvia](https://skyvia.com/) is a comprehensive cloud platform that handles a wide range of data-related tasks. It offers solutions for data integration, workflow automation, endpoint creation, backup, and data querying.
Its [Data Integration](https://skyvia.com/data-integration) tool is designed to execute various kinds of operations, including import, export, synchronization, and replication. It also allows users to create complex [data flows](https://docs.skyvia.com/data-integration/data-flow/how-to-build-or-edit-data-flow.html) with multiple sources and advanced transformations. Another Skyvia product, [Automation](https://skyvia.com/automation), helps optimize business workflows by automating repetitive tasks. Skyvia provides the following benefits for NetSuite Salesforce integration:
- Pre-built connectors to both systems and [200+ other data sources](https://skyvia.com/connectors)
- Rich functionality for implementing both simple and complex integrations
- No-code GUI
- Web-based access
- Real-time task monitoring with detailed error information
- Flexible pricing plans for any business

Breadwinner
[Breadwinner](https://breadwinner.com/netsuite-salesforce-integration/) is a Salesforce-native app that comes in various editions, including one specifically dedicated to integration with NetSuite. It facilitates the creation of NetSuite records (invoices, sales orders, etc.) directly from Salesforce. The application makes it easy to manage custom objects and multiple subsidiaries, and it is SOC 2 Type II compliant, providing a secure environment for data exchange. The core benefits of Breadwinner are:
- Built-in Einstein AI for gaining transparency over your data
- Customization options to tailor the integration to your needs
- Robust security in data management

Celigo (Integrator.io)
[Celigo](https://www.celigo.com/) is an [integration platform as a service](https://skyvia.com/blog/what-is-ipaas/) that employs a low-code approach to connecting various services on the web. It's primarily designed to help enterprises automate routine processes and cross-platform workflows.
With Celigo, companies can bring various SaaS tools together using pre-configured templates that accelerate integration setup. At the same time, you can easily create custom integrations that meet your data transfer requirements. Here are some key advantages of Celigo:
- An extensive library of templates for Salesforce NetSuite integration
- Pre-built flows for lead-to-cash, order sync, and other workflows
- AI-driven error handling and support
- Full B2B app-to-app management

Dell Boomi
Another iPaaS solution for the NetSuite Salesforce connection is [Boomi](https://boomi.com/) by Dell. This tool uses AI to help users create, customize, and monitor data flows. Boomi is popular among large enterprises because it can handle high-volume, real-time data streams. It also supports custom workflows and can orchestrate complex processes with ease. The most notable features of Boomi are:
- API deployment and management
- Support for event-driven integration workflows
- B2B/EDI management
- Compliance with modern security protocols

MuleSoft
[MuleSoft](https://www.mulesoft.com/) is an all-in-one platform that allows users to connect various systems within a single workspace. Now a Salesforce-owned tool, it's often used for integrating CRM data with other systems, such as NetSuite. The platform offers highly customizable workflows and focuses on API connectivity, which makes it a top choice among developers. MuleSoft's key benefits:
- API design, testing, and validation with a built-in mocking service
- AI-based assistance for API development
- Support for SOAP and REST API designs

Comparison of Integration Tools
The table below provides a comparative analysis of the tools that can be used for Salesforce and NetSuite integration.
| Tool | Setup difficulty | Coding | Best for | Strength |
|---|---|---|---|---|
| Skyvia | Low | ❌ | SMBs | Easy to use, cloud-native, fast time-to-value |
| Breadwinner | Low | ❌ | Finance teams | Embedded UI, sales-finance alignment |
| Celigo | Medium | ❌ | Mid-size and large businesses | Templates, robust error handling |
| Dell Boomi | High | ✅ | IT teams | Scalable, advanced orchestration |
| MuleSoft | High | ✅ | Enterprises, custom API integrations | Full control, excellent API management |

Example: Integrating Salesforce and NetSuite with Skyvia
To move from theory to practice, let's configure automated data integration between Salesforce and NetSuite using Skyvia. The platform connects to local and cloud-based services, so you can build [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) solutions, and it offers an efficient way to get fast results from your data.

Create Connections for NetSuite and Salesforce
Before implementing any integration scenario, establish connections between the two services and Skyvia. Note: [create a Skyvia account](https://app.skyvia.com/login) if you don't have one yet; a free trial is available.
1. Sign in to your Skyvia account.
2. Click +Create New in the top menu.
3. Select the Connection option in the left menu.
4. Select the service you want to connect to:
   - NetSuite – [see the instructions](https://skyvia.com/connectors/netsuite)
   - Salesforce – [see the instructions](https://skyvia.com/connectors/salesforce)
5. Follow the provided guidelines to connect each tool to Skyvia.

Sample Business Case Description
Marketing, finance, sales, and support teams must work together to better target and segment customers and prospects. This allows businesses to reach those most likely to purchase or convert.
The most popular objects in integrations are Accounts/Customers, Contacts, Opportunities, Quotes/Estimates, Products/Items, Orders/Sales Orders, Invoices, Credit Memos, and Payments. Since Skyvia is very flexible and has many capabilities, we demonstrate several possible data integration scenarios, both simple and advanced:
- Import
- Data Flow
- Automation

Option 1. Data Integration (Import Scenario)
This simple scenario is designed for data integration between two data sources. For example, you can import Customers and Contacts from Salesforce to NetSuite and back.
1. Click +Create New in the top menu and select Import in the Integration category. The window that opens shows detailed instructions on the right and the workspace with Source and Target data flows on the left.
2. Define the Source type and connection (Salesforce, in this example).
3. Select the Target connection (NetSuite, in this example).
4. Specify additional options if needed.
5. Use the Add new button on the right to create a new data import task.
6. Specify Source settings: select an object for import and, if necessary, configure filtering. Alternatively, select the Execute Command or Execute Query action to query data (in advanced mode).
7. Specify Target settings: select the target object (table) or multiple objects (tables) to import data to, and the operation to apply when loading data.
8. Configure mapping of target columns to source columns with expressions, constants, lookups, etc.

Important! Make sure to map required fields; otherwise, the integration will fail. In this example, the required fields are company_internalID and title. The company_internalID field specifies the entity in NetSuite to which the support case is linked. This is typically a customer, but it can also be a contact or vendor. The ID is retrieved from an external system (Salesforce in this example). This mapping ensures that each case is properly linked to the correct NetSuite customer record.
To achieve this, we'll use a source lookup to retrieve the internal ID of the customer from Salesforce. This is how it works:
1. Skyvia checks the Salesforce source field AccountId.
2. It then looks up a Customer in NetSuite where entityId matches Account.Name in Salesforce.
3. If it finds a match, it retrieves the Customer's internalId.
4. That internalId is written into company_internalID on the SupportCase.

The title field in the NetSuite SupportCase represents the subject or summary of the case. We'll map it to the equivalent Salesforce field, Subject, using column mapping.

Note: Before saving your Import integration, you can schedule it for automatic execution. After completing all the import task settings, click the Create button. Now you can run this integration and get your data transferred from Salesforce to NetSuite.

Advantages of the Import integration scenario
The Import scenario works best for ordinary data flows that require regular updates – for instance, importing closed Salesforce opportunities into NetSuite sales orders. With the Import tool, you can select the required amount of data for migration using filters. It also lets you work with data from a historical perspective, in contrast to the trigger-action integration approach, which only captures data from the moment it is enabled.

Option 2. Data Integration (Data Flow Scenario)
This flexible scenario is designed for complex data integrations. As an easy-to-use, no-code tool, Data Flow helps manage all data integration and transformation requirements in a single unified location. As an example, let's synchronize data between the Customer tables in Salesforce and NetSuite. Since we only need to process newly added records, we create a NewData parameter and use the Value component to update it after each integration run.
We also define two variables – RecordsInserted and RecordsFailed – to monitor the integration outcome. Successfully inserted records are sent to NetSuite, while failed ones are logged along with an error message. This routing is managed with the Conditional Split component.

Advantages of the Data Flow scenario
The Data Flow scenario works best for companies that need to integrate data from more than two sources and require constant, regular data updates with complex logic. It allows connecting multiple data sources at once, applying multistage transformations, enriching data, and more.

Option 3. Automation (Trigger-Based Integration)
This scenario is designed to automate regular and repetitive workflows. It uses event-based mechanisms that trigger data transfer to the destination once a certain action occurs on the source side. Skyvia lets users build automation flows in a visual editor: you can create diagrams manually, run event-based workflows, or execute automation flows on a schedule. In our example, we create an event-driven automation: once an opportunity is closed-won in Salesforce, the appropriate changes are made automatically on the NetSuite customer records side.
1. In the upper menu, click +Create New -> Automation.
2. Type the automation flow name and select the trigger type.
3. Click the Trigger icon and perform the needed configuration depending on its type.
4. Select items from the left panel, drag them to the flow, and specify the logic for operations.
5. Drag the Action item to the flow. Specify the connection details and perform any further configuration required.
6. Save the automation flow.
7. Switch the automation item to the Enabled state in the upper-right corner of the screen to activate it.

Advantages of the Automation Scenario
Workflows with the Automation tool work best for companies that need event-based or scheduled integration of data between Salesforce and NetSuite.
It transfers only recently added data from one system to another, helping to avoid unnecessary data loading operations. The tool also provides a wide range of logic operations, so you can configure automations to your needs.

Challenges of the Salesforce and NetSuite Integration and How to Overcome Them
Integrating Salesforce with NetSuite is a complex task that can pose challenges even for seasoned teams, primarily because of the fundamental differences in the data structures of each platform. Below are some typical challenges that may interrupt your integration, along with practical solutions to address them.

| Challenge | Solution |
|---|---|
| Field mapping | Apply data transformations to ensure the correct alignment of fields across systems. |
| Record duplicates | Use filters and lookups to find identical records. |
| API limitations | Check the maximum number of API calls available in Salesforce and NetSuite so as not to run into the limits. |
| Error handling | Implement logging, alerts, and retry mechanisms. Schedule regular audits and reviews. |

Conclusion: Build a Smarter Workflow with Salesforce NetSuite Integration
Integrating Salesforce and NetSuite isn't just about syncing data – it's about enabling better collaboration, faster financial processes, and greater visibility into your operations. In this NetSuite Salesforce integration guide, we demonstrated a number of methods and tools, along with real-world examples showing how to connect these systems using various Skyvia tools. To select the method that is right for you, analyze your team's expertise, evaluate your business goals, and calculate the budget. Whether you choose a native connector, an iPaaS platform, or a custom API solution, the main thing is that you achieve your objectives. If you value speed, flexibility, and ease of use, Skyvia could be a good option.
It offers various tools and plans, so any business can find the most suitable option. What's more, it simplifies the integration process with guidelines provided right within the integration workspace.

F.A.Q. for NetSuite and Salesforce Integration

Can I integrate Salesforce and NetSuite without code?
Yes. You can integrate Salesforce and NetSuite using no-code iPaaS tools like Skyvia or Salesforce-native apps like Breadwinner.

Does Salesforce have a native NetSuite connector?
NetSuite launched an official connector in 2024, powered by Oracle Integration Cloud.

Can I sync custom fields or objects?
Yes. Some native and third-party tools support the transfer of custom data fields from one system to another. Such operations are also possible with custom scripts and APIs.

How secure is the integration?
Both native and third-party tools embed encryption mechanisms, comply with modern security protocols, and implement role-based access control to ensure safe data transfer.

Can I trigger workflows automatically?
Yes. Tools like Skyvia support event-based automations to streamline business workflows.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration.
With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Connectivity](https://skyvia.com/blog/category/data-connectivity/)
OData Cheat Sheet for SQL Users
By [Anna Tereshchenko](https://skyvia.com/blog/author/annat/) | November 13, 2019

OData is a widely accepted open standard for data access over the Internet. The OData protocol provides powerful features for querying data via URLs, similar to SQL. This article helps you quickly understand how to query data via OData and shows how OData features correspond to the most commonly used SQL features.
MAIN ODATA FEATURES

| Feature | SQL | OData |
|---|---|---|
| Number of records in a table | `SELECT COUNT(*) FROM Emp` | `/Emps/$count` |
| Querying specific table columns | `SELECT ENAME, JOB, SAL FROM Emp` | `/Emps?$select=ENAME,JOB,SAL` |
| Querying the second 5 records | `SELECT * FROM Emp ORDER BY (SELECT NULL) OFFSET 5 ROWS FETCH FIRST 5 ROWS ONLY` | `/Emps?$top=5&$skip=5` |
| Ordering data | `SELECT * FROM Emp ORDER BY ENAME DESC, SAL` | `/Emps?$orderby=ENAME desc,SAL` |
| Querying joined tables | `SELECT * FROM Dept LEFT OUTER JOIN Emp ON Dept.DEPTNO = Emp.DEPTNO` | `/Depts?$expand=Emps` |
| Filtering data | `SELECT * FROM Emp WHERE (SAL/2 > 500 AND HIREDATE <= '01/01/1985') OR (COMM IS NOT NULL AND ENAME LIKE 'J%')` | `/Emps?$filter=(SAL div 2 gt 500 and HIREDATE le 1985-01-01) or (COMM ne null and startswith(ENAME,'J'))` |
| Aggregating data | `SELECT SUM(SAL) AS Sum, MAX(SAL) AS Max, MIN(SAL) AS Min, AVG(SAL) AS Avg FROM Emp` | `/Emps?$apply=aggregate(SAL with sum as Sum,SAL with max as Max,SAL with min as Min,SAL with average as Avg)` |

Filter Expressions
The OData protocol supports a number of mathematical, logical, and other operators and functions in the $filter expression. Below is a brief list of these operators and functions, which you can use in your OData requests, together with their SQL analogs. Note that they are case-sensitive in OData requests.
OPERATORS

| SQL | OData |
|---|---|
| = | eq |
| != | ne |
| > | gt |
| >= | ge |
| < | lt |
| <= | le |
| AND | and |
| OR | or |
| NOT | not |
| + | add |
| - | sub |
| * | mul |
| / | div |
| % | mod |
| () | () |
| IS NULL | eq null |
| IS NOT NULL | ne null |
| X LIKE '%Y%' | OData v1-v3: substringof('Y',X); OData v4: contains(X,'Y') |
| X LIKE 'Y%' | startswith(X,'Y') |
| X LIKE '%Y' | endswith(X,'Y') |

STRING FUNCTIONS

| SQL | OData |
|---|---|
| LEN(X) | length(X) |
| CHARINDEX(X,'Y') | indexof(X,'Y') |
| REPLACE(X,'Y','Z') | replace(X,'Y','Z') |
| SUBSTRING(X,2,3) | substring(X,2,3) |
| LOWER(X) | tolower(X) |
| UPPER(X) | toupper(X) |
| TRIM(X) | trim(X) |
| CONCAT(X,Y) | concat(X,Y) |

DATE FUNCTIONS

| SQL | OData |
|---|---|
| DATEPART(year,X) | year(X) |
| DATEPART(month,X) | month(X) |
| DATEPART(day,X) | day(X) |
| DATEPART(hour,X) | hour(X) |
| DATEPART(minute,X) | minute(X) |
| DATEPART(second,X) | second(X) |

MATHEMATICAL FUNCTIONS

| SQL | OData |
|---|---|
| ROUND(X) | round(X) |
| FLOOR(X) | floor(X) |
| CEILING(X) | ceiling(X) |

Skyvia Connect
[Skyvia Connect](https://skyvia.com/connect/) is an OData server-as-a-service solution that lets you create an OData interface for data stored in various sources via drag-n-drop in just a couple of minutes. It creates endpoints supporting all the OData features listed here and more. Publish your data via Skyvia Connect and try the listed OData features. Skyvia Connect provides endpoint access control and logging, and its endpoints can be used in a wide range of OData consumer applications.
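To see how these SQL-to-OData correspondences come together in a real request, here is a small Python sketch that assembles a percent-encoded OData query URL. The base URL is a placeholder, and the /Emps entity set and its fields simply mirror the examples used throughout this cheat sheet.

```python
# Build an OData query URL roughly equivalent to:
#   SELECT ENAME, JOB, SAL FROM Emp
#   WHERE SAL div 2 > 500 AND ENAME LIKE 'J%'
#   ORDER BY ENAME DESC  -- second page of 5 records
# The base URL and the Emps entity set are hypothetical examples.
from urllib.parse import urlencode

def build_odata_url(base, entity_set, select=None, filter_=None,
                    orderby=None, top=None, skip=None):
    """Assemble an OData query URL from optional system query options."""
    options = {
        "$select": ",".join(select) if select else None,
        "$filter": filter_,
        "$orderby": orderby,
        "$top": top,
        "$skip": skip,
    }
    # Drop unused options, then percent-encode the remaining ones.
    query = urlencode({k: v for k, v in options.items() if v is not None})
    return f"{base}/{entity_set}?{query}"

url = build_odata_url(
    "https://example.com/odata",  # hypothetical endpoint
    "Emps",
    select=["ENAME", "JOB", "SAL"],
    filter_="SAL div 2 gt 500 and startswith(ENAME,'J')",
    orderby="ENAME desc",
    top=5,
    skip=5,
)
print(url)
```

Note that `urlencode` percent-encodes the `$` in each option name (as `%24`), which OData services accept; you can pass the options literally if you prefer more readable URLs.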
[Data Connectivity](https://skyvia.com/blog/category/data-connectivity/)
OData REST API for MySQL
By [Anna Tereshchenko](https://skyvia.com/blog/author/annat/) | January 24, 2020

Skyvia Connect allows you to easily expose MySQL data via an OData service – a RESTful API for quick, real-time data access and manipulation. OData is a widely accepted open standard built on HTTP, ATOM/XML, and JSON, intended for data access over the Internet with no firewall configuration.
With Skyvia, you do not need to build a web API manually. Instead of writing a RESTful API with Node.js and MySQL, you can create an OData endpoint for quick data access. Skyvia Connect is a universal cloud-based solution that requires no coding or technical skills. You do not need to develop a MySQL RESTful web service, worry about security, hosting, or domains, obtain certificates, or deploy, administer, and maintain anything: the cloud-based service does it for you. You can use it to expose any data quickly and conveniently, from both cloud and on-premise servers. To connect to on-premise servers, use the Agent application developed by the Skyvia team for exactly this purpose.

Table of contents
- What Is Agent? How Can We Configure It?
- Creating an Agent in Skyvia
- Creating a Connection in Skyvia
- Creating an Endpoint in Skyvia

What Is Agent? How Can We Configure It?

The Agent is a secure tunnel application that connects your Skyvia account to local, on-premise databases and establishes secure communication by bypassing the firewall. Try it by installing the Agent application on your PC and creating the MySQL connection you want to work with. Follow the step-by-step tutorial below.

Creating an Agent in Skyvia

First, create a [free account](https://app.skyvia.com/) in Skyvia. Then create an agent:

1. Click +NEW in the top menu and select Agent from the list on the left.
2. When the Agent editor page opens, click Download.exe to download the Agent application.
3. Run the downloaded file to install it. When the Skyvia Agent window pops up, click INSTALL. By default, the agent is installed to C:\Program Files (x86)\Skyvia Agent.
After installing the Skyvia agent, download its security key: click Download Key at the top right corner. When the key file lands in your Downloads folder, move it to the folder where the Agent is installed. Note that the key file must always be named skyvia_security_agent.key. Then, in the Skyvia Agent folder, run Skyvia.Agent.Client to start the program. If everything is configured correctly, the Agent editor page shows a Connected label with a green indicator next to it. Optionally, test the agent connection by clicking the Test agent button.

Creating a Connection in Skyvia

To create a connection:

1. Click +NEW in the top menu.
2. Select Connection from the list on the left. The Select Connector page opens.
3. Select MySQL from the available connectors.
4. When the Connection editor page opens, select Agent under Connection Mode and fill in the required parameters.
5. Click Test Connection to check whether the connection is successful, then click Create Connection to create and save it.

Creating an Endpoint in Skyvia

To create a MySQL OData endpoint:

1. Click +NEW in the top menu and click OData Endpoint under Connect.
2. On the page that opens, select your MySQL connection; you are then taken to the Choose Endpoint Editor Mode page.
3. Click Advanced mode to design your OData endpoint on a diagram.
4. On the Endpoint editor page, drag and drop the tables you need and manage the connections between them.

Optionally, you can allow only authenticated users to access endpoint data and limit the IP addresses from which the endpoint's data can be accessed. Click Users on the toolbar and add a user to enable authentication for your endpoint; the same works for IPs. Once the endpoint is created, you can copy its URL from the Overview tab and use it for different purposes as well as in your OData consumer applications.
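With the endpoint URL in hand, the service can also be consumed from code. Below is a minimal sketch that parses the kind of JSON service document an OData endpoint returns at its root and lists the entity sets. Both the payload shape (common OData JSON convention) and the data are assumptions for illustration:

```python
import json

# Simulated response from the endpoint root (an OData JSON service document).
raw_response = json.dumps(
    {"d": {"EntitySets": ["products", "orders", "orderdetails",
                          "productcategories", "companies"]}}
)

def entity_sets(service_document: str) -> list:
    """Return the entity set names advertised by an OData service document."""
    return json.loads(service_document)["d"]["EntitySets"]

print(entity_sets(raw_response))
```

In real use, `raw_response` would come from an HTTP GET against the endpoint root instead of the hard-coded sample.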
You may also test the endpoint URL in a browser. A request to the endpoint root returns a response listing its entity sets. In our example, the response contains 5 entity sets in JSON format: products, orders, orderdetails, productcategories, and companies. To retrieve all companies and see their properties, like CompanyID, CompanyName, Web, Email, Address, City, Region, Country, etc., append the entity set name, companies, to the URL bar and check the displayed data. You can use the endpoint URL in any other OData consumer application, such as Excel or [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/), depending on your business needs and actual demand. We welcome you to try the OData REST API for MySQL and enjoy the full functionality of our platform. Give us your [feedback](https://skyvia.com/company/contacts) and share your thoughts!

How to Integrate OneDrive with Salesforce: Step-by-Step Guide
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/). April 9, 2025

Jumping between OneDrive and Salesforce all day is a productivity killer. Sales reps waste time hunting for proposals, contracts, and presentations stored in shared folders. Meanwhile, support teams struggle to link case-related documents to the correct records. Without integration, files get scattered, version control becomes a nightmare, and collaboration slows to a crawl. Integrating OneDrive with Salesforce can fix all of that. In this guide, we'll show exactly how to connect the two platforms and streamline document management once and for all.

Table of contents
- Key Features of Salesforce
- Key Features of OneDrive
- Methods for Integrating Salesforce and OneDrive
- Method 1: Native Tools
- Method 2: Third-Party Integration Platforms
- Method 3: Custom API Integration
- Conclusion

Key Features of Salesforce

Salesforce isn't just a CRM but a full-blown ecosystem helping businesses sell smarter, support faster, and grow stronger. Here are some of the platform's core features and the problems they help solve:

- Lead and Opportunity Management. Tracking every customer interaction, deal stage, and next step means all the info is in one place.
No more chasing down email threads or sticky notes.
- Customizable Dashboards and Reports. Get real-time visibility into sales performance, pipeline, and KPIs without waiting on analysts. Great for fast, data-driven decisions.
- Case and Service Management. Help support teams resolve customer issues quickly by organizing all case-related info, notes, and responses inside Salesforce.
- Collaboration with Chatter and Tasks. Sales and service teams can assign follow-ups, tag teammates, and stay aligned without needing to leave the CRM.
- Automation with Flows and Process Builder. Automate repetitive tasks like sending reminders, updating records, or notifying reps, reducing manual effort and human error.
- AppExchange and Integrations. Extend Salesforce with thousands of third-party apps (including OneDrive) to centralize workflows and reduce platform-hopping.

Key Features of OneDrive

OneDrive is a secure, collaborative workspace for modern teams. Let's see what it brings to the table and how it solves real-world file management headaches:

- Cloud-Based File Storage. Store and access documents from anywhere. No more emailing attachments or worrying about which version is the latest.
- Real-Time Collaboration. Multiple team members can work on the same document at once. Edits sync instantly, so everyone stays on the same page.
- Version History and Recovery. Accidentally overwrote something? OneDrive keeps a detailed version history, so you can easily roll back changes and avoid costly mistakes.
- Folder Sharing and Permissions. Share documents with internal teams or external clients, with fine-tuned access control to protect sensitive info.
- Seamless Integration with Microsoft 365. Open, edit, and comment on Word, Excel, and PowerPoint files directly in the cloud. No downloads required.
- Mobile and Desktop Access. Whether your team is in the office, on the road, or working remotely, OneDrive ensures they can access and upload files anytime, anywhere.
Methods for Integrating Salesforce and OneDrive

There's no one-size-fits-all approach to integrating Salesforce with OneDrive. Depending on your goals, technical skills, and budget, you have a few solid options, each with pros, trade-offs, and best-fit scenarios. Here's a quick overview of the three main ways to make the connection happen.

1. Native Integration via Microsoft or Salesforce Apps. The most straightforward method, using out-of-the-box features or officially supported apps to link Salesforce records to OneDrive folders. It's simple, no-code, and great for teams needing basic file access and sharing within Salesforce.

2. Third-Party Tools (like Skyvia). Want something more flexible but still user-friendly? No-code integration platforms like [Skyvia](https://skyvia.com/data-integration) let users build automated data flows between Salesforce and OneDrive. You can move files, sync metadata, or even schedule uploads, and forget about coding.

3. Custom API Integration. For those with in-house developers or very specific requirements, building a custom integration using the Salesforce and Microsoft Graph APIs gives total control. This route is more technical but allows for deep customization and advanced logic.

The table below compares these methods, providing a clear, comprehensive view of each one:

| Method Group | Method | Best For | Skill Level | Customizability |
| --- | --- | --- | --- | --- |
| Native Integration | Salesforce Files Connect | Teams that need simple file access from OneDrive inside Salesforce. | Beginner / Admin-level | Low |
| Third-Party Tools | Skyvia (No-Code Platform) | Businesses looking for automation, scheduled syncs, or file syncing. | Beginner-Intermediate | Medium |
| Custom Integration | Salesforce + Microsoft Graph API | Enterprises with dev teams and advanced, tailored requirements. | Advanced / Developer | High (Full control) |

Method 1: Native Tools

[Files Connect](https://help.salesforce.com/s/articleView?id=experience.collab_admin_files_connect.htm&type=5&utm) is a great starting point for those looking for a simple, no-fuss way to access OneDrive files right inside Salesforce. It lets organizations link external documents to records without switching tabs or downloading anything. You won't get fancy automation or deep syncs, but it gets the job done with minimal setup for everyday file access and sharing. It's already built into many Salesforce editions; users just need to turn it on and configure it.

Best For

- Sales teams who want to quickly attach proposals, quotes, or presentations stored in OneDrive to leads or opportunities without leaving Salesforce.
- Support teams who need to view customer-submitted files (like screenshots, logs, or forms) without chasing down email threads.
- Small to mid-sized businesses looking for a cost-effective solution that doesn't require extra software or development work.

Note: This method solves common pain points like scattered file storage, lost version control, and the constant back-and-forth between platforms. It's especially useful when teams mostly need read-only file access directly within Salesforce records.

Step 1: Enable Files Connect in Salesforce

Before linking OneDrive files to Salesforce records, you'll need to activate Files Connect in your org settings. This one-time setup step takes only a few minutes but is essential to unlock the integration features. Here's how to get started:

1. Click the Gear icon in the top-right corner of your Salesforce screen and select Setup.
2. In the Quick Find box on the left, type Files Connect and select Files Connect.
3. In the Edit section, you will find these options:
- Enable Files Connect. Gives access to remote content repositories from within Chatter.
- File Sharing.
Specifies how files get stored in external repositories, such as SharePoint.
- Use External object search layout. Provides a search interface similar to the one used for custom objects.
- Enable Links conversion. Allows external document URLs to be converted to file references posted in the feed.
4. Click Edit and select Enable Files Connect for file sharing.
5. Choose the authentication method (usually OAuth 2.0 for OneDrive/SharePoint).
6. Save your changes.

Step 2: Creating an Authentication Provider

To connect Salesforce with OneDrive for Business, first create an authentication provider in Salesforce, then register a matching app in Office 365 to complete the handshake.

Part 1: Set Up the Authentication Provider in Salesforce

1. Go to Setup in Salesforce.
2. In the Quick Find box, type and open Auth. Providers (under the Identity category).
3. Click New to create a new provider.
4. For Provider Type, select Microsoft Access Control Service.
5. Fill in the following fields:
- Name: the name as you want it to appear in Salesforce.
- URL Suffix: a word or phrase appended to the end of the URL path. By default, the suffix matches the Name.
- Consumer Key: enter a temporary value here and replace it with the actual key later.
- Consumer Secret: enter a temporary value here and replace it with the actual secret later.
- Authorize Endpoint URL: enter any placeholder URL that starts with HTTPS (you will replace it later).
- Token Endpoint URL: enter any placeholder URL that starts with HTTPS (you will replace it later).
- Default Scopes: optional; may be left blank.
6. Click Save. After saving, copy the Callback URL Salesforce generates; you'll need it when creating the Office 365 app.

Part 2: Register an App in Office 365 (SharePoint)

Log into your Office 365 Admin Account and go to https://xyzenterprise-my.sharepoint.com/_layouts/15/appregnew.aspx (replace xyzenterprise with your actual SharePoint domain).
Fill out the registration form:
- Client Id: click Generate, then save it somewhere safe.
- Client Secret: click Generate, then save it.
- Title: name your app (e.g., "SalesforceOneDrive").
- App Domain: enter your Salesforce domain.
- Redirect URL: paste the callback URL you copied earlier.

Click Create.

Part 3: Grant App Permissions

Now go to https://xyzenterprise-my.sharepoint.com/_layouts/15/appinv.aspx and enter the following:
- App Id: paste the Client Id from the previous step, then click Lookup.
- Title, App Domain, Redirect URL: leave the defaults.
- Permission Request XML: paste a permission request, replacing [PERMISSION] with the access level you need (Read, Write, Manage, or FullControl). A standard SharePoint tenant-scoped permission request has this form (adjust the Scope to your needs):

```xml
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="[PERMISSION]" />
</AppPermissionRequests>
```

Click Create.

Then return to the Salesforce Auth. Providers tab, click Edit next to the Authentication Provider you already made, and change the following fields:
- Consumer Key: the Client Id you generated above.
- Consumer Secret: the Client Secret you generated above.
- Authorize Endpoint URL: the URL of the Office 365 OAuthAuthorize.aspx page.
Note: the Authorize Endpoint URL looks like this: https://xyzenterprise-my.sharepoint.com/_layouts/15/OauthAuthorize.aspx
- Token Endpoint URL: enter the URL in the following format: https://accounts.accesscontrol.windows.net/xyzenterprise.onmicrosoft.com/tokens/OAuth/2?resource=00000003-0000-0ff1-ce00-000000000000/xyzenterprise-my.sharepoint.com@xyzenterprise.onmicrosoft.com

Click Save when finished. The Authentication Provider is now ready for use. With Salesforce configured, you can now access your OneDrive content from within Salesforce CRM.

Pros

- Organized attachments. Keep contracts, quotes, and case files neatly linked to the right leads, opportunities, or support tickets.
- Better collaboration. Teams can work on shared files in real time, with no duplicates or version chaos.
- Faster onboarding. Sales reps and support agents instantly find the files they need, even if they didn't upload them.
- Fewer storage headaches. Offload big files to OneDrive while keeping access within Salesforce so you don't hit storage limits.
- Improved customer experience. Speed up responses with all the right docs in the right place at the right time.

Cons

- Limited file interaction. Tools like Files Connect offer basic access but not deep file sync or automation.
- Setup can be tricky. Configuring authentication providers and registering apps in Office 365 takes a few careful steps.
- Write access may be restricted. Depending on the Salesforce edition and permissions, you might only get read access to OneDrive files.
- Not available in all editions. Files Connect isn't included in some lower-tier Salesforce plans.

Method 2: Third-Party Integration Platforms

If native tools feel too limited and custom development seems like overkill, third-party integration platforms offer a practical middle ground. These tools connect apps like Salesforce and OneDrive with minimal setup and no need to write code. Depending on the platform, you can automate file transfers, trigger workflows, or even sync structured data between systems. It's a great option for teams that want more control and flexibility without getting too technical. Several tools are available in this space, including Skyvia, Zapier, Workato, and Make (formerly Integromat). Each has strengths depending on the use case: some focus on real-time automation, others on scheduled data sync or advanced mapping. In the next section, we'll walk through how to use [Skyvia](https://skyvia.com/data-integration/integrate-salesforce-onedrive) as a practical example of setting this up.

Best For

- Sales and marketing teams that want to automatically upload or sync customer-facing files (e.g., contracts, brochures, proposals) between Salesforce and OneDrive.
- Operations and data teams that need to keep folder structures and metadata aligned across platforms.
- Support teams that manage large volumes of file attachments and want to offload them to OneDrive without losing context in Salesforce.
- Companies with limited dev resources who still need advanced logic, like scheduling, conditional file handling, or multi-step workflows.

Note: If your pain points are manual uploads, scattered file versions, or repetitive data movement between platforms, a third-party integration tool like Skyvia helps eliminate the busywork and bring order to document processes.

How to Create Skyvia Connections

To integrate OneDrive with Salesforce, you need a [registered](https://app.skyvia.com/register) Skyvia account and connections to [OneDrive](https://docs.skyvia.com/connectors/file-storages/onedrive_connections.html) and [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) created. Use these links to see how to connect to each service.

How to Import Data from OneDrive to Salesforce

1. Sign in to Skyvia and create a new Import.
2. Click CSV from storage service.
3. Select OneDrive as a Source and Salesforce as a Target.

Note: You may also specify additional options if needed, such as configuring the batch size and defining the task execution order when working with multiple tasks.

4. Create the package task:
- Click + Add New on the top right of the Skyvia page.
- Select the source file on the Source Definition tab and adjust the [CSV options](https://docs.skyvia.com/common-platform-features/working-with-csv.html) if needed.
- Select the target on the Target Definition tab and choose the DML operation.
- Map the fields and save the task. You can transform source data with any available [mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/) type.
5. When the mapping is complete, save the task and run the integration. You can check the package results on the Monitor and Log tabs.
How to Export Data from Salesforce to OneDrive

Skyvia also offers an export solution to connect Salesforce with OneDrive in the opposite direction: it queries the content of the needed Salesforce table, generates a CSV file, and places it in a OneDrive folder. To integrate Salesforce with OneDrive:

1. Sign in to Skyvia and create a new Export integration.
2. Select your Salesforce connection as a Source.
3. Click the CSV to storage service Target type.
4. Select your OneDrive connection as a Target and choose the folder where the CSV file should be placed.
5. Enable any other available integration and file options.
6. Create the export task: click + Add New, as described above, and do the following:
- On the Source Definition tab, select the task mode. Advanced mode suits complex custom commands or Salesforce report exports; since we are just exporting records from a table without complicated transformations, we select the Simple task mode.
- Under Object, select the source Salesforce table and the needed fields. Add filtering conditions if necessary.
- Optionally change the result file column order and rename columns on the Output Columns tab.
7. Save the task and the integration, then run it.

Note: Skyvia can run the integration automatically on a schedule; adjust the frequency to your business needs. Go [here](https://skyvia.com/data-integration/integrate-salesforce-onedrive) to explore all available Salesforce OneDrive integration opportunities.

Pros

- Cloud-based and hassle-free. No installation, updates, or maintenance; just log in and build integrations from your browser.
- Truly no-code. Even non-technical users can create and manage data flows using a clean, wizard-based UI.
- Wide connector support. Connect to 200+ cloud apps, databases, and file storage services, including Salesforce and OneDrive.
- Flexible functionality. Supports ETL, ELT, reverse ETL, replication, and scheduling in one platform.
- Affordable and scalable.
Offers a free [plan](https://skyvia.com/pricing) for light users and cost-effective tiers for growing teams.

Cons

- No real-time triggers. Skyvia works with scheduled jobs, not live webhook-based automation like Zapier or Make.
- Limited support for document-level file management. Users can move metadata and CSV data easily, but full native OneDrive file syncing (like folder structure automation) may require workarounds.
- Some advanced features require a paid plan. While the free version is great for testing, advanced data transformation and complex integration scenarios require a paid subscription.

Method 3: Custom API Integration

If your team has special requirements or needs deep, custom logic that off-the-shelf tools can't provide, a custom API integration might be the way to go. This approach means manually connecting Salesforce and OneDrive using their respective APIs: the Salesforce REST/SOAP APIs and the Microsoft Graph API for OneDrive. While it gives total control, it comes at a price: it's complex, time-consuming, and requires developer expertise. You'll need to handle authentication, error handling, data formatting, rate limits, and ongoing maintenance yourself. It's rarely the fastest or most efficient option, especially when platforms like Skyvia already handle 80-90% of typical business scenarios out of the box. But it may be worth the investment if you're incorporating a custom workflow into a larger system or need an integration deeply tailored to internal processes.

Best For

- Large enterprises that need to embed Salesforce-OneDrive integration into custom-built internal platforms or portals.
- IT and dev-heavy teams with in-house engineering capacity to maintain long-term custom integrations.
- Businesses with strict data governance or compliance requirements that demand complete control over how data moves and where it's stored.
- Organizations needing advanced logic like conditional routing, complex file transformations, or syncing data across multiple internal systems, not just Salesforce and OneDrive.

Note: If the company's pain points include limited flexibility, data security requirements, or integration with legacy systems, custom APIs give you the power to build exactly what you need, but expect a longer path to production.

Pros

- Total control. Build exactly what the team needs, with complete flexibility over logic, data flow, error handling, and security.
- Deep customization. Ideal for workflows that go beyond what standard platforms can support.
- Embeddable in existing systems. Easily tie into proprietary platforms, internal tools, or larger enterprise architecture.
- Support for advanced use cases. Great for multi-step processes, file transformation, metadata management, or integrating with multiple APIs simultaneously.

Cons

- High complexity. Requires skilled developers, a detailed API documentation review, and extensive testing.
- Longer setup time. Custom solutions often take weeks (or months) to design, build, and stabilize.
- Maintenance burden. You will need to handle ongoing updates, authentication token renewals, error logging, and API version changes.
- Costly in the long run. Dev time, infrastructure, and support can make this option significantly more expensive than no-code tools.
- Overkill for simple use cases. This route may be unnecessarily complex if you're just syncing files or exporting data.

Conclusion

Integrating OneDrive with Salesforce can seriously streamline workflows, improve file management, and reduce the daily tab-juggling chaos. There's no one-size-fits-all solution: the best method depends on each team's needs, skills, and goals. If you're looking for something simple and built-in, native tools like Salesforce Files Connect are a solid starting point, especially for basic file access and sharing.
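To give a flavor of what the custom route (Method 3) involves at the lowest level, here is a hedged Python sketch that only builds the two core requests: a Microsoft Graph URL for a simple OneDrive file upload and a Salesforce REST API body for a ContentVersion record. All IDs, paths, and field choices are illustrative assumptions; real code would add OAuth token handling, error handling, and retries:

```python
import json

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def onedrive_upload_url(drive_id: str, folder: str, filename: str) -> str:
    """Microsoft Graph URL for a simple (under 4 MB) PUT upload to OneDrive."""
    return f"{GRAPH_ROOT}/drives/{drive_id}/root:/{folder}/{filename}:/content"

def content_version_body(title: str, filename: str, data_base64: str) -> str:
    """JSON body for creating a ContentVersion file record in Salesforce."""
    return json.dumps({
        "Title": title,
        "PathOnClient": filename,
        "VersionData": data_base64,  # base64-encoded file content
    })

url = onedrive_upload_url("DRIVE_ID", "Proposals", "quote.pdf")  # placeholders
body = content_version_body("Quote", "quote.pdf", "UEsDBA==")
```

Even this toy version hints at the maintenance burden discussed above: two APIs, two auth flows, and two sets of version and rate-limit rules to track.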
Got complex processes, security policies, or systems that just won't play nice with plug-and-play tools? Custom API integration might be the way to go, but be ready to invest in development resources and long-term maintenance. Need more automation, scheduling, or flexibility without hiring a developer? A third-party no-code platform like Skyvia gives companies powerful tools without the complexity, making it perfect for growing teams and business users.

FAQ for OneDrive Salesforce Integration

What are the main advantages of integrating Salesforce and OneDrive?
Integration helps streamline workflows, reduce storage costs, and improve collaboration. You can access, attach, and manage OneDrive files directly from Salesforce records: no more tab-hopping or version chaos. It also centralizes documentation, making it easier for teams to stay aligned.

Can I connect my personal OneDrive account to Salesforce, or do I need OneDrive for Business?
You'll need OneDrive for Business. Salesforce Files Connect and the Microsoft Graph API (used in third-party or custom integrations) only support enterprise-grade OneDrive accounts. Personal OneDrive accounts are not compatible with these integration methods.

Is coding required to integrate Salesforce and OneDrive using Skyvia?
Nope. Skyvia is a no-code platform, so you can set up and manage integrations using a visual interface and simple wizards. No programming skills are needed. Just connect your accounts, define the data flow, and let it run.

What types of data can I synchronize between Salesforce and OneDrive?
You can sync CSV file data, such as exported Salesforce objects (Leads, Opportunities, Cases, etc.), into OneDrive folders. You can also import structured data from CSVs stored in OneDrive back into Salesforce. While full-document syncing isn't native, metadata and file records can be integrated.

What happens if there's an error during the data synchronization process?
Platforms like Skyvia include built-in logging and monitoring. If an error occurs, you'll see detailed logs explaining what went wrong (e.g., permission issues, format mismatches, API limits). You can then fix the issue and rerun the job manually or let it retry based on the schedule.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
How to Integrate Salesforce and Oracle: Step-by-Step Guide
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/). March 31, 2025

In a company's daily running, top management spends significant effort obtaining unambiguous, accurate, and objective information about the state of affairs. The business defines the conditions under which analysts collect and transform data to create a single source of truth to rely on when managing the organization. One of the questions of growing interest to businesses is strategy and forecasting across finance, sales, production, logistics, and other areas.
As companies grow, the value of analytics becomes more apparent. Basic performance indicators are no longer enough for building effective strategies and forecasts. A common way to improve data management and analytics is to use Salesforce and Oracle integration solutions for [CRM and ERP](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) processes. Let's review the available connection methods, their benefits, and business cases to help you select the most suitable one for your needs.

Table of Contents Benefits of Salesforce and Oracle Connection Methods Overview: How to Integrate Salesforce with Oracle Method 1: Using APIs for Salesforce Oracle Integration Method 2: Oracle Integration Cloud with Prebuilt Salesforce Adapter Method 3: Using Salesforce Connect and Skyvia Connect Method 4: Using Middleware & ETL Tools Case 1: Integrate Oracle to Salesforce (Step-by-Step Guide) Case 2: Replicate Salesforce to Oracle (Step-by-Step Guide) Best Practices for a Successful Integration Conclusion

Benefits of Salesforce and Oracle Connection

So, why does your business need the [Salesforce and Oracle connect](https://skyvia.com/data-integration/integrate-salesforce-oracle) options? What benefits may you obtain, depending on the user scenario? Let's review the examples in the table below.

| Scenario | Benefits |
| --- | --- |
| Data Warehousing | Establish a single source of truth in a centralized DWH. Enable advanced analytics, BI, and machine learning workflows. Support reverse ETL to push insights back into operational tools (e.g., Salesforce, marketing platforms). Store historical data for compliance and trend forecasting. |
| Data Synchronization | Maintain a single source of truth within Salesforce or any operational system without routing through a DWH. Enable real-time or scheduled updates between systems. Automate business processes by syncing key objects (e.g., contacts, leads, orders). Ensure data consistency across departments and tools. |
| Data Connectivity / Integration Hub (e.g., via OData or Webhooks) | Seamlessly connect multiple systems (e.g., CRMs, ERPs, marketing platforms, custom apps) for cross-functional data usage. Suitable for AI/ML pipelines, especially in developer-heavy or integration-driven environments. Build composite applications using data APIs, webhooks, and event-driven integrations. |
| Corporate Data Repository | Build a well-governed, standardized, and accessible data store across the company. Prepare data for enterprise analytics, compliance, and reporting. Support data stewardship and metadata management. |

Now, let's go through the methods that provide Salesforce and Oracle connections using different services and tools.

Methods Overview: How to Integrate Salesforce with Oracle

Before diving into the detailed walkthroughs, let's first look at the four main methods of such an integration. Each offers different capabilities in real-time synchronization, setup complexity, customization, and cost, so choosing the right one depends on a company's business needs, technical resources, and long-term goals. Let's consider them briefly.

1. API-Based Integration (REST/SOAP). A custom integration approach that uses Salesforce and Oracle APIs. It offers maximum flexibility but requires development expertise and ongoing maintenance. Best for companies with in-house developers and highly specific integration requirements. It allows companies to tailor every part of the data flow, business logic, and error handling. However, building and maintaining this kind of integration can be time-consuming and costly in the long run.

2. Oracle Integration Cloud (OIC). Oracle's native, low-code integration platform with pre-built connectors for Salesforce and other systems.
Best for Oracle-centric organizations seeking a streamlined, supported solution. It offers visual flow builders, built-in monitoring, and tight integration with other Oracle products. While the solution simplifies setup, it may require additional licensing and could be more complex than needed for smaller-scale use cases.

3. Salesforce Connect. Allows Salesforce to access external Oracle data in real time using OData or external objects without storing it in Salesforce. Best for companies that want real-time visibility without physically moving data between platforms. It's ideal for reducing storage costs and avoiding data duplication, especially when only viewing data is required. However, external objects have limitations, like restricted support for automation, reporting, and specific UI customizations in Salesforce.

4. Middleware & ETL Tools (MuleSoft, Skyvia, Informatica, Boomi, etc.). This method uses third-party platforms to build flexible, reusable, automated integrations. It often comes with prebuilt templates, scheduling, and transformation tools. Best for businesses looking for scalable, no-code, or low-code integration options with broader connectivity. These tools reduce development time and support a wide range of data sources beyond just Salesforce and Oracle. However, pricing and complexity vary between platforms, and some may require technical setup depending on the level of customization needed.

The table below displays the differences between these approaches based on real-time sync, ease of setup, customization level, and cost.

| Method | Best For | Real-Time Sync | Ease of Setup | Customization Level | Cost |
| --- | --- | --- | --- | --- | --- |
| 1. APIs (REST/SOAP) | Tech-savvy teams needing full control and custom logic | Possible | Complex (developer) | Max | Low (but high dev time) |
| 2. Oracle Integration Cloud (OIC) | Oracle customers seeking pre-built, supported integration | Yes | Medium | Limited | License required |
| 3. [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/) | Viewing Oracle data in real time without storing it in SFDC | Real-time view | Medium | Minimal | OData + SF license |
| 4. Middleware / ETL (Skyvia, Boomi, etc.) | Businesses needing scalable, no-code/low-code integration | Yes (middleware) / No (ETL, batch processing) | Easy to Medium | High | Varies by vendor |

Method 1: Using APIs for Salesforce Oracle Integration

This approach provides maximum control and flexibility, so businesses can design highly customized, real-time data flows tailored to specific workflows. It involves directly calling Oracle APIs from Salesforce (or vice versa) using REST or SOAP protocols, allowing precise data handling, authentication, error logic, and more. Unlike no-code or pre-built tools, API-based integration requires developer resources, API documentation knowledge, and ongoing monitoring and maintenance. However, it offers unmatched granularity and performance tuning for complex use cases.

Key Features

Salesforce APIs:

- REST API. Lightweight and easy to use, great for real-time data exchange and mobile/web integrations.
- SOAP API. Offers robust data handling and is better suited for complex data structures.
- Bulk API / Bulk API 2.0. Ideal for high-volume data transfers.
- Streaming API. For event-based messaging, e.g., syncing changes as they happen.

Oracle APIs:

- Oracle REST Data Services (ORDS). RESTful interface to Oracle databases, widely used for exposing Oracle data via HTTP.
- SOAP Web Services (e.g., Oracle E-Business Suite). Enterprise-grade operations using WSDL-based services.
- Oracle Fusion Cloud APIs. Modern REST/SOAP endpoints available for Oracle ERP, HCM, etc.
- GraphQL APIs. Supported in some newer Oracle services for flexible data querying.

Best For

Businesses needing a fully customized integration with developer expertise, especially those with:

- Complex business logic.
- Highly specific transformation or validation rules.
- Security/compliance requirements that pre-built tools can't support.
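To make the API-based approach concrete, here is a minimal Python sketch of the plumbing this method makes you build yourself: translating an Oracle row into a Salesforce REST payload and wrapping the call in retry logic. The column names, field mapping, and transport function are hypothetical placeholders for illustration, not part of any real schema or SDK.

```python
import json
import time

# Hypothetical mapping from Oracle CONTACTS columns to Salesforce
# Contact fields; real column and field names will differ.
ORACLE_TO_SF = {"FIRST_NAME": "FirstName", "LAST_NAME": "LastName", "EMAIL": "Email"}

def to_salesforce_contact(oracle_row: dict) -> str:
    """Translate one Oracle row into a Salesforce REST API JSON body."""
    payload = {sf: oracle_row[ora] for ora, sf in ORACLE_TO_SF.items() if ora in oracle_row}
    return json.dumps(payload)

def send_with_retry(send, body, retries=3, backoff=0.01):
    """Call `send` (any transport, e.g. an HTTPS POST) with simple
    exponential backoff -- the retry logic Method 1 leaves to you."""
    for attempt in range(retries):
        try:
            return send(body)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)

row = {"FIRST_NAME": "Ada", "LAST_NAME": "Lovelace", "EMAIL": "ada@example.com"}
body = to_salesforce_contact(row)
```

In a real integration, `send` would POST the body to a Salesforce REST endpoint with an OAuth token, and the mapping table would be driven by configuration rather than hard-coded.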
Use Cases

- Syncing custom Salesforce objects with Oracle ERP modules (e.g., Orders, Inventory).
- Pushing real-time transaction data from Salesforce to Oracle for fulfillment and billing.
- Creating a custom data sync scheduler or retry mechanism for high-reliability integrations.
- Supporting hybrid integration needs, such as connecting Salesforce, Oracle, and a legacy on-prem system.

Cost

High upfront and ongoing costs due to:

- Developer time for integration logic and testing.
- API management (authentication, throttling, monitoring).
- Maintenance, debugging, and versioning as systems evolve.

While there are no licensing fees for the integration itself, its total cost of ownership (TCO) can exceed that of middleware or low-code platforms.

Pros

- Maximum flexibility and control over data flow, transformation, and logic.
- Real-time or near-real-time data sync capabilities.
- No dependency on third-party integration tools.
- Ideal for complex and custom business logic scenarios.
- Can be highly optimized for performance.

Cons

- Requires significant development effort and expertise.
- High ongoing maintenance and support burden.
- Error handling, retry logic, and monitoring must be built from scratch.
- Tight coupling to API versions can introduce upgrade risk.
- Not suitable for non-technical teams or quick deployments.

Method 2: Oracle Integration Cloud with Prebuilt Salesforce Adapter

OIC is a low-code integration platform that offers a prebuilt Salesforce adapter, allowing users to connect Oracle apps with Salesforce without extensive coding. It provides a visual interface, reusable templates, and out-of-the-box connectors to manage system data flows and events. The platform supports real-time and scheduled integration and can orchestrate complex workflows involving Salesforce, Oracle ERP/HCM, and other third-party applications.

Key Features

- Drag-and-drop workflow automation for building integrations without coding.
- Pre-configured Salesforce and Oracle connectors to accelerate setup.
- Supports APIs, batch, and event-driven integrations for flexible use cases.
- AI-powered data mapping to simplify field matching and reduce manual work.
- Built-in monitoring and error tracking for visibility and issue resolution.
- Secure connectivity with role-based access control and audit logging.
- Support for hybrid cloud and on-premises environments.
- Extensive library of integration templates to streamline common workflows.
- Seamless integration with Oracle SaaS applications (ERP, HCM, SCM, etc.).

Best For

Organizations that are already Oracle customers and want to integrate with Salesforce quickly using supported, enterprise-grade tools without building everything from scratch.

Use Cases

- Syncing Salesforce leads and opportunities with Oracle ERP sales orders or quotes.
- Sending Oracle HCM employee records into Salesforce for onboarding workflows.
- Connecting Oracle Financials with Salesforce to reflect payment and invoice statuses.
- Creating bi-directional sync between Salesforce and Oracle SCM for order tracking.

Cost

License-based [pricing](https://www.oracle.com/integration/pricing/) tied to Oracle Cloud usage. It may be bundled for existing Oracle SaaS customers. Cost varies depending on the number of connections, transactions, and required services.

Pros

- Easy bi-directional integration with Salesforce.
- Easy mapping to and from Salesforce business objects.

Cons

- Pricing is relatively high (about $4,000 per month).
- The number of external objects that can be created is limited (100).
- The number of external and other entities joined per request is limited (4).
- The length of OAuth tokens issued by an external system is limited (4,000 characters).
- The total page size for server-managed paging is limited (2,000 lines).

Click [here](https://docs.oracle.com/en/cloud/paas/integration-cloud/sforce-adapter/create-connection.html#GUID-10DBB72D-F6F9-4851-B2BA-9E593CF65F2E) for more details on how to connect apps to share data.
The limitations and unfriendly pricing of such a semi-native tool make handling data exchange entirely on the Skyvia platform a more cost-effective solution.

Method 3: Using Salesforce Connect and Skyvia Connect

This approach allows Salesforce to access data stored in Oracle in real time without importing it into Salesforce. It uses external objects to represent Oracle data as if it were native to Salesforce. Combined with Skyvia Connect, which exposes Oracle data as OData endpoints, it becomes a powerful solution for viewing and interacting with Oracle data live from Salesforce. This method is a strong choice for organizations that need real-time access to Oracle data in Salesforce but want to avoid data duplication and storage costs.

Key Features

- Connects data from the Oracle database with Salesforce without programming, using a combination of an OData connection for Oracle and native Salesforce Connect.
- Supports the OData 2.0 and 4.0 protocols or custom adapters, depending on the Oracle setup.
- OData allows users to create an OData endpoint for an Oracle database that is exposed as an external object in Salesforce using Salesforce Connect.
- External objects in Salesforce can mirror the Oracle data structure.
- Users may have read/write access (depending on the Salesforce edition and setup).
- No need to schedule syncs or manage data replication.

Best For

Organizations that:

- Need up-to-date Oracle data in Salesforce without importing or duplicating it.
- Want to reduce Salesforce storage usage and costs.
- Require real-time visibility, not batch updates.

Use Cases

- Viewing Oracle customer or order records in Salesforce accounts without syncing data.
- Allowing support reps to check Oracle case statuses in real time from within Salesforce.
- Displaying Oracle ERP or HR data in Salesforce dashboards and reports using external objects.
- Creating a unified 360-degree view of customers by merging Oracle and Salesforce data in the UI.
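Under the hood, Salesforce Connect talks to an OData endpoint with system query options such as $select, $filter, and $top. The sketch below composes such a query URL in Python; the endpoint host and entity set name are hypothetical placeholders, not a real Skyvia Connect URL.

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint exposing an Oracle table as an entity set;
# host and entity names are illustrative only.
ENDPOINT = "https://connect.example.com/v4/OracleOrders"

def odata_query(entity_url, select=None, filter_=None, top=None):
    """Compose an OData query URL of the kind Salesforce Connect
    issues behind the scenes when a user opens an external object."""
    params = {}
    if select:
        params["$select"] = ",".join(select)   # limit returned properties
    if filter_:
        params["$filter"] = filter_            # server-side row filtering
    if top:
        params["$top"] = str(top)              # page size
    return entity_url + ("?" + urlencode(params) if params else "")

url = odata_query(ENDPOINT, select=["OrderId", "Status"],
                  filter_="Status eq 'OPEN'", top=10)
```

Requesting only the properties and rows you need this way is also the main lever for mitigating the OData performance overhead mentioned in the cons below.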
Cost

A Salesforce Connect license is required for using external objects (included in some editions, a paid add-on in others).

Pros

- OData provides real-time data export from legacy systems such as SAP, Microsoft, etc.
- No duplication: data stays in Oracle, reducing storage use in Salesforce.
- Enables real-time access to Oracle records directly within Salesforce.
- Simplifies integration with no sync schedules or data maintenance.
- Fast setup with Skyvia Connect (no code required).
- Great for read-heavy use cases and dashboards.

Cons

- OData calls can be slower than plain REST calls because OData adds an extra protocol layer on top of HTTP/HTTPS.
- Salesforce Connect requires additional licensing (depending on the edition).
- External objects are limited in functionality compared to native ones (e.g., no triggers, workflows).
- Performance depends on Oracle's availability and response time.
- Write-back operations may be limited or require extra configuration.

Click [here](https://help.salesforce.com/s/articleView?id=sf.platform_connect_setup.htm&type=5) for detailed instructions on connecting Salesforce with external data systems via OData adapters.

OData 2.0 vs. OData 4.0

Let's compare these two protocol versions to see which suits your business scenario best.
| Parameter | OData 2.0 | OData 4.0 |
| --- | --- | --- |
| Entity update method | MERGE HTTP method | HTTP PATCH method with Edit semantics |
| Supported formats | XML and JSON | JSON by default, but metadata may be retrieved in XML format |
| Certifications | OSP | ISO, OASIS |
| Control of metadata amount in JSON query responses | Not supported | Full / Minimal / None |
| Batch request support | Not supported | Supported |
| Search abilities | Search filter (allows a query to specify that the selected properties match some appropriate criteria) | Search (can search by any entity property) |

Please go [here](https://skyvia.com/data-integration/integrate-salesforce-oracle) to review the most helpful scenarios of using Skyvia for Salesforce and Oracle data integration.

Method 4: Using Middleware & ETL Tools

Middleware platforms and [ETL tools](https://skyvia.com/blog/etl-tools/) like [Skyvia](https://skyvia.com/data-integration), [MuleSoft](https://www.mulesoft.com/), [Informatica](https://www.informatica.com/), [Boomi](https://boomi.com/), etc., offer flexible, scalable, and low-code/no-code options for integrating Salesforce and Oracle. These tools act as intermediaries between the systems, enabling users to extract, transform, and load (ETL) data or synchronize it in real time or on a schedule. They support a wide range of integration patterns, like [batch ETL](https://skyvia.com/blog/batch-etl-processing/), real-time sync, reverse ETL, and even complex data orchestration involving multiple systems.

Key Features

- Prebuilt connectors for Salesforce, Oracle, and hundreds of other platforms.
- Drag-and-drop interfaces for building data pipelines.
- Advanced transformation capabilities, including data mapping, filtering, enrichment, and lookups.
- Scheduling and automation of recurring data syncs.
- Support for multiple integration types: ETL, ELT, reverse ETL, sync, and replication.
- Monitoring, logging, and alerting dashboards.

Best For

- Organizations that require scalable and reusable integrations.
- Teams seeking automated data movement without heavy development effort.
- Businesses working with multiple data systems, not just Salesforce and Oracle.

Use Cases

- Extracting Oracle ERP sales data and loading it into Salesforce for account visibility.
- Syncing Salesforce opportunities with Oracle order processing systems.
- Running scheduled data loads from Salesforce to Oracle for financial reconciliation.
- Moving product catalogs from Oracle into Salesforce for sales enablement.
- Integrating Salesforce, Oracle, and marketing tools into a unified data warehouse.

Cost

Pricing varies depending on platform and usage. [Skyvia](https://skyvia.com/pricing) offers a free plan and affordable paid tiers. Enterprise platforms like [MuleSoft](https://www.mulesoft.com/anypoint-pricing) or [Informatica](https://www.informatica.com/products/cloud-integration/pricing.html) may require larger investments. Cost is typically based on data volume, number of connectors, and advanced features used.

Pros

- No-code or low-code setup with an intuitive UI.
- Such solutions support complex workflows with multiple data sources and destinations.
- They reduce manual tasks and human error through automation.
- Tools like Skyvia include logging, monitoring, and rollback options.
- Easy to scale across departments and systems.

Cons

- This method may require platform-specific learning curves.
- Real-time sync depends on tool capabilities (some are better for batch).
- Costs may increase with the volume of data, connectors, or premium features.
- Limited control compared to fully custom API-based integrations.

Skyvia is a great tool for [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/). And, of course, it offers an ETL approach for [Salesforce and Oracle connection](https://skyvia.com/data-integration/integrate-salesforce-oracle) among its [200+](https://skyvia.com/connectors) connectors. Let's review two real-life use cases showing how it works.
Case 1: Integrate Oracle to Salesforce (Step-by-Step Guide)

Imagine that company leadership has decided that even passive online sales must be under management control. In this case, with Salesforce CRM as the source of truth, you create import packages, e.g., "Create Salesforce records (Contacts, Opportunity, Product) from rows in an Oracle table." Let's go through the steps:

1. Go to the top menu and click + Create New.
2. Select Import in the Integration column.
3. Select the Source Type (databases or cloud apps), Connection (Oracle), and Target (Salesforce) accordingly.
4. Go to the Task Editor, click Add New, and select the necessary tables, such as Contacts, in the Source drop-down list. (Note: you may add specific conditions here by clicking +Condition.)
5. In the next step, select Contacts in the Target drop-down.
6. Go to the Mapping Definition tab and configure the mapped columns.
7. Set up the package's schedule and click the Save button.

The same scenario works with Oracle as the single source of truth; you just have to swap the source and target. And, of course, you may use the [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) or [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) features to create and manage multi-stage data pipelines for more complex integrations.

Case 2: Replicate Salesforce to Oracle (Step-by-Step Guide)

In most cases, this means replicating all systems into a single repository: replicate Salesforce tables (Contacts, Opportunity, Product) to the data warehouse and keep them up to date automatically. How to do it:

1. Start with the top menu by clicking + Create New.
2. Select Replication in the Integration column.
3. Choose the Connection (New Salesforce) and Target (BQCon) in the Source Type fields.
4. Check the necessary Options from the list: Incremental Updates, Update Schema, Create Tables, Drop Tables, Create Foreign Keys. (Note: In this case, you need the ones checked on the screenshot.)
Choose the replication objects and click Run and Save to finish the process. Note: You may use the Schedule option for the automated replication procedure, as described in the Contacts import scenario. To load Oracle data into the DWH, use the Import package.

Best Practices for a Successful Integration

- Define clear objectives. Identify what data needs to be integrated, why, and how it will support your business processes.
- Choose the right integration method. Consider real-time needs, data volume, team skills, and system complexity when selecting between API, middleware, OData, or prebuilt tools.
- Establish strong data mapping and transformation rules. Ensure consistency between systems by aligning field formats, data types, and business logic early in the process.
- Implement error handling and monitoring. Set up logs, alerts, and retry mechanisms to detect and respond to issues proactively.
- Use a sandbox for testing. Always test integration flows in a non-production environment to prevent disruptions and validate functionality.
- Plan for long-term maintenance. Monitor for API changes, schema updates, and shifting business requirements to keep the integration reliable over time.

Conclusion

Each of the methods above is suitable for routine tasks, depending on your business needs. However, for the most common scenarios, Skyvia is a step ahead thanks to its simplicity, intuitive UI, and balanced pricing. In any case, these integration methods automate workflows, maintain data consistency, and provide a unified view across systems. Whether you need real-time access, custom data transformation, or secure connectivity, selecting the right approach will directly impact your team's productivity and data strategy. Focus on scalability, maintainability, and alignment with your infrastructure to ensure long-term integration success.
FAQ for Salesforce Integration with Oracle

What's the easiest way to integrate Salesforce and Oracle without coding? The easiest no-code option is Skyvia, which offers prebuilt connectors, an intuitive interface, and scheduling capabilities. It's perfect for businesses without in-house developers.

Can I get real-time data from Oracle and put it into Salesforce? Yes, real-time data access is possible using Salesforce Connect with Skyvia Connect. This method lets Salesforce display live Oracle data using external objects without storing it inside Salesforce.

What's the difference between ETL and Salesforce Connect? ETL transfers and transforms data physically between systems, often on a schedule. Salesforce Connect uses OData and external objects to access Oracle data in real time without moving it.

Is Oracle Integration Cloud the best option if I already use Oracle ERP or HCM? Yes, Oracle Integration Cloud (OIC) is optimized for Oracle's ecosystem and includes a prebuilt Salesforce adapter, making it a strong low-code solution for Oracle-centric companies.

How do I know which method to choose? Choose based on your priorities: use APIs for full control and customization; use OIC for fast Oracle-native integration; use Skyvia or other middleware for flexible, no-code automation; use Salesforce Connect for real-time data visibility without data replication.

Does Salesforce Connect require an extra license? Yes, Salesforce Connect is not available in all editions and may require a separate license, depending on your Salesforce plan.
" }, { "url": "https://skyvia.com/blog/oracle-rest-apis/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) REST API for Oracle Integration By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) November 30, 2024
According to [Statista](https://www.statista.com/statistics/809750/worldwide-popularity-ranking-database-management-systems/#:~:text=As%20of%20June%202024%2C%20the,rounded%20out%20the%20top%20three.), Oracle is the most popular DBMS solution in the world in 2024. Oracle databases have remained a top choice for enterprises and SMBs for decades due to their excellent performance, ease of administration, and security. Oracle databases are usually not standalone solutions but serve as a foundation for multiple apps, so companies use the Oracle REST API to connect apps with a database for data exchange. In this article, we provide an Oracle REST API tutorial on integrating a database with other apps. Given that the REST API method requires strong technical expertise and programming skills, its implementation might be challenging and time-consuming. Meanwhile, Skyvia adds ease to the integration process by offering solutions for data exchange, so we'll also explore how Skyvia connects Oracle to 200+ other data sources with no coding.

Table of Contents What Is a REST API? Setting Up Oracle REST APIs Integrating Oracle REST APIs with Applications Benefits of Using Oracle REST APIs Skyvia's Integration with Oracle REST APIs Conclusion

What Is a REST API?

Representational State Transfer Application Programming Interface, better known as [REST API](https://www.ibm.com/topics/rest-apis), is a set of rules and guidelines for interaction between web services. REST APIs conform to the following standards:

- A client-server architecture with clients, servers, and resources.
- The HTTP protocol is used for managing requests.
- Stateless communication doesn't preserve client information between requests; each request is an independent process.
- Cacheable data for streamlined interactions.
- A unified interface for standardization of the data exchange between clients and servers.

REST uses the HTTP methods for communication:

- GET – to retrieve data from a server.
- POST – to send data to a server to create/update a resource.
- PUT – to update or replace existing data on a server.
- DELETE – to ask a server to remove a specified resource.

Exchanging information between client and server via an API implies that data is encapsulated in a standard format. [JSON](https://www.json.org/json-en.html) is the most commonly used format since it's readable by humans and computers, lightweight, and suitable for speedy data transfer. Sometimes, XML files are also used to exchange data.

Overview of Oracle REST APIs

Oracle REST APIs have characteristics similar to those of all REST APIs. Let's have a look at them in detail:

- Stateless communication provides the ability to scale without considering previous interactions. Each API call is independent and doesn't store any session state on a server.
- OAuth 2.0 and API keys are used for authentication by Oracle REST APIs.
- The JSON format is common for request and response bodies in the data exchange process, though XML is also supported.
- Standard HTTP methods are used for communication: GET, PUT, POST, and DELETE.
- Data and services come as resources, each with a unique resource identifier (URI).

The URIs or URLs within an API are endpoints that specify where a client can access a particular resource. Endpoints often correspond to actions like retrieving, creating, updating, or deleting resources, and they typically work with HTTP methods such as GET, POST, PUT, or DELETE. See the list of all [REST endpoints](https://docs.oracle.com/en/database/oracle/oracle-database/19/dbrst/rest-endpoints.html) for Oracle databases.
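The four HTTP verbs above can be illustrated with Python's standard library. This sketch builds (but does not send) one request per verb against a hypothetical ORDS-style endpoint; the host, schema, and table names are placeholders, not a real service.

```python
import json
from urllib.request import Request

# Hypothetical ORDS-style endpoint for a REST-enabled EMPLOYEES table;
# the host and path are illustrative placeholders.
BASE = "https://db.example.com/ords/hr/employees/"

def build(method, url, payload=None):
    """Build (but do not send) an HTTP request for the given REST verb."""
    data = json.dumps(payload).encode() if payload is not None else None
    headers = {"Content-Type": "application/json"} if data else {}
    return Request(url, data=data, headers=headers, method=method)

get_all = build("GET", BASE)                                  # retrieve records
create  = build("POST", BASE, {"ename": "KING"})              # create a record
update  = build("PUT", BASE + "7839", {"job": "PRESIDENT"})   # replace a record
delete  = build("DELETE", BASE + "7839")                      # remove a record
```

Passing each `Request` to `urllib.request.urlopen` would actually perform the call; in practice the JSON response body would then be parsed with `json.load`.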
Setting Up Oracle REST APIs

To use [Oracle REST APIs](https://docs.oracle.com/en/database/oracle/oracle-database/19/dbrst/index.html), follow the detailed step-by-step instructions provided below.

Prerequisites: Before installation, make sure you have the following tools at your disposal:

- Oracle Database (Enterprise Edition, Standard Edition, or Standard Edition One) release 11g Release 2 or later, or Oracle Database 11g Release 2 Express Edition.
- Oracle Java 8 or later.
- Web browser: Microsoft Internet Explorer 8.0 or later, Mozilla Firefox 3.0 or later, or Google Chrome 2.0 or later.

Step 1. Install and Configure Oracle REST Data Services (ORDS)

ORDS is a Java-based service used to expose data from an Oracle database via RESTful endpoints. NOTE: Install ORDS on a server with access to your Oracle database.

- Download the file ords.version.number.zip from the [ORDS official website](https://www.oracle.com/database/technologies/appdev/rest.html).
- Extract the downloaded zip file into a chosen directory.
- Select the installation option: advanced mode requires a user to enter [CLI prompts](https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/19.2/aelig/installing-REST-data-services.html#GUID-34C2C8C8-4AF1-402A-A956-E0A42FCED85E) manually; silent mode doesn't require user intervention, since the installation uses the parameters specified in the /ords_params.properties file under the location where ORDS is downloaded and extracted.
- To perform the installation in advanced mode, execute the command: java -jar ords.war install advanced
- To verify that the ORDS installation is successful, go to the directory with the ords.war file and enter the following command: java -jar ords.war validate [--database <dbname>]
- [Enable ORDS Database API](https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/19.4/aelig/enabling-ords-database-api.html#GUID-8730051B-7C03-487B-954A-7D6786B7EC74), since this feature is disabled when installing ORDS for the first time.
- [Run ORDS in Standalone Mode](https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/19.4/aelig/installing-REST-data-services.html#GUID-3DB75A67-3E66-48EF-87AC-6948DE796588).
- [REST-enable the Oracle database schema](https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/19.4/aelig/rest-enabled-sql-service.html#GUID-7C0E4F73-D253-4251-9789-ED3F4021B561) and grant the DBA and PDB_DBA roles to the schema user.

Step 2. Deploy ORDS on a Web Server

Oracle REST Data Services supports the following Java EE application servers:

| Application Server | Supported Release |
| --- | --- |
| Oracle WebLogic Server | 12c Release 2 (12.2.1.3) or later |
| Apache Tomcat | Release 8.5 or later |

In this example, we show how to deploy ORDS as a web application on the Apache Tomcat web server.

- If not yet installed, download [Apache Tomcat](https://tomcat.apache.org/download-80.cgi).
- Copy the ords.war file into the webapps directory where Apache Tomcat is installed.
- Access Oracle Application Express by typing the following URL in your web browser: http://<server>:<port>/ords/, where <server> is the name of the server where Apache Tomcat is running and <port> is the port number configured for the Apache Tomcat application server.

Step 3. Define Oracle REST APIs

Create REST APIs and expose them from your Oracle database. This is possible with SQL Developer or simply by using PL/SQL. In this example, we use SQL Developer since it offers a graphical UI for administering Oracle REST APIs. First, configure the administrator user:
```
java -jar ords.war user adminlistener "Listener Administrator"
```

1. Enter the password for the adminlistener user, then confirm it.
2. In SQL Developer, connect to your Oracle database and navigate to REST Data Services under the Connections tab.
3. Right-click RESTful Services and select New RESTful Service.
4. Define the REST API endpoint, the HTTP method (GET, POST, PUT, or DELETE), and the SQL query or PL/SQL logic.

Step 4. Test APIs

To test the newly created REST API endpoint, try accessing it via a web browser. Check [Oracle’s official documentation website](https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/19.2/aelig/installing-REST-data-services.html#GUID-6F0C8455-EA8B-4CF3-B297-3533A50D184C) for more detailed information on ORDS deployment and API endpoint creation.

Integrating Oracle REST APIs with Applications

REST APIs allow you to access and manipulate data in the Oracle database from an application. To integrate Oracle REST APIs with your application, follow the detailed step-by-step instructions below.

Step 1. Install cURL

To access the Oracle Database REST API, download and install the [cURL command-line tool](https://curl.se/). Make sure that the selected version supports SSL to establish a secure connection to the server. Once the zip package with cURL is downloaded, extract its contents into the preferred folder.

Step 2. Authentication

To access Oracle Database REST API resources, you need to authenticate:

- Provide an SSL certificate authority (CA) certificate file. You can download an [SSL CA certificate bundle](http://curl.haxx.se/docs/caextract.html) or provide your own. [Verify the SSL CA certificate](https://curl.se/docs/sslcerts.html) using the cURL tool.
- Provide the name and password credentials for your Database REST API account, using the -u option in cURL to pass them.
- Specify the custom header X-ID-TENANT-NAME to identify the identity domain ID.
Use the -H option in cURL to pass X-ID-TENANT-NAME. Example (the `<...>` values are placeholders for your own credentials, identity domain, and host):

```
curl -i -X GET -u <username>:<password> \
  -H "X-ID-TENANT-NAME:<identity-domain>" \
  https://<host>/<endpoint>
```

Step 3. Make API Calls

Start sending HTTP requests to Oracle Database API endpoints. You can also do this with the cURL tool. Example:

```
curl -X POST https://<host>/ords/<schema>/<table>/ \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{ "column1": "value1", "column2": "value2" }'
```

Step 4. Handle API Responses

After calling Oracle Database REST API endpoints, the server returns [standard HTTP status codes](https://docs.oracle.com/en/database/oracle/oracle-database/21/dbrst/Status-Codes.html) and JSON response bodies. Write code within your application to handle these responses properly: parse the JSON and implement business logic that handles each response appropriately.

Step 5. Error Handling

In case the returned status code signals an issue, you need to implement an error-handling mechanism. Here are some HTTP status codes that report a problem:

- Unauthorized access (HTTP 401)
- Forbidden access (HTTP 403)
- Not Found (HTTP 404)
- Internal server error (HTTP 500)

Benefits of Using Oracle REST APIs

No doubt, REST APIs facilitate integration between Oracle databases and other applications. Here are some of the notable advantages of using Oracle Database REST APIs:

- Scalability. ORDS is designed to handle large volumes of requests. Even with an increasing number of API calls, performance remains stable and optimal.
- Customization. You can create custom queries with Oracle REST APIs to fetch only the data you need. This benefits performance, since no unnecessary data is retrieved.
- Security. ORDS provides secure access to and manipulation of data, since it supports OAuth 2 for authentication, role-based access control (RBAC), and data encryption. This helps companies secure their sensitive data.
- Versatility.
REST APIs are platform-independent, allowing developers to integrate Oracle services into applications running on any mobile, web, or desktop platform. Documentation. Oracle provides comprehensive documentation on REST APIs, which makes the lives of developers easier. Skyvia\u2019s Integration with Oracle REST APIs Skyvia provides a pre-built [connector for the Oracle database](https://skyvia.com/connectors/oracle) . You can configure it by providing the necessary details in the visual wizard. Then, use the Oracle connector to build zero-code integration scenarios with Skyvia to transfer data to and from your database. [Skyvia](https://skyvia.com/) is a universal cloud platform for various data-related tasks. It connects to [200+ sources](https://skyvia.com/connectors) , including Oracle services, and allows you to perform various operations on data, starting from integration to OData endpoint creation. All this is done within an intuitive graphic user interface with no programming language requirements. How Skyvia Enhances Oracle REST APIs Skyvia has five core products, four of which can work with Oracle databases: [Data Integration](https://skyvia.com/data-integration) . Use a set of tools to transfer data to/from the Oracle database with ETL, ELT, and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) pipelines. [Automation](https://skyvia.com/automation) . Reduce manual work on repetitive tasks and streamline workflows with event-based integrations. [Query](https://skyvia.com/query) . Use a visual SQL query builder or write SQL queries to retrieve data from an Oracle database. [Connect](https://skyvia.com/connect) . Create OData and SQL endpoints without coding and obtain ready-to-use endpoint URLs with no need to manage hosting and administration. Before using Skyvia products on Oracle data, connect to your database. Skyvia supports both on-premise servers and Oracle Cloud. Log into your [Skyvia account](https://app.skyvia.com/register?) 
or create a new one.

1. Click + Create New in the top menu and select Connection.
2. Select Oracle from the list of available data sources and click on it.
3. Provide all the requested information in the window:
   - Server – name or IP address of the Oracle server host.
   - Port – Oracle server connection port; the default value is 1521.
   - Connection Syntax – determines whether to use Service Name or SID to connect. Service Name is an alias to an Oracle database instance (or many instances); SID is a unique name for an Oracle database instance.
   - User – user name to log in with.
   - Password – password to log in with.
   - Connect as – specifies how you want to connect to the Oracle server. This parameter is set to Normal by default. Alternatively, you can select administrative privileges, such as SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, or SYSKM.

Learn other details on how to [create an Oracle connection](https://docs.skyvia.com/connectors/databases/oracle_connections.html) in Skyvia.

Business Benefits with Skyvia Integration

Skyvia has enormous potential when it comes to various data integration scenarios and operations. It greatly facilitates data retrieval from your database, transfer of new or updated information, and even deletion of unnecessary records. Overall, using Skyvia brings tangible advantages for your business:

- Simplicity. Connect to the Oracle service with no coding, which is much faster and simpler than API integration.
- Connectivity. Set up data integration between Oracle and [200+ sources](https://skyvia.com/connectors), including cloud applications, databases, data warehouses, [legacy systems](https://skyvia.com/learn/legacy-system), storage services, and flat files in a standardized format.
- Data transformation. Cleanse and organize data with Skyvia’s powerful [data transformation](https://skyvia.com/learn/what-is-data-transformation) capabilities.
- Affordability.
Skyvia is suitable for businesses with any workload, from startups to enterprises, and offers several reasonably priced plans.

Conclusion

Both Oracle REST APIs and Skyvia aim to integrate Oracle databases with other tools. However, each does it in its own way, which makes each method suitable for a different scope of tasks. Oracle REST APIs are an excellent option for enabling database interactions within your custom app. If you need to send data from other corporate apps, such as HubSpot, Dynamics CRM, or QuickBooks, to your Oracle database, then Skyvia is definitely the better choice.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) – With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
[Data Integration](https://skyvia.com/blog/category/data-integration/) Overcoming ERP Data Integration Challenges – By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), March 14, 2024

Resource counting and management are the backbone of every business in the modern world. To make correct strategic decisions, a company’s leadership must orchestrate the following core processes into a unified system:

- Finance
- HR
- Manufacturing
- Supply chain
- Services
- Procurement

Enterprise Resource Planning (ERP) is a category of business software designed to automate and manage this zoo of tools. Why is ERP so essential? The answer is that such systems allow us to see the puzzle pieces within the whole picture and allocate resources accordingly. It’s like a chess board where you always play white, move first, and win.
Let\u2019s walk through the ERP data integration challenges and solutions and explore how to reach success. Table of Contents Common \u0441hallenges in ERP data integration How Skyvia can help overcome ERP data integration challenges Technical hurdles in integration Security and compliance concerns Managing organizational change Cost and resource constraints Strategies for overcoming ERP data integration challenges Conclusion Common challenges in ERP data integration Implementation of the ERP data integration is about both employees and technologies. The main challenge here is that people often dislike new things that require extra effort. The other side of the coin is technical pitfalls. Let\u2019s explore the typical difficulties and solutions with the examples in the table below. Challenge Description Solution Data Silos Disparate systems and applications can create data silos, complicating the consolidation of data into a unified ERP system. Implement a data integration platform to connect disparate systems, promoting a unified data environment. Encourage cross-department collaboration to ensure data flows seamlessly across the company. Data Quality Ensuring the accuracy, completeness, and consistency of integrated data poses challenges, affecting the reliability of the ERP system. Establish data governance policies that define data standards, cleanliness, and maintenance procedures. Use data quality tools to cleanse and deduplicate data before integration. Complex Data Mappings Mapping data fields from various sources to the ERP system can be complex and time-consuming, requiring a deep understanding of the source and target data structures. To simplify data mapping, use advanced [ETL tools](https://skyvia.com/blog/etl-tools/) with graphical interfaces. Engage data experts early in the project to define clear mapping rules. 
Real-time Data Integration Ensuring data is updated in real-time across systems can be difficult, especially when dealing with legacy systems that do not support real-time data exchange. Employ integration platforms that support real-time, or close to real-time data synchronization and consider adopting a microservices architecture to facilitate faster data exchanges. Technical Expertise The lack of in-house technical expertise can be a barrier, as successful data integration requires knowledge of both the ERP system and the external data sources being integrated. Invest in training for staff or partner with experienced vendors and consultants who can provide the necessary expertise. Sure, the list of the challenges businesses may face in ERP data integration is longer. It includes technical difficulties, security, compliance troubles, etc., but the main question is how data integration solutions presented in the market can help them. How Skyvia can help overcome ERP Data Integration challenges Skyvia is the perfect tool for companies seeking to manage large volumes of data within their ERP systems and beyond. [Skyvia Data Integration](https://skyvia.com/data-integration/) scenarios provide users with the following capabilities: Easily schedule and manage large-scale data transfers and transformations without coding. The application is cloud-based and scalable, so businesses can process increasing amounts of data and forget about the underlying infrastructure or performance degradation. [180+](https://skyvia.com/connectors/) pre-built connectors for popular ERP systems and other business apps simplify the setup of integration workflows, reducing the time and effort to get started. Data transformation features help to modify and map data as part of the integration process, so data from various sources fits seamlessly into the target ERP system\u2019s schema. Error logging and handling mechanisms help identify and correct integration data issues. 
- Encryption and compliance measures protect sensitive information during integration, which is vital when dealing with large datasets containing confidential data.
- A user-friendly interface allows the visual creation of data integration flows, making it easy to configure and manage integration scenarios, even for non-technical users.

Technical hurdles in integration

Enterprise data ecosystems are always complex, since they use various data formats and processes. Integrating them is complicated, especially in the context of API limitations, middleware dependencies, and scalability and performance bottlenecks. Let’s look closely at these hurdles.

API Limitations

- Rate Limits. Many APIs limit the number of requests per given timeframe. This limitation is especially painful for large datasets, slowing down data synchronization.
- Feature Restrictions. Not all ERP functionality may be exposed via APIs. Complex processes may not be supported, requiring workarounds or manual inputs.
- Versioning and Deprecation. Users must monitor API updates and deprecations and adjust to them to ensure continuous operations.

Middleware Dependencies

- Complexity. Although third-party tools serve as a bridge between ERP systems and other apps, their configuration and maintenance may add complexity to the IT infrastructure, requiring specialized management skills.
- Cost. Many such solutions, especially enterprise-grade ones, can be costly to set up and maintain.
- Vendor Lock-in. Dependency on specific third-party solutions limits flexibility and increases costs as the business grows.

Scalability and Performance Bottlenecks

- Data Volume. As data volume increases, the ERP system and integration infrastructure risk overload, which means slower response times and decreased user satisfaction.
- Concurrency.
Concurrent users and processes lead to similar troubles, causing delays and impacting critical business operations.

- Resource Allocation. For on-premises deployments, the computing resources (CPU, memory, bandwidth, etc.) allocated to the ERP system or integration components can be insufficient, which hinders performance.
- Optimization Needs. Companies often need to optimize the ERP system’s architecture, database, or integration logic to increase efficiency.

Security and compliance concerns

Maintaining data security and ensuring compliance with regulations (e.g., GDPR, HIPAA) during the integration process can be complex, particularly when sensitive or personal data is involved. Here are the key issues and challenges in these areas.

Data Protection and Privacy Issues

- Sensitive Data Exposure. ERP systems handle sensitive data, like personal information, financial records, and proprietary business data. Protecting this information from unauthorized access or exposure is a must.
- Data Access Control. Ensuring that users have access only to the data necessary for their role is often complicated, especially in large companies with complex hierarchies and varied data access needs.
- Data Encryption. Encrypting data at rest and in transit to prevent interception or unauthorized access is essential, but implementing encryption can be challenging and resource-intensive.
- Data Residency and Sovereignty. Global businesses must navigate varying data protection laws, ensuring that data storage and processing comply with local regulations regarding data residency.

Regulatory Compliance Challenges

- Compliance Documentation and Reporting. Demonstrating compliance often requires extensive documentation and reporting capabilities, which are challenging to maintain in dynamic business environments.
- Cross-border Data Transfers.
For multinational companies, transferring data across borders can complicate compliance with laws restricting or regulating international data flows.

- Audit Trails and Monitoring. Regulatory bodies require detailed audit trails of data access and changes. Maintaining these logs and ensuring they are comprehensive and tamper-proof is complex.
- Third-party Vendor Management. ERP systems often integrate with third-party services and applications, posing potential compliance risks. Ensuring all third-party vendors comply with relevant regulations can be a significant undertaking.
- Data Minimization and Retention. Regulations like GDPR emphasize the principles of data minimization and retention, challenging businesses to implement policies that collect only necessary data and retain it no longer than needed.

Managing organizational change

Implementing a new ERP system or integrating new components into an existing one often faces the challenge of user adoption and training. Users accustomed to a particular workflow may resist changing to a new system due to discomfort with the unknown or fear of increased workload. Effective training is crucial but can be hampered by inadequate resources, the complexity of the system, or a mismatch between user needs and the training provided. Overcoming these difficulties requires:

- Customized Training Programs. Tailoring training to meet the specific needs of different user groups within the organization.
- Engagement and Support. Actively engaging users in the integration process and providing continuous support to address their concerns and challenges.
- Iterative Learning. Adopting an iterative approach to training, allowing users to gradually acclimate to the new system through hands-on experience and feedback.

Companies must also align ERP projects with their business goals to ensure they deliver value and support strategic objectives.
This step helps avoid wasted resources, systems that don’t meet operational needs, and missed growth opportunities. Aligning ERP integration with business goals includes:

- Strategic Planning. Involving stakeholders from across the organization in the planning process to ensure the ERP system meets diverse needs and supports the overall business strategy.
- Clear Objectives. Defining clear, measurable objectives for the ERP integration that relate directly to business goals, such as improving operational efficiency, enhancing customer satisfaction, or supporting expansion.
- Continuous Evaluation. Regularly reviewing the performance and impact of the ERP system on business goals and being prepared to adjust strategies in response to changing conditions or objectives.

Cost and resource constraints

Cost and resource constraints are another pain point for businesses undertaking ERP data integration: actual costs may exceed initial budget estimates due to unforeseen technical challenges, scope creep, or the need for additional resources.

Budget Overruns

Budget overruns occur when the actual cost of ERP integration exceeds initial estimates. Typical causes include unforeseen technical issues, scope creep where project requirements expand beyond original plans, and underestimating the complexity of data migration or customization needs. Managing this challenge effectively requires:

- Thorough Planning. Detailed upfront planning to anticipate potential costs and complexities.
- Scope Management. Strict project scope management to avoid unnecessary expansions or changes mid-project.
- Contingency Budgeting. Allocating a contingency budget to cover unexpected expenses.

Hidden Costs

Hidden costs in ERP integration often appear in areas not always accounted for in initial budgets, like additional software licenses, third-party integration tools, training, and ongoing support and maintenance. To avoid this trouble:

- Comprehensive Analysis.
Analyze all potential costs before launching the project.

- Vendor Transparency. Work closely with vendors to understand all possible charges associated with the ERP system and integration tools.

Resource Allocation and Expertise Availability

Allocating the right mix of internal and external resources and ensuring access to the necessary expertise are must-haves for ERP integration success. Challenges arise from limited internal IT resources, the need for specialized knowledge not available in-house, and competition for highly skilled professionals. Approaches to address these challenges include:

- Strategic Outsourcing. Leveraging external partners and consultants to fill expertise gaps.
- Training and Development. Regular training for internal staff to build the appropriate skills.
- Resource Planning. Careful planning and allocation of internal resources to balance day-to-day operations with project demands.

Strategies for overcoming ERP data integration challenges

Each company has its own business specifics, but when implementing any changes, the leadership must understand the overall goal and build strategies to reach it. Such strategies may include ERP integration planning, leveraging the right modern integration platform, and ongoing monitoring and optimization.

ERP Integration Planning Step-by-Step Guide

Follow these steps to navigate the complexities of ERP integration planning, ensure a smoother implementation process, and get the most out of ERP investments.

1. Define Clear Objectives. Clarify the goal to achieve with the ERP integration. Define specific, measurable objectives that align with your overall business strategy.
2. Analyze Requirements. Engage stakeholders from all relevant departments to gather comprehensive requirements. Define the data flows, processes, and system interactions that the integration will affect.
3. Choose the Right Integration Approach. Evaluate different integration approaches (point-to-point, middleware, API-based, etc.) and tools, considering your objectives, budget, and technical capabilities. Select the one that best fits your needs and offers scalability.
4. Map Data Meticulously. Pay detailed attention to data mapping. Understand how data is structured in your existing systems and how to transform it for the ERP system. This step is crucial for maintaining data integrity and accuracy.
5. Prioritize Data Security and Compliance. Ensure the integration plan includes robust measures for data security and compliance with relevant regulations, like data encryption, secure data transfer mechanisms, and compliance checks.
6. Plan for Customization and Flexibility. Balance customizations with the need to keep the system maintainable and upgradeable.
7. Develop a Detailed Integration Timeline. Create a realistic project timeline that includes milestones, testing phases, and buffer periods for unexpected delays. Communicate this timeline to all stakeholders.
8. Prepare for Change Management. Develop a change management plan that includes training, communication, and support strategies to ease the employee transition.
9. Invest in Testing. Allocate sufficient resources for thorough integration testing, including unit testing, system testing, and user acceptance testing (UAT). Testing should cover functionality, performance, and security.
10. Ensure Scalability and Performance. Design the integration with future growth in mind. The system should be scalable, performant under varying loads, and ready to accommodate increased data volumes and business expansion.
11. Establish a Monitoring and Support Framework. Plan for ongoing monitoring of the integrated system to detect and resolve issues promptly. Make sure users have access to support resources.
12. Review and Iterate. Treat ERP integration as an ongoing process. Regularly review the integration’s performance against objectives and make adjustments as necessary.
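The data-mapping and testing steps above can be sketched in code. Below is a minimal, hypothetical Python sketch of a source-to-ERP field mapping with a validation check; all field names and rules are illustrative assumptions, not any specific ERP’s schema.

```python
# Hypothetical source-to-ERP field mapping (illustrative names only).
SOURCE_TO_ERP = {
    "cust_name": "CustomerName",   # CRM field -> ERP field
    "cust_email": "Email",
    "amount_usd": "InvoiceAmount",
}

# Fields the (hypothetical) target ERP schema requires.
REQUIRED_ERP_FIELDS = ("CustomerName", "InvoiceAmount")

def map_record(source: dict) -> dict:
    """Rename source fields to ERP names, dropping unmapped ones."""
    return {erp: source[src] for src, erp in SOURCE_TO_ERP.items() if src in source}

def validate(erp_record: dict) -> list:
    """Return a list of problems; an empty list means the record is valid."""
    errors = [f"missing field: {f}" for f in REQUIRED_ERP_FIELDS if f not in erp_record]
    amount = erp_record.get("InvoiceAmount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append("InvoiceAmount must be a non-negative number")
    return errors

record = {"cust_name": "Acme Corp", "cust_email": "billing@acme.test", "amount_usd": 1250.0}
mapped = map_record(record)
print(mapped)
print(validate(mapped))  # [] -> record passes validation
```

In a real project, mapping rules like these would live in the integration tool’s configuration rather than code, but making them explicit and unit-testable is exactly what the “Map Data Meticulously” and “Invest in Testing” steps call for.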
Leveraging the suitable integration platform

Leveraging modern integration platforms offers scalability, flexibility, and enhanced data management capabilities. This approach solves technical challenges and supports strategic business objectives, allowing more efficient, secure, and insightful data usage. Modern solutions like [Skyvia](https://skyvia.com) offer no-code or low-code environments to create and manage integrations without deep technical expertise, making the integration process more accessible and faster to implement.

[iPaaS platforms](https://skyvia.com/blog/best-ipaas-solutions/) typically provide:

- Support for a variety of integration patterns, including real-time, batch, and event-driven, giving the flexibility to address diverse business requirements.
- Data flow automation that reduces manual data entry and the associated risk of errors.
- Data transformation and mapping to ensure data is correctly formatted and aligned between systems.
- Encryption, secure data transmission, and compliance with data protection regulations, reducing the compliance burden on businesses.
- A set of pre-built connectors for popular ERP systems and other business apps that decreases the time and effort required for integration projects.
- Pay-as-you-go pricing that reduces the cost of maintenance and updates.
- Advanced error handling features that help promptly solve issues, ensuring uninterrupted data flows.
- A centralized dashboard to monitor and manage all integrations, offering visibility into data flows and system performance.

Conclusion

Enterprise Resource Planning is like housekeeping: you cannot implement it once and forget about it for a decade. The market evolves, and business goals change with current trends, organizational needs, and scenarios, so keep an eye on trends and select suitable solutions.
But in any case, overcoming ERP data integration challenges means combining strategic planning, the right technological tools, and a commitment to continuous improvement.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) – Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Pipedrive Integration with QuickBooks: Automate & Grow – By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/), May 29, 2025
Data transparency is always key, especially when it comes to sales and finance. Those who’ve spent hours syncing with accountants or chasing sales reps for invoice details know how frustrating these situations can be. Often, such silos lead to errors, delays, and duplicated efforts. An efficient way to address this is to integrate your CRM with accounting software. Take Pipedrive, a smart, sales-focused CRM, and QuickBooks, a leading accounting solution. In this blog post, we’ll explore how to integrate them, the benefits of doing so, and best practices to help you get started quickly and effectively.

Table of Contents

- Benefits of Integrating Pipedrive and QuickBooks
- What to Integrate Between Pipedrive and QuickBooks?
- Methods for Pipedrive QuickBooks Integration
  - Method 1. Native Integration
  - Method 2. Point-to-Point Connector Tools
  - Method 3. Dedicated Data Integration Platforms
- Best Practices for a Smooth Pipedrive QuickBooks Integration
- Conclusion

Benefits of Integrating Pipedrive and QuickBooks

When Pipedrive and QuickBooks work in sync, the advantages are more than obvious – they’re tangible. In a nutshell, this integration is a powerful way to increase the efficiency of your teams, leading to cleaner operations and better decision-making. Let’s take a closer look at the key points:

- Less manual data entry & errors. First and foremost, this integration frees your team from tedious manual work like re-entering customer details, payment terms, or invoice amounts.
By automating these handoffs, tasks are completed faster and with greater accuracy: no more human-induced errors slowing you down.
- **Streamlined sales-to-accounting workflow.** A deal is marked as "Won" in Pipedrive, and an invoice is instantly created in QuickBooks. No lag, no duplicate work, just one smooth, continuous workflow where each action naturally triggers the next.
- **360-degree view of customer financials.** No more frantic tab-switching or cross-referencing. With both platforms connected, teams get a shared view of customer interactions and financial status. The result? Smarter conversations and better-informed decisions, both internally and with customers.
- **Improved cash flow management.** The faster you invoice, the faster you get paid. Pipedrive QuickBooks integration makes one a corollary of the other, removing delays and making payment tracking easier.
- **Enhanced reporting & forecasting.** Bringing sales and financial data together lets you align revenue projections with real-time performance. The result is richer, more actionable reporting and forecasts you can actually trust.
- **Better customer experience.** Invoice errors and billing delays are a major source of customer dissatisfaction. Integration reduces these friction points, leading to faster, more accurate invoicing.

## What to Integrate Between Pipedrive and QuickBooks?

So, what are the most commonly used objects in Pipedrive QuickBooks synchronization? Let's explore some real-world examples of popular integration scenarios and the benefits they bring:

| Integration Object Pair | Description | Benefits |
| --- | --- | --- |
| Pipedrive Deals → QB Invoices / Sales Receipts | A classic trigger-action use case: when a deal is marked as "Won" in Pipedrive, an invoice or sales receipt is automatically created in QuickBooks. No delays, no manual steps. | Accelerated billing; reduced manual effort. |
| Pipedrive Organizations → QB Customers | A foundational sync with direct mapping. Whenever an organization is added or updated in Pipedrive, it's reflected in QuickBooks for accurate invoicing and reporting. | Data consistency across platforms. |
| Pipedrive People → QB Customer Contacts | Syncing Pipedrive contacts to QuickBooks Customer Contacts builds a reliable link so you never miss the right person in communication. | Verified communication channel; accurate invoicing. |
| Pipedrive Products → QB Products / Services | Even the smartest accounting system can't invoice for items it doesn't know. Syncing product catalogs between platforms ensures that line items match exactly. | Aligned product catalog; accurate invoicing. |
| QB Payment Status → Pipedrive Deal (custom field or note) | QuickBooks tracks payments and invoice statuses; syncing this data back to Pipedrive keeps sales teams informed and ready to take follow-up action. Note: this is possible only with tools that allow custom field mapping and conditional logic, like Skyvia or Make. | Up-to-date payment statuses. |

## Methods for Pipedrive QuickBooks Integration

### Method 1. Native Integration

As software adoption goes hand in hand with connectability, vendors invest in making their products more integrable. Native integration refers to prebuilt apps developed and endorsed by a software vendor to enable plug-in connectivity with other systems. Pipedrive supports over 400 such integrations, including QuickBooks, which is available as a connector app in the Pipedrive Marketplace.

**Best for**
- Small to mid-sized businesses looking for a quick and easy setup.
- Users with minimal technical expertise.
- Basic syncing needs (e.g., deals to invoices, contact matching).
- Teams who prefer UI-based configuration over coding or custom workflows.

**Step-by-step Guide**
1. Log in to your Pipedrive account. You must have the necessary admin rights to install apps.
2. Access the Pipedrive Marketplace: use the [direct link](https://www.pipedrive.com/en/marketplace) or click the profile icon and select Tools and apps → Marketplace.
3. Search for QuickBooks-related apps and follow the installation prompts to install the connector app. For this example, we'll use SyncQ, a Pipedrive-QuickBooks automation tool.
4. Connect your QuickBooks account: log in to QuickBooks and grant the required permissions for the connector to access your customer, invoice, and product data. Once the application is installed, the invoicing feature becomes available directly within your Pipedrive interface.
5. Define what you want to sync. Note: configure invoicing preferences in Pipedrive to have the data entered here automatically synced with your QuickBooks account. Some apps also let you specify trigger events, such as syncing when a new contact is added or a deal stage changes.

**Pros**
- Quick deployment with guided setup.
- Easy to use as a no-code solution.
- Official support and plenty of documentation.

**Cons**
- Supports QuickBooks Online only; integration with QuickBooks Desktop requires third-party solutions.
- Not suitable for custom workflows because of predefined sync logic.
- Object mapping is limited to basic objects only.
- Possible vendor lock-in because of the lack of transparency under the hood.

### Method 2. Point-to-Point Connector Tools

Point-to-point solutions, also known as workflow automation platforms, offer a fast and simple way to connect two systems. They are event-based: an event in one system triggers a defined action in another, hence the name point-to-point. Although automation platforms lack the full capabilities of true [ETL tools](https://skyvia.com/blog/etl-tools/), such as bulk transformations, advanced mapping, or scheduling, they handle their narrowly defined tasks diligently and reliably. Examples of point-to-point solutions include Zapier, Make, Tray.io, IFTTT, Integrately, and others.
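The trigger-action model these tools implement can be pictured in a few lines of Python. This is a minimal sketch for illustration only; the field names (`status`, `org_name`, `value`, `title`) and the `create_invoice` callback are hypothetical, not any vendor's actual API:

```python
# Minimal sketch of a point-to-point trigger-action rule:
# "deal marked Won in Pipedrive" -> "create invoice in QuickBooks".
# All field names and the create_invoice callback are hypothetical.

def on_deal_updated(deal, create_invoice):
    """Fire the invoice action only for deals whose status is 'won'."""
    if deal.get("status") == "won":
        return create_invoice({
            "customer": deal.get("org_name"),
            "amount": deal.get("value"),
            "memo": f"Deal: {deal.get('title')}",
        })
    return None  # any other event is ignored
```

A connector platform is essentially running many such rules on your behalf, adding the parts you'd otherwise build yourself: authentication, retries, and logging.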
**Best for**
- Small businesses with limited technical resources.
- Simple trigger-action or event-response workflows.
- Lightweight automation scenarios between two apps.

**Step-by-step Guide**

Let's walk through creating a workflow in Zapier that automatically generates a QuickBooks invoice each time a deal is marked as "Won" in Pipedrive.

1. Log in to Zapier and click Create in the dashboard to start a new zap (automation).
2. Set up the trigger: choose Pipedrive as the source application and set the trigger event to Deal Matching Filter. Connect your Pipedrive account, authorize access, then choose the pipeline and filter for deals marked as "Won".
3. Set up the action: choose QuickBooks Online as the target app and set the required action event, e.g., create an invoice. Connect your QuickBooks Online account and authorize access.
4. On the Configure tab, specify invoice details by mapping fields from the Pipedrive trigger.
5. Test to ensure the automation runs as expected. If successful, click Publish to activate it.

**Pros**
- Quick setup with visual workflows.
- No-code, drag-and-drop interface.
- Wide application support.
- Real-time triggers.

**Cons**
- Not suitable for multi-step workflows or large data volumes.
- Connects only two systems at a time.
- No deep data mapping.
- Pricing escalates with usage volume.

### Method 3. Dedicated Data Integration Platforms

Unlike point-to-point automation tools, dedicated [data integration platforms (iPaaS)](https://skyvia.com/blog/what-is-ipaas/) provide a more advanced and scalable approach. Their capabilities go well beyond basic connectivity or simple rule-based automations. With features like scheduling, multi-object mapping, and support for custom logic, these platforms offer full control over how and when data moves between systems. The growing integration needs of modern businesses make [iPaaS solutions](https://skyvia.com/blog/best-ipaas-solutions/) more in demand than ever.
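One thing that separates iPaaS platforms from simple trigger-action tools is batch operations such as Upsert: matching incoming records to existing target rows and updating or inserting as needed. Here is a toy model of that behavior, not any platform's actual engine; the `external_id` match key is a hypothetical field chosen for illustration:

```python
# Toy model of an iPaaS-style Upsert: match incoming records to target
# rows by a key; update the matches in place, insert the rest.
# The "external_id" match field is a hypothetical illustration.

def upsert(target, incoming, key="external_id"):
    index = {row[key]: row for row in target}
    for rec in incoming:
        if rec[key] in index:
            index[rec[key]].update(rec)  # existing row: update its fields
        else:
            target.append(rec)           # new row: insert
    return target
```

Real platforms layer mapping, transformations, and scheduling on top of this core idea, which is why they handle high-volume syncs that per-event tools struggle with.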
[Popular platforms](https://skyvia.com/blog/best-ipaas-solutions/) include SnapLogic, Workato, Boomi, MuleSoft, Skyvia, and others. While they differ in data handling, automation features, and pricing models, most of them share a common core set of capabilities:

- A library of pre-built connectors to various data sources.
- An intuitive, no-code UI.
- Tools for building, scheduling, and monitoring integration workflows.
- Support for data mapping and transformations.
- Cloud-based scalability.

**Best for**
- Mid-sized to large businesses with evolving integration needs.
- Non-technical users who need powerful capabilities without relying on engineering.
- High-volume data exchange across sales and finance systems.

**Why Use Skyvia for Pipedrive & QuickBooks Integration?**

[Skyvia](https://skyvia.com/) is a comprehensive data integration platform built to automate a variety of data integration tasks, including ETL, ELT, reverse ETL, data migration, and both one-way and bi-directional synchronization. Alongside the core [iPaaS features](https://skyvia.com/blog/what-is-ipaas/) outlined above, Skyvia has strengths that set it apart from the competition:

- **Support for multiple data sources:** beyond straightforward Pipedrive-QuickBooks integration, Skyvia lets you weave these tools into broader workflows involving CRMs, marketing platforms, databases, and cloud storage services.
- **Data backup and recovery:** the platform offers automated daily backups and easy data restoration options, safeguarding against data loss.
- **Customizable workflows:** with wizard-based and visual design tools, Skyvia provides the flexibility to create complex multi-step workflows aligned with specific business scenarios.
- **Scalability:** from small start-ups to large enterprises, the platform scales with your business, efficiently handling growing amounts of data.
Last but not least, Skyvia supports integrations with both [QuickBooks Online](https://skyvia.com/connectors/quickbooks) and [QuickBooks Desktop](https://skyvia.com/connectors/quickbooksdesktop), providing dedicated connectors for each. In the next section, we'll show how to synchronize data from the QuickBooks SalesReceipt object to Pipedrive Deals in just a few clicks.

**Step-by-step Guide**
1. [Log in](https://app.skyvia.com/) to your Skyvia account.
2. Create connections to both [QuickBooks](https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html) and [Pipedrive](https://docs.skyvia.com/connectors/cloud-sources/pipedrive_connections.html) as described in the documentation. In this example, we're connecting to QuickBooks Online, a SaaS-based solution. Note: connecting to QuickBooks Desktop requires installing the [Skyvia Agent](https://docs.skyvia.com/agents.html) on your local machine.
3. In the top menu, click Create New and select Import. Set the source type to Database or Cloud App.
4. Select your QuickBooks connection as the Source and your Pipedrive connection as the Target.
5. Click Add new to create an import task.
6. In the Task Editor, configure your Source settings: choose the object you want to import data from (e.g., SalesReceipt) and apply filters if needed.
7. In the Target settings, select the destination object in Pipedrive (e.g., Deals) and choose the desired operation (e.g., Insert or Upsert).
8. On the Mapping Definition page, map source columns to target fields. Fields marked with an asterisk are required to create a valid task. Note: for this integration, the Title field is required on the Pipedrive side. It identifies the deal with a short description or label. When importing from QuickBooks SalesReceipt, map Title to any identifying field that gives the deal context, such as DocNumber or CustomerRefName.
9. Save the task and run the integration. Click the Schedule tab to set up automatic execution at your preferred intervals.
10. Monitor integration progress on the Monitor or Logs tabs.

**Pros**
- Advanced automation options with support for multi-step workflows, conditional logic, triggers, and scheduling.
- Scalability to handle growing data volumes without performance degradation.
- Bi-directional synchronization.
- Extensive mapping & transformation options.
- Multi-system integration in a unified data workflow.
- No-code/low-code setup.

**Cons**
- Learning curve when configuring logic-based flows and mappings.
- Can be overhead for simple one-to-one syncs.
- Pricing can be too high for companies with limited budgets.

## Best Practices for a Smooth Pipedrive QuickBooks Integration

- **Define your integration goals.** If you decide to integrate Pipedrive with QuickBooks, you probably already know what data should be synced and under what conditions. Clearly define the objects for integration, such as invoices, contacts, or both, and the triggers that should launch the workflow.
- **Clean your data before integrating.** Integration is a step toward optimization, and there's no point in optimizing messy data. Before launching the sync, make sure the data in both systems is accurate and standardized: remove duplicates, fill in missing key fields, and review naming conventions. Remember: garbage in, garbage out.
- **Start simple, then scale.** Begin with one essential use case before layering in more complexity. This reduces risk and makes troubleshooting much easier.
- **Understand data mapping requirements.** Mapping holds the key to the success of the whole operation. It ensures accurate data transfer, and poor mapping can stop your integration in its tracks. Pay attention to field compatibility, data types, and required fields on both sides.
- **Test thoroughly.** Even if you're not a first-timer, a test run is always worth it. It helps you catch potential issues like missing fields or currency mismatches, things that can derail your workflow. At worst, it gives you peace of mind; at best, it saves your integration from failing unexpectedly.
- **Monitor your integration regularly.** Keep an eye on sync logs, error messages, and performance over time. Most integration platforms offer logs or alerts; use them to proactively catch and fix issues.
- **Involve both sales and accounting teams.** This integration exists to support both sales and accounting, but their priorities differ. Sales might care about deal stages and contacts, while accounting focuses on billing details and tax codes. Make sure both teams have input when building the workflow.
- **Plan for QuickBooks Online vs. Desktop differences.** As mentioned earlier, some integration tools only support QuickBooks Online, while others (like Skyvia) support both. Make sure your chosen solution matches the version of QuickBooks you use; the data structures and connection methods are different.
- **Document your integration setup.** Keep a clear record of how the integration is configured: triggers, field mappings, error-handling rules, and access credentials. This is a lifesaver when updating the flow or handing it off to someone else.
- **Review and update as processes evolve.** As your business grows, so do the workflows and data you rely on. New fields may fall through the cracks if not included in the workflow. Review your integration regularly to ensure it keeps up with your current processes, not last quarter's setup.

## Conclusion

If a minus times a minus equals a plus, then combining two pluses doubles the benefit. That's exactly what you get with Pipedrive-QuickBooks integration: a classic CRM-accounting sync that helps eliminate mismatched records, invoice discrepancies, and the delays caused by manual data entry. In this article, we explored three integration methods:

- Connector applications for basic sync scenarios.
- Point-to-point automation tools for simple, event-driven tasks.
- Integration platforms for handling large data volumes, complex mappings, and multi-entity workflows.

Choose the method that best fits your business size, scalability needs, and team's technical comfort level. And whichever path you take, remember that starting with Skyvia is both effortless and cost-effective.

## F.A.Q. for Pipedrive and QuickBooks Integration

**What are the main benefits of integrating Pipedrive and QuickBooks?**
It eliminates manual data entry, reduces errors, speeds up invoicing, improves cash flow visibility, and aligns sales and accounting teams for smoother operations.

**Can I sync custom fields from Pipedrive to QuickBooks using an integration tool?**
Yes, many integration tools like Skyvia support custom field mapping, allowing you to sync non-standard fields based on your specific business needs.

**How often can data be synced between Pipedrive and QuickBooks?**
It depends on the tool. Some support real-time sync, while others offer scheduled syncs at custom intervals: hourly, daily, or on demand.

**Is it possible to integrate Pipedrive with QuickBooks Desktop, or only QuickBooks Online?**
Yes, some tools like Skyvia support both QuickBooks Online and Desktop. Just note that QuickBooks Desktop integration may require installing a local agent.
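The Title mapping described in the Skyvia walkthrough above (deriving the required Pipedrive Title from identifying SalesReceipt fields such as DocNumber or the customer reference) can be sketched as a plain function. The fallback order is an assumption for illustration, and the dict shape only loosely mirrors QuickBooks data:

```python
# Hedged sketch of the Title mapping described above: build a Pipedrive
# deal Title from identifying QuickBooks SalesReceipt fields.
# The fallback order and the dict shape are illustrative assumptions.

def sales_receipt_title(receipt):
    doc = receipt.get("DocNumber")
    customer = (receipt.get("CustomerRef") or {}).get("name")
    parts = [p for p in (doc, customer) if p]
    return " / ".join(parts) if parts else "Untitled receipt"
```

In Skyvia itself this is expressed declaratively on the Mapping Definition page rather than in code, but thinking it through as a function makes the required/fallback logic explicit.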
# Pipedrive Integration with Slack: Key Methods & Benefits

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) | May 26, 2025 | [Data Integration](https://skyvia.com/blog/category/data-integration/)
It happened again. A deal slipped through the cracks just because no one saw the update in time. Sound familiar? That's exactly the kind of mess a Pipedrive Slack integration can fix.

Pipedrive tracks the deals. Slack moves the conversation. But when they're disconnected, things fall apart: missed updates, delayed follow-ups, and lost revenue. This guide shows you how to bring them together. I'll walk you through native tools, point-and-click connectors like Zapier, and powerful platforms like Skyvia. Whether you're a tech pro or not, you'll be up and running in no time. Let's fix the cracks once and for all.

Table of Contents
- Why Pipedrive Slack Integration is a Game-Changer for Sales Teams
- 4 Smart Ways to Set Up Pipedrive Slack Integration (No Matter Your Skill Level)
  - Method 1: Native Integration with Dealbot
  - Method 2: Point-to-Point Connectors
  - Method 3: Custom API Integration
  - Method 4: Third-Party Integration Platforms
- Real-Life Use Cases of Pipedrive Slack Integration That Save Time Daily
- Troubleshooting Your Pipedrive Slack Setup (So You Don't Pull Your Hair Out)
- Final Thoughts: Streamline Sales, One Slack Ping at a Time

## Why Pipedrive Slack Integration is a Game-Changer for Sales Teams

Let's face it: your sales team doesn't live inside Pipedrive. They live in Slack. That's why connecting the two isn't just convenient, it's transformational. Here's how:

**Real-Time Sales Alerts = No More "Oops, I Missed That Deal"**

Stay on top of every deal update with instant Slack notifications, no need to refresh Pipedrive every five minutes.
Example: Sarah, a sales manager, gets a Slack ping: "New deal created by Jason: ACME Corp, $12K." She drops a quick emoji to celebrate, then pings Jason for context. No one had to send a manual update.

**Less Context Switching = More Mental Bandwidth for Selling**

Your reps shouldn't have to jump between tabs all day. Integration keeps the sales context inside the tool they're already using: Slack.

Example: Tom's on a Zoom call and sees a Slack message: "Deal moved to Negotiation - Global Freight Solutions." He replies right there to coordinate next steps with the team, no CRM login needed.

**Smoother Team Collaboration = Better Deal Outcomes**

Sales isn't a solo sport. When updates land in shared Slack channels, everyone stays in sync.

Example: When a deal enters the "Contract Sent" stage, it auto-posts to the #sales-legal channel. The legal team jumps in, reviews the doc, and responds in minutes, not hours.

**Faster Response Times = Shorter Sales Cycles**

Speed matters. Integration lets your team act on deal changes the moment they happen.

Example: A Slack alert says: "Follow-up task overdue: Call back Janet @ TechNova." Jake sees it, calls her right away, and reopens the deal before the competition even notices.

**Data Visibility = Smarter Decisions, Less Guesswork**

When everyone can see key updates in Slack, you reduce blind spots and make better calls faster.

Example: Every Friday, a Slack summary posts: "This Week's Closed Deals: 5 | Total Value: $58K." Managers use this to prep for Monday's pipeline review, no extra reporting needed.

**Automated Workflows = Less Manual Work, More Selling Time**

Let automation handle the grunt work so your team can stay focused on closing.
Example: As soon as a new lead enters Pipedrive, a Slack message assigns it to a rep, links to the deal, and tags the rep with next steps, all without touching a keyboard.

Ready to make this happen? Let's explore four easy ways to integrate Pipedrive with Slack, from plug-and-play to pro-level.

## 4 Smart Ways to Set Up Pipedrive Slack Integration (No Matter Your Skill Level)

Not all teams are built the same, and neither are their tech skills. Some want a plug-and-play setup. Others love fine-tuned control. Good news? There's a Pipedrive Slack integration method for everyone. Whether you're a no-code newbie, a Zapier fan, or someone who eats Python scripts for breakfast, we've got you covered. Let's break it down.

**Pipedrive Slack Integration Methods at a Glance**

| Method Group | Method | Best For | Setup Time | Skill Level | Customizability |
| --- | --- | --- | --- | --- | --- |
| Native | Dealbot | Instant alerts | ⭐ | Beginner | 🔹 Low: limited to built-in triggers and formats |
| Point-to-Point | Zapier / Make | Custom triggers without code | ⭐⭐ | Intermediate | 🔸 Medium: flexible logic, multi-step, filter support |
| Custom API | API | Full control | ⭐⭐⭐ | Developer | 🔺 High: total control over data, format, timing |
| Third-Party | Skyvia | Complex, scalable workflows | ⭐⭐ | Intermediate/Advanced | 🔸 Medium: powerful, GUI-based, supports transformations |

### Method 1: Native Integration with Dealbot

**Best for:** Busy teams who want real-time deal alerts in Slack without touching code.

**How it works:** Dealbot is Pipedrive's built-in integration for Slack. Once you connect your Slack workspace, you get updates when deals are created, moved, won, or lost. You can even ask Slack to fetch deal info for you.

**Step-by-step Guide**
1. Go to the Pipedrive Marketplace and find "Dealbot for Slack."
2. Click Dealbot for Slack.
3. On the next page, click Authorize.
4. Select the users who will be included in the integration and receive notifications, then click Next.
5. On the next page, click Allow and Install.
6. Pick which Slack channel gets updates and click Allow.
7. Choose your trigger events (e.g., new deals, stage changes). At this point, you should see the automations listed under Slack.
8. Test your integration by creating a new deal in Pipedrive and confirming that the Deals added automation fires. Adjust when necessary.

**Pros and Cons**

Easy and fast it may be, but there are trade-offs with this method:

Pros:
- Super quick to set up
- Minimal learning curve
- Works right out of the box

Cons:
- Limited customization
- Can only post to public Slack channels (not DMs or private ones)
- Only supports predefined triggers

### Method 2: Point-to-Point Connectors

**Best for:** Teams that want more control without writing code.

**How it works:** Point-to-point tools like [Zapier](https://zapier.com/) or [Make](https://www.make.com/) sit in the middle. You choose a trigger (like "new deal in Pipedrive") and an action (like "send message to Slack").

**Example (Using Zapier)**

You can use Zapier to build Zaps, or automations, that send a message to Slack depending on the deal status in Pipedrive. Let's try this example:

- Trigger: Deal closed in Pipedrive
- Action: Send a Slack message to #deals-workspace with deal details

**Pros and Cons**

Point-to-point tools like Zapier are more flexible than the native integration. But again, there's a good and a bad side:

Pros:
- Tons of flexibility
- Works with private channels and DMs
- Can chain multiple steps (e.g., update a Google Sheet, then notify Slack)

Cons:
- Slight learning curve
- May cost money depending on volume
- Can get messy if you build too many Zaps

### Method 3: Custom API Integration

**Best for:** Devs and tech teams who want full control and aren't afraid to get hands-on.

**How it works:** Pipedrive and Slack both have public APIs.
With a bit of scripting, you can build custom workflows, fine-tune what data is sent, and decide exactly when and how notifications are delivered.

Example use case: notify a sales channel when a deal is marked "won," with a celebratory message.

**Sample Python snippet**

Below is a snippet showing how to send a message to Slack with Python:

```python
import requests

# Pipedrive deal info (replace with your actual deal data)
deal_title = "MegaCorp Upgrade"
value = "$24,000"

# Slack incoming webhook URL (replace with your own)
slack_webhook_url = "https://hooks.slack.com/services/your/webhook/url"

message = {
    "text": f"🎉 Deal Won! {deal_title} worth {value} just closed!"
}

# Send to Slack
requests.post(slack_webhook_url, json=message)
```

**Pros and Cons**

If you have a knack for scripting and find this fun, this could be a great choice. But note the following:

Pros:
- Total control
- Custom logic, filters, formatting
- Works with any Slack workspace or Pipedrive event

Cons:
- Requires dev time and testing
- You must handle error logging, auth, etc.
- Not ideal for non-tech users

### Method 4: Third-Party Integration Platforms

**Best for:** Teams with multiple tools to sync, not just Pipedrive and Slack.

**How it works:** Platforms like [Skyvia](https://skyvia.com/), Boomi, or Tray.io offer powerful cloud-based integrations. Think of them like Zapier, but with enterprise-level features, scheduling, and deeper data handling. Skyvia, for example, lets you connect Pipedrive with Slack and other tools (like CRMs, databases, or marketing apps). You can schedule data syncs, apply transformations, and build full workflows. It's a complete data platform for most requirements.

**Step-by-step Guide**

Let's use Skyvia Automation in our example. You need two connectors: one for Pipedrive and another for Slack. The Skyvia Automation will then use these connectors.
**How to Create Skyvia Connections**

You only need a few steps to create a Skyvia connection:

1. Click + Create New in the upper left of the Skyvia workspace and select Connection.
2. On the new page, search for the connector you need by typing it into the search box (e.g., Pipedrive or Slack).
3. Click the name of the connector in the search results. The connection configuration page will open.
4. Configure the connection, test it, and save.

Now, follow these steps to create the Pipedrive and Slack connections:

1. Create a Skyvia connection for Pipedrive following the steps above. To configure it, sign in with your active Pipedrive account to get the access token.
2. Create a Skyvia connection for Slack. Note that one Slack connection serves one workspace; this will be the destination of the message you send. Sign in with Slack to get the security token.

**How to Create a Skyvia Automation**

A Skyvia automation needs a trigger and an action. In this sample, you create a trigger that gets won deals from Pipedrive every 30 minutes. You can change the polling schedule as needed; the fastest interval is 5 minutes. To do that, follow the steps below:

A. Create the Automation
1. Click + Create New and select Automation. The Trigger and Stop components should be available in the new automation canvas.

B. Configure the Trigger
1. Click the trigger and configure it to get won deals from Pipedrive. Change the name of the trigger (e.g., Pipedrive Deal Won).
2. Choose the trigger type you need. In our example, we choose Connection. This allows Skyvia to query the Deals table in Pipedrive. Then select the Pipedrive connection you created earlier.
3. Change the Trigger to New/Updated Record.
4. Select the Deals table.
5. Choose the columns you need. In this example, we choose Status and Title; we'll need the Title column for the Slack message later.
6. Change the Poll Interval to 30 minutes.
7. Change the Trigger Condition to Expression. You can type the condition or click the button to launch the Expression Editor. The condition should be Status = 'won'.

C. Add and Configure the Action
1. Drag the Action component and place it between the Trigger and Stop components.
2. Click the Action component and change its Name.
3. Choose the Slack connection you made earlier from the dropdown list.
4. Select the SendMessage action from the Actions dropdown list.
5. Change the Text parameter to the message you want to send. Click the pencil icon beside the Text parameter to open the Mapping Editor. The expression should be concat('Deal Won: ', trigger.Title), which combines the text "Deal Won: " with the title of the deal whose status changed to Won. The trigger.Title value is available because we included the Title column from the Pipedrive Deals table.
6. Click Apply.

D. Test and Enable the Automation
1. Save your automation and switch to Test Mode to test it.
2. Enter a deal in Pipedrive and change the Status to Won. Wait for the Slack message. Reconfigure if you didn't receive it.
3. Once tested and working, click < Overview. Your automation is still Disabled; you can see this in the upper right corner of the page. Click it and select Enabled.

**Pros and Cons**

Flexibility meets ease of use when you choose third-party tools for Pipedrive and Slack integration.
Check out the pros and cons below and see if this is for you:

Pros:
- Scales with growing business needs
- No need to write code
- Can connect multiple systems in one flow

Cons:
- May require a subscription
- Setup can take a bit longer than Dealbot
- More features mean more complexity

## Real-Life Use Cases of Pipedrive Slack Integration That Save Time Daily

It's one thing to know you can integrate Pipedrive and Slack. But it's another to see how it actually makes your team faster, sharper, and more in sync. These use cases show exactly how the integration plays out in daily sales life, where every second counts.

**Sales Pipeline Movement Alerts**

What it does: Get automatic Slack alerts when a deal moves to a new stage in Pipedrive.

Real-life scenario: Imagine Sofia, a busy sales manager, juggling 10 reps. As soon as a deal moves to "Proposal Sent," she gets a Slack ping in #sales-pipeline. No need to open Pipedrive or chase updates. She instantly reacts and drops a motivational emoji, or jumps into the thread if something looks off. No emails. No hunting.

**Activity & Task Management**

What it does: Get notified in Slack about scheduled tasks, overdue follow-ups, and meetings logged in Pipedrive.

Real-life scenario: Ben keeps forgetting to follow up with warm leads. Now, every time he logs a call or schedules a meeting in Pipedrive, it shows up in his personal Slack channel. He uses these nudges to plan his day and keep momentum going. It's like having a personal assistant that never sleeps.

**Lead Management & Assignment**

What it does: New leads or deals get automatically routed to the right person, with a Slack alert tagging the rep.

Real-life scenario: The marketing team captures 50 new leads from a webinar. As they land in Pipedrive, Slack alerts tag specific reps based on region. Amy sees three deals tagged to her in #lead-assignments, replies with 👀, and calls the first one within minutes. No bottlenecks. No confusion.
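A digest like the Friday summary quoted earlier ("This Week's Closed Deals: 5 | Total Value: $58K") takes only a few lines of code on top of exported deal data. A minimal sketch, assuming each deal is a dict with hypothetical `status` and `value` fields:

```python
# Sketch of a weekly digest message like the one quoted above.
# The deal dicts and their "status"/"value" fields are hypothetical;
# in practice you'd feed in deals exported from Pipedrive.

def weekly_summary(deals):
    won = [d for d in deals if d.get("status") == "won"]
    total = sum(d.get("value", 0) for d in won)
    return f"This Week's Closed Deals: {len(won)} | Total Value: ${total:,.0f}"
```

Posting the resulting string to a channel is then the same webhook call shown in the Method 3 snippet earlier.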
Reporting & Summaries (Advanced Use Case)

What it does: Uses tools like Skyvia to send daily or weekly Slack summaries with deal stats, progress charts, or missed opportunities.

Real-life scenario: Every Monday at 9 AM, the founder of a 12-person startup gets a clean Slack summary showing deals opened, won, and dropped in the past week. It gives her instant visibility without diving into reports. She forwards the message to investors with one tap. Boom, done.

Team Collaboration Triggers

What it does: Creates Slack conversations instantly when key Pipedrive events happen, such as big deals, customer issues, or lost leads.

Real-life scenario: When a $30K deal is marked as “Lost,” Slack automatically creates a thread in #deal-reviews with the deal details and tags the account exec and team lead. They hash out what went wrong right there, while the details are still fresh.

These use cases may seem simple, but they stack up fast. Over a month, they can save hours of manual effort and prevent countless dropped balls. It’s like giving your team a sixth sense, where everyone stays informed without even trying.

Troubleshooting Your Pipedrive Slack Setup (So You Don’t Pull Your Hair Out)

Let’s be honest: integrations should make life easier. But sometimes, setting them up feels like wrestling with a printer from 2005. If your Pipedrive-Slack setup isn’t working right, don’t panic. Here are the most common issues and how to fix them fast, no hair loss required.

Problem: Slack Notifications Aren’t Coming Through

What’s probably going on: The Slack app might not have the right permissions, or the automation trigger isn’t firing correctly.

Quick fix:
- Double-check the Slack workspace connection inside Pipedrive or your third-party tool (like Zapier or Skyvia).
- Make sure the app is installed on the correct workspace. Sounds basic, but it’s easy to miss.
- Confirm that your trigger condition actually matches the event (e.g., “Deal moved to any stage” vs. “Deal moved to Proposal Sent”).

Pro tip: Create a test deal and move it around to see if Slack reacts. That tells you if the automation’s alive.

Problem: Messages Are Going to the Wrong Slack Channel

What’s probably going on: You picked the wrong channel during setup or edited it mid-automation.

Quick fix:
- Go back into your integration settings and check which channel is mapped.
- If you’re using Zapier, reselect the channel manually; it doesn’t always update automatically.

Pro tip: Use private test channels (like #automation-test) before sending alerts to the whole sales team. Saves face.

Problem: Integration Suddenly Stopped Working

What’s probably going on: A token expired, user permissions changed, or someone disconnected the app.

Quick fix:
- Re-authenticate your Pipedrive and Slack connections. Most tools will prompt you.
- Make sure the person who set up the integration still has access and hasn’t been removed from either platform.
- Check your app’s dashboard (Zapier, Skyvia, or Make) for error logs.

Pro tip: Some tools auto-disable your automation if it fails a few times. You might just need to flip the switch back on.

Problem: You’re Getting Spammed with Notifications

What’s probably going on: You set your triggers too broadly, or too many automations overlap.

Quick fix:
- Review each automation rule. Are you firing alerts for every stage move or every field change? That’s overkill.
- Add filters or delay steps, e.g., only alert if the deal value is over $5,000.
- Use digest-style summaries instead of individual alerts (available in tools like Skyvia).

Pro tip: Less is more. Don’t make your team tune out just because Slack’s yelling every 10 seconds.
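The “add filters” fix is easy to picture in code. Here is a minimal sketch of the value filter with the $5,000 threshold from the example above; the field names are hypothetical, and real tools express the same condition through their filter UI rather than a script:

```python
def should_alert(deal: dict, min_value: float = 5000) -> bool:
    """Alert only on won deals above the value threshold (hypothetical fields)."""
    return deal.get("status") == "won" and deal.get("value", 0) > min_value

deals = [
    {"title": "Starter plan", "status": "won", "value": 900},
    {"title": "Enterprise deal", "status": "won", "value": 30000},
    {"title": "Stalled lead", "status": "open", "value": 12000},
]
to_notify = [d["title"] for d in deals if should_alert(d)]
# Only the $30,000 won deal passes the filter.
```

The same idea extends to delay steps and digests: collect everything that passes the filter, then send one summary message instead of one ping per event.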
Problem: Your Fields Aren’t Showing in Slack Messages

What’s probably going on: You forgot to map fields correctly or used the wrong placeholders.

Quick fix:
- Go into your automation tool and check the message template.
- Make sure you’re using the exact field names from Pipedrive.
- Preview the message to see what’s missing before saving.

Pro tip: If you renamed fields in Pipedrive recently, update the mappings. Otherwise, Slack messages will show blank or error values.

No integration is totally plug-and-play. But once you squash these bugs, the system runs smoothly. Think of it like tuning a bike: a little setup now saves a ton of effort later.

Final Thoughts: Streamline Sales, One Slack Ping at a Time

You don’t need to be a tech wizard to set up a Pipedrive Slack integration that actually helps instead of adding noise. With just a few smart automations, your sales team gets the info they need, when they need it, without digging through inboxes or tabs. The result? Faster follow-ups. Fewer dropped deals. More high-fives.

You’ve seen the benefits, the real-world use cases, and the common roadblocks with their fixes. You’re not starting from scratch anymore; you’ve got a blueprint. Now the next move is yours. Try one small integration today, maybe just a new-deal alert. See how it feels, then build from there. Before you know it, your team’s working smoother, closing faster, and wondering how they ever managed without it.

Want help picking the right tool for your use case? Jump to the comparison table or hit us up with questions. You’ve got options, and backup. Get started with your first Pipedrive Slack integration today using Skyvia: register [here](https://app.skyvia.com/register) for free.

F.A.Q. for Pipedrive Slack integration

How long does it take to set up?
As little as 5 minutes using Dealbot. More advanced setups can take 15–60 minutes.

Is it secure?
Yes.
All data passes through encrypted APIs or secure third-party tools.

Can I control what triggers Slack messages?
Absolutely. Customize which Pipedrive events trigger alerts, from new deals to won/lost status.

Can I notify individuals, not just channels?
Yes. Use direct messages or tag specific people in Slack from some integrations.

Can I link Pipedrive to tools beyond Slack?
Yes. Skyvia, Zapier, and Make allow you to bridge Pipedrive with dozens of apps.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
[Integration](https://skyvia.com/blog/category/integration/)

Point-to-Point Integration Explained: Key Pros and Cons

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | July 5, 2024

What is Point-to-Point Integration?

Point-to-point (P2P) integration is a straightforward way to connect different systems and applications so they can exchange data and work together. If you’ve ever wondered how various software applications talk to each other, this is one of the concepts making it happen.

In this method, each application is directly linked to the others it needs to communicate with. That ensures data is transferred quickly and accurately between systems, reducing potential errors. Such an approach eliminates data silos and allows businesses to work faster and more efficiently.
Let’s consider how P2P works, its pros and cons, and the business scenarios where this method might be the right choice compared to Hub-and-Spoke integration, Enterprise Service Bus (ESB), and [Integration Platform as a Service](https://skyvia.com/blog/what-is-ipaas/) (iPaaS).

Table of Contents
- How It Works
- Pros of Point-to-Point Integration
- Cons of Point-to-Point Integration
- Use Cases for Point-to-Point Integration
- Best Practices for Implementing Point-to-Point Integration
- Skyvia for Point-to-Point Integration
- Conclusion

How It Works

Point-to-point integration uses a few approaches to connect applications directly: custom coding, APIs, or a mix of both. Custom code offers high customization but needs the appropriate technical expertise. APIs ensure standardized interactions, and combining both allows for complex workflows and powerful integrations. The table below shows each method’s usage specifics.

| Method | Usage | Example Scenario |
| --- | --- | --- |
| Custom Code | Directly write scripts or programs to connect applications. | A developer writes a custom script to extract data from an e-commerce platform and load it into an inventory management system, handling specific data transformations. |
| APIs | Use application-specific APIs to exchange data between systems. | A CRM system uses the API of an email marketing tool to automatically add new contacts to a mailing list whenever a new lead is created. |
| Mix of Custom Code & APIs | Combine custom code with API calls to create flexible and powerful integrations. | A business uses custom code to extract and transform sales data from a CRM, then uses APIs to load the data into both an accounting system and a data warehouse for reporting. |

Pros of Point-to-Point Integration

Simplicity

Point-to-point integration is straightforward to understand. It involves directly connecting two applications without complex middleware or integration platforms.
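To make the custom-code-plus-APIs row concrete, here is a minimal point-to-point sketch in Python. The endpoint URL and field names are invented for illustration; a real integration would call each vendor’s documented API:

```python
# Hypothetical mailing-list endpoint -- a real script would POST here.
MAILER_URL = "https://mailer.example.com/api/contacts"

def to_contact(lead: dict) -> dict:
    """Custom transformation step: map a CRM lead to a mailing-list contact."""
    return {
        "email": lead["email"],
        "name": f"{lead['first_name']} {lead['last_name']}".strip(),
    }

def sync(leads: list) -> list:
    """Transform every new lead; a real run would POST each result to MAILER_URL."""
    return [to_contact(lead) for lead in leads]

contacts = sync([{"email": "amy@example.com", "first_name": "Amy", "last_name": "Lee"}])
```

The transformation function is the “custom code” part; the HTTP calls on either side are the “APIs” part. Everything is wired directly between the two systems, with no middleware in between, which is exactly what makes P2P both simple and hard to scale.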
This simplicity makes it accessible for small businesses and projects with limited technical resources.

Performance Efficiency

Point-to-point integration often results in faster data transfer and lower latency. This efficiency is critical for applications that require real-time data processing and quick response times.

Cost-Effectiveness

Setting up direct connections between a few applications is typically less expensive than implementing a full-scale integration platform. There are fewer components to purchase, configure, and maintain. Point-to-point integration is a budget-friendly option for businesses that need basic integrations without a hefty investment.

Minimal Initial Setup

Starting point-to-point integration requires only establishing a direct link between the applications and configuring the data exchange. That allows businesses to integrate applications rapidly, speeding up deployment and time-to-value.

Flexibility in Customization

With custom code and APIs, companies can tailor the integration to meet specific business requirements and handle unique data formats or workflows. This flexibility ensures the integration works exactly as needed, accommodating any special cases or business rules.

Cons of Point-to-Point Integration

Scalability Issues

As the number of integrated applications grows, the number of direct connections grows quadratically. This makes it difficult to scale, as each new application requires multiple new connections, leading to a complex web of integrations. Managing and maintaining these connections becomes increasingly challenging and resource-intensive.

Maintenance Challenges

Each direct connection in a point-to-point setup needs to be monitored, updated, and maintained individually. Maintenance can become a significant burden as the number of connections grows, requiring more time and effort from the IT team.
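The scalability issue is easy to quantify: fully connecting n applications pairwise requires n(n−1)/2 links, so the connection count grows quadratically with the number of systems. A quick sketch:

```python
def p2p_links(n: int) -> int:
    """Direct connections needed to link every pair of n applications."""
    return n * (n - 1) // 2

growth = {n: p2p_links(n) for n in (3, 5, 10, 20)}
# 3 apps need only 3 links; 20 apps already need 190.
```

This is why P2P stays manageable for a handful of systems but turns into a web of integrations as the stack grows, and why hub-and-spoke, ESB, and iPaaS architectures replace the pairwise links with a single connection per application.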
Lack of Centralized Control

Point-to-point integration lacks a central hub or platform to manage and monitor all integrations, making it hard to get a complete view of the company’s integration landscape. Troubleshooting issues and ensuring data integrity across all systems becomes more difficult.

Security Concerns

Each direct connection must be individually secured and managed, increasing the potential for security gaps. That means a greater risk of data breaches and compliance issues if any connection is compromised.

Limited Reusability

Integrations created with custom code and APIs are often tailored to specific applications and use cases. These custom integrations are not easily reusable for other projects or applications, so developing new integrations for each particular need takes additional time and effort.

Use Cases for Point-to-Point Integration

Whether you’re a small business looking for cost-effective integration, a company needing to bridge modern apps with legacy systems, or working in a specific industry like healthcare, retail, or finance, point-to-point can do some magic. Let’s see how.

Cost-Effective Integration

Imagine you’re running a small business with a few key applications: an invoicing system, a CRM, and an email marketing tool. Point-to-point integration is perfect here. It allows setting up a direct connection between the invoicing system and the CRM to automatically update customer records when invoices are paid, then linking the CRM to the email marketing tool to keep mailing lists updated. With custom code and APIs, this setup is easy, cost-effective, and up and running in no time.

Legacy Systems

A company relies on a trusty old ERP system that’s been around for years. Now, they’re adopting a new cloud-based CRM and need these systems to talk to each other. With point-to-point integration, they can directly connect the ERP and CRM using custom code and APIs.
This way, sales data from the CRM is automatically updated in the ERP, and inventory levels from the ERP are reflected in the CRM.

Specific Industry Examples

In a busy hospital, the patient management system keeps track of patient visits, treatments, and discharge information. The billing system, on the other hand, handles all the financial aspects. Point-to-point integration makes it possible to set up a direct connection between these two systems. A custom script ensures that every treatment recorded in the patient management system automatically updates the billing system.

In a retail store, sales happen fast, and workers need the inventory system to keep up. Point-to-point integration establishes a direct link between the POS and inventory management systems. Every sale made at the POS updates inventory in real time. This integration ensures the client never runs out of stock and always has accurate inventory levels, which helps with reordering and inventory planning.

Comparing Point-to-Point with Other Integration Methods

Choosing the proper integration method depends on the company’s specific needs, the scale of its integration projects, and its budget. Let’s compare the methods across the aspects that matter.

| Aspect | Point-to-Point Integration | Hub-and-Spoke Integration | Enterprise Service Bus (ESB) | Integration Platform as a Service (iPaaS) |
| --- | --- | --- | --- | --- |
| Architecture | Direct connections between applications. | Centralized hub that connects all applications (spokes). | Centralized bus that facilitates communication between applications. | Cloud-based platform for integrating multiple applications and services. |
| Simplicity | Simple to set up for a few applications. | Moderate complexity due to the centralized hub. | High complexity with extensive configuration and management. | User-friendly with minimal setup; cloud-based simplicity. |
| Scalability | Limited scalability; complexity grows quadratically with more applications. | More scalable than point-to-point, but can become a bottleneck at the hub. | Highly scalable, designed for large-scale integrations. | Highly scalable, designed for large-scale integrations. |
| Maintenance | High maintenance; each connection needs individual updates and management. | Centralized maintenance, but can become complex as the hub grows. | Centralized maintenance with potentially complex management. | Low maintenance; managed by the iPaaS provider. |
| Flexibility | High flexibility with custom code and APIs for specific use cases. | Moderate flexibility; customization is possible but demands more structure. | High flexibility with extensive configuration options. | High flexibility with pre-built connectors and customization options. |
| Cost | Cost-effective for small projects. | Moderate cost; a centralized hub can be expensive to maintain. | High cost due to complexity and infrastructure requirements. | Cost-effective with subscription-based pricing models. |
| Performance | High performance for simple, direct connections. | Moderate performance; can be impacted by hub efficiency. | High performance, designed for handling large data volumes and complex workflows. | High performance with cloud scalability and optimization. |
| Centralized Control | Lacks centralized control; no single view of integrations. | Centralized control through the hub. | Centralized control with detailed monitoring. | Centralized control with a single dashboard for managing integrations. |
| Security | Each connection must be individually secured; higher risk of security gaps. | Centralized security at the hub; easier to manage but can be a single point of failure. | Centralized security with robust, enterprise-grade features. | High security, managed by the iPaaS provider, with end-to-end encryption. |

Best Practices for Implementing Point-to-Point Integration

A good plan is the foundation of a successful integration. It helps you understand what you are connecting, why, and how.
Planning and Design

Start by identifying which systems need to be integrated and what data needs to flow between them. Make a clear map of the data flow and define the integration goals. As a business grows, its integration needs might change, so planning for scalability from the start can save a lot of headaches later. Use modular design principles: build the integration so it’s easy to add new connections or modify existing ones without starting from scratch.

Testing and Validation

Testing helps catch errors and issues before they go live. It ensures the integration works as expected and data flows correctly between systems. Set up a testing environment that mirrors the production setup as closely as possible, and run extensive tests, including edge cases. Validation ensures that the data being transferred is accurate and complete, maintaining its integrity across systems.

Monitoring and Maintenance

Monitoring helps identify issues early, before they impact the company’s operations, and regular maintenance keeps the integrations running smoothly. Set up monitoring tools to track the integrations’ performance. Use dashboards to get a quick overview and set up alerts for potential issues.

Skyvia for Point-to-Point Integration

[Skyvia](https://skyvia.com/data-integration) is a cloud-based universal data integration tool created for businesses of all sizes to manage, integrate, and automate their data flows. With its [user-friendly](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) interface and robust features, Skyvia makes complex integrations straightforward and accessible. Skyvia incorporates automation to streamline data integration processes, save time, and reduce manual intervention. It also offers attractive pay-as-you-go [pricing](https://skyvia.com/pricing) models, including a free plan.
Go [here](https://www.youtube.com/watch?v=qf-6nYllrIU) to see a successful data integration story with Skyvia.

Conclusion

Point-to-point integration provides the simplicity, efficiency, and customization businesses need to create robust and reliable integrations. By embracing these capabilities and following best practices, companies can ensure that systems remain connected and data flows seamlessly. Regardless of company size, point-to-point integration can help streamline operations, enhance data accuracy, and ultimately drive business success.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Data Loader](https://skyvia.com/blog/category/data-loader/)

AWS RDS Postgres Export to S3: 2 Fastest Methods

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) | March 5, 2025

When we talk about the cloud, it’s mostly about AWS, Azure, and GCP. Despite increasing competition among the “Big Three,” AWS maintains its lead, holding [31–32% of the global market share](https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/) as of Q4 2024. The provider currently offers over 240 fully featured services across various domains, including computing, storage, databases, big data, machine learning, and AI. Among these, cloud storage stands out as one of the most widely used solutions, particularly Amazon S3 (Simple Storage Service).
One common operation when working with this service is moving data from Postgres to S3, a practice driven by both business and technical needs. In this article, we’ll explore and compare two methods of exporting data from a PostgreSQL database running in Amazon RDS (Relational Database Service) to an S3 bucket, giving you efficient techniques to extract and save your data.

Note: This article assumes that you already have a PostgreSQL instance up and running in AWS RDS and can access the database with rds_superuser permissions using a GUI or CLI tool. You must also have your AWS CLI configured correctly.

Table of contents
- What is PostgreSQL in Amazon RDS?
- Amazon S3: What is it?
- Amazon RDS PostgreSQL to Amazon S3 Connection Techniques
- Method 1: Exporting data from AWS RDS Postgres to S3 Using the Manual Option
- Method 2: Exporting data from RDS Postgres to S3 Using the Automated Option
- Conclusion

What is PostgreSQL in Amazon RDS?

[Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql/) is a fully managed database service from AWS, created for those who prioritize using a database rather than managing it. With RDS, AWS handles all tasks related to database administration, including backups, patching, monitoring, and scaling. In fact, it gives you all the power of PostgreSQL, an open-source, feature-rich relational database, and multiplies it with additional cloud perks, including high availability, scalability, security, and seamless [integration](https://skyvia.com/blog/connect-salesforce-to-sql-server/) with other AWS services.

Amazon S3: What is it?

[Amazon S3](https://aws.amazon.com/s3/) is a go-to solution for storing anything and everything, from tiny text files to petabytes of data. It’s secure, scalable, and highly durable, with objects replicated across multiple Availability Zones. Think of it as your cloud-based hard drive, but way better.
With its practically bottomless storage capacity, pay-as-you-go pricing, and various data accessibility tiers, S3 covers everything a business could need in terms of backups, big data, machine learning, and even website hosting.

Amazon RDS PostgreSQL to Amazon S3 Connection Techniques

When working with [AWS S3 and RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/postgresql-s3-export.html), it’s often necessary to move data between these two services. This is especially common in scenarios such as:

- Data migration: moving data between systems or environments.
- Backups: creating secure copies of your data for recovery purposes.
- Analytics: exporting data for analysis using tools like Amazon Athena or Redshift.
- Archiving old records: storing infrequently accessed data cost-effectively.
- Machine learning: providing large datasets for training models.
- Data sharing: enabling collaboration with external teams or partners.

In the following sections, we’ll explore two methods for exporting data from RDS to S3:

- Manual export: performed directly from a PostgreSQL instance using the aws_s3 extension.
- Automated export: using Skyvia for a no-code, scheduled data [export](https://skyvia.com/blog/salesforce-quickbooks-integration/).

| Method | Best for | Automation | Cost | Complexity |
| --- | --- | --- | --- | --- |
| AWS S3 extension | AWS-native, simple exports: data lake integration; moving large datasets to S3 for archiving and backup. | Manual at its core; can be automated with tools like AWS Lambda and Amazon EventBridge. | Calculated on the pay-as-you-go model. | Medium |
| Skyvia | Regular updates and synchronization; data integration tasks that require automation and scheduling; complex integration scenarios that involve data relations and splitting. | Fully automated. | Freemium (free tier available); paid plans for advanced features starting from $79/mo. | Low |

Method 1: Exporting data from AWS RDS Postgres to S3 Using the Manual Option

Step 1: Enable the AWS S3 Extension

Before exporting data, we need to enable a PostgreSQL extension called aws_s3, which allows us to send data directly to S3.

Open your PostgreSQL database (you can use a tool like pgAdmin 4 or the command line) and run this command to install the extension:

```sql
CREATE EXTENSION aws_s3 CASCADE;
```

To confirm that it’s installed, check the list of installed extensions:

```sql
SELECT * FROM pg_extension;
```

This should show aws_s3 and aws_commons in the results.

Note: Some versions of RDS may not support this extension. To check, run this command using the AWS CLI:

```
aws rds describe-db-engine-versions --region us-east-1 --engine postgres --engine-version 14.4
```

Look for the “s3Export” string in the output: if you find it there, your RDS instance supports this feature. If it’s not included in the output, you may need to upgrade your PostgreSQL engine version.

Step 2: Give Permission to Write Data to S3

By default, your RDS database doesn’t have permission to send data to an S3 bucket. You need to create an [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) role that allows RDS to export data.

Go to the IAM section in the AWS console and create a new IAM policy that grants the s3:PutObject permission. Make sure to adjust the source bucket name accordingly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RdsDataExport",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::rds-data-export-demo/*"]
    }
  ]
}
```

Create a new role in IAM that the RDS instance can assume and attach this policy to the role. Then link the IAM role to your RDS instance:

- Open the RDS console and select the DB instance to which you will connect.
- In the Connectivity and Security settings, scroll to Manage IAM Roles.
- Select the newly created role and enable it with the s3Export feature.
With this done, your RDS instance will have permission to write [data](https://skyvia.com/blog/salesforce-to-salesforce-integration/) into the S3 bucket.

Step 3: Set Up the S3 Export Path

Now that permissions are set, we must tell PostgreSQL where to send the exported data in S3. This is done by defining a URI (Uniform Resource Identifier) that contains:

- Bucket Name: the S3 bucket where the file will be stored.
- Path: a prefix that defines the folder or naming pattern in S3.
- AWS Region: the region where your S3 bucket is located.

To create this URI, use the following SQL command:

```sql
SELECT aws_commons.create_s3_uri(
  'rds-data-export-demo',
  'export',
  'us-west-1'
) AS s3_export_uri;
```

This S3 connection will be provided to the aws_s3.query_export_to_s3 function in the next step, when we actually export data.

Step 4: Export Data to S3

The aws_s3.query_export_to_s3 function expects the following parameters to export the data to an S3 bucket:

- SQL statement: a string with the SQL statement selecting the data you want to export. For example, if you have a table called customers_export, you can export all its data with `SELECT * FROM customers_export`.
- S3 URI: the S3 connection that was created in the previous step.

Once all parameters are set, execute the function:

```sql
SELECT *
FROM aws_s3.query_export_to_s3(
  'SELECT * FROM customers_export',
  :'s3_export_uri'
);
```

If everything is set up correctly, the function will create a file in S3 with the contents of what was selected in the SQL statement. To view the file in S3, use the AWS CLI to list the files in the bucket, then download the one you need to the local machine:

```
aws s3 ls s3://rds-data-export-demo/
```

Step 5: Verify and Troubleshoot

If the export doesn’t work, here are some common things to check:

- Permissions issue: Ensure your IAM role has s3:PutObject access and is properly attached to the RDS instance.
- Region mismatch: Ensure the S3 bucket and RDS database are in the same AWS region.
- Check AWS logs: If the command fails, check the AWS RDS logs.

Manual method’s advantages

- Full control over data selection and formats: users define exactly which data is exported, how it’s formatted, and where it is stored in S3.
- No third-party services involved: everything is managed entirely within AWS, ensuring stronger security control via IAM roles and policies.

Manual method’s drawbacks

- Admin access: the aws_s3 extension requires administrative or root user privileges on the RDS database instance, which users might not have.
- IAM roles and permissions setup: exporting data to S3 requires correct IAM roles and policies, which some users may find challenging to configure.
- Automation requires technical skills: you cannot automate export tasks unless you know how to write cron jobs for scheduling exports. Also, the server running those jobs must always be up to execute scheduled tasks.

Method 2: Exporting data from RDS Postgres to S3 Using the Automated Option

The manual method described above offers clear advantages, such as complete control over exported data and secure access managed through IAM roles. However, the need for cloud configuration can make it overly complex for general users. In the following section, we’ll explore how Skyvia can facilitate this process, making your export to S3 a breeze.

[Skyvia](https://skyvia.com/) is a leading SaaS platform that offers powerful data integration capabilities. With its user-friendly interface and no-code approach, it’s recognized by [G2 Crowd](https://www.g2.com/products/skyvia/reviews) as one of the top [ETL tools](https://skyvia.com/blog/etl-tools/) for ease of use. Skyvia’s accessibility empowers even non-technical users to build robust data pipelines for moving data across various cloud platforms, including Amazon RDS and S3.
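Before walking through the Skyvia setup, it’s worth illustrating the manual method’s “automation requires technical skills” drawback. As a rough sketch (using the same illustrative bucket, folder, and query as above), a small Python helper could compose the export statement for a cron-driven script; the quoting is deliberately naive and assumes the inner query contains no single quotes:

```python
def build_export_sql(query: str, bucket: str, folder: str, region: str) -> str:
    """Compose one aws_s3.query_export_to_s3 call for a scheduled export.

    Naive quoting: assumes `query` contains no single quotes.
    """
    return (
        "SELECT * FROM aws_s3.query_export_to_s3("
        f"'{query}', "
        f"aws_commons.create_s3_uri('{bucket}', '{folder}', '{region}'));"
    )

sql = build_export_sql(
    "SELECT * FROM customers_export",
    "rds-data-export-demo", "export", "us-west-1",
)
# A cron job could run this statement via a driver such as psycopg2:
# with psycopg2.connect(...) as conn, conn.cursor() as cur:
#     cur.execute(sql)
```

Writing, securing, and babysitting a script like this is exactly the overhead that a managed tool removes.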
### Step 1: Sign up to Skyvia

Go to [Skyvia](https://app.skyvia.com/) and sign up. After signing up, you'll get a Default Workspace where you can create and manage objects like connections, agents, and integrations: a one-stop shop for everything that can be done within the Skyvia platform.

For this use case, we first need to establish connections to both the PostgreSQL database running on Amazon RDS and the S3 bucket. These connections will then be used to create an integration task that exports data from PostgreSQL to S3.

### Step 2: Create Connections

First, let's create a connection to our RDS instance. To establish a successful connection, you'll need the following details:

- **RDS database public endpoint**: the connection string for your database instance, available in the RDS management console under Connectivity and Security. Copy the endpoint and the port where PostgreSQL is running.
- **Database username**: the master username used to access the database.
- **Database password**: the master password for authentication.
- **Default database**: the specific database Skyvia will connect to for exporting data to S3.

Note: Make sure the Security Group allows inbound connections to the RDS instance, or the connection may fail.

Go to the Connections tab and click +Create New. In the search bar, type PostgreSQL and select the connector. Set Connection Mode to Direct and fill in the required fields:

- **Server**: enter the RDS database public endpoint.
- **Port**: specify the PostgreSQL instance port.
- **User ID**: enter the database username.
- **Password**: use the master password for authentication.
- **Database**: select the target database from the drop-down list.

Click Test Connection to verify the setup. If the test is successful, click Create Connection to finalize.

To connect Skyvia to your Amazon S3 bucket, you'll need an access key ID and secret key from the AWS Management Console.
If you don't have these credentials yet, follow [AWS's official guide](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) to create an access key.

In Skyvia, go to Connections and click +Create New. On the Select Connector page, choose Amazon S3. Enter the required details:

- **Access Key ID**: your AWS access key ID.
- **Secret Key**: your AWS secret key.
- **Region**: the AWS region where your S3 bucket is located.
- **Bucket Name**: the S3 bucket where data will be exported.

Click Test Connection to validate the setup. If successful, click Create Connection to finalize.

### Step 3: Create an Integration to Export Data

Now that we've set up the RDS and S3 connections, we'll use them to create a Skyvia integration that performs the AWS RDS Postgres export to S3.

In Skyvia, click +Create New and select Export. Set up the source and target:

- **Source Connection**: select the RDS Postgres connection.
- **Target Type**: choose CSV To storage service.
- **Target Connection**: select the S3 connection created earlier.
- **Folder**: enter a folder name where the exported files will be stored in S3.

Keep the other settings at default and proceed. As a next step, create a Task to define the actual data export process. On the right-hand pane, click Add new for Tasks. On the Task Editor page:

1. Set the Editor Mode to Simple.
2. In Target File Name, enter a name for the exported file.
3. For Compression Type, choose zip, gzip, or None based on your preference.
4. Within the specified Object, select the checkboxes next to the fields you want to export in the CSV file; unchecked fields will not be included.
5. If needed, apply Filters and Order By settings to refine the exported data.
6. Click Next Step to map the database fields to the exported file. For simplicity, we'll leave the field names unchanged and click Save.

Click Validate and Create to save the integration.
Once this is done, click Run to export data from Amazon RDS PostgreSQL to S3.

### Step 4: Run and Monitor

Click the Monitor tab to see the integration status. The initial state is Queued, which is updated once the execution is completed. To verify the export, navigate to the S3 bucket in the AWS Management Console and download the file to your local machine. Here, you can open the file in your preferred code editor and view the contents.

With its advanced automation options, Skyvia is an excellent choice for scheduling integrations on a regular basis. To automate export tasks, click Schedule and configure the timing. You can set it to run daily, on specific weekdays, or at intervals of a few hours or even minutes, achieving near real-time data integration.

Advantages of using Skyvia:

- **Ease of use**: with the Skyvia connector, even non-technical users can create data pipelines and export data from Amazon RDS to S3 buckets.
- **No-code approach**: Skyvia's intuitive GUI allows users to easily filter, sort, and select columns for export.
- **Scheduling and automation**: these options come built-in, enabling automated data exports from Amazon RDS to S3.
- **No installation required**: since Skyvia is cloud-based, everything works within the platform without third-party extensions.
- **Scalability**: easily handles growing data volumes and adapts to larger workloads as your business needs expand.

Disadvantages of using Skyvia:

- **Cost considerations**: while Skyvia offers affordable [pricing](https://skyvia.com/pricing), high-volume exports may require higher-tier plans, increasing costs.
- **Dependency on an external service**: adding an intermediate service, however reliable, increases the potential risk of failure. Since Skyvia is a third-party platform, you rely on its availability and uptime for scheduled exports.

## Conclusion

As we've seen, exporting data from Amazon RDS to S3 using the native aws_s3 extension offers flexibility and control but comes with its share of challenges.
The manual setup can be complex, especially when configuring permissions. Fortunately, there's a simpler way. The Skyvia cloud platform removes technical barriers to data integration. With its intuitive, no-code interface, advanced functionality, and affordable pricing, Skyvia is your reliable partner for data operations of any complexity.

## FAQ for AWS RDS Postgres to S3

**How does Amazon S3 facilitate data storage?**

Amazon S3 offers scalable, secure, and durable object storage. It allows users to store and retrieve any amount of data from anywhere, with features like versioning, access control, and lifecycle management to optimize storage and costs.

**What are the prerequisites for exporting data from AWS RDS Postgres to S3?**

You need an Amazon RDS PostgreSQL instance (version 11+), an S3 bucket, an IAM role with proper permissions (s3:PutObject), the aws_s3 extension enabled in PostgreSQL, and appropriate database user privileges.

**How can I verify if my PostgreSQL version supports S3 exports?**

Run this AWS CLI command:

```shell
aws rds describe-db-engine-versions --region your-region --engine postgres --engine-version your-version
```

If "s3Export" appears in the output, your version supports S3 exports.

**What best practices should I follow to ensure a successful data export process?**

- Set up IAM roles and permissions correctly
- Keep RDS and S3 in the same AWS region
- Validate the aws_s3 extension
- Test the connection before exporting
- Enable logging for troubleshooting
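The version check from the FAQ can also be automated. The sketch below parses the JSON that `aws rds describe-db-engine-versions` prints and looks for the s3Export feature flag; `supports_s3_export` is a hypothetical helper name, and the trimmed sample payload only mirrors the DBEngineVersions / SupportedFeatureNames shape of the real output.

```python
import json

def supports_s3_export(describe_output: str) -> bool:
    """Return True if any listed engine version advertises the s3Export feature."""
    data = json.loads(describe_output)
    return any(
        "s3Export" in version.get("SupportedFeatureNames", [])
        for version in data.get("DBEngineVersions", [])
    )

# Trimmed sample of what `aws rds describe-db-engine-versions ...` returns:
sample = json.dumps({
    "DBEngineVersions": [
        {"Engine": "postgres", "EngineVersion": "13.4",
         "SupportedFeatureNames": ["Lambda", "s3Import", "s3Export"]}
    ]
})
print(supports_s3_export(sample))  # True
```

In practice you would feed the function the CLI's stdout, e.g. via `subprocess.run(...).stdout`.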
[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/)

With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.

---

[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

# How to Connect Power BI to PostgreSQL

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) | November 16, 2022
Are you planning to use Power BI on a PostgreSQL data warehouse? Then part of your task is to connect Power BI to PostgreSQL.

PostgreSQL is the most loved and wanted database, [according to the 2022 StackOverflow Developer Survey](https://survey.stackoverflow.co/2022/#section-most-loved-dreaded-and-wanted-databases). So, developing your next data warehouse in PostgreSQL is a good choice. Afterward, you can build reports and dashboards in Power BI. But how do you connect Power BI to PostgreSQL? Let's dive in.

Table of contents

- About PostgreSQL
- About Power BI
- Benefits of PostgreSQL and Power BI Connection
- Connection via Power BI and PostgreSQL Configuration
- Connection via ODBC
- Connection via Skyvia

## About PostgreSQL

[PostgreSQL](https://skyvia.com/connectors/postgresql) is a powerful open-source object-relational database with a high reputation for reliability, performance, and robust features. That's why it's known as [the world's most advanced open-source relational database](https://www.postgresql.org/). It supports a large part of the SQL standard and offers the modern features of a relational database, including:

- more data types
- table inheritance
- multi-version concurrency control
- and more

The image below shows the PostgreSQL version, credentials, and database used in this article.

## About Power BI

[According to Microsoft](https://learn.microsoft.com/en-us/power-bi/fundamentals/power-bi-overview), [Power BI](https://skyvia.com/data-integration/powerbi) is "a collection of software services, apps, and connectors." This collection works together to visualize your data interactively.
It has three working parts:

- Power BI Desktop, where you design your interactive reports.
- Power BI Service, where you deploy your reports and share them with your stakeholders.
- Power BI Mobile, where you can take your reports wherever you go. It works on iOS, Android, and Windows devices.

PostgreSQL is one of the many data sources Power BI can connect to and work with. In short, Power BI helps you visualize your data in PostgreSQL. Then, view it on your favorite device. The Power BI Desktop version used in this article is shown below.

## Benefits of PostgreSQL and Power BI Connection

Visually interacting with your data to get business insights helps you make sound business decisions. So, what makes the combination of Power BI and PostgreSQL a good fit for this scenario?

First, PostgreSQL is a top-of-the-line database that is:

- fast and scalable
- easy to learn
- reliable and fault-tolerant
- free and open-source
- supported by a large community of experts
- packed with robust features

PostgreSQL is a good candidate for a large data warehouse deployed in the cloud or on-premises. Meanwhile, Power BI is a good data visualization tool that:

- integrates easily with PostgreSQL via the default connector or ODBC
- includes a free visualization and report designer (Power BI Desktop)
- publishes easily to the cloud through the Power BI Service
- can be extended through formulas and queries

Power BI is also [rated 4.4 in Gartner Peer Insights](https://www.gartner.com/reviews/market/analytics-business-intelligence-platforms/vendor/microsoft/product/microsoft-power-bi) under Analytics and Business Intelligence Platforms. In short, connecting Power BI to PostgreSQL is a good combination for data analysis.

## Connection via Power BI and PostgreSQL Configuration

It's easy to connect Power BI to PostgreSQL using the default connector. See the following steps.

### Step 1. Open Power BI and Click Get Data

Run Power BI Desktop on your Windows computer through the Start menu or Search.
Then, the Power BI splash screen will open. Click Get Data, and a new window will appear.

### Step 2. Click Database and Select PostgreSQL Database

First, click Database in the left pane. Then, select PostgreSQL database. Finally, click Connect.

### Step 3. Enter the Server and Database Name

First, get this information from your administrator:

- Server name or IP address
- Port number, if it's different from the default 5432
- Database name
- Username and password

Then, in Power BI, enter the server name or IP address in the Server box, along with the port number if it's not 5432, using the format server:port. Then, enter the database name. Finally, click OK. Enter the Username and Password if required.

If your PostgreSQL server has [SSL](https://www.cloudflare.com/learning/ssl/what-is-ssl/) disabled, a prompt will appear. Just click OK to continue. Check how it is done in steps 1 to 3 below.

## Connection via ODBC

Another easy way to connect Power BI to PostgreSQL is to use [Open Database Connectivity](https://en.wikipedia.org/wiki/Open_Database_Connectivity) (ODBC). You need the same server, port, database name, username, and password, but the steps are a bit different.

Before you continue, you need an ODBC driver for PostgreSQL. Your best choice is to download the [default ODBC driver for PostgreSQL](https://www.postgresql.org/ftp/odbc/versions/msi/) or [Devart's ODBC driver for PostgreSQL](https://www.devart.com/odbc/postgresql/download.html). In this article, we will use the default ODBC driver for PostgreSQL.

### Step 1. Create an ODBC Data Source Name or Connection String

You have two choices for the first step: an ODBC Data Source Name or a Connection String.

**Creating a Data Source Name (DSN)**

For this option, you need a PostgreSQL ODBC driver installed. Here are the steps:

1. In Windows, open ODBC Data Sources (64-bit).
2. Click the System DSN tab.
3. Click Add and select PostgreSQL ANSI(x64) or PostgreSQL Unicode(x64).
Which of these fits depends on your database; your administrator should know whether the database uses [Unicode](https://en.wikipedia.org/wiki/Unicode). You can also use the Devart ODBC Driver for PostgreSQL if that's the one you installed.

4. Enter your desired Data Source Name and connection details (server, port, database name, username, and password).
5. Click Test to test the connection.
6. If the test connection is successful, click Save.

Check how to do it in steps 1 to 6 below.

**Forming the Connection String**

This section covers the connection string for connecting to PostgreSQL via ODBC. The username and password are not included in the following connection strings; Power BI will prompt you for these credentials if required.

For ANSI:

```
Driver={PostgreSQL ANSI(x64)};Server=<server>;Port=<port>;Database=<database>;
```

And here's an example:

```
Driver={PostgreSQL ANSI(x64)};Server=ncc-1701-E;Port=5432;Database=BudgetExpense;
```

For Unicode:

```
Driver={PostgreSQL Unicode(x64)};Server=<server>;Port=<port>;Database=<database>;
```

And here's an example:

```
Driver={PostgreSQL Unicode(x64)};Server=DBSERVER1;Port=5432;Database=HumanResources;
```

### Step 2. Open Power BI and Click Get Data

This is the same as Step 1 of the default connector.

### Step 3. Click Other and Select ODBC

In the left pane, click Other. Then, select ODBC. Finally, click Connect.

### Step 4. Select the Data Source Name or Type the Connection String

This depends on what you did in Step 1. If you created a Data Source Name (DSN), select the DSN from the dropdown list. We used the BudgetExpense DSN as our example. Then, click OK. Here's how to do it in steps 1 to 4.

If you created a connection string, select (None) in the list of DSNs. Then, click Advanced Options, paste your connection string in the Connection String textbox, and click OK. Here's how to do it in steps 1 to 4 using a Connection String.

## Connection via Skyvia

Skyvia offers another method to connect the PostgreSQL database to Power BI.
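Before moving on: DSN-less connection strings like the ones shown earlier are easy to generate programmatically, which helps when you manage several databases. A minimal Python sketch follows; `pg_odbc_connection_string` is a hypothetical helper name, and the driver name must match the one actually installed on your machine.

```python
def pg_odbc_connection_string(server: str, port: int, database: str,
                              unicode: bool = True) -> str:
    """Build a DSN-less ODBC string for the PostgreSQL ANSI/Unicode driver."""
    driver = "PostgreSQL Unicode(x64)" if unicode else "PostgreSQL ANSI(x64)"
    # Braces around the driver name are part of ODBC connection-string syntax.
    return f"Driver={{{driver}}};Server={server};Port={port};Database={database};"

print(pg_odbc_connection_string("DBSERVER1", 5432, "HumanResources"))
# Driver={PostgreSQL Unicode(x64)};Server=DBSERVER1;Port=5432;Database=HumanResources;
```

The resulting string can be pasted into Power BI's Connection String textbox exactly as in Step 4 below.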
The main advantage of this method of integration between PostgreSQL and Power BI is an additional security layer.

[Skyvia](https://skyvia.com/) is a cloud platform providing solutions for various data-related tasks: [data integration](https://skyvia.com/data-integration/), cloud data backup, data management with SQL, CSV file import/export, creating OData services, etc. It's also perfect for business users, as you don't need specific technical skills or extra effort to build data integration.

Skyvia connects to the database via the [Agent application](https://docs.skyvia.com/agents.html), which acts as a security bridge between Skyvia and your local server. You can connect to your PostgreSQL database with the Agent, create an SQL endpoint, and use it as the source in Power BI with the help of the ODBC Driver for Skyvia Connect. Let's see what steps to perform to establish the connection between the PostgreSQL database and Power BI using Skyvia.

Important! First, you should [register](https://app.skyvia.com/register) in Skyvia.

### Step 1. Create Agent

1. Sign in to your Skyvia account and create a new Agent; follow the instructions on the page.
2. Download the Agent key file.
3. Install the Agent application on the local server where your PostgreSQL database is located.

Note! Find more details on [how to use Agent](https://docs.skyvia.com/agents.html#managing) in the documentation.

### Step 2. Create the Agent Connection

After creating and installing the Agent application on the server, [create the Agent connection](https://docs.skyvia.com/connections/) in Skyvia. In the connection editor, select the Agent connection mode, select the Agent from the drop-down list, and specify the required connection parameters: hostname or IP address, user ID, password, database name, and schema.

Note! You can also optionally specify additional connection parameters, such as protocol and timeout values.

### Step 3. Create SQL Endpoint

1. Download the [ODBC Driver for Skyvia Connect](https://docs.skyvia.com/connect/sql-endpoints/odbc-driver.html) installer.
2. Launch the ODBC Driver installer and follow the setup instructions.
3. [Register your SQL endpoint as an ODBC data source](https://docs.skyvia.com/connect/sql-endpoints/odbc-driver.html#registering-data-source).

### Step 4. Connect the Database to Power BI

In Power BI, select the ODBC data source type in the Get Data window, then select the data source added to the ODBC Driver in the previous step. Now you have a secure integration between Power BI and PostgreSQL! Once it's ready, Power BI can connect to it, and you can start visualizing your data.

[Why not try Skyvia today](https://app.skyvia.com/register) with your PostgreSQL database? You'll find it easy to use with its drag-and-drop designer, scheduler, and more.

## Conclusion

This article showed three ways of integrating data between Power BI and PostgreSQL: the native connector, ODBC, and Skyvia. All of them are easy to follow, and you can't go wrong with the steps outlined here. You can choose the native integration or an extra-secure integration through Skyvia. Moreover, besides integrating databases with Power BI, Skyvia lets you feed Power BI data from sources that are not supported directly by [integrating cloud sources with Power BI](https://skyvia.com/data-integration/powerbi).
If you want to discover what other integrations Skyvia can offer, [sign up for a demo!](https://skyvia.com/schedule-demo)

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/)

Software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

---

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# PostgreSQL vs MySQL: Which Database is Right for You in 2025?
By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) | September 30, 2022

Do you have a new data integration project that involves either PostgreSQL or MySQL? If your team has never touched one or both, you need to read this PostgreSQL vs MySQL comparison.

Your team of data professionals doesn't always get to choose its databases. Teams are sometimes handed one of these two platforms, or both, and need to adapt when the sources include them.

Table of contents

- PostgreSQL vs MySQL: Features Comparison
- PostgreSQL vs MySQL Performance
- PostgreSQL vs MySQL Scalability
- PostgreSQL vs MySQL Syntax Differences
- PostgreSQL vs MySQL Popularity

## PostgreSQL vs MySQL: Features Comparison

MySQL is known as the world's most popular open-source database, and it's owned by Oracle. Meanwhile, PostgreSQL is known as the world's most advanced open-source object-relational database, maintained by the PostgreSQL Global Development Group. But which is best for you? Let's discuss the features that matter to data integration; other aspects are expanded in the next sections.

### Platform Support and Deployment Options

PostgreSQL and MySQL support major operating systems, whether in the cloud or on-premises. They also support both 64-bit and 32-bit architectures. Installation and deployment are well documented for both databases, so your team should have no problems deploying them.
Major cloud providers like [AWS](https://skyvia.com/connectors/aurora), [Azure](https://skyvia.com/connectors/sql-azure), and [Google Cloud](https://skyvia.com/connectors/google-cloud-sql-mysql) have fully managed database offerings for both. So, if you plan to deploy a data warehouse on either database, both are strong contenders.

PostgreSQL supports Linux, Windows, Mac OS X, Solaris, AIX, HP/UX, OpenBSD, FreeBSD, and NetBSD. Here's a screenshot of it in Ubuntu Linux 22.04 using pgAdmin 4. Meanwhile, MySQL supports Linux, Windows, Mac OS X, FreeBSD, and Solaris. Here's a screenshot of the MySQL Command Line Interface (CLI) showing the version and operating system.

### SQL Compliance

Note each database's compliance with the SQL standards. Why? Because SQL syntax and behavior may differ depending on whether a standard is followed. PostgreSQL conforms to [170 out of 179 mandatory features](https://www.postgresql.org/about/) of the latest SQL standard, though it also adds extensions to the standard that differ from other SQL platforms like MySQL. MySQL's goal is to comply with the latest SQL standard without sacrificing reliability and speed, and it, too, adds many extensions. You can also set the [SQL mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html) to control MySQL's behavior. For example, when you enable ONLY_FULL_GROUP_BY, the GROUP BY clause in a SELECT statement behaves differently.

### Data Access Libraries and Language Support

Both MySQL and PostgreSQL support major programming languages and data access libraries. Both platforms have you covered if you develop in Python, Java, .NET, or JavaScript, or any language that supports [ODBC](https://www.devart.com/odbc/what-is-an-odbc-driver.html) or [JDBC](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/). There are also native libraries for Python, PHP, Go, Ruby, Rust, C/C++, and many more.
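As a quick illustration of those native libraries, the key/value DSN below is the format accepted by libpq-based Python drivers such as psycopg2. This is a minimal sketch; `libpq_dsn` is a hypothetical helper, and the host, database, and user values are placeholders.

```python
def libpq_dsn(host: str, port: int, dbname: str, user: str) -> str:
    """Build a key/value DSN accepted by libpq-based drivers such as psycopg2."""
    return f"host={host} port={port} dbname={dbname} user={user}"

dsn = libpq_dsn("localhost", 5432, "mydb", "postgres")
print(dsn)  # host=localhost port=5432 dbname=mydb user=postgres
# With psycopg2 installed, this DSN would be passed to psycopg2.connect(dsn).
```

MySQL's native Python driver (mysql-connector-python) takes the same details as keyword arguments instead of a DSN string.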
You can check out this [comprehensive list](https://dev.mysql.com/doc/refman/8.0/en/connectors-apis.html) for MySQL and [this detailed list](https://wiki.postgresql.org/wiki/List_of_drivers) for PostgreSQL; you will most likely find the language your team uses.

### Data Types

Why compare PostgreSQL vs MySQL data types? The headaches of data integration start when mapping incompatible types. And if your ETL tool does not map them automatically, you have to do it yourself. Imagine reading a Salesforce contacts table and then writing it to MySQL or PostgreSQL: that's a lot of columns. Mapping them manually is a pain, not to mention the errors you'll encounter after wrongly mapping a handful of columns.

Aside from strings and numbers, PostgreSQL also supports XML, JSON, and more. For the full list of supported PostgreSQL types, [check this out](https://www.postgresql.org/docs/current/datatype.html). You can also browse the data types with a tool like pgAdmin 4, shown below. Meanwhile, MySQL supports common data types like text and numbers, plus JSON and XML. For a detailed list of MySQL data types, [check this out](https://blog.devart.com/mysql-data-types.html), or view them with a tool like dbForge Studio for MySQL, shown below.

PostgreSQL has more types to choose from. You may decide in favor of PostgreSQL if you need a data type not found in other relational databases.

### Concurrency

Both PostgreSQL and MySQL allow many users to access table rows concurrently. This is known in database management systems as [Multiversion Concurrency Control (MVCC)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control). PostgreSQL has MVCC built in.
For MySQL, [use the InnoDB storage engine to make use of MVCC](https://dev.mysql.com/doc/refman/8.0/en/storage-engines.html). MVCC is a must for extracting a consistent set of rows in an ETL process; otherwise, your data warehouse suffers from a mix of several updates to a record. For more details, here's the technical discussion of [MySQL MVCC](https://dev.mysql.com/doc/refman/8.0/en/innodb-multi-versioning.html) and [PostgreSQL MVCC](https://www.postgresql.org/docs/current/mvcc-intro.html).

### Reliability

With MySQL, reliability starts with the storage engine used, and MySQL has several of them. PostgreSQL has only one storage engine. Regardless of the database product, each should support the following features:

**ACID**

ACID stands for Atomicity, Consistency, Isolation, and Durability. These are properties of a database transaction. In simpler terms, your data should stay intact despite errors and other mishaps. This is good for real-time and [batch ETL](https://skyvia.com/blog/batch-etl-processing/). PostgreSQL is ACID-compliant by design. MySQL is ACID-compliant with the InnoDB storage engine and [NDB Cluster](https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster.html); [other MySQL storage engines do not support transactions](https://dev.mysql.com/doc/refman/8.0/en/storage-engines.html).

**WAL**

WAL stands for Write-Ahead Logging, and it's important for ensuring data integrity. WAL works like this: changes are written to one or more tables only after a log record is written. In the event of a crash, the database is recoverable using the log. Both MySQL and PostgreSQL have their own implementations of WAL, though it's known as a [redo log](https://dev.mysql.com/blog-archive/mysql-8-0-new-lock-free-scalable-wal-design/) in MySQL InnoDB.

### Connectivity

How easy or hard it is to configure a connection is another concern for data integrators.
Your team will set this up on the database server and adapt it to your ETL connector. Both PostgreSQL and MySQL document how this is done.

You can connect to a PostgreSQL database using the TCP/IP protocol or Unix domain socket files; the default port is 5432. PostgreSQL also supports an [IPv6-enabled network](http://www.dataarchitect.cloud/david-z-how-to-setup-postgresql-on-an-ipv6-enabled-network/). Meanwhile, you have more [connection transport protocols](https://dev.mysql.com/doc/refman/8.0/en/transport-protocols.html) in MySQL: TCP/IP, named pipes, shared memory, and Unix domain socket files. For TCP/IP, the [default port is 3306](https://www.devart.com/dbforge/mysql/studio/mysql-port.html). [MySQL also supports IPv6-enabled networks](https://dev.mysql.com/doc/refman/8.0/en/ipv6-support.html).

### Security

Database attacks are widespread, so securing your data at rest and in transit is a good practice. PostgreSQL's security features include:

- Database roles and privileges
- User ID and password
- Various authentication methods, such as Password, GSSAPI, LDAP, and [more](https://www.postgresql.org/docs/current/auth-methods.html)
- Encryption options like column encryption, data partition encryption, SSL/SSH connections, and [more](https://www.postgresql.org/docs/current/encryption-options.html)
- Column- and row-level restrictions

The server security options are visible in pgAdmin 4. Next, consider the security features of MySQL:

- Database roles with per-object access
- User ID and password
- Data-in-transit encryption using SSL/SSH
- Cryptographic functions using AES, SHA2, and [more](https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html)
- [MySQL Enterprise Encryption](https://dev.mysql.com/doc/refman/8.0/en/mysql-enterprise-encryption.html), available under a commercial license

The security options for connecting to MySQL are visible in dbForge Studio.

### Licensing

[PostgreSQL is free](https://www.postgresql.org/about/licence/). You have permission to use, copy, modify, and distribute it for any purpose, with no fee or written agreement whatsoever. Meanwhile, MySQL uses a [dual license model](https://www.mysql.com/about/legal/licensing/oem/): GNU GPL version 2 for open-source apps, and a commercial license with Oracle if you don't distribute your software with MySQL as open source.

### Technical Support

Both MySQL and PostgreSQL have free and paid support in case you're stuck with a problem. Check out [PostgreSQL support](https://www.postgresql.org/support/) and [MySQL support](https://www.mysql.com/support/) for more details.

## PostgreSQL vs MySQL Performance

Is MySQL faster than PostgreSQL? Honestly, it's hard to say who wins on performance. Remember: both are tools to store your data. If you see PostgreSQL vs MySQL benchmarks, take them with a grain of salt; a benchmark may favor one database over the other. Here's the truth: whatever database you use, you can tune queries and upgrade hardware. Both have support for making your database faster, so pick the one that has the features you need.

If you are stuck with a slow-running query, use EXPLAIN. Here's [how to do it in PostgreSQL](https://www.postgresql.org/docs/current/using-explain.html), and here's the [counterpart in MySQL](https://dev.mysql.com/doc/refman/8.0/en/explain.html). The following are a few ways to fix performance issues.

### Indexing

Indexing tables is the most basic performance tuning you should do. PostgreSQL supports index types like B-Tree, GiST, SP-GiST, GIN, BRIN, and Hash.
Your intent in searching records determines which index type to use. For more details about PostgreSQL indexes, check [the official manual](https://www.postgresql.org/docs/current/indexes-types.html). Meanwhile, MySQL mainly supports B-Tree (the default) and Hash. For more details, visit the [official documentation](https://dev.mysql.com/doc/refman/8.0/en/create-index.html).

### Materialized Views

Views are stored queries in a database. A regular view stores no data of its own; its results come from the underlying tables each time you query it. A materialized view, by contrast, stores the query results the same way a table does. The downside is that the data is not always current: you need to refresh it as needed. In return, [it may perform faster than a regular view](https://hashrocket.com/blog/posts/materialized-view-strategies-using-postgresql). MySQL doesn't support materialized views, but you can mimic the feature with a regular table maintained by triggers or a scheduled ETL process.

### Parallel Query Execution

Another performance feature is parallel query execution. This tells the SQL engine to use multiple CPU cores to process a query. The caveat is that many queries using parallel execution at once will degrade server performance. [PostgreSQL can devise query plans that use multiple CPUs](https://www.postgresql.org/docs/current/parallel-query.html) for faster querying. MySQL doesn't have this feature yet.

### Table Partitioning

When a table grows very large, performance degrades. Table partitioning splits one large table into smaller physical ones. Both PostgreSQL and MySQL support table partitioning. Click [here](https://www.postgresql.org/docs/current/ddl-partitioning.html) for a discussion of PostgreSQL table partitioning. Meanwhile, MySQL supports table partitioning under the InnoDB and NDB Cluster storage engines; [here is the link](https://dev.mysql.com/doc/refman/8.0/en/partitioning.html) to the official manual.
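The EXPLAIN-then-index workflow described above is easy to try end to end. The sketch below uses Python's built-in `sqlite3` module so it runs anywhere without a PostgreSQL or MySQL server; SQLite's `EXPLAIN QUERY PLAN` plays the same role that `EXPLAIN` plays in those databases, and the `orders` table and `idx_orders_customer` index are made up for the demo.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN is SQLite's counterpart to EXPLAIN in PostgreSQL/MySQL.
    # Each row's last column is a human-readable plan step.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer = 'cust42'"
print(plan(query))  # before indexing: a full scan (e.g. "SCAN orders")

con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
print(plan(query))  # now an index search mentioning idx_orders_customer
```

The same habit transfers directly: run `EXPLAIN` before and after adding an index, and confirm the plan switched from a sequential scan to an index scan.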
## PostgreSQL vs MySQL Scalability

Scalability refers to a system's ability to increase or decrease in performance to meet changes in processing demands. One form is horizontal scaling: adding new machines to the system to handle an increase in the number of users and transactions. Replication is one way to scale horizontally. It copies data from the source server to one or more extra servers, which offers performance advantages when the source server is under a very high load. Both PostgreSQL and MySQL have replication solutions. For more details, visit the official [PostgreSQL manual](https://www.postgresql.org/docs/13/runtime-config-replication.html) and [MySQL manual](https://dev.mysql.com/doc/refman/8.0/en/replication.html) for replication.

## PostgreSQL vs MySQL Syntax Differences

Most SQL syntax is the same in PostgreSQL and MySQL; the differences start with the extensions each database adds. A simple SELECT, INSERT, UPDATE, or DELETE will likely have the same syntax, but constructs such as the GROUP BY clause, UPDATE with JOIN, and other next-level syntax differ between the two. Depending on your client tool, you may spot the difference before executing the query. In MySQL, the storage engine you use also affects features and syntax; for example, [the NDB Cluster storage engine does not support temporary tables](https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-limitations-syntax.html).

## PostgreSQL vs MySQL Popularity

Both PostgreSQL and MySQL belong to the top 5 databases in the [DB-Engines ranking](https://db-engines.com/en/ranking) as of August 2022. Both are also [top databases in the 2022 Stack Overflow survey](https://survey.stackoverflow.co/2022/#section-most-popular-technologies-databases), with MySQL at #1 and PostgreSQL at #2. But PostgreSQL won [the Most Loved and Wanted database](https://survey.stackoverflow.co/2022/#section-most-loved-dreaded-and-wanted-databases) title in the survey.
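The UPDATE-with-JOIN difference mentioned earlier is a good concrete example. The `customers`/`orders` schema below is hypothetical; the PostgreSQL and MySQL statements are shown as strings only for comparison, while the portable correlated-subquery form is actually executed on SQLite just to demonstrate the result.

```python
import sqlite3

# Dialect-specific spellings of the same logical update (illustrative only):
pg_sql = """
UPDATE orders SET region = c.region
FROM customers c
WHERE orders.customer_id = c.id;
"""  # PostgreSQL: UPDATE ... FROM

mysql_sql = """
UPDATE orders o
JOIN customers c ON o.customer_id = c.id
SET o.region = c.region;
"""  # MySQL: UPDATE ... JOIN ... SET

# The portable form (correlated subquery) runs on SQLite as well:
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, region TEXT);
INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
INSERT INTO orders VALUES (10, 1, NULL), (11, 2, NULL);
""")
con.execute("""
UPDATE orders SET region =
    (SELECT region FROM customers WHERE customers.id = orders.customer_id)
""")
print(con.execute("SELECT id, region FROM orders ORDER BY id").fetchall())
# → [(10, 'EU'), (11, 'US')]
```

If your pipelines must run against both databases, preferring the portable form saves you from maintaining two variants of the same statement.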
So, who are the big names using MySQL?

- Sony
- NASA
- Netflix
- eBay
- Facebook
- Twitter
- and [more](https://www.mysql.com/customers/)

Which companies use PostgreSQL?

- Apple
- IMDB
- Instagram
- Reddit
- Skype
- Spotify
- and [more](https://stackshare.io/postgresql)

## Conclusion

The features discussed above should help you decide when to use PostgreSQL over MySQL in data integration. To do this successfully, you need an ETL tool that suits your needs. Skyvia has connectors for both databases and more. It offers automatic column-type mapping, and designing pipelines is easy with its drag-and-drop interface. Do yourself a favor and try it out for free: [click here](https://id.skyvia.com/core/register) and start getting value right now.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) Software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
# Expert Guide to QuickBooks & Google Sheets Integration

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), February 22, 2024

The origins of digital bookkeeping lie in Excel spreadsheets and their online analog, Google Sheets. Many modern businesses still rely on them but also use innovative accounting applications like QuickBooks Online. Even though QuickBooks Online is an excellent solution for tracking expenses and income, it sets limits on data sharing and reporting. Fortunately, the integration of QuickBooks and Google Sheets addresses these constraints. This article reveals the most substantial reasons for bringing QuickBooks and Google Sheets together.
Also, it provides three fundamental methods for connecting QuickBooks to Google Sheets and integrating their data.

## Table of Contents
- Benefits of QuickBooks Google Sheets Integration
- Method 1: Standard QuickBooks Export
- Method 2: Skyvia
- Method 3: Skyvia Add-on
- Methods Comparison
- Conclusion

## Benefits of QuickBooks Google Sheets Integration

QuickBooks and Google Sheets are part of the accounting software kits of many companies worldwide. QuickBooks usually serves as the centralized financial register, while Google Sheets offers many useful data visualization and sharing options. Given that QuickBooks and Google Sheets are often used side by side, their integration is a natural step. See [the NISO company case](https://skyvia.com/case-studies/niso), where Skyvia is used to integrate data into QuickBooks daily. Together, Google Sheets and QuickBooks open up the following opportunities and benefits:

- Automated data transfer that prevents data discrepancies and errors.
- Visualization and reporting of the enriched financial data in Google Sheets.
- Easier collaboration between stakeholders through data sharing in Google Sheets.
- Bulk migration of financial spreadsheets from Google Sheets to QuickBooks to create centralized financial data storage.

These advantages are convincing, right? So, let's explore the ways to connect QuickBooks to Google Sheets.

## Method 1: Standard QuickBooks Export

Native integration is the most obvious way to connect Google Sheets to QuickBooks, but it isn't as simple as it seems. This method has certain limitations:

- The Advanced plan of QuickBooks Online is required.
- Only QuickBooks reports can be exported.

If you're already on the Advanced plan of QuickBooks Online and exporting reports is exactly what interests you, native integration will work well. Here are the steps for this method:

- In the QuickBooks Online account, go to Reports.
- Select the report to export and click on it.
- In the upper right corner of the report, click the Export icon and select Export to Google Sheets from the menu.
- Enter the verification code.
- Review the permissions settings and click Allow.
- The system takes you to Google Sheets, where you'll be asked to sign in to your Google account.

The report is now exported into your Google Sheets workspace, and you can start working on it together with your colleagues.

Note! Subsequent changes made in Google Sheets will NOT be reflected in the corresponding report in QuickBooks.

## Method 2: Skyvia

Native integration might not suit everyone due to the limitations it imposes. Luckily, there are alternative solutions for connecting QuickBooks to Google Sheets. Skyvia is a decent alternative to the native integration method and outperforms it in the number of objects it can transfer. It supports both the online and desktop versions of QuickBooks, so you can perform [QuickBooks Desktop and Google Sheets integration](https://skyvia.com/data-integration/integrate-quickbooksdesktop-google-sheets). Skyvia is a universal SaaS platform for a wide range of data-related tasks, including integration, workflow automation, backup, and query. Its [Data Integration](https://skyvia.com/data-integration/) product is designed for building data integration pipelines in multiple scenarios:

- [Import](https://skyvia.com/data-integration/import). Loads data from QuickBooks to Google Sheets or vice versa. Skyvia offers powerful data transformations and mapping settings to match different data structures between sources.
- [Export](https://skyvia.com/data-integration/export). Extracts data from QuickBooks and Google Sheets into CSV files.
- [Replication](https://skyvia.com/data-integration/replication). Copies data from QuickBooks or Google Sheets to the selected data warehouse.
- [Synchronization](https://skyvia.com/data-integration/synchronization). Ensures data consistency between cloud sources and databases.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) & [Control Flow](https://docs.skyvia.com/data-integration/control-flow/). Data Flow enables complex data integration scenarios with several data sources and compound data transformations, while Control Flow orchestrates integration executions based on specific conditions.

### Skyvia's Features and Benefits

- User-friendly interface that requires no coding skills for data integration operations.
- 180+ supported cloud apps, databases, and data warehouses.
- Powerful data transformation and mapping settings that make the data structures of different sources correspond.
- Secure data transfer: Skyvia is hosted on Microsoft Azure and complies with standards such as ISO 27001, GDPR, and HIPAA.
- Being cloud-hosted, Skyvia requires no on-premises software installation.
- Comprehensive documentation and [video instructions](https://www.youtube.com/@skyvia3342) for every Skyvia product, with all the minor details and examples provided.
- A flexible pricing model suitable for any business: start with a free plan to try out the features and switch to another plan as your business evolves.

### QuickBooks and Google Sheets Integration with Skyvia

Skyvia could become your best assistant for [QuickBooks Online and Google Sheets integration](https://skyvia.com/data-integration/integrate-quickbooks-google-sheets). You can import any data from QuickBooks to Google Sheets or vice versa using the Import tool. Below, we also show how to extract data from these tools into CSV files with the Export tool.

#### Import

Let's start with an example of transferring QuickBooks invoice data to Google Sheets with the [Import](https://skyvia.com/data-integration/import) tool.

- [Log into Skyvia](https://app.skyvia.com/) or create a new account.
- Go to +NEW -> Import in the top menu.
- Select QuickBooks as a source and Google Sheets as a target. Click Add task.
- Under Source Definition, select the object to import. In this example, we select Invoice from the drop-down menu. Apply filtering conditions if needed; here, we specify that the invoice due date shouldn't be earlier than 01/26/2024.
- Under Target Definition, select the needed sheet in the Google document.
- Specify the DML operation:
  - INSERT adds all records to the spreadsheet.
  - UPDATE finds the records matching the primary key and updates them with the QuickBooks data.
  - UPSERT finds the records matching the primary key and updates them; records that are not found are added to the spreadsheet.
  - DELETE removes the specified records from the spreadsheet.
- Under Mapping Definition, define the column, constant, expression, target, or source lookup. [Check here](https://docs.skyvia.com/data-integration/common-package-features/mapping/index.html) for more details about mapping settings.
- If needed, set up regular integration execution by clicking Schedule.
- Click Run to start the import.

Other cases where the Import tool might be useful:

- Migrating files from Google Sheets once QuickBooks becomes the single source of truth for the financial department.
- Uploading CSV files to Google Sheets or QuickBooks.
- Loading financial data into other supported cloud apps or databases.

#### Export

The [Export](https://skyvia.com/data-integration/export) tool allows you to extract data from QuickBooks or Google Sheets into a CSV file and save it on your computer or in an online storage service. This is useful when you need to regularly transfer CSV reports to external vendors. You can also automate the export by setting a schedule, so the stakeholders get the needed documents on time. In this example, we load data from QuickBooks to a CSV file on Google Drive.

- [Log into Skyvia](https://app.skyvia.com/) or create a new account.
- Go to +NEW -> Export in the top menu.
- Select QuickBooks as a source and Google Drive as a target. You can also select another cloud storage provider or simply choose to download the file manually. Click Add task.
- Select the QuickBooks object for export. In this example, we use the Query Editor in the Advanced editor mode to specify conditions for the exported objects: invoices of the customer Maria Carlucci (supported only on paid plans). In the Standard editor mode, just select the objects for export from the drop-down menu.
- Click Create and then click Run.
- Go to the Monitor tab. Once the file is generated, click the corresponding task under Run History.
- In the History Details window that appears, click the number highlighted in blue to download the CSV file.

## Method 3: Skyvia Add-on

If you want to get QuickBooks data right within Google Sheets, use the [Skyvia Google Sheets Add-on](https://skyvia.com/google-sheets-addon/quickbooks). It imports data from QuickBooks via [Skyvia Query](https://skyvia.com/query/) and refreshes it with the Refresh Sheet function. The Skyvia add-on for Google Sheets uses a powerful visual Query Builder that lets non-technical users craft and execute queries without SQL knowledge, while tech-savvy professionals can write queries in SQL syntax. In this example, we show how to install the Skyvia Google Sheets add-on and get QuickBooks data instantly. See the detailed instructions below:

- Open any document in your Google Sheets account.
- Go to Extensions -> Add-ons -> Get add-ons.
- Type Skyvia Query in the search bar.
- Click Install and follow the setup instructions in the wizard.
- Once the Skyvia Query add-on is installed, go to Extensions -> Skyvia Query -> Login.
- Log into your Skyvia account or create a new one.
- Once setup and login are completed, go to Extensions -> Skyvia Query -> Query.
- Select the workspace, the QuickBooks connection, and the object to import.
- Apply any filtering or order criteria if needed.
- Click Run.
The spreadsheet gets populated with QuickBooks data. To get the most recent data, go to Extensions -> Skyvia Query -> Refresh Current Sheet.

## Methods Comparison

All the above-mentioned methods integrate Google Sheets and QuickBooks in one way or another, but they differ. To better understand which suits you best, we've prepared a comparison table with the principal criteria and use cases.

| | Native | Skyvia | Skyvia Add-on |
| --- | --- | --- | --- |
| Speed | Fast | Fast | Fast |
| Setup | Medium | Easy | Easy |
| Cost | $99 | Free | Free |
| Limitations | Only reports | No | No |
| Use cases | Sending reports from QuickBooks to Google Sheets. | Importing data from QuickBooks to Google Sheets for advanced visualization and reporting. Exporting QuickBooks or Google Sheets data into a CSV file. Loading data from CSV files, cloud apps, and databases into QuickBooks as a single source of truth. | Importing data from QuickBooks to Google Sheets for advanced visualization and reporting. |

The native integration method suits those who have the Advanced plan of QuickBooks Online and are interested only in exporting reports. The Skyvia add-on offers wider functionality, allowing users to import reports as well as other objects into Google Sheets via a user-friendly Query Builder, even without any SQL knowledge. Skyvia itself makes it possible to add other sources to the integration flow and perform export, backup, replication, etc. These features make it the best choice for those who want to build smart data integration pipelines connecting QuickBooks, Google Sheets, and other sources. You can do all that at no cost or choose a plan matching your data volume.

## Conclusion

There are multiple reasons to integrate Google Sheets and QuickBooks. This article presented three ways to bring those tools together. Select the one that works best for you and enjoy a streamlined financial workflow in your business.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

# Difference Between Amazon Redshift and Amazon S3

By [Aveek Das](https://skyvia.com/blog/author/aveekd/), August 30, 2024
With the rise of cloud computing in the past decade, several key players have come to dominate the cloud computing space. Amazon Web Services (AWS) is the [leader in the world of cloud computing](https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/), followed by Microsoft and Google. What makes AWS a global leader is the plethora of services it provides across domains such as compute, storage, networking, and AI. When it comes to storage, two of the most popular AWS services are Amazon Redshift and Amazon S3. In this article, let's understand what Amazon Redshift and Amazon S3 are, with their benefits and drawbacks. We'll also look at when to choose each service based on the use case and how Skyvia can help integrate the two.

## Table of Contents
- What is Amazon Redshift?
- What is Amazon Simple Storage Service (S3)?
- How to Choose the Best Service for Your Business
- How Skyvia Integrates with Amazon Redshift and Amazon S3
- Conclusion

Cloud storage has been getting cheaper ever since its inception. AWS offers various storage services that cater to different kinds of use cases. Amazon Redshift and Amazon S3 are two of the most popular, with a broad spectrum of users that depend on them. Let's take a deeper dive into how these services are similar, how they differ, and how to choose the right one.

## What is Amazon Redshift?

Amazon Redshift is a fully managed cloud data warehouse service. It is scalable and offers petabyte-scale data operations within your cloud environment. With Redshift, you can query data stored in a data warehouse or data lake using standard SQL.
The results can be saved back to S3 or sent to other cloud services as required. Redshift supports open data formats such as Apache Parquet and Optimized Row Columnar (ORC), which lets users perform complex analytical queries quickly.

### Benefits of Using Amazon Redshift

Some of the key benefits of using Amazon Redshift are as follows:

- Scalability. Amazon Redshift is a cluster-based service. Users can start with a smaller cluster and gradually scale it up as demand and data grow. Redshift can scale from gigabytes to petabytes of storage with no effect on performance.
- Performance. Redshift runs on a Massively Parallel Processing (MPP) architecture to query and process the underlying data. Additionally, it uses columnar storage and data compression to reduce the amount of I/O needed to query the data.
- Integration. Redshift integrates with almost every popular service in the AWS ecosystem. Users can easily export data from Redshift to S3, extract and transform data using AWS Glue, or create visualizations using Amazon QuickSight.
- Security. As part of the AWS ecosystem, Redshift inherits the security features of the cloud, such as VPC support, data encryption at rest and in transit, and integration with IAM roles and users.

### Disadvantages of Amazon Redshift

While Amazon Redshift is certainly a powerful tool in the analytical landscape, users need to be aware of some disadvantages:

- Cost. Setting up and running a Redshift cluster has inherent costs. Although Redshift is optimized for the cloud and users only pay for what they use, it may become an expensive solution for less frequently used datasets. Users also need to structure and optimize their data properly to take full advantage of the cloud.
- Concurrency. Redshift can show degraded performance when multiple users start consuming the same dataset.
This often leads to slower performance and, in some cases, stale data.

- Manual Tuning. Even a managed service like Redshift needs some manual management to keep it optimized and running properly. Tasks such as vacuuming tables and tuning sort and distribution keys are key to maintaining performance and avoiding bottlenecks.

## What is Amazon Simple Storage Service (S3)?

Amazon S3 is a highly scalable, web-based object storage service offered by Amazon Web Services. With a simple interface, users can store and retrieve data from anywhere across the globe.

### Benefits of Using Amazon S3

The key benefits of using Amazon S3 are as follows:

- Scalability. With Amazon S3, there is virtually no limit to the amount of data users can store. It easily scales up to exabytes of data without provisioning any infrastructure, and users only pay for what they use.
- Durability and Availability. Amazon S3 is designed for 99.999999999% (11 nines) of data durability, meaning stored objects are effectively never lost, and it offers high availability from any geographic location.
- Security and Data Protection. Amazon S3 provides data encryption at rest and in transit and complies with various industry standards, such as HIPAA and GDPR.
- Cost Effectiveness. Amazon S3 offers different storage classes, which allows users to pay much less for infrequently used and archived data.

### Disadvantages of Amazon S3

- Latency. Amazon S3 is architected primarily for durability and scalability, which makes it a poor candidate for low-latency workloads. S3 may not be the right solution where sub-millisecond latency is key to application performance.
- Data Transfer Costs. Although storing data in the cloud is cheap, transferring data out to the internet is expensive. If your application sends large chunks of data from S3 to the public internet, the costs can add up quickly.
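To make the data-transfer warning concrete, here is a back-of-the-envelope sketch. The flat $0.09/GB egress rate is an illustrative assumption (real S3 pricing is tiered and region-dependent); the point is simply how linearly egress cost grows with volume.

```python
# Illustrative only: the egress rate is an assumed flat $0.09/GB;
# real S3 pricing is tiered and varies by region.
EGRESS_USD_PER_GB = 0.09

def monthly_egress_cost(gb_per_day: float, days: int = 30) -> float:
    """Estimated monthly cost of sending data from S3 to the public internet."""
    return round(gb_per_day * days * EGRESS_USD_PER_GB, 2)

for gb in (10, 100, 1000):
    print(f"{gb:>5} GB/day -> ${monthly_egress_cost(gb):,.2f}/month")
# 10 GB/day is modest ($27/month), but 1 TB/day is already $2,700/month.
```

Running a quick estimate like this before choosing an architecture helps catch the "costs add up quickly" scenario early.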
## How to Choose the Best Service for Your Business

Amazon Redshift and Amazon S3 both provide storage within the AWS ecosystem, but their use cases vary with requirements. Amazon Redshift is marketed as an analytics data warehouse: it is useful when you want to store and analyze petabytes of mostly structured data. Storing and analyzing binary formats such as images or videos directly in Redshift is not feasible, and the default way to interact with Redshift is SQL, which is better suited to tabular data. Amazon S3, on the other hand, is an object storage service: users can store any type of file on S3, be it images, videos, or documents. Amazon S3 is highly scalable and durable, giving users the flexibility to store virtually unlimited amounts of data without worrying about or managing the underlying infrastructure. If your application needs to handle large files, S3 is likely the preferred option.

## How Skyvia Integrates with Amazon Redshift and Amazon S3

Certain scenarios require users to transfer data from Amazon Redshift to S3 or the other way around. While there are native ways to do that, they are not intuitive and come with challenges: limited transformation capabilities, weak error handling and monitoring, and limited automation. Third-party tools exist to cover these gaps. One such tool is Skyvia. It offers a 100% cloud-native solution for integrating Amazon Redshift with Amazon S3, with an easy-to-use interface and advanced automation features. It can read CSV data stored in Amazon S3 and load it into Amazon Redshift, and it can export data from Amazon Redshift and store it in Amazon S3.
This bi-directional integration is supplemented with features such as detailed logging and monitoring, scheduled exports, low-code GUI-based consoles, and error handling. It lets business users extract and load data between the two services without much technical knowledge. Check out more on the integration between [Amazon Redshift](https://skyvia.com/connectors/redshift) and [Amazon S3](https://skyvia.com/connectors/amazon-s3) on Skyvia's official website.

## Conclusion

In this article, we explained Amazon Redshift and Amazon S3 and their place in today's cloud computing landscape. Amazon Redshift is a highly scalable cloud data warehouse often used for analytics. Amazon S3 offers object-based storage for many types of files. Both services support many use cases, and users should choose between the two based on their application architecture.

[Aveek Das](https://skyvia.com/blog/author/aveekd/) Senior Data Engineer

# Redshift vs Snowflake: Which data warehouse fits your business needs?

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), June 26, 2024

Choosing the right data warehouse (DWH) is a strategic move: you're picking the home where all your valuable data will live. This choice is pivotal for smooth operations throughout the data ecosystem, quick data retrieval, and robust analytics, making it a key factor in your business's success. Here are the main points of why it matters:

- The right data warehouse ensures data queries run fast, so you're not left waiting for reports and insights.
- A good data warehouse scales effortlessly with your business, handling larger volumes without errors.
- The correct choice can save money in the long run.
- A robust data warehouse offers strong security features to protect data from breaches and unauthorized access.
- The data warehouse should seamlessly integrate with existing tools and platforms, making data consolidation and analysis easy.

Redshift and Snowflake are both good options against these criteria, each with its own strengths. Redshift is great if you're deep into the AWS ecosystem, while Snowflake offers unmatched flexibility and ease of use across multiple clouds.
Let's consider these solutions and compare their key differences to help businesses make informed choices.

Table of Contents
- Understanding Redshift and Snowflake
- Key differences between Redshift and Snowflake
- Security and Compliance
- Use Cases
- Cost Comparison
- Data Integration by Skyvia
- Conclusion

Understanding Redshift and Snowflake

Amazon Redshift

[Redshift](https://docs.aws.amazon.com/redshift/) is a fully managed data warehouse service from Amazon Web Services (AWS) designed to handle petabytes of data from various business activities, such as sales, marketing, and customer interactions. Redshift uses [columnar storage and massively parallel processing (MPP)](https://docs.aws.amazon.com/whitepapers/latest/data-warehousing-on-aws/data-warehouse-technology-options.html) to speed up query performance, so reports and analytics run fast even on massive datasets.

Since it's part of the AWS family, Redshift integrates with other AWS services. Need to pull in data from S3, process it with AWS Glue, or visualize it in QuickSight? No problem: Redshift handles it smoothly. It also provides encryption, VPC support, and compliance with various security standards to protect your data. Whether you're dealing with structured or semi-structured data, or need to scale up and down, Redshift handles it gracefully.

Snowflake

[Snowflake](https://www.snowflake.com/en/) is a cloud-based data warehousing platform that covers data storage and analytics needs with ease. Unlike traditional data warehouses, Snowflake separates storage and compute. This is Snowflake's secret sauce: need more space? Scale up storage. Need faster query performance? Scale up compute. The warehouse handles both structured data (like SQL databases) and semi-structured data (like JSON, Avro, and Parquet), making it versatile for different data types and use cases.
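Snowflake stores such records in a VARIANT column and lets you query nested fields directly, while Redshift reaches similar data through Spectrum. Outside the warehouse, the flattening step these engines automate can be sketched in a few lines of Python (the sample record and the `flatten` helper are illustrative, not part of either product):

```python
import json

def flatten(record, parent_key="", sep="_"):
    """Recursively flatten a nested dict, the way a warehouse
    flattens semi-structured JSON into relational columns."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

# Illustrative event payload, e.g. read from an S3 object.
raw = json.loads('{"id": 7, "user": {"name": "Ada", "plan": "pro"}, "ts": "2024-06-26"}')
row = flatten(raw)
print(row)  # {'id': 7, 'user_name': 'Ada', 'user_plan': 'pro', 'ts': '2024-06-26'}
```

The point is that nested attributes become addressable columns (`user_name`, `user_plan`) without a separate transformation job, which is exactly what "treating semi-structured data as a first-class citizen" buys you.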
It offers end-to-end encryption, role-based access control, and compliance with industry standards like HIPAA and GDPR. Finally, forget about managing hardware, software updates, or performance tuning: Snowflake is fully managed, so you can focus on analyzing data instead of maintaining the infrastructure.

Key differences between Redshift and Snowflake

Amazon Redshift and Snowflake are both excellent in their own ways, but some distinct features set them apart. Here's the scoop.

| Focus | Redshift | Snowflake |
|---|---|---|
| Architecture | Traditional cluster-based architecture. | Architecture separates storage and compute. |
| Pricing models | On-demand pricing, where you pay for the compute and storage you use; reserved instance pricing for long-term commitments. | Storage and compute are charged separately. Compute is billed per second, and you only pay for what you use; storage is billed at a flat rate. |
| Performance and scalability | High performance through columnar storage and parallel processing. | Separation of storage and compute lets users adjust resources on the fly. |
| JSON and semi-structured data support | Supported via the Redshift Spectrum feature, which lets users query data directly from S3. | Native support for semi-structured data like JSON, Avro, and Parquet; treated as a first-class citizen. |
| Automation and maintenance | Requires some manual tuning and maintenance, like vacuuming tables and managing workload queues. AWS provides some automation tools, but hands-on management is often needed. | Fully managed; automatically handles most maintenance and optimization tasks, including scaling, tuning, and auto-suspending idle compute resources. |
| Integrations and ecosystem | Integrates with the AWS ecosystem, including S3, EMR, Glue, and more. | Cloud-agnostic, running on AWS, Azure, and Google Cloud; integrates with various data tools and platforms regardless of the cloud provider. |

Security and Compliance

Amazon Redshift and Snowflake both offer robust security and compliance features, ensuring data is well protected. [Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html) provides customizable encryption and integrates seamlessly with AWS's security tools, making it a solid choice for users within the AWS ecosystem. [Snowflake](https://community.snowflake.com/s/article/Snowflake-Security-Overview-and-Best-Practices), on the other hand, offers tiered security options with advanced features like data masking and extensive compliance certifications, catering to businesses with more stringent security requirements. Let's take a closer look at how they stack up on security options and compliance features.

Amazon Redshift

Redshift allows data to be encrypted both at rest and in transit. Companies can use AWS-managed keys or bring their own keys (BYOK) via [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) to keep control over encryption, ensuring data is protected according to the business's needs.

Comprehensive security features:
- Network isolation. Use Amazon Virtual Private Cloud (VPC) to isolate Redshift clusters and control access.
- IAM integration. Manage access with AWS Identity and Access Management (IAM) for fine-grained permissions.
- Audit logging. Log all database activities for monitoring and auditing purposes.
- SSL/TLS encryption. Secure data in transit with SSL/TLS.
- Compliance certifications. Meets standards like GDPR, HIPAA, SOC 1/2/3, and ISO 27001.
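Data masking, mentioned above, is easiest to picture as a function applied per role at query time: privileged roles see raw values, everyone else sees a redacted form. A minimal, hypothetical Python sketch of the idea (the role names and redaction rule are invented; in the warehouses themselves this is defined as a masking policy, not application code):

```python
def mask_email(value: str, role: str) -> str:
    """Return the raw value for a privileged role and a
    redacted form for everyone else -- the shape of a
    column-level masking policy."""
    if role == "analyst_full":           # privileged role sees raw data
        return value
    local, _, domain = value.partition("@")
    return local[0] + "***@" + domain    # others see a redacted form

print(mask_email("jane.doe@example.com", "analyst_full"))  # jane.doe@example.com
print(mask_email("jane.doe@example.com", "marketing"))     # j***@example.com
```

The data never changes on disk; only the query result differs by role, which is why masking pairs naturally with the role-based access control both platforms provide.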
Snowflake

Snowflake provides a robust security model with multiple layers of protection, including [end-to-end encryption](https://docs.snowflake.com/en/user-guide/security-encryption-end-to-end) and role-based access control. It offers comprehensive security out of the box, with advanced features available in its Enterprise and Business Critical editions as businesses scale up.

Extensive compliance features:
- End-to-end encryption. Encrypts data at rest and in transit using strong encryption standards.
- Role-based access control. Fine-grained control over who can access and manipulate data.
- Network policies. Restrict access to specific IP addresses or ranges.
- Data masking. Mask sensitive data to protect it from unauthorized access.
- Compliance certifications. Complies with various standards, including GDPR, HIPAA, SOC 1/2/3, ISO 27001, PCI DSS, and FedRAMP.

Use cases

Let's explore when to use Amazon Redshift and when to use Snowflake, to help companies decide which DWH best suits their requirements and business goals.

Use Redshift:
- If your business needs a cohesive, efficient workflow and is deeply embedded in the AWS ecosystem (S3, EMR, Glue, QuickSight).
- When you need to perform complex, large-scale data analytics and reporting. Redshift's columnar storage and massively parallel processing (MPP) capabilities help query large datasets quickly.
- If you prefer predictable costs and can commit to long-term usage. Redshift offers reserved instance pricing, which can be more cost-effective for predictable, long-term workloads.
- When you need extensive data encryption and security control. Redshift lets you customize encryption settings and integrate with AWS Identity and Access Management (IAM) for granular access management.

Use Snowflake:
- If your business operates across multiple cloud platforms or wants the flexibility to do so. Snowflake is cloud-agnostic, running on AWS, Azure, and Google Cloud, providing unparalleled flexibility.
- When you have fluctuating workloads and need to scale resources up or down quickly. Separating storage and compute allows instant elasticity, making it easy to adjust resources based on current needs.
- If you work with semi-structured data formats like JSON, Avro, or Parquet. Snowflake natively supports semi-structured data, making it easy to load, query, and analyze these data types without complex transformations.
- When you want a fully managed solution with minimal administrative overhead. Snowflake automates many maintenance tasks, including performance tuning, scaling, and patching, freeing up your team to focus on data analysis.
- If your business is in a highly regulated industry and needs advanced security features, including data masking and extensive compliance certifications like PCI DSS and FedRAMP.
- When you need to support many concurrent workloads and users without performance degradation. Snowflake's architecture handles concurrency efficiently, allowing multiple users and workloads to operate simultaneously without impacting performance.

Cost Comparison

Amazon Redshift and Snowflake both offer cost-effective data warehousing, each with its strengths. The table below shows the differences between the two platforms' pricing policies.

| Focus | Redshift | Snowflake |
|---|---|---|
| Compute costs | Charged hourly based on node type and number. | Billed per second based on the size of virtual warehouses. |
| Storage costs | Charged per GB-month for data stored in the cluster and backups. | Charged per TB-month for data storage, including metadata and auto-scaling. |
| Pricing models | On-demand pricing; reserved instance pricing (1 or 3 years) with up to 75% savings. | [Pay-as-you-go, billed per second](https://www.snowflake.com/en/data-cloud/pricing-options/). |
| Potential savings | Reserved instances offer significant savings for predictable workloads. Efficient columnar storage reduces storage costs via compression. Spot instances for non-critical workloads offer additional savings. | Auto-suspend and auto-resume features reduce costs for idle compute resources. Effective storage compression reduces billed storage amounts. |
| Long-term cost considerations | Best for steady, predictable workloads that can benefit from reserved pricing. Additional savings if integrated with other AWS services (data transfer discounts, consolidated billing). | Ideal for variable workloads with flexible scaling of compute resources. Multi-cloud flexibility optimizes costs based on provider pricing changes and organizational needs. |
| Data transfer costs | Typically lower within the AWS ecosystem. | Depends on the region and cloud provider; cross-region transfers cost more. |

Data Integration by Skyvia

Companies working with [Amazon Redshift](https://skyvia.com/connectors/redshift) or [Snowflake](https://skyvia.com/connectors/snowflake) know how powerful these data warehouses are. Integrating data into Redshift or Snowflake enables unified data views, real-time insights, improved data quality, and operational efficiency. But is there a way to make data integration and management smoother? [Skyvia Data Integration](https://skyvia.com/data-integration) works seamlessly with both Redshift and Snowflake.

Benefits
- Skyvia's no-code platform means a user doesn't need to be a tech genius to set up data integrations. Its [intuitive interface](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) makes the whole process straightforward.
- The platform lets organizations schedule data integrations conveniently or even set up real-time syncs to keep their Redshift or Snowflake data up to date.
- It offers robust mapping and transformation tools to ensure data fits perfectly into Redshift or Snowflake. Users can clean, format, and transform their data as needed during the integration process.
- Skyvia uses strong encryption to secure data during transfer. It also complies with industry standards, including HIPAA, GDPR, PCI DSS, ISO 27001, and SOC 2 (via Azure), ensuring data is handled responsibly.
- Finally, Skyvia offers flexible [pricing plans](https://skyvia.com/pricing), including a free tier, making it an affordable solution for businesses of all sizes.

Conclusion

Choosing between Amazon Redshift and Snowflake depends on each business's specific needs and environment. If the company is deeply integrated into AWS, has predictable workloads, and needs advanced SQL capabilities, Redshift is its go-to. If the firm needs multi-cloud flexibility, handles variable workloads, and works with semi-structured data, Snowflake is a perfect fit.

No matter which data warehouse the business chooses, Skyvia makes its life easier. Its seamless integration, automated data sync, and robust security ensure that data is always accurate, up to date, and ready for analysis.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Loader](https://skyvia.com/blog/category/data-loader/)

Unlocking the Power of Reverse ETL with Salesforce

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), January 30, 2024

In our data-driven world, everyone has heard about [reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) solutions that help businesses with decision-making and operational efficiency. It's an evolution in how we use data, bringing the power of data warehousing and analytics directly into operational systems. The difference between ETL and reverse ETL is simply the direction of the data path: [ETL](https://skyvia.com/learn/etl-pipeline-meaning) extracts data from business apps, transforms it if needed, and loads it into a DWH or data lake for analysis.
Reverse ETL syncs data from a source of truth, like a DWH or data lake, to a system of action, such as a CRM, advertising platform, or other business app. So not only analysts are happy with it, but all the company's departments: Sales, Marketing, Finance, HR, customer support, and so on. Our next step is to see how it works in practice.

Table of Contents
- Salesforce and Reverse ETL: Enhancing Business Processes
- Tools for Implementing Reverse ETL with Salesforce
- Step-by-Step Tutorial: Using Skyvia for Reverse ETL with Salesforce
- Conclusion

Salesforce and Reverse ETL: Enhancing Business Processes

Here are a few real-life stories of how Salesforce enhances customer engagement, streamlines sales processes, and customizes marketing strategies with reverse ETL.

Customer Engagement Enhancement

Background. Imagine a mid-sized retail company collecting rich customer data from various touchpoints (online sales, in-store purchases, customer service) in its cloud data warehouse. However, it needed a way to put that data to work in day-to-day customer interactions.

Solution. The company used reverse ETL to transfer customer segments, predicted preferences, and potential future purchases into Salesforce. With up-to-date data, the marketing team created personalized email campaigns targeting customers with offers and products relevant to their preferences and previous purchasing behavior. At the same time, the sales team focused on the products customers like, based on data-driven insights.

Results
- Customer satisfaction increased.
- Conversion rates grew.
- Customer loyalty and retention improved.

Sales Processes Streamlining

Background. A software company providing B2B solutions sought to improve sales efficiency and personalize outreach by leveraging its extensive customer data.
Solution. Implementing reverse ETL allowed the company to transfer critical insights, like data on customer product usage, engagement levels, and potential upsell opportunities, from the data warehouse back into Salesforce. The sales team got access to enriched customer data and focused on the features the data showed were most relevant to each customer, prioritizing outreach to customers whose usage patterns suggested they were ready for an upgrade or additional services.

Results
- Sales efficiency increased.
- Conversion rates rose.
- Customer satisfaction improved.

Marketing Strategies Customization

Background. An online retailer specializing in eco-friendly fashion, EcoFashion, was interested in creating more targeted and effective marketing campaigns based on customer preferences and purchasing behavior.

Solution. The company employed reverse ETL to transfer segmented customer data and insights, like online shopping behavior, customer feedback, social media interactions, and purchasing history, from the data warehouse back into Salesforce. The marketing team used Salesforce to create customized email and social media campaigns, so customers receive personalized content and product recommendations based on their specific interests and past purchases. For instance, customers interested in sustainable footwear receive targeted content about EcoFashion's range of eco-friendly shoes.

Results
- Customer engagement increased.
- Conversion rates improved.
- Customer loyalty was enhanced.

Tools for Implementing Reverse ETL with Salesforce

Skyvia

Among versatile reverse ETL and data synchronization solutions that integrate Salesforce with other data systems, such as CRMs and SaaS applications, [Skyvia](https://skyvia.com/) is a good choice. It's no-code, cloud-based, supports [180+](https://skyvia.com/connectors/) connectors, and strikes a fine balance between attractive price, simplicity, and usability.
The [G2 Crowd rating](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) ranks Skyvia as the leader among the top 20 easiest-to-use [ETL tools](https://skyvia.com/blog/etl-tools/).

Key Features
- Data integration capabilities, including ETL and reverse ETL, bi-directional data sync, data replication, and import/export between different data sources and Salesforce.
- Automated cloud-to-cloud backup service for Salesforce and other cloud applications.
- Easy management and manipulation of data from various sources in a single cloud-based platform.

Pros
- The no-code interface makes Skyvia accessible to users without deep technical expertise.
- The wide range of supported data sources is beneficial for companies using multiple business applications.
- Scheduling and automating data tasks saves time and resources.
- Being cloud-based, it doesn't require extensive infrastructure or maintenance.
- Allows creating data scenarios of any complexity, from the simplest to advanced ones with complex flows and transformations.

Cons
- Although the solution is friendly to non-technical users, more video tutorials would be welcome.

Pricing

The [pricing](https://skyvia.com/pricing/) model is flexible and depends on your usage. You may start with the Freemium plan to figure out what you really need, or use the paid plans, which open up more functionality for your business. These range from the Basic plan ($15/mo) for primary data ingestion and ELT scenarios to the Professional plan ($399/mo), which includes robust data pipelines for any complex scenario. The Enterprise plan offers a tailor-made deal according to your requirements.

Hightouch

The next significant player in this field is [Hightouch](https://hightouch.com/). It leverages reverse ETL to sync data from DWHs directly to business apps, making operational data from warehouses actionable and accessible in Salesforce.
Key Features
- Syncs data from data warehouses, like Snowflake, BigQuery, and Redshift, directly to Salesforce.
- Real-time data synchronization, ensuring that changes in the data warehouse are immediately reflected in Salesforce.
- SQL-based transformations: write SQL to transform data in the warehouse before syncing it to Salesforce, providing flexibility in preparing and presenting data.

Pros
- Data from warehouses is readily usable in Salesforce, enhancing data-driven decision-making.
- An easy-to-use UI simplifies the complex process of data integration.

Cons
- Cost considerations for small businesses and start-ups.
- Learning curve for users unfamiliar with SQL or data operations.
- Data quality depends on the DWH.

Pricing

[Pricing](https://hightouch.com/pricing) includes a free starting plan; paid options start at $350 per month, with a Business model also available. Get in touch with Hightouch sales for details.

Census

[Census](https://www.getcensus.com/) is an analytics solution that helps businesses sync stored DWH data directly to Salesforce and other SaaS applications, facilitating real-time data use in business routines.

Key Features
- Syncs data from DWHs like Snowflake, BigQuery, or Redshift to Salesforce in real time or on a schedule, ensuring that the CRM is always up to date with the latest data.
- Data transformations using SQL, offering flexibility in data handling.

Pros
- Directly enhances business operations with real-time, data-driven insights in Salesforce.
- SQL-based data operations provide significant flexibility in managing and preparing data.

Cons
- You need solid SQL expertise to use it.
- Price may be a deterrent for small businesses or start-ups.

Pricing

[Pricing](https://www.getcensus.com/pricing) includes a free plan, the Professional plan ($350/month), and a custom Enterprise plan.
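Both Hightouch and Census define what to sync as a SQL query run in the warehouse. The pattern is easy to reproduce locally, with SQLite standing in for the warehouse (the table, columns, and segment threshold are invented for this sketch):

```python
import sqlite3

# In-memory database standing in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (account TEXT, events INTEGER)")
conn.executemany("INSERT INTO usage VALUES (?, ?)",
                 [("acme", 120), ("acme", 80), ("globex", 5)])

# The "model": a SQL transformation that shapes raw warehouse
# rows into the records a CRM sync would receive.
rows = conn.execute("""
    SELECT account,
           SUM(events) AS total,
           CASE WHEN SUM(events) >= 100 THEN 'upsell' ELSE 'nurture' END AS segment
    FROM usage
    GROUP BY account
    ORDER BY account
""").fetchall()

print(rows)  # [('acme', 200, 'upsell'), ('globex', 5, 'nurture')]
```

The transformation runs where the data lives; the reverse ETL tool only receives the finished, CRM-shaped result set, which is why these tools ask for SQL rather than a pipeline definition.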
Hevo

[Hevo](https://hevodata.com/) is a popular cloud-based data integration platform that automates data pipelines, including reverse ETL capabilities via Hevo Activate. It allows businesses to consolidate data from various sources into a single repository, like a DWH, and then use it in Salesforce. Compared to [Skyvia](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia), it supports somewhat fewer connectors (150+), and only 15+ destinations. On the free plan, the number of available connectors drops to 50+.

Key Features
- Real-time data processing.
- Automated schema mapping from source to destination.
- Alerting and monitoring for any issues in the data pipeline.

Pros
- The platform is no-code and easy to use, even for non-technical users.
- Strong security features, ensuring data safety and compliance.
- Handles large data volumes.

Cons
- Depending on the scale and complexity of data needs, pricing can be a deterrent, especially for small businesses.
- Customization for complex scenarios might be limited.

Pricing

Hevo provides usage-based [pricing](https://hevodata.com/pricing/pipeline/), including a full-featured 14-day free trial and a connector-limited free plan.

Matillion

[Matillion](https://www.matillion.com/) is a nice choice for businesses leveraging cloud data warehouses for reverse ETL processes into Salesforce. It's cloud-based, no-code/low-code, works through a web-based wizard interface, and supports 150+ connectors.

Key Features
- Numerous pre-built components for connecting with data sources and third-party APIs, including Salesforce.
- Advanced data processing features, like ML and data enrichment.

Pros
- Easy-to-use graphical interface.
- Integrates with various data sources and cloud DWHs, like Amazon Redshift, Google BigQuery, Snowflake, etc.

Cons
- Pricing might be a barrier for companies with limited data integration needs.
- Extensive data processing tasks can require significant cloud resources, increasing operational costs.

Pricing

The consumption-based [pricing](https://www.matillion.com/pricing) includes a 30-day free trial plus 500 Matillion credits. The platform doesn't offer a free plan.

Workato

[Workato](https://www.workato.com/) offers a platform for connecting apps and automating business workflows. Though primarily focused on workflow automation and application integration, it can also be used for reverse ETL tasks, particularly syncing data from various sources into Salesforce.

Key Features
- Automated workflows, allowing for efficient data synchronization and business process automation.
- Real-time data integration and automation capabilities.
- Customizable workflows.
- Advanced data mapping and transformation functionality.

Pros
- Handles a wide variety of integration and automation scenarios.
- Suits any business size and can scale as business needs grow.

Cons
- The UI is user-friendly for basic tasks, but complex integration scenarios might require a deeper understanding of data workflows.
- Large-scale or highly complex integrations may need performance tuning.

Pricing

Workato uses a pay-as-you-go [pricing](https://www.workato.com/pricing) model, with a free trial available.

Informatica

[Informatica](https://www.informatica.com/) is one of the leaders in cloud data management and integration, particularly suited to enterprises and complex data integration scenarios. You can use it in the cloud or on-premises. It supports just [90+](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia) connectors, compared to Skyvia's [170+](https://skyvia.com/connectors/), but this iPaaS platform is good enough for reverse ETL tasks into Salesforce.

Key Features
- Powerful connectivity options for cloud-based, on-premises, and hybrid apps.
- Data transformation and mapping capabilities, essential for preparing and formatting data for Salesforce.
- AI and machine learning for intelligent data integration.

Pros
- Covers a wide range of requirements.
- Reliable and scalable.
- Supports complex integrations.

Cons
- The solution is complex, despite the friendly UI.
- The platform requires significant infrastructure and maintenance.

Pricing

The [pricing](https://www.informatica.com/products/cloud-integration/pricing.html) is consumption-based, with a free trial.

Step-by-Step Tutorial: Using Skyvia for Reverse ETL with Salesforce

Using Skyvia for reverse ETL with Salesforce enables businesses to centralize and leverage their data for different strategic purposes. It supports many business cases, such as:

- Synchronization of customer data from multiple sources to create a unified customer view.
- Aggregation of sales data from different platforms for comprehensive analysis.
- Integration of marketing data from different channels to analyze campaign effectiveness.
- Syncing financial data from accounting software or ERP systems for a holistic financial view.
- Bringing in product usage data from analytics tools to understand customer engagement.
- Syncing inventory data from supply chain management systems for real-time inventory tracking.
- Consolidation of employee data from HR systems for managing workforce information.
- Aggregation of regulatory compliance data from across the zoo of systems for comprehensive compliance reporting.
- Data integration from partner and vendor management apps for better collaboration.

Now, let's use Skyvia to move data from a DWH or other sources back into Salesforce in a few simple steps.

Setting up the account

If you don't have a Skyvia account, [sign up here](https://app.skyvia.com/register). Once your account is set up, log in to your Skyvia workspace.
Connecting the Data Warehouse

1. In the Skyvia workspace, click + NEW -> Connection to create a new connection.
2. Choose your data warehouse from the list of supported data sources (e.g., [Amazon Redshift](https://skyvia.com/connectors/redshift), [Google BigQuery](https://skyvia.com/connectors/google-bigquery)).
3. Enter the necessary credentials and connection details for your data warehouse.

NOTE: By default, all connections are saved as untitled. For your convenience, set the scenario name at the top of the screen.

Connecting to Salesforce

1. Click + NEW -> Connection.
2. Select Salesforce from the list of available data sources.
3. Follow the prompts to authenticate and grant Skyvia access to your Salesforce account.

Configuring Data Integration

1. Click + NEW and select the integration type (Import).
2. Select your data warehouse as the source.
3. Choose Salesforce as the target.
4. Create the integration task: select the source object and the desired operation (INSERT, UPDATE, UPSERT, DELETE).
5. Define how the data from the source should map to the corresponding fields in Salesforce (Skyvia offers a visual editor for mapping fields).
6. Apply any necessary data transformations (you might need to transform data formats to match Salesforce requirements) and click Save.
7. Click Create on the Integration screen.

Scheduling and Running the Integration

You can schedule the integration to run automatically at regular intervals. Run the integration; you may monitor the progress and view logs in the Monitor and Log tabs.

Checking the Integration Status

If there are any errors after running the package, check Skyvia's logs and error messages to troubleshoot and resolve issues.

NOTE: You may set up automatic sending of error messages.

Verifying Data in Salesforce

Log in to your Salesforce account to verify that the data was updated or added as expected.
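Conceptually, the import task configured above performs a column mapping followed by an UPSERT keyed on an ID. A rough, hypothetical Python sketch of that flow (the field map, record shapes, and names are invented for illustration; this is not Skyvia's actual internals):

```python
# Warehouse rows, keyed by warehouse column names (illustrative).
source_rows = [
    {"acct_id": "A1", "full_name": "Jane Palmer", "ltv": 1800.0},
    {"acct_id": "A2", "full_name": "John Reyes",  "ltv":  240.0},
]

# The visual mapping step: warehouse column -> Salesforce field.
field_map = {"acct_id": "AccountId", "full_name": "Name", "ltv": "LifetimeValue__c"}

# Target CRM state, keyed by the ID an UPSERT matches on.
crm = {"A1": {"AccountId": "A1", "Name": "Jane P.", "LifetimeValue__c": 900.0}}

def upsert(target, rows, mapping, key="acct_id"):
    """Update records that already exist, insert the rest -- an UPSERT."""
    for row in rows:
        record = {mapping[col]: val for col, val in row.items()}
        target[row[key]] = record
    return target

upsert(crm, source_rows, field_map)
print(sorted(crm))        # ['A1', 'A2']
print(crm["A1"]["Name"])  # Jane Palmer  (stale CRM value replaced from the warehouse)
```

The UPSERT operation is what keeps the sync idempotent: rerunning the integration refreshes existing Salesforce records instead of duplicating them.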
Conclusion

Reverse ETL capabilities with Salesforce are a strategic move for companies ready to fully leverage their data assets. The key word describing the results of this integration is "more":

- More efficient operations.
- More effective customer engagement.
- More informed decision-making.

Such scenarios build a bridge between data analysis and operational execution. Marketing and sales teams know that Miss Jane P. has a cat and likes daisies, while Mrs. Palmer has three kids and hates winter. Their purchasing histories differ only slightly, but this information helps you offer each of them something special, keeping them happy and coming back with more wishes. Satisfied clients, in turn, bring more people into your business.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
How To Transform Your Data Processes Using SaaS ETL Tools?
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), December 13, 2024

With the rapid implementation of SaaS applications and IoT devices, companies face data management challenges. Since each app has its own settings, preferences, country-dependent taxation and currency rules, etc., it's necessary to align all data from different systems. SaaS ETL platforms address this challenge by gathering, organizing, and standardizing data from various sources in a unified system to provide better control of all corporate operations. In contrast to traditional [ETL tools](https://skyvia.com/blog/etl-tools/), SaaS solutions are more scalable and cost-effective. This article explores the key features of SaaS ETL platforms and their benefits. We also provide several hints on selecting the best solution for your business.
Table of Contents
- Functionality of SaaS ETL Tools
- Advantages of SaaS ETL Tools
- Top SaaS ETL Tools in 2025 (Skyvia, Talend, Fivetran, Stitch, Matillion, Hevo Data, Airbyte, Informatica)
- Use Cases of SaaS ETL Tools in Different Industries
- Overcoming Common Challenges with SaaS ETL Tools
- Choosing the Right SaaS ETL Tool for Your Business
- Final Thoughts
- FAQs

Functionality of SaaS ETL Tools
The primary objective of SaaS ETL services is to convert raw data sets into meaningful insights. They perform excellently because they combine the best of two worlds: delegating infrastructure management to cloud providers (SaaS) and automating data integration (ETL). SaaS ETL tools provide the following features:
- Data transfer in batches
- Real-time synchronization
- Mapping features
- Data transformation options
- Scheduling parameters for data loading and updates
- Integration capabilities with other tools

Advantages of SaaS ETL Tools
It's common for businesses to have a CRM system, a database, an inventory management tool, and many other applications. To obtain a unified view of all business operations, data from such dispersed systems needs to be gathered in one place. This is possible with SaaS ETL tools that can collect, organize, and move data across apps. Here are some other benefits these solutions provide.
- Data centralization. Consolidate data from multiple sources in one place.
- Automated data integration. Create and manage [ETL data pipelines](https://skyvia.com/learn/etl-pipeline-meaning) within an intuitive no-code user interface with scheduling features.
- Data accuracy. Apply transformations (cleansing, duplicate removal, blending) to ensure better data accuracy.
- Cost-efficiency. Eliminate the need to manage infrastructure for data integration tasks by delegating it to cloud providers under the SaaS model.
- Scalability. Enjoy the flexibility of on-demand resource allocation, which meets fluctuating data needs.
- Reporting and analytics.
Prepare comprehensive reports based on the consolidated data to obtain a clear view of your business health.
- Data ecosystem. Benefit from solutions that contain not only [data integration tools](https://skyvia.com/blog/data-integration-tools/) but also other services for data-related operations (backup, query, workflow automation, etc.).

Top SaaS ETL Tools in 2025

1. Skyvia
[Skyvia](https://skyvia.com/) is a universal cloud platform designed for a wide range of data-related tasks. It contains several products for data integration, backup, OData and SQL endpoint creation, SQL querying, and workflow automation. The [Data Integration](https://skyvia.com/data-integration) solution of Skyvia contains a range of tools for building SaaS ETL pipelines:
- [Import](https://skyvia.com/data-integration/import). This ETL tool helps you configure integration scenarios, connect different data sources, and transform data on the go.
- [Replication](https://skyvia.com/data-integration/replication). It automates cloud data replication to databases or data warehouses, keeping data consistent and up to date.
- [Synchronization](https://skyvia.com/data-integration/synchronization). It supports bi-directional data sync between SaaS apps and databases, ensuring that data remains consistent across platforms.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/index.html). It provides a visual wizard where you can design ETL pipelines involving multiple SaaS apps and other data sources.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/index.html). It allows you to orchestrate complex integration scenarios by managing the execution order of various tasks, including data import, [export](https://skyvia.com/blog/connect-salesforce-to-sql-server/), replication, and synchronization.

Benefits
- 200+ connectors to SaaS apps, databases, and data warehouses.
- Intuitive GUI in all areas of Skyvia.
- The web-based nature of the platform allows access from anywhere.
- Possibility to design both simple and complex ETL pipelines.
- Scheduling options for automated data integration.
- Incremental data updates.

User Reviews
There are [200+ reviews on the G2 platform](https://www.g2.com/products/skyvia/reviews), with an average rating of 4.8/5 for Skyvia. Users praise the ease of use and integration, excellent data management, and a wide set of features. However, they note a lack of detailed error reporting for integrations.

Pricing
[Skyvia prices](https://skyvia.com/pricing) start at $79 per month and grow gradually depending on the number of records per month, scheduled integrations, and features. You can always try Skyvia with a fully featured free tier.

2. Talend
[Talend](https://www.talend.com/) is a data integration and management platform that provides features such as data transformations, data governance, data cleansing, and many more. This solution has two versions: the free Talend Open Studio and the paid Talend Data Fabric with a rich feature set.

Benefits
- 300+ connectors to various data sources.
- Unified platform for all data integration scenarios.
- Easy-to-use interface with drag-and-drop functionality.
- Real-time data processing.
- Support for both SaaS apps and on-premises services.

User Reviews
There are [100+ reviews on the G2 platform](https://www.g2.com/products/talend-cloud-data-integration/reviews), with an average rating of 4.3/5 for Talend. Users note Talend's ease of use, automation capabilities, and strong collaboration features. However, they also mention challenges like a difficult setup process and a steep learning curve.

Pricing
Talend offers four [pricing plans](https://www.qlik.com/us/pricing/data-integration-products-pricing), each of which provides a free trial.
[Skyvia vs Talend](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)

3. Fivetran
[Fivetran](https://www.fivetran.com/) is a cloud-based data integration platform that automates ETL processes.
It aims to simplify data movement between various sources, applying the needed data transformations on the go. Fivetran offers pre-built connectors to SaaS tools, databases, applications, and other data sources.

Benefits
- 500+ data source connectors.
- Automatic schema management.
- Incremental data loading.
- Monitoring and repair of data pipelines.

User Reviews
There are almost [400 reviews on the G2 platform](https://www.g2.com/products/fivetran/reviews), with an average rating of 4.2/5 for Fivetran. Users value the large number of connectors and the ease of integration setup. At the same time, they express concerns about the low responsiveness of customer support.

Pricing
Fivetran offers four [pricing tiers](https://www.fivetran.com/pricing), including a free one. Each of these tiers offers a trial period with the complete feature set at your disposal.
[Skyvia vs Fivetran](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia)

4. Stitch
[Stitch](https://www.stitchdata.com/) is another data integration tool, though it's designed for [ELT pipelines](https://skyvia.com/learn/what-is-elt) rather than ETL processes. This tool helps businesses replicate their data from SaaS apps to data warehouses.

Benefits
- 130+ connectors to various tools.
- Incremental data replication.
- Support for custom integrations.

User Reviews
There are [around 70 G2 reviews about Stitch](https://www.g2.com/products/talend-stitch/reviews?source=search) with a 4.4/5 rating. Users praise the simplicity and ELT efficiency of this solution. However, they also note limitations in data management and error reporting.

Pricing
Stitch offers three monthly [pricing plans](https://www.stitchdata.com/pricing/) with variable costs that depend on the amount of ingested data:
- Standard starts at $100/month.
- Advanced starts at $1,250/month.
- Premium starts at $2,500/month.
[Skyvia vs Stitch](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia)

5.
Matillion
[Matillion](https://www.matillion.com/) is another data integration tool providing both SaaS [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) capabilities. Its cloud-native design allows users to fully harness the computing resources of cloud platforms. Therefore, Matillion can easily scale data integrations and transformations for SaaS tools.

Benefits
- 150+ pre-built connectors.
- Uses generative AI for data pipelines.
- Data transformations on the DWH side.
- Job orchestration and scheduling.

User Reviews
There are [around 80 G2 reviews on Matillion](https://www.g2.com/products/matillion-2023-06-26/reviews) with a 4.4/5 rating. Users praise the simplicity and ELT efficiency of this solution. However, they note the limited feature set and high costs as its major drawbacks.

Pricing
Matillion offers three [pricing plans](https://www.matillion.com/pricing), which are feature- and consumption-based.
[Skyvia vs Matillion](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia)

6. Hevo Data
[Hevo Data](https://hevodata.com/) is a no-code, cloud-based ETL and ELT platform that automates data integration from various sources to a data warehouse. With automated schema mapping, pre-load transformations, and a user-friendly interface, Hevo supports both batch and real-time data processing.

Benefits
- 150+ pre-configured connectors to data sources.
- No-code interface for visual pipeline building.
- Data aggregation, cleaning, and enrichment.
- Real-time monitoring and alerts.

User Reviews
The Hevo Data rating is 4.4 out of 5 based on [200+ reviews on the G2 platform](https://www.g2.com/products/hevo-data/reviews). Users praise its power in data management and a rich set of features. At the same time, some users report data inaccuracies and pipeline issues.

Pricing
There are four [pricing tiers](https://hevodata.com/pricing/pipeline/?), including a free one.
[Skyvia vs Hevo Data](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia)

7.
Airbyte
[Airbyte](https://airbyte.com/) is an open-source data integration platform with large contributions from community members. This solution can be hosted in your production environment on in-house infrastructure or run on Airbyte Cloud as a fully managed service.

Benefits
- 400+ pre-built connectors to different data sources.
- Customizable data transformations.
- Incremental data updates.
- Detailed logging and monitoring.

User Reviews
The Airbyte rating is 4.5 out of 5 on the [G2 platform](https://www.g2.com/products/airbyte/reviews). Users like the abundance of connectors and the quick pipeline setup. At the same time, they note that Airbyte is more developer-oriented and requires technical expertise.

Pricing
Airbyte is free unless cloud hosting and management are needed. The [Airbyte price](https://airbyte.com/pricing) depends on consumption and is estimated at $2.50 per [credit](https://docs.airbyte.com/cloud/managing-airbyte-cloud/manage-credits).
[Skyvia vs Airbyte](https://skyvia.com/etl-tools-comparison/airbyte-alternative-skyvia)

8. Informatica Cloud Data Integration
[Informatica](https://www.informatica.com/it/) is a cloud-based data integration solution that helps businesses connect their data across SaaS apps and on-premises systems. This SaaS ETL tool has a cloud-native architecture, allowing easy scaling and high availability.

Benefits
- 500+ ready-to-use data connectors.
- AI-based insights for ETL pipeline design.
- Integrated Git support for collaboration.
- Visual interface with drag-and-drop functionality.
- Scheduling options for automated data transfer.

User Reviews
[Informatica's G2 rating is 4.3 out of 5](https://www.g2.com/products/informatica-cloud-data-integration/reviews?source=search), derived from 100+ reviews. Users appreciate the abundance of connectors and Informatica's cloud-native architecture. However, they often mention technical issues and a lack of features for more efficient integration.
Pricing
Prices are discussed individually with Informatica sales managers.
[Skyvia vs Informatica](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia)

Use Cases of SaaS ETL Tools in Different Industries
SaaS ETL solutions are versatile and can be applied across different industries. However, certain sectors heavily rely on these tools for specific use cases.

Healthcare
The most common scenario for using SaaS ETL services in the healthcare industry is consolidating and analyzing data. This involves collecting data from medical devices, pharmaceutical records, the public health sector, etc., to improve diagnostics and support research. Explore a [real healthcare case study](https://skyvia.com/case-studies/ahca) of how the American Healthcare Association improved its reporting with Skyvia.

E-Commerce
SaaS ETL tools are commonly used in the retail industry to integrate data between e-commerce platforms, CRMs, ERP systems, and other software. Synchronizing data between these tools contributes to a 360-degree view of the customer profile. Data integration also helps improve customer experiences, increase sales, and analyze campaigns. See how [Bakewell cookshop](https://skyvia.com/case-studies/bakewell-cookshop) synchronized its retail shop data with the BigCommerce online platform using Skyvia.

Finance
In the financial sector, SaaS ETL solutions help increase the security of online transactions. Companies gather data from financial applications and send it for analysis in real time. This makes it possible to prevent fraud, detect unauthorized transactions, and ensure the robustness of financial operations. Explore how Niso optimized its financial operations with the Skyvia data integration solution.

Manufacturing
Companies in the manufacturing sector largely depend on vendors that supply critical spare parts.
SaaS ETL tools help optimize supply chain management by consolidating data from multiple sources in a unified system. That way, businesses get real-time insights into vendor performance, which helps them identify the most reliable suppliers and optimize operations. See how Skyvia has helped Teesing [streamline all data processes](https://skyvia.com/case-studies/teesing).

Overcoming Common Challenges with SaaS ETL Tools
Despite the effectiveness and opportunities SaaS ETL offers to businesses, it also has its weak points. It's better to be informed about possible challenges and avoid them with [best practices](https://skyvia.com/blog/salesforce-quickbooks-integration/) when elaborating your data integration strategy.
- Data security concerns. The major threats to data integrity may arise from unauthorized access, human error, and malicious attacks. Therefore, select a SaaS ETL solution that implements modern security protocols, grants data access only to authorized people, and complies with HIPAA, GDPR, and other regulatory acts.
- Integration with legacy systems. Most SaaS ETL solutions work with modern online apps and on-premises systems. Not many of them support [legacy systems](https://skyvia.com/learn/legacy-system), which makes it challenging for companies with obsolete software to ensure effective data integration management.
- Data volume and complexity. Companies need to select a solution that can handle fluctuating data volumes and process them properly.

Choosing the Right SaaS ETL Tool for Your Business
When selecting a SaaS ETL solution, the first step is to evaluate its core functionality to ensure it aligns with your needs. However, there are other important factors to consider when looking for a data integration solution, such as the [ETL cost](https://skyvia.com/blog/etl-cost/), rating on popular review platforms, number of available connectors, etc. We have prepared a table with key selection criteria for the SaaS ETL tools.
We hope it will facilitate your decision-making process and help you make the right choice.

| SaaS ETL tool | Rating on G2 | Connectors | Price | Free tier |
| --- | --- | --- | --- | --- |
| Skyvia | 4.8/5 | 200+ | Starts at $79/month | Yes |
| Talend | 4.3/5 | 300+ | Custom | Yes (open-source version) |
| Fivetran | 4.2/5 | 500+ | Custom | Yes |
| Stitch | 4.4/5 | 130+ | Custom | No |
| Matillion | 4.4/5 | 150+ | Custom | No |
| Hevo Data | 4.4/5 | 200+ | Starts at $239/month | Yes |
| Airbyte | 4.5/5 | 400+ | $2.50 per credit | Yes (open-source version) |
| Informatica | 4.3/5 | 500+ | Custom | No |

Final Thoughts
The proliferation of SaaS applications has brought new challenges in data integration. Companies need to manage data from multiple systems while complying with data security requirements and maintaining data quality. Since it's nearly impossible to do all that manually, the rise of SaaS ETL tools has become a natural progression. Such solutions can automate data integration tasks and ensure data integrity. To select the right solution for your business, pay attention to:
- functionality
- rating on review platforms
- cost
- other factors

Consider using Skyvia for your data integration needs, since it can implement various scenarios, set schedules for data transfer, apply incremental updates, etc. It offers a free tier to start, allowing you to try all its features!

FAQs

What is ETL in SaaS?
This is the process of data extraction, transformation, and loading for SaaS applications. Other sources, including hybrid installations and on-premises systems, might also participate in data pipelines. Overall, SaaS ETL aims to facilitate data integration, analytics, and reporting.

What is the difference between traditional ETL and SaaS ETL?
The comparison of traditional and SaaS ETL solutions can be made using four key criteria:
- Infrastructure management. Traditional ETL tools operate on on-premises servers, while SaaS ETL tools are cloud-based.
- Deployment and maintenance.
In-house IT teams deploy and maintain traditional ETL tools, while cloud providers are in charge of SaaS tool implementation and resource provisioning.
- Data sources. Traditional ETL tools primarily support structured data in traditional databases, while SaaS ETL tools can deal with multiple application types (databases, data warehouses) and handle unstructured data.
- Scalability. SaaS ETL tools usually provide much more flexibility and scalability than traditional ones.

How do SaaS ETL tools handle large volumes of data?
Thanks to their cloud-based nature, SaaS ETL solutions can easily scale up and down to meet current data volumes. Such tools also support an incremental approach, extracting only new and updated data instead of entire datasets on subsequent loads. SaaS ETL services also apply compression techniques to large data volumes to reduce the size of datasets in transit.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
How to connect Salesforce to Google BigQuery
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), May 28, 2025

If you're moving data between Salesforce and Google BigQuery, it's time to stop copy-pasting and start synchronizing. Whether you're scaling fast or trying to get cleaner insights, this integration brings the CRM and analytics under one roof. No more second-guessing reports or waiting on data dumps. In this guide, we'll walk you through how to connect the dots. Simply, smartly, and without the fluff.

Table of Contents
- Why Integrate Salesforce with Google BigQuery?
- What Are Salesforce and Google BigQuery?
- Available Methods to Connect Salesforce to BigQuery
- Method 1: Manual Salesforce to BigQuery Integration (Custom API)
- Method 2: No-Code Salesforce to BigQuery Integration
- Method 3: Custom Coding
- Benefits of Salesforce and Google BigQuery Connection
- Conclusion

Why Integrate Salesforce with Google BigQuery?
Connecting the two is a no-brainer if you're running Salesforce for CRM and using BigQuery for analytics. You're already behind when your sales and marketing data live in one place and your reporting stack lives somewhere else. Syncing Salesforce with BigQuery lets you:
- Build live BI dashboards your execs can actually trust.
- Track marketing attribution end-to-end, from ad click to closed deal.
- Get sharper sales forecasting using real-time pipeline data.
- Ditch clunky CSV exports and start automating insights.

Without the integration? You're stuck with manual data pulls, mismatched metrics, and teams making decisions off outdated info. It slows down everything from campaign optimization to boardroom reporting. Hooking these two platforms together opens the door to fast, reliable, always-on analytics. This article describes the possible ways to integrate the Salesforce CRM and the Google BigQuery data warehouse, their pros, cons, and best use cases.

What Are Salesforce and Google BigQuery?
Before diving into how to [connect](https://www.youtube.com/watch?v=mu36kKxHDws&ab_channel=Skyvia) these two systems, let's clarify what each platform does and why pairing them up is such a decisive move.

What is Salesforce?
This go-to CRM tracks leads, manages customer relationships, and aligns the sales and marketing teams. Whether you're closing deals or building campaigns, it's where all your customer data lives. Think of it as the sales engine and marketing control room rolled into one.

What is Google BigQuery?
This Google Cloud serverless data warehouse handles massive datasets fast.
It\u2019s built for speed, scalability, and SQL-based analysis. No infrastructure headaches, no waiting around for results. If Salesforce is your data source, BigQuery is where the magic happens: analytics, dashboards, machine learning, etc. Available Methods to Connect Salesforce to BigQuery There are multiple ways to get the Salesforce data flowing into BigQuery. Here are the three main options, each with its trade-offs depending on your team\u2019s skillset, goals, and how hands-on you want to be. Method 1: Custom Integration via API . This approach taps into the Salesforce and BigQuery APIs directly. It\u2019s powerful and flexible. You can tailor the flow to exactly what you need. But it also means dealing with OAuth tokens, API quotas, pagination, error handling, etc. You\u2019ll need a dev team or strong technical chops to clean it. Method 2: No-Code Integration . If you\u2019d rather skip the code and let a tool do the heavy lifting, Skyvia\u2019s a great pick. It\u2019s a no-code cloud integration platform that connects Salesforce and BigQuery with just a few clicks. You can schedule syncs, map fields, monitor flows, and even handle incremental updates from the browser. Method 3: Custom Coding . Think Python scripts, middleware apps, or ETL pipelines using tools like Airflow or dbt. This approach gives users total control over the data flow, transformation logic, and scheduling. But they\u2019re also responsible for building and maintaining everything. Great for engineering teams that want to fine-tune every step. Which Salesforce to BigQuery Method Should You Choose? Let\u2019s briefly compare these approaches. Criteria Custom API Integration No-Code Integration Custom Coding Skill Level Required Advanced Beginner to Intermediate Advanced Scalability High Moderate to High High (with effort) Maintenance Needs Ongoing upkeep required. Low. Managed platform. High. You\u2019re on the hook. Real-Time Support Possible, but complex. Near real-time supported. 
Possible with effort. Data Transformation Full control; custom logic. Built-in, user-friendly tools. Total flexibility. Security High, you control everything. Managed by platform. Depends on implementation. Costs Development and maintenance. Subscription-based, predictable. Variable:\u00a0\u00a0infrastructure + dev. Best For Complex, highly customized integrations. Fast, no-code integrations with minimal fuss. Full control, complex ETL pipelines. Method 1: Manual Salesforce to BigQuery Integration (Custom API) This approach involves building your own pipeline by pulling data from Salesforce using its API, shaping that data how you want it, and then loading it into BigQuery, either directly or through Google Cloud Storage (GCS). It\u2019s the most hands-on method, giving you full control but demanding coding skills and ongoing care. Best For Teams with strong developer resources. Projects needing fine-tuned, custom data flows. Complex schemas or business rules that off-the-shelf tools can\u2019t handle. Pros Total control over data extraction, transformation, and loading. It can be optimized for performance and security. Flexibility to implement real-time or near-real-time syncing. Cons Requires advanced API and dev skills. It can be time-consuming to build and maintain. Handling errors, rate limits, and auth tokens adds complexity. Step-by-step Guide Step 1: Extract Data from Salesforce API Register a connected app in Salesforce for OAuth credentials. Obtain an OAuth access token. Use REST API endpoint to pull data (e.g., /services/data/vXX.X/query/?q=SOQL). Handle pagination and API limits. 
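The pagination point deserves a sketch of its own: Salesforce returns query results in pages (up to 2,000 records each), and when a response carries `"done": false`, its `nextRecordsUrl` field points at the next batch. Below is an illustrative, hedged example of following those links; `fetch_page` stands in for an authenticated `requests.get(...).json()` call, and the stubbed responses are invented to show the control flow without a live org:

```python
def fetch_all_records(fetch_page, first_url):
    """Follow nextRecordsUrl links until Salesforce reports done=true."""
    records = []
    url = first_url
    while url:
        page = fetch_page(url)
        records.extend(page["records"])
        # done=true means this was the last page; otherwise keep following.
        url = None if page.get("done") else page["nextRecordsUrl"]
    return records

# Stubbed two-page query result standing in for real API responses.
PAGES = {
    "/services/data/v54.0/query/?q=SELECT+Id+FROM+Account": {
        "records": [{"Id": "001A"}, {"Id": "001B"}],
        "done": False,
        "nextRecordsUrl": "/services/data/v54.0/query/01g-next",
    },
    "/services/data/v54.0/query/01g-next": {
        "records": [{"Id": "001C"}],
        "done": True,
    },
}

all_records = fetch_all_records(PAGES.get, "/services/data/v54.0/query/?q=SELECT+Id+FROM+Account")
print(len(all_records))  # 3 records collected across two pages
```

In a real pipeline, the same loop wraps the authenticated request from the extraction snippet, with back-off added when you approach API limits.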
Sample Python snippet to get data:

```python
import requests

# Set up OAuth token and endpoint
access_token = 'YOUR_ACCESS_TOKEN'
instance_url = 'https://yourInstance.salesforce.com'
query = "SELECT Id, Name FROM Account LIMIT 100"
headers = {
    'Authorization': f'Bearer {access_token}',
    'Content-Type': 'application/json'
}
response = requests.get(f'{instance_url}/services/data/v54.0/query/?q={query}', headers=headers)
data = response.json()
print(data)
```

Step 2: Transform and Prepare the Data
1. Parse the JSON data into a flat structure.
2. Clean or format fields as needed (dates, booleans, nested objects).
3. Save the transformed data as CSV or JSON for BigQuery ingestion.

Sample Python snippet to flatten and save CSV:

```python
import pandas as pd
from pandas import json_normalize

records = data['records']  # from the previous step
df = json_normalize(records)
# Clean columns or format as needed
df.to_csv('salesforce_data.csv', index=False)
```

Step 3: Load into BigQuery via GCS or Directly
1. Upload files to a Google Cloud Storage bucket (if using staging).
2. Use the BigQuery UI or client library to load data from GCS or stream directly.
3. Monitor load jobs and handle errors.

Sample Python snippet to load CSV from GCS to BigQuery:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = 'your-project.your_dataset.your_table'
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
uri = 'gs://your-bucket/salesforce_data.csv'
load_job = client.load_table_from_uri(
    uri,
    table_id,
    job_config=job_config
)
load_job.result()  # Waits for the job to complete.
print(f"Loaded {load_job.output_rows} rows into {table_id}.")
```

Method 2: No-Code Salesforce to BigQuery Integration
If coding isn't your strong suit or you want to get things moving fast, using a no-code platform like [Skyvia](https://skyvia.com/) is the way to go.
It handles ETL, ELT, reverse ETL, workflow automation, cloud-to-cloud backup, data management, and connectivity while you focus on what data to move and when. Perfect if you want a solid [integration](https://skyvia.com/data-integration/integrate-salesforce-google-bigquery) without the headaches of writing custom code.

Best For
- Business analysts, marketers, and non-developers.
- Teams needing quick setups with minimal fuss.
- Projects where reliability and ease of use trump full customization.

Pros
- No coding required: drag, drop, and configure.
- Managed platform means less maintenance and monitoring.
- Supports scheduling and incremental updates out of the box.
- Clear UI for mapping fields and monitoring syncs.

Cons
- Less flexibility for highly custom data transformations.
- Some complex use cases might require manual tweaking outside the platform.
- Limited control over error handling and recovery in large, complex syncs.

Step-by-step Guide
Before starting your integration journey, register in [Skyvia](https://skyvia.com/) to [create connections](https://docs.skyvia.com/connections/#creating-connections) to [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Google BigQuery](https://docs.skyvia.com/connectors/databases/google_bigquery_connections.html). It takes a few steps to implement different [Salesforce and Google BigQuery integration](https://www.youtube.com/watch?v=mu36kKxHDws&ab_channel=Skyvia) scenarios here. You can:
- Replicate Salesforce objects and data to BigQuery.
- Enrich Salesforce data with data from BigQuery.
- Build complicated flows involving other cloud apps or databases.

Step 1: Create a Salesforce Connection
1. Log in to Skyvia.
2. Click + Create New > Connection.
3. Select Salesforce as the connector, enter the appropriate credentials, and authorize access.

Note that Salesforce supports two authentication methods: OAuth and Username & Password.
Step 2: Create BigQuery Connection

Similarly, create a new BigQuery connection. You'll upload your Google service account JSON key and select the target dataset. Note that there are also two authentication methods for Google BigQuery: user account or JSON key. The Project Id and DataSet Id can be found in the Google API Console.

Step 3: Use Skyvia Import or Replication

Create a new import or replication task. Choose Salesforce as the source and BigQuery as the destination.

Skyvia Import

With Skyvia Import, you can enrich [Salesforce data with BigQuery data](https://skyvia.com/data-integration/integrate-salesforce-google-bigquery). This scenario allows loading CSV files manually (when needed) and automatically on a schedule. To set up the Import integration in Skyvia:

1. Log in to Skyvia, click +Create NEW in the top menu, and select Import in the Integration section.
2. Click on the Data Source database or cloud app Source type and select the BigQuery connection.
3. Select the Salesforce connection as a Target.
4. Click Add new to create an integration task. You can add multiple tasks in one integration.
5. Select the object to import data from and use filters to limit the number of records if needed.
6. Specify the object to import data to and select the action to perform.
7. Assign field mapping. Skyvia maps fields with the same names automatically.
8. Run the integration and monitor the results, the same as described above.

Skyvia Replication

[Replication](https://skyvia.com/data-integration/replicate-salesforce-to-google-bigquery) copies the Salesforce data structure and the data itself, then creates the same data structure in BigQuery and keeps it updated if needed. The example below shows how to replicate available Salesforce objects and data to BigQuery. To create a replication integration:

- Log in to Skyvia, click +Create NEW in the top menu, go to the Integration section, and select Replication.
- Select the Salesforce connection as a Source and the BigQuery connection as a Target.
- Choose the objects and fields to replicate. You can [set filters](https://docs.skyvia.com/data-integration/common-package-features/filter-settings.html) to limit the copied records if needed.
- Enable or disable the available [integration options](https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html#selecting-replication-options). The Incremental Updates option is enabled by default; it allows replicating only the changes made to the source records.
- Click Schedule to [set the integration schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) for automatic runs if needed.
- Run the integration and monitor the results. Skyvia keeps the results of each integration run and allows checking the number of successful and failed records in the [integration run history](https://docs.skyvia.com/data-integration/package-run-history.html). Use the Monitor and Log tabs to capture the run results, and click on a specific run to see the details.

As a result, you have a copy of Salesforce data in the BigQuery warehouse.

Method 3: Custom Coding

With custom coding, you write your own scripts or apps in Java, Python, or whatever suits your team to handle extraction, transformation, and loading exactly how you want. This method gives maximum control and flexibility, letting you fine-tune every step of the data journey. But heads up: it's not a walk in the park. Building a reliable, scalable pipeline takes serious programming resources and plenty of time. Plus, as the data landscape shifts or your platforms evolve, you'll need to keep updating the code to keep everything humming smoothly.

Best For

- Engineering teams with deep knowledge of programming languages and an understanding of business analytics needs.
- Use cases demanding custom business logic and transformations.
- Organizations ready to invest in building and maintaining bespoke pipelines.

Pros

- Full control over data flows, transformations, and error handling.
- Ability to handle complex or niche business requirements.
- No dependence on third-party tools or vendors.
- Can scale and optimize performance to fit your environment.

Cons

- Requires developers who really get both Salesforce and BigQuery internals.
- You have to handle every little quirk and edge case yourself.
- Needs dedicated testing and validation to ensure data completeness and quality.
- Frequent code updates are needed with any business or platform changes.

Benefits of the Salesforce and Google BigQuery Connection

Salesforce is a CRM giant that provides services for most business processes. Google BigQuery is a managed, scalable serverless technology designed to meet the modern challenges of petabyte-scale data storage, networking, and sub-second query response times. Google BigQuery and Salesforce integration combines the best of both platforms' capabilities:

- You can transfer Salesforce data to BigQuery to keep a copy of your Salesforce data in a secure, serverless environment, without worrying about storage space.
- There is no need to build and maintain complicated on-premises infrastructure, as BigQuery is a cloud-based serverless technology.
- You can build efficient collaboration between teams by consolidating data from different Salesforce services into BigQuery for custom reporting, analysis, and visualization.

Conclusion

If you've got a strong dev team and need absolute control with custom logic, Method 3 (Custom Coding) is your playground. But be ready to roll up your sleeves for a serious build-and-maintain job. If you want something faster, easier, and without the coding headache, Method 2 (No-Code with Skyvia) is where most users find their sweet spot.
It\u2019s reliable, flexible enough for most use cases, and lets companies focus on what really matters: just data, not the plumbing. And if you\u2019re all about precision but don\u2019t mind some hands-on work, Method 1 (Manual API Integration) gives you the best of both worlds: control without the full DIY marathon. Companies that are just starting or want to skip the tech stress might give Skyvia a shot. Setting up is a breeze, and you can see results fast. Plus, it grows with you as your data needs evolve. F.A.Q. for Salesforce to Google BigQuery How secure is Salesforce to BigQuery integration? Salesforce \u2013 BigQuery integrations can be highly secure when using tools that support OAuth , encrypted data transfer (HTTPS/SSL), IAM roles , and access controls . Skyvia , for example , doesn\u2019t store credentials and offers audit logs for added security . What types of data can I extract from Salesforce? You can extract standard and custom objects , including Leads , Contacts , Accounts , Opportunities , Cases , Activities , and custom fields . API access lets you query almost any structured CRM data stored in Salesforce . Can I automate the Salesforce\u2013BigQuery integration? Yes. Tools like Skyvia , Fivetran , and Airbyte support scheduled, incremental syncs to keep BigQuery up to date with Salesforce data \u2013 without manual exports. Automation can run daily, hourly, or in near real time, depending on the tool and plan. Do I need coding skills to set up the integration? Not necessarily . No-code platforms like Skyvia allow full setup via a browser-based UI. However , manual integrations using the Salesforce API or Google Cloud SDK require familiarity with Python , REST, and SQL. What are the common challenges when syncing Salesforce to BigQuery? Some common hurdles include API rate limits , data type mismatches , and flattening nested fields from Salesforce . Using a managed ETL tool helps address these with built-in mapping and error handling . 
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

Salesforce and ERP Integration: Maximizing Business Potential

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), November 3, 2023
Have you ever wondered why Salesforce and ERP integration can feel like trying to solve a puzzle blindfolded? If you're nodding your head, you're not alone. The journey to harmonizing these two vital systems can be an emotional roller coaster.

Imagine this: You're in the driver's seat of your business, trying to navigate the winding road of Salesforce and ERP integration. Frustration builds as data refuses to flow seamlessly. Processes become sluggish, and errors rear their ugly heads. It's an anxiety-inducing experience, like crossing a minefield without a map.

But fear not, because there's a silver lining to this cloud of chaos. In this guide, we're about to unveil a fantastic cloud integration tool that's easy to use. It's the key to addressing those pain points that keep you up at night. We understand that uncertainty and anxiety can cloud your path, but we're here to tell you that pain can become your greatest gain. So, fasten your seatbelt and get ready to discover how you can break free from the Salesforce and ERP integration chaos once and for all.

Table of Contents

- What is ERP?
- How Salesforce and ERP Integration Drives Business Success
- The Benefits of Salesforce and ERP Integration
- Advancing Salesforce and ERP Integration with Skyvia
  - STEP 1: Create Connections
  - STEP 2: Create Integration
  - STEP 3: Run, Test, and Adjust
- Best Practices in Salesforce and ERP Integration

What is ERP?

Enterprise Resource Planning, or ERP, is the backbone of operations in many businesses worldwide. It's like the central nervous system: it coordinates and streamlines various functions within an organization. ERP systems handle finance, human resources, manufacturing, and supply chain management.
Think of it as the conductor of a grand orchestra. It ensures that every instrument plays in harmony to create a beautiful symphony of efficiency.

Now, imagine ERP as a versatile tool that transcends industries. While it's commonly associated with manufacturing and distribution, it doesn't stop there. ERP has found a home in sectors as diverse as healthcare, finance, and retail. It's adaptable to the unique needs of each industry and offers tailored solutions to complex challenges.

But ERP systems come with their own benefits and challenges. The benefits are like the sweet fruits of hard labor: enhanced productivity, streamlined processes, and data-driven decision-making. The journey to put an ERP system in place, however, can be like a rollercoaster ride. The challenges include costs, time-consuming deployments, and employees' resistance to change.

With ERP's significance now clear, let's explore Salesforce's role in the integration puzzle. Is Salesforce an ERP system on its own? Or are they distinct entities that can coexist harmoniously? Let's unravel the mysteries in the upcoming section.

Salesforce and ERP: Are They the Same?

Now, it's time to uncover the relationship between Salesforce and ERP systems. Let's begin by unraveling the primary functions of Salesforce. Salesforce is a Customer Relationship Management (CRM) tool. It's like a seasoned detective, focusing on managing customer interactions with finesse. It's a hub for sales, marketing, and customer support activities. Imagine it as the Sherlock Holmes of your business: it helps you solve the mysteries of customer preferences, leads, and sales pipelines.

Now, let's step into the ring to compare and contrast Salesforce with ERP systems. ERP systems are like Swiss Army knives, handling various organizational functions. But Salesforce is the CRM scalpel, precision-focused on customer-centric operations.
The distinction lies in their core purposes. ERP caters to internal processes like finance and HR, while Salesforce prioritizes external customer interactions. But the question is: can Salesforce serve as an ERP?

This debate can be as polarizing as discussing which is better, coffee or tea. Some argue that Salesforce can indeed function as an ERP, pointing to smaller businesses with less complex needs as examples. It's like saying a Swiss Army knife can replace a toolbox in certain situations. But ERP systems are like seasoned captains steering the ship of enterprise operations, a responsibility that Salesforce, in its CRM essence, may not fully embrace. Still, Salesforce and ERP systems can coexist harmoniously. In the next section, we'll delve into the strategic importance of integrating Salesforce with ERP.

How Salesforce and ERP Integration Drives Business Success

Integrating Salesforce and ERP is like building a bridge between the customer-focused world and your internal operations. It means your team has access to accurate data to make decisive actions. It's like switching from navigating with an old, unreliable paper map to a GPS that guides you on the most efficient route.

Real-World Success Story from a Real-Estate Company

Let's look at a real-world example that showcases the potential of this integration. A real estate company had an ERP system that they needed to integrate with Salesforce and Mailchimp. They also needed to push some data back from Salesforce to the ERP system. So, they decided to look for third-party tools that could do this, and they found their dream tool and the seamless integration they had been looking for. Check out the full story [here](https://skyvia.com/case-studies/real-estate-company).

This real-world success story illustrates the potential of integration, transforming chaos into harmony. The following section will discuss the benefits of Salesforce and ERP integrations.
We will uncover the advantages that await businesses on this integration journey.

The Benefits of Salesforce and ERP Integrations

It's time to unlock the treasure chest of benefits of this integration quest.

Simple Data Distribution

Integrating Salesforce with your ERP system simplifies data distribution to a remarkable degree. It's like having a well-oiled machine that automatically shares information across departments. This streamlining not only saves time but enhances operational efficiency.

Advanced Control and Comprehensive Analytics

Integration offers you the reins to control your data and processes more effectively. With comprehensive analytics, you can delve deeper into your data and gain valuable insights that would otherwise have remained hidden in the chaos.

Boosted Efficiency and Elevated Customer Engagement

Integration streamlines business processes, making them as efficient as a high-speed train. This newfound efficiency extends to customer relationship management, where you can engage with customers in a more personalized and timely manner.

Minimized Inaccuracies and Duplication in Data Input

One of the benefits of integration is its ability to reduce errors and duplicate data entry. It's like having a diligent proofreader who spots typos before they mar your document. This isn't just about short-term gains; it's a long-term investment that saves costs and improves data quality over time.

But wait, there's more to uncover in our next section! We'll walk you through the steps of how to integrate Salesforce and ERP. Get ready to embark on a practical journey towards integration success.

Advancing Salesforce and ERP Integration with Skyvia

The cloud integration solution used by the real-estate company earlier is Skyvia, which is more than enough reason to bring you the easy steps to integrate Salesforce with your ERP system. And what more can you ask for?
At the time of writing, Skyvia has a [4.8 customer satisfaction rating on G2](https://www.g2.com/products/skyvia/reviews#reviews) and a [4.9 rating on Gartner Peer Insights](https://www.gartner.com/reviews/market/data-integration-tools/vendor/devart/product/skyvia). The numbers speak of a happy integration journey ahead. Whether you're connecting Salesforce to Microsoft Dynamics 365, NetSuite, or Xero, the key steps remain consistent.

General Steps

The following are the general steps:

- STEP 1: Create Connections
- STEP 2: Create Integration
- STEP 3: Run, Test, and Adjust

STEP 1: Create Connections

Begin by setting up the essential connections in Skyvia. You'll need a connection for Salesforce and another for your chosen ERP system. Ensure that accounts for both systems are active, providing the necessary credentials. Depending on your requirements, you may need more connections; simply follow the general steps below for each of them.

Creating a connection in Skyvia is easy. Check out the steps below:

1. Log in to Skyvia if you haven't done so.
2. Click NEW from the top panel of the page, and select Connection.
3. Type the name of the connector you need in the search box. For example, Salesforce.
4. Click your desired connector from the search results.
5. Give your new connection a descriptive name.
6. Configure the connection. You will need the credentials for your connector to work.
7. Click Test Connection to see if you can connect.
8. Finally, if you have a successful test connection, click Save connection or Create connection.

Follow the same steps when creating connectors for your ERP system. The following sections show the different Skyvia connectors we are covering.

Salesforce Connector

Below is a screenshot of a sample Salesforce connection. You only need to fill in a few boxes, and the most important is the Salesforce OAuth token. To get that token, sign in to Salesforce. Note that Skyvia will not store your Salesforce credentials.
And the best part of saving the connection is reuse: you can use this same connection in other Salesforce integration projects. For more information about connecting to Salesforce, see [the official documentation](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html).

NetSuite Connector

NetSuite is a comprehensive cloud business management solution from Oracle. It provides ERP, CRM, and other business solutions. [Skyvia can integrate with NetSuite](https://skyvia.com/data-integration/integrate-salesforce-netsuite) with the same general steps as earlier, but you need to use a connector for NetSuite. Below is the configuration for the NetSuite connector in Skyvia:

You can get the Account ID, Application ID, and the rest from your NetSuite administrator, or from someone in your company who's an expert in NetSuite. Input them in the form above, test the connection, and finally, click Create Connection. Once you have a successful connection, you can use it in any Skyvia integration package. If you need more help connecting to NetSuite, click the Connecting to NetSuite button. It will take you to the [documentation](https://docs.skyvia.com/connectors/cloud-sources/netsuite_connections.html).

Xero Connector

Xero is a finance app covering automatic bank feeds, accounts payable, invoicing, and more. [Skyvia can do a two-way integration](https://skyvia.com/data-integration/integrate-salesforce-xero) between Salesforce and Xero, so you can create invoices for your Salesforce customers and more. All you need is to create a Xero connection in Skyvia and follow the general steps earlier. Below is the screenshot for the Xero connector configuration:

Sign in to Xero to get an Access Token. Then, provide the Tenant, test the connection, and finally, click Create Connection. A successful Xero connection will let you use it in Skyvia integration packages.
If you need more help connecting to Xero, click the Connecting to Xero button. It will take you to the [documentation](https://docs.skyvia.com/connectors/cloud-sources/xero_connections.html).

In-House ERP Using SQL Server

Some companies develop their own ERP in-house and prefer storing its data in a relational database like Microsoft SQL Server. It doesn't matter whether the ERP is custom-made or not: Skyvia can connect to it. So, this section will focus on creating a connection to a relational database like [SQL Server](https://skyvia.com/data-integration/integrate-salesforce-sql-server). But first, let's see where your SQL Server is. Is it on-premises or in the cloud?

Connecting to On-Premises SQL Server

If your SQL Server is on-premises, you need to download and install the Skyvia Agent. With this, Skyvia will have an encrypted connection to your on-premises SQL Server. The following are the steps to install and configure the Skyvia Agent:

1. Log in to Skyvia if you haven't done so.
2. Click NEW from the top panel of the page, and select Agent.
3. Name your agent. You will use this name when you create an SQL Server connection.
4. Download the Agent to the Windows machine you will use to connect Skyvia to your SQL Server.
5. Double-click to run the Skyvia Agent installer and follow the prompts.
6. Download the Skyvia Agent Key File and place it in the folder where you installed the Agent.
7. Run the Skyvia Agent. You will then see a terminal app running; below is what it looks like, so you have an idea.

You will see when Skyvia connects to SQL Server from Skyvia itself, or the messages will show up in the Skyvia Agent app. Then, you can create a Skyvia connection for SQL Server. Here's another example: use the Agent connection mode and specify the name of your Agent. We used the Skyvia-MyPC Agent for this example. Then, specify your server's IP address.
If you need a step-by-step example with pictures, check out [this article](https://skyvia.com/blog/import-csv-file-to-sql-server/).

Connecting to Azure SQL

Your ERP's SQL Server can also be in the cloud. In the following example, we use Microsoft Azure as the cloud provider. Before you create an Azure SQL connection, you need to whitelist Skyvia's IP addresses in Azure; otherwise, it won't work. Below is the [Azure SQL connection in Skyvia](https://skyvia.com/data-integration/replicate-salesforce-to-sql-azure):

It's the same Skyvia connector for SQL Server, but instead of using the Skyvia Agent, it runs in Direct Connection Mode. Now that the connectors are set, you can use them for any [data pipeline](https://skyvia.com/blog/what-is-data-pipeline/) you create. The next step will discuss that.

STEP 2: Create Integration

Skyvia offers two integration methods: [Import](https://docs.skyvia.com/data-integration/import/) integration and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) with [Data Flow](https://docs.skyvia.com/data-integration/data-flow/). For simpler requirements, Import integration may suffice, but more complex needs may call for Control Flow logic and data transformations in Data Flow. This flexible approach ensures that your integration caters to your specific needs. And the best part? Skyvia can integrate both ways: make Salesforce the target and the ERP system the source, or flip their roles so Salesforce becomes the source. In this article, we will focus on using Skyvia Control Flow.

Skyvia Control Flow and Data Flow

If you need more control over how to integrate two sources of data, your best bet is the Skyvia Control Flow. It lets you run integrations in a specific order. For example, you may want to back up the target data first, or format the data before copying it from the source. The Control Flow has the necessary components for all your integration needs.
The Control Flow works with the Skyvia Data Flow. The Control Flow defines which of the Data Flows will run first and which will run next. And not just that: you can also apply many transformations to your data before you load it to the target.

Control Flow Example

Below is an example of a Control Flow that backs up Salesforce data first. If there's no error, the Data Flow that processes the data from NetSuite to Salesforce proceeds. The Try Catch component does the trick of catching any error in the two Data Flows. When an error occurs, it logs the problem in the log file, and execution stops.

Let's see the configuration of each component in the following screenshots. First, here's the setup for the Try-Catch component. You need to add a variable to store the error details; in the example below, we used the @error_message variable. Then, check out the Data Flow design for the backup. The sample backs up the Salesforce Account data to a MySQL database. The following is the Data Flow for integrating [Salesforce and NetSuite](https://skyvia.com/blog/netsuite-salesforce-integration/) ERP. Notice the Extend transformation component: it adds a new column before loading to the Salesforce Account data. Finally, check out the Action component that logs the error.

The following are the steps to do the above:

1. Click NEW. Under INTEGRATION, click Control Flow.
2. Drag a Try-Catch component to the canvas and configure it.
3. Add two Data Flows under the Try flow: one for the Salesforce backup and the other for the NetSuite-Salesforce integration.
4. Configure each Data Flow as seen above.
5. Finally, add one Action component to the Catch flow and configure it.

As you have seen above, with just a few clicks, you can do a Salesforce and ERP integration in Skyvia, all without coding. When dealing with other ERP systems, you only need to configure a different connector.
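The Try-Catch Control Flow described above maps onto a familiar programming construct. A minimal Python sketch of the same ordering, with hypothetical stand-in functions for the two Data Flows (this is an analogy for readers who think in code, not Skyvia's API):

```python
# Hedged sketch: the backup-then-sync Try-Catch pattern from the Control Flow
# example, in plain Python. backup_accounts and sync_netsuite_to_salesforce
# are hypothetical stand-ins for the two Data Flows.
import logging

log = logging.getLogger("integration")

def backup_accounts():
    """Stand-in for the first Data Flow (Salesforce Account backup)."""
    return "backup ok"

def sync_netsuite_to_salesforce():
    """Stand-in for the second Data Flow (NetSuite -> Salesforce load)."""
    return "sync ok"

def run_control_flow():
    """Run the flows in order; on any error, log it and stop (the Catch branch)."""
    error_message = None  # plays the role of the @error_message variable
    try:
        backup_accounts()
        sync_netsuite_to_salesforce()
    except Exception as exc:  # the Catch flow: record the problem, halt the run
        error_message = str(exc)
        log.error("Integration failed: %s", error_message)
    return error_message

print(run_control_flow())  # None means both flows completed without errors
```

The point of the ordering is the same as in the Skyvia canvas: the sync never runs against a target that has not been backed up first.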
For more information, visit the official documentation for the [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and [Data Flow](https://docs.skyvia.com/data-integration/data-flow/).

STEP 3: Run, Test, and Adjust

Save your integration package and run the integration process. After execution, check the Run History. There, you can verify whether the integration succeeded or encountered errors. If issues arise, troubleshoot by identifying the source of the error and make the necessary adjustments. If everything is good, you can create a recurring schedule so it will run on its own next time.

Best Practices in Salesforce and ERP Integration

As our integration journey nears its completion, we're about to explore a broader horizon: integrating Salesforce with ERP systems beyond what we have discussed so far. Whether it's NetSuite, SAP, or any other ERP software, the foundational steps are quite similar. In my experience, this holds true whether you use code or any other integration tool. But here's the catch: you need more technical skills, effort, and energy to accomplish the same. With Skyvia, it's a lot easier to do. Here are the general steps again:

1. Create Connections.
2. Create Integration.
3. Run, test, and adjust.

To make your Salesforce and ERP integration successful using Skyvia, here are some tips and best practices:

- Define Clear Objectives: Start by clearly defining your integration goals. What specific data needs to be synchronized, and for what purpose? Having a well-defined objective ensures that your integration efforts remain focused and effective.
- Plan Ahead: Create a detailed integration plan that outlines the steps, timelines, and responsibilities. This plan serves as your roadmap, guiding you through the integration process.
- Data Mapping: Pay close attention to data mapping. Ensure that data fields in Salesforce align correctly with those in your ERP system.
Mapping errors can lead to data inconsistencies.

- Error Handling: Develop a robust error-handling mechanism. Not every integration will go flawlessly, so it's essential to have a plan in place to address errors promptly and efficiently.
- Security Measures: Prioritize data security. Ensure that data transferred between systems is encrypted and protected, and implement access controls to restrict unauthorized access.
- Regular Monitoring: After integration, establish a routine for monitoring data flow and system performance. Regular checks help identify issues before they become significant problems.

Ending Takeaways

It's been a thrilling journey through the labyrinth of Salesforce and ERP integration. We've explored the intricacies of ERP systems, dissected the roles of Salesforce and ERP, and unveiled the strategic significance of integration. We've dived deep into the benefits of harmonizing these powerful platforms and learned the essential steps for successful integration using Skyvia.

Unlock Integration Success with Skyvia

Ready to embark on your integration journey? [Try Skyvia for free](https://app.skyvia.com/register) and discover why reviews are raving about this user-friendly cloud integration platform. With Skyvia, you can turn chaos into seamless data harmony and empower your business to thrive in the modern landscape. Don't miss the chance to experience integration success like never before!
[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

Salesforce Backup Solutions And Tools

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), March 28, 2024
It's a common belief that SaaS application vendors should maintain and manage everything, from infrastructure to the data stored within a business account. However, this is not the case with Salesforce: the platform ensures service availability but delegates responsibility for data integrity to its users. Recall the recent [hacker attack on Kyivstar](https://www.bbc.com/news/world-europe-67691222), Ukraine's main mobile network provider. The network was down for days, and much of the company's data stored on cloud servers was destroyed. Luckily, Kyivstar managed to get back on track thanks to a deliberate recovery plan. From the Kyivstar case and other cyberattacks, we can see the value and importance of backing up cloud application data. As Salesforce is a cloud application, and companies are responsible for the data stored there, reliable Salesforce backup solutions are a must for any organization. This article includes a list of the best Salesforce backup tools, along with their principal characteristics and benefits.

Table of Contents
- Importance of a Reliable Salesforce Backup Solution
- Advantages of Reliable Salesforce Backup Solutions
- What to Look for in Salesforce Backup Tools
- Top 7 Salesforce Backup Tools and Solutions
- Exploring Skyvia's Salesforce Backup Solution
- Salesforce Data Backup Best Practices
- Contrasts between Restoring and Recovery of Salesforce Data
- Summary

Importance of a Reliable Salesforce Backup Solution

As Salesforce is hosted on [AWS](https://aws.amazon.com/), it relies on the 'distributed' or 'shared' responsibility model: Salesforce is responsible for availability and uptime, while clients must manage and protect their data independently.
In brief, companies using Salesforce need to ensure that the data kept within their account is secure, compliant with regulatory requirements, and recoverable. All this is possible with manual or automated Salesforce backup solutions. Here are some core reasons for implementing Salesforce data backup tools:

- Human error. If an operator or sales agent enters erroneous data, Salesforce backup services are necessary to restore mistakenly edited or deleted records.
- [Mass updates](https://skyvia.com/blog/mass-update-salesforce-records/). System updates aren't always successful, so there might be a need to roll back to the previous version with Salesforce backup and restore solutions.
- Hacker attacks. Since online data is often vulnerable to malware and hacker attacks, regular Salesforce backups are a sound remedy.

In brief, implementing Salesforce data backup solutions is essential if you want to keep your customer data safe and sound.

Advantages of Reliable Salesforce Backup Solutions

Most Salesforce backup tools provide myriad capabilities for businesses: data backup and restore, archiving, and management. They also bring notable benefits for organizations:

- Simplicity. Third-party Salesforce backup solutions provide an intuitive interface for administrators to automate regular data exports and restores.
- Protection. With Salesforce backup tools, it's possible to preserve not only customer data but also configurations, custom fields, etc.
- Security. Salesforce backup solutions grant high-level data safety through embedded security protocols, advanced data encryption, role-based access control, and more.
- Productivity. Regular Salesforce backups give users and administrators confidence in data integrity and accessibility.

What to Look for in Salesforce Backup Tools

Solid technological solutions are essential for performing Salesforce backups seamlessly.
Those should be third-party tools that store the backed-up Salesforce data outside the CRM itself, which is a fundamental data integrity criterion. Here are some characteristics that define a decent Salesforce backup solution:

- Backup frequency. Ensure the service allows you to perform backups at the frequency you need.
- Storage volume. Pay attention to the amount of storage provided for backups. It must grant enough space for new data copies each session.
- Price. Check the pricing plans and associated features for performing Salesforce backups. Make sure the service cost fits your annual budget.
- Ease of use. A tool must have a user-friendly, intuitive interface, preventing extra cost and time spent on employee training.
- Metadata copy. The ability to keep a copy of configuration settings and page layouts is a great addition for metadata backup.

Top 7 Salesforce Backup Tools and Solutions

Now, we'd like to present the best Salesforce backup solutions that meet the highest standards and offer the features mentioned in the previous section.

Skyvia

[Skyvia](https://skyvia.com/) is a multifaceted cloud platform designed for a myriad of data-related operations. In particular, Skyvia offers three reliable ways to back up Salesforce data:

- [Backup](https://skyvia.com/backup/): This Skyvia product allows users to create backups manually or set up regular backups on a schedule. Each backup is saved as a so-called snapshot, and snapshots can be compared with each other. When data needs to be restored to the application, you can select which exact data to restore instead of loading the entire snapshot.
- [Salesforce data loader](https://skyvia.com/data-integration/salesforce-data-loader): You can easily export Salesforce data to CSV files on a schedule and save them on cloud storage platforms such as Dropbox, Box, Google Drive, Azure File Storage, and Amazon S3.
- [Replication](https://skyvia.com/data-integration/replication): Another way to create a backup is to replicate the cloud Salesforce data to a database or data warehouse of your choice. You may apply incremental updates to keep the Salesforce data always up to date in the target database or data warehouse.

The second and third methods are realized with the tools in Skyvia's Data Integration product. Meanwhile, Backup is a separate Skyvia product dedicated to creating and managing backups. It provides many useful data backup and recovery features, allowing users to quickly and seamlessly implement their backup strategy.

OwnBackup

[OwnBackup](https://www.owndata.com/) is a solution for companies using Salesforce to protect and recover their data effectively when needed. It allows users to configure automated backups and use convenient recovery tools. OwnBackup also notifies about any data loss and provides an opportunity to archive obsolete data.

Spanning Backup

[Spanning Backup](https://spanning.com/) is a dedicated solution for protecting data from Salesforce as well as Microsoft and Google products. It can execute daily automated backups and manual sessions on demand. Thanks to its monitoring and control functions, admins can check the health of Salesforce backups and manage encryption keys when needed.

Backupify

[Backupify](https://www.backupify.com/) is a solution for cloud-to-cloud backups with recovery options. It saves backups on a SOC 2-compliant private cloud, which ensures high availability and security. Backupify also offers unlimited storage, so users don't need to worry that some data won't be saved or that they will run out of free space.
CloudAlly

[CloudAlly](https://www.cloudally.com/) is a backup and restore solution designed to protect Salesforce data. It works with organizational data as well as metadata and Chatter feeds. CloudAlly is an enterprise-level tool offering big companies a range of advanced features, including intelligent workforce management and flexible recovery options.

Druva

[Druva](https://www.druva.com/) is another tool built with Salesforce enterprises in mind. It protects both on-premises and cloud resources by copying the selected objects into a centralized data lake. Druva uses its patented technology to save unique data blocks, optimizing network and storage costs.

Veeam

[Veeam Backup](https://www.veeam.com/) for Salesforce saves customer data and metadata. It allows users to choose which exact objects to back up from on-premises and cloud tools. Veeam also offers flexible recovery options for either partial or complete data recovery.

Exploring Skyvia's Salesforce Backup Solution

Previously, we mentioned that Skyvia provides three options for backing up Salesforce data. Here, we'll examine the Backup tool, which supports major cloud applications, including Salesforce. [Skyvia Backup](https://skyvia.com/backup/) allows you to automatically back up data from Salesforce on a schedule, or manually at any time, and restore it in several clicks when needed. The data is saved as a snapshot corresponding to the specified date and time. Snapshots can be viewed, compared, exported, or restored via the Skyvia interface.

Now, let's look at how to back up Salesforce data with Skyvia:

1. [Log into your Skyvia account](https://app.skyvia.com/).
2. Click +NEW in the top menu and select Backup.
3. Select a Salesforce connection or create a new one by clicking +Add new in the Backup Wizard.
4. Select checkboxes next to the objects you want to back up, and click Next step to proceed.
5. Run a one-time backup or select a recurring backup on a schedule. NOTE: Scheduled backup is available for paid subscriptions only; users on free subscriptions can run backups manually.
6. Give your backup a name and save it.
7. Click Run to initiate the backup and monitor its status in the Overview tab.
8. When data needs to be restored, go to the Snapshots tab, select the objects and the DML operation, and click Restore.

In addition to being simple to use, Skyvia Backup is highly affordable for any company. It's free for up to 1 GB of storage. The pricing depends only on the total amount of data stored in backups. All paid subscriptions offer unlimited snapshot storage time, data search, backup comparison, automatic cleaning, and scheduled backups. See Skyvia's Backup pricing details [here](https://skyvia.com/pricing/).

Salesforce Data Backup Best Practices

Every organization should have a clear backup strategy with concrete implementation steps. Here are the best practices and recommendations you may rely on when crafting your own Salesforce backup strategy.

Data Prioritization

Decide which Salesforce data fields to protect with the highest priority. Modern Salesforce backup tools allow for selecting specific objects.

Consider Data and Metadata

Not only is customer information important, but internal Salesforce configurations also matter. The latter is known as metadata, and it usually contains settings for custom fields, page layouts, reporting, and custom code.

Regular Backups

The key to success is to perform regular Salesforce data backups. Skyvia offers an automated option allowing businesses to perform backups on a schedule, as well as a manual option for occasional backups. This flexibility is accompanied by flexible pricing and scalable storage, so businesses are free to choose the backup plan that best suits their goals and switch to another plan whenever needed.
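The backup comparison mentioned above boils down to a per-record diff between two snapshots: which records appeared, disappeared, or changed between backup runs. Here is a minimal conceptual sketch in Python with toy record IDs; it illustrates the idea only, not Skyvia's actual snapshot format:

```python
def diff_snapshots(old, new):
    """Compare two backup snapshots keyed by record ID.

    Returns the record IDs that were added, deleted, or modified
    between the snapshots, so only the affected records need restoring.
    """
    added = [rid for rid in new if rid not in old]
    deleted = [rid for rid in old if rid not in new]
    modified = [rid for rid in new if rid in old and new[rid] != old[rid]]
    return {"added": added, "deleted": deleted, "modified": modified}

# Two toy snapshots of a Contact object, keyed by hypothetical record IDs.
march = {"003A1": {"Email": "ann@example.com"}, "003A2": {"Email": "bob@example.com"}}
april = {"003A1": {"Email": "ann@new.example.com"}, "003A3": {"Email": "cat@example.com"}}

changes = diff_snapshots(march, april)
```

Restoring only `changes["modified"]` and `changes["deleted"]` is what makes selective restore cheaper than reloading an entire snapshot.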
Encryption and Data Security

Copies of Salesforce data kept as backups must be safe and secure. For instance, [Skyvia stores backed-up data in secure Azure GRS storage](https://docs.skyvia.com/backup/backup-security.html). All data is kept in encrypted form using AES 256-bit encryption, one of the strongest ciphers available. Skyvia uses unique encryption keys for each user, and no one else has access to them.

Version Control and Data Integrity

It's also a good idea to track changes from one backup to another. Skyvia has such an option: it allows users to compare snapshots and see which exact objects have undergone alteration.

Test Restore

Once the Salesforce data gets restored from a backup, take a minute to review it. This helps you evaluate your backup strategy and understand whether it works as expected.

Contrasts between Restoring and Recovery of Salesforce Data

There are many discussions about the difference between data recovery and restoration. Both terms are frequently used interchangeably, but a significant difference lies behind them. Let's refer to the term definitions and relate them to Salesforce backups:

- Recover: to get back something that was accidentally lost.
- Restore: to return something to its original state.

In Salesforce terms, recovery might apply after a hacker attack affects the entire database. Restoration occurs after some data fields were deleted or wrong information was inserted, and the data needs to return to its previous state.

Summary

A solid backup and recovery strategy is necessary to effectively respond to data entry mistakes, malware, and mass update errors in Salesforce. Dedicated Salesforce backup solutions can help you with that. To select the best backup tool, pay attention to backup frequency, pricing, storage volume, and ease of use.
As Skyvia is a user-friendly tool offering flexible pricing and storage, it could be a great option for any business.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration.

9 Best Free and Paid Salesforce Data Loaders

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) | February 28, 2025
Salesforce (also SF or SFDC) is a gold mine when it comes to data. The records that sit in its storage are invaluable for businesses that strive for financial prosperity and market leadership. Most companies have more information than they realize within Salesforce, and often even more outside it that could be put to use on the platform. To unlock this mighty potential, businesses must perform data loads to easily move records into and out of Salesforce. There are a number of tools that can help you with that, each designed for different needs, skill levels, and use cases. But make your choice wisely, as it directly impacts your data accuracy, security, and scalability, all of which are essential components of business success. In this article, we'll explore the most commonly used Salesforce data loader tools, comparing their pros and cons to help you make an informed decision.

TABLE OF CONTENTS
- Considerations for Performing a Data Load
- How Modern Data Workflows Shape Data Loader Requirements
- Key Factors to Consider When Choosing a Data Loader
- Top 9 Data Loader Tools for Salesforce (Skyvia, Salesforce Data Loader, Dataloader.io, Jitterbit's Salesforce Data Loader, Google Salesforce Connector, CloudExtend's Excel for Salesforce, G-Connector, XL-Connector, Dataloader.ai)
- Best Salesforce Data Loaders Compared
- Why Skyvia Excels as a Salesforce Data Loader
- Conclusion

Considerations for Performing a Data Load

To start, what exactly is a data loader? Simply put, it's a client application designed for bulk data operations.
Although commonly used with Salesforce, it is compatible with a range of platforms like HubSpot, NetSuite, and Zendesk, and databases such as MySQL, PostgreSQL, and Amazon Redshift for large-scale [data management](https://skyvia.com/blog/connect-salesforce-to-sql-server/). Typically, all data loaders share basic functionality:

- The ability to define objects and fields for upcoming operations.
- A mapping utility to align fields.
- A status feedback system to inform users throughout the process.

However, advanced features vary across tools, particularly in terms of load capacity, validation mechanisms, and automation options. Data loaders come in handy when [Salesforce's standard import tools](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/) aren't enough for large-scale operations or when there is a need to automate integration tasks using scripts and scheduled jobs. Ultimately, there are three key areas to consider and plan for when using an SFDC data loader. Based on years of experience working with Salesforce customers and clients, these are also the three most common reasons why businesses need to perform a Salesforce data loader job in the first place.

1. Data Enrichment

Salesforce is probably not the only system storing valuable business information. Many platforms support day-to-day operations, some general, others tailored to your industry and business needs. For example:

- In e-commerce, tools like [Shopify](https://skyvia.com/blog/shopify-salesforce-integration/), [Magento](https://skyvia.com/blog/magento-and-salesforce-integration-guide-to-easy-connection/), and BigCommerce help manage online sales, customer orders, and inventory.
- In marketing, platforms such as [HubSpot](https://skyvia.com/blog/salesforce-hubspot-integration/), Marketo, and [Mailchimp](https://skyvia.com/data-integration/integrate-mailchimp-salesforce) support campaign management, lead nurturing, and customer engagement.
- In logistics, solutions like SAP Logistics, Oracle Transportation Management, and ShipStation power supply chain management, shipping, and order fulfillment.

There are hundreds of examples like these. Regardless of the system, it likely holds something that, when combined with information in Salesforce, can unlock deeper insights and lead to better decision-making. Enriching Salesforce data with external details is one of the top reasons businesses use data loaders. Typically, information from external systems is exported [as a CSV file and then imported into Salesforce](https://www.youtube.com/watch?v=ld6JdIk8HA4&ab_channel=Skyvia), either as new records or as updates to existing ones. If your business relies on multiple platforms beyond SF for daily operations, data enrichment isn't optional, it's essential. Creating a [single source of truth](https://skyvia.com/learn/single-source-of-true) ensures that your decisions are based on complete, accurate facts, not just fragments of them.

2. Data Migration

Think of the following analogy: you've just bought a brand-new supercar with all the bells and whistles: a powerful engine, plush seats, and a sleek design. But when you turn the key, nothing happens. Why? Because you haven't fueled it yet! The same applies to Salesforce. Whether you're implementing a new system or enhancing an existing one, you need to fuel it with data before it can run smoothly. Unlike routine import tasks, [migration](https://skyvia.com/learn/what-is-data-migration) is typically a one-time operation performed when transitioning from an existing system to Salesforce. Migration is often necessary when:

- A new SF environment replaces an old CRM system.
- A new feature is introduced within an existing Salesforce setup, requiring data transfer from a legacy system.

3. Data Backup

Backing up your Salesforce records is a habit every business should adopt.
Think about the value and scale of the information stored in your Salesforce org: it's the foundation for staying competitive and delivering high-quality services to your customers. Now, imagine if that data vanished without a trace. It's unlikely, but it's a risk no business should take. [Regular backups](https://skyvia.com/backup) act as a safety net, ensuring your business stays afloat even in the fiercest storms.

How Modern Data Workflows Shape Data Loader Requirements

Many tools claim to handle data loading efficiently, but are you satisfied with their performance? If you're stuck with manual tasks, overburdened with troubleshooting errors, or frustrated by record limits, it might be time to rethink your admin tools. In this section, we'll explore the key requirements that modern data workflows demand from loaders and why it is important to meet them.

| Feature | Why is it important? | Description |
|---|---|---|
| Broad integration capabilities | The number of sources businesses receive data from is constantly growing. | Organizations work with cloud platforms, databases, CRMs, and third-party applications. A robust tool should support API integrations for flawless data exchange across these systems. |
| Large-scale operations support | Businesses receive more data from more sources. | With volumes of information growing exponentially, businesses need a high-performance tool that can handle large-scale processing without performance issues. |
| Automation | Large volumes of records require automation for efficient handling. | Automated solutions help businesses reduce manual workload and schedule routine tasks. For example, a data loader scheduler can automate repeating Salesforce tasks, such as a nightly sync of all sales transactions on the platform. |
| Data transformation | Consistency of data formats is critical for the mapping process. | A reliable tool should provide advanced field mapping, validation, and transformation features, ensuring data integrity and accuracy. |
| Security | Data loaders process sensitive business information. | A secure tool should offer encryption, access control, and logging to protect data and confirm compliance with industry regulations. |
| Real-time processing | Some businesses require immediate data updates. | Real-time processing guarantees instant synchronization across systems. This is crucial for scenarios like customer interactions, inventory updates, fraud detection, and real-time reporting. |
| Backup | It safeguards businesses against data loss and keeps them operational even in the case of a system failure. | A loader tool should support secure backups by exporting data to an external DWH, separate from the main Salesforce repository. |

Key Factors to Consider When Choosing a Data Loader

Understanding the requirements above provides a solid foundation for navigating the broader landscape, but what about your specific project? Let's focus on the key criteria that will help you choose the right tool:

- Performance and speed. Before selecting a tool, assess the number of records you'll process monthly. Is the tool's capacity enough to meet your needs? The right solution should enable fast, bulk processing without performance bottlenecks.
- Pre-load validation and error detection. Interruptions due to data inconsistencies can slow down your workflow. Choose a tool with built-in error-checking mechanisms to detect issues before data is loaded.
- Deep Salesforce integration and query precision. Quite naturally, the best data loader for Salesforce natively understands its data model, recognizing object relationships, field [data structures](https://skyvia.com/blog/salesforce-quickbooks-integration/), and dependencies. It also supports Salesforce Object Query Language (SOQL) for efficient filtering, sorting, and retrieval of specific records.
- Extensive functionality. Beyond simple import/export, your tool should support UPSERT, mass DELETE, UNDELETE, and duplicate detection to provide full control over your records.
- Usability. Ideally, your tool should be accessible to both technical and non-technical users.
- Cost efficiency. A great data loader balances price and functionality, offering valuable features without excessive costs.
- Customer support. Having access to knowledgeable support from professionals seasoned in Salesforce best practices adds significant value.

Top 9 Data Loader Tools for Salesforce

In the section below, we'll explore both native and third-party tools to help with your data loading tasks.

Skyvia

[Skyvia Cloud Data Loader](https://skyvia.com/data-integration/salesforce-data-loader) stands out as a premier tool on the market. To start with, it allows you to connect to your Salesforce org and perform the full suite of INSERT, UPDATE, UPSERT, and even DELETE operations. Beyond this, as part of the broader Skyvia platform, it can assume other functions that make it more than just a data loader: it can perform integrations, work as a replication tool, and maintain complete backups of your Salesforce orgs. [Skyvia offers a free tier](https://app.skyvia.com/), with enhanced functionality available through [paid plans](https://skyvia.com/pricing) starting at $79 per month. Unlike many alternatives, Skyvia's pricing is based on a per-business-per-month model rather than a per-user basis, which makes it extremely cost-effective.

Pros:
- Cloud-based: No installation required; accessible from any browser.
- Versatile integration: Supports [connection to over 200 data sources and destinations](https://skyvia.com/connectors).
- Scheduling: Offers automated scheduling of data loading tasks.
- Ease of use: Skyvia is ranked among the top user-friendly [ETL tools](https://skyvia.com/blog/etl-tools/) on the market [by G2 Crowd](https://www.g2.com/products/skyvia/reviews).
Cons:
- Subscription-based: Advanced features may require a paid subscription.
- Lack of hands-on guides: The platform's extensive functionality is not fully covered by video tutorials, which some users may find complex.

Best for:
- Businesses that require frequent data updates.
- Scenarios requiring DML operations with UPSERT functionality.
- Complex integration tasks that involve data relations and splitting.
- Regular updates and synchronization of Salesforce with other systems.

Salesforce Data Loader

This is a [Salesforce-native desktop tool](https://help.salesforce.com/s/articleView?id=xcloud.data_loader_about.htm&type=5) designed for handling bulk data tasks, such as importing, exporting, and deleting records on the SF platform. Its capability to efficiently manage large volumes of information makes it a go-to solution for businesses working with extensive datasets. The tool runs natively on macOS and Windows, with full GUI support on both platforms, but command-line scheduling is available only on Windows. Available as a free download from Salesforce, it's included at no additional cost with base licensing and is automatically updated with every Salesforce release cycle.

Pros:
- High volume handling: Capable of importing, exporting, updating, and deleting up to 5 million records.
- Broad object support: Supports a wide range of standard and custom objects.
- Advanced features: Includes functionalities like field mapping and [batch processing](https://skyvia.com/blog/batch-etl-processing/).

Cons:
- Technical complexity: May present a steeper learning curve for non-technical users.
- Installation required: Must be installed locally on your machine.
- Scheduling restrictions: No native scheduling in the GUI; CLI-based scheduling is required.
- macOS CLI limitation: Mac users lack native scheduling support.

Best for:
- Cases where managing large amounts of customer data is critical in areas like marketing and sales.
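Under the hood, bulk loaders like those above split large loads into fixed-size batches before sending them to the API. A minimal sketch of that batching step in Python; the batch size of 200 matches the per-request limit of Salesforce's sObject Collections REST endpoints (the Bulk API accepts far larger batches), and the record data here is made up for illustration:

```python
def chunked(records, size):
    """Yield successive fixed-size batches from a list of records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# 1,000 toy records split into batches of 200 each.
records = [{"Id": i} for i in range(1000)]
batches = list(chunked(records, 200))
```

A real loader would submit each batch as one API request, keeping per-batch results so a single failed batch can be retried without re-sending the whole load.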
Dataloader.io

[Dataloader.io](http://dataloader.io) is a web-based application developed by MuleSoft, which is now part of Salesforce. Being cloud-based, it requires no installation: with Dataloader.io, users can perform INSERT, UPDATE, UPSERT, and export operations directly from their browser. Additionally, it supports the creation of scheduled tasks that can automatically import or export information from connected sources such as Box, Dropbox, FTP, and SFTP servers. Unlike the Salesforce Data Loader, which is a free tool included with SF subscriptions, Dataloader.io operates on a freemium model. The Free tier allows up to 10,000 records per month and includes basic features. For enhanced capabilities, there are two paid options:

- Professional: $99 per user per month; up to 100,000 records monthly; includes features such as task scheduling and integration with cloud storage services.
- Enterprise: $299 per user per month; unlimited records; includes advanced functionality like SFTP support and management of up to 10 Salesforce accounts.

Pros:
- Web-based: No installation required; operates entirely in the cloud.
- Ease of use: Simple interface with drag-and-drop functionality.
- Scheduling: Supports scheduling of data loads.

Cons:
- Record limitations: The free version limits the number of records per import.
- Feature restrictions: Advanced features are available only in paid versions.

Best for:
- Non-technical users who want a simple, cloud-based tool for scheduling automated imports/exports.

Jitterbit's Salesforce Data Loader

This cloud data loader is a free tool offered by [Jitterbit](https://www.jitterbit.com/application/salesforce-data-loader/). To access it, you need to provide your contact information and sign up for a Jitterbit Harmony account. Once registered, you can download the tool for Windows.
It\u2019s important to note that as of May 2024, [Jitterbit has discontinued the macOS version of Data Loader](https://docs.jitterbit.com/data-loader/known-issues/) . The last macOS version (10.72) was a 32-bit application, which is incompatible with macOS 10.15 Catalina and later versions. Therefore, macOS users would need to use an older version of macOS or run Windows on a virtual machine to utilize the tool. Installing data loader on a compatible system allows you to perform INSERT, UPDATE, UPSERT, DELETE, and export (query) operations with Salesforce. You can execute these tasks multiple times, schedule them, and connect to sources such as local files, FTP servers, and file-sharing services. Pros Open-source : Free to use with a supportive community. Automation: Allows scheduling of data migration tasks. Easy-to-use interface: Intuitive design simplifies the migration process. Cons Limited support: Being open-source, official support may be scarce. Resource intensive: Requires significant system resources during operation. Limited accessibility on macOS : Because of additional conditions, it can be challenging to utilize on Mac. Best for Medium-to-large migrations or frequent integrations that require minimal technical expertise. Bulk operations across Salesforce and databases. Google Salesforce Connector If your business operates within Google Workspace (formerly G Suite), you might consider using the [Google Salesforce Connector](https://workspace.google.com/marketplace/app/salesforce_connector/857627895310) to perform data loading tasks. This add-on is available through the Google Workspace Marketplace and can be easily installed into Google Sheets. Once installed, you can enable the add-on within a Google Sheet to begin using its features. The Google Salesforce Connector is a wonderful tool if you like simplicity. 
It performs simple tasks: INSERT, UPDATE, and UPSERT records to Salesforce, and query records into a Google Sheet (which can then be exported). While the query/export functionality can be scheduled, the other operations must be run manually.

Pros

- Seamless Google Workspace integration: works natively within Google Sheets.
- Simple and user-friendly: easy installation from the Google Workspace Marketplace; requires no technical expertise.
- Query: allows querying Salesforce data directly into Google Sheets.
- Export capabilities: supports scheduled data exports.

Cons

- Manual data loading: no built-in automation for INSERT, UPDATE, and UPSERT tasks.
- Limited functionality: no mass DELETE or UNDELETE functions; lacks advanced error handling, transformation, and deduplication features.
- API call limits: relies on the Salesforce API, so frequent or large-scale operations can consume API calls quickly.

Best for

- Operations on smaller data sets.
- Google Workspace users who want simple Salesforce data interaction inside Google Sheets.
- Business analysts and marketers who need to pull Salesforce figures into spreadsheets for analysis and reporting.

CloudExtend's Excel for Salesforce

Like Google's connector option above, [CloudExtend's Excel for Salesforce](https://www.cloudextend.io/) is a point solution designed for businesses using Microsoft 365. Installing this add-in lets users manage Salesforce records directly within Excel. Unfortunately, the tool does not support any form of scheduling or automated processing: each INSERT, UPDATE, UPSERT, and query must be run manually. These processes can, however, be saved as templates in just a couple of clicks. Although the free version allows exporting Salesforce data into Microsoft Excel, any INSERT, UPDATE, or UPSERT function requires a paid subscription, starting at $149 per user per year.
Pros

- Seamless Microsoft 365 integration: designed for Excel users; works as an Excel add-in.
- Easy installation and user-friendly UI: quick setup via Excel's 'Get Add-ins' feature; intuitive, non-technical interface.
- Export and query directly in Excel: enables live querying of Salesforce data into Excel.
- Template-based processing: query and update templates can be saved for repetitive tasks.

Cons

- No automation: all data operations must be performed manually.
- Limited functionality: no mass DELETE or UNDELETE functions; lacks advanced transformation and deduplication features.
- API call limits: relies on Salesforce API calls, so heavy usage may consume API quotas quickly.

Best for

- Small to mid-sized data loads.
- Microsoft 365 / Excel users who frequently work with Salesforce records inside spreadsheets.
- Companies looking for a budget-friendly tool for basic Salesforce operations.

G-Connector

An alternative to Google's own tool, [G-Connector by Xappex](https://workspace.google.com/marketplace/app/gconnector_for_salesforce/971770431958), is also available as an add-on in the Google Workspace Marketplace. Unlike Google's option, G-Connector is not entirely free: anything beyond querying and [exporting data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) into a Google Sheet requires a paid plan. The Premium Lite plan, priced at $99 per user per year, supports uploading records from Google Sheets to Salesforce, enables one-click refreshing of entries across all sheets in a workbook, and provides administrative feature control for users. With a Premium Full subscription ($299 per user per year), you can INSERT, UPDATE, UPSERT, and export data between your Salesforce org and a Google Sheet, on an automated schedule if required.
Pros

- Seamless Google Workspace integration: an easy-to-use solution for businesses on Google Workspace.
- Flexible pricing model: basic querying and report export are free; additional features require a paid subscription.
- Comprehensive operations (paid plans): supports INSERT, UPDATE, UPSERT, DELETE, and export (query) operations; offers mass data processing.

Cons

- Notable free-plan limitations: only two operations are supported – querying and exporting data from Salesforce.
- Not a full-scale data loader: lacks the advanced data transformation, field mapping, and deduplication features found in more comprehensive tools.
- Pricing considerations: full automation requires the Premium Full subscription ($299/year), which may be costly for small businesses.

Best for

- Google Workspace users who manage Salesforce records inside Google Sheets.
- Business analysts and sales teams needing scheduled data pulls and automated reports.

XL-Connector

In response to market demand, Xappex also offers a solution for Excel enthusiasts – [XL-Connector](https://www.xappex.com/xl-connector-salesforce-excel-connector/), a robust Excel add-in that enables seamless integration between Salesforce and Microsoft Excel. Designed to enhance productivity, it allows users to manipulate, analyze, and manage data without switching platforms. With XL-Connector, you can securely export, import, and refresh Salesforce records directly within the familiar Excel environment. As with G-Connector, Xappex offers three pricing plans for XL-Connector, with a similar breakdown of features. The Free plan lets you pull unlimited reports and query any data from Salesforce. Subscribing to the Team Player plan for $99 per user per year enables you to create, update, and delete records in Salesforce using templates (these, however, must be created on the Enterprise Admin plan).
You will also be able to refresh information across multiple worksheets in one click and control feature visibility for your users. Finally, an Enterprise Admin subscription ($299 per user per year) lets you perform mass operations on Salesforce records (INSERT, UPDATE, UPSERT, DELETE, and export), schedule automatic data refreshes and uploads, access Salesforce metadata (fields, picklists, and validation rules), and create task templates for non-technical users.

Pros

- Seamless integration: directly connects Salesforce with Excel.
- Automation capabilities: supports scheduling of data operations.
- Metadata tools: provides tools for managing Salesforce metadata.
- Cross-platform availability: available as XL-Connector 365 for Mac users and Excel Online.

Cons

- Learning curve: may be challenging for users unfamiliar with SOQL or Excel's advanced features.
- Pricing considerations: access to advanced features requires a paid subscription.
- Dependency on Excel: requires a licensed version of Microsoft Excel.

Best for

- Mass-operation scenarios that can be automated and run on a schedule.
- Businesses heavily reliant on Excel for reporting and data manipulation.
- Salesforce admins seeking efficient tools for metadata management within Excel.

Dataloader.ai

[Dataloader.ai](http://dataloader.ai) from [Clientell](https://www.getclientell.com/) is an innovative tool that enhances the Salesforce user experience through advanced technologies. This cloud-based tool offers GPT-powered natural language processing, allowing data operations to be executed with simple, conversational prompts. Other prominent features include intelligent data mapping that reduces manual configuration and robust security measures ensuring information privacy and protection.

Pros

- Cloud-based: requires no local installation.
- Intuitive user experience: simple data management with a natural language interface.
- Secure infrastructure: compliance with SOC 2 Type II, GDPR, and ISO standards.

Cons

- Emerging tool: may lack some advanced features present in established data loaders.
- Pricing information: not readily available; requires direct contact for quotes.

Best for

- Businesses prioritizing information security.
- Teams looking for an intuitive, user-friendly tool to reduce the complexity of Salesforce data management.

Best Salesforce Data Loaders Compared

Now that we've explored the key aspects of each tool, let's compare them in a summary table to help you choose the most suitable option for your needs:

| Tool | Key Features | Integration with 3rd-Party Apps | Ease of Use / Type | Cost |
| --- | --- | --- | --- | --- |
| Skyvia Cloud Data Loader | INSERT, UPDATE, UPSERT, DELETE, export; scheduled tasks; no record limit; advanced data transformation and field mapping; backup option | Cloud storage (Dropbox, Google Drive, Box), FTP/SFTP, databases | User-friendly; cloud-based | Free tier available; paid from $79/month |
| Salesforce Data Loader | INSERT, UPDATE, UPSERT, DELETE, export; CLI-based scheduling (Windows only); up to 150 million records; basic field mapping | No third-party integrations | Technical (CLI for scheduling); desktop | Free with Salesforce subscription |
| Dataloader.io | INSERT, UPDATE, UPSERT, export; scheduled tasks; Free: 10K records/month, Pro: 100K/month, Enterprise: unlimited; limited data transformation | Box, Dropbox, FTP/SFTP | Easy to use; web-based | Free tier available; paid from $99/user/month |
| Jitterbit's Salesforce Data Loader | INSERT, UPDATE, UPSERT, DELETE, export; scheduled tasks; no record limit; some transformation options | FTP/SFTP, databases, on-prem storage | User-friendly; desktop-based | Free |
| Google Salesforce Connector | INSERT, UPDATE, UPSERT, export; scheduled query/export only; no official record limit; basic data transformation | Google Sheets only | Google Sheets add-in; simple for Sheets users | Free |
| CloudExtend's Excel for Salesforce | Export (free); INSERT, UPDATE, UPSERT (paid); no scheduling; no official record limit; no transformation features | Excel only | Excel add-in; simple for Excel users | Free for export; paid from $149/user/year |
| G-Connector | Query/export (free); INSERT, UPDATE, UPSERT, DELETE (paid); scheduled tasks (paid); basic data transformation | Google Sheets only | Google Sheets add-in; user-friendly; requires setup for scheduling | Free for queries; paid from $99/user/year |
| XL-Connector | INSERT, UPDATE, UPSERT, DELETE, export; scheduled tasks; no record limit; advanced data transformation and metadata management | Excel only, but supports metadata management | Excel-based; requires familiarity with advanced Excel functions | Free for queries; paid from $99/user/year |
| Dataloader.ai | INSERT, UPDATE, UPSERT, DELETE, export; AI-based automation; no official record limit; AI-powered data transformation and auto-mapping | Integration options are unclear | AI-powered, simplified UI; cloud-based | Pricing not publicly available |

Why Skyvia Excels as a Salesforce Data Loader

Skyvia Cloud Data Loader is a versatile, all-in-one solution that stands out among Salesforce import tools. Let's take a closer look at how real businesses have benefited from Skyvia's capabilities.

Broad integration options, including in-house and legacy systems

With Skyvia Import, businesses can connect Salesforce with cloud apps and databases for seamless data transfers.
A [real estate company leveraged this feature](https://skyvia.com/case-studies/real-estate-company) to enable two-way synchronization between their legacy ERP system, Salesforce, and Mailchimp, enhancing operational efficiency.

Near real-time synchronization

[A4 International, Inc.](https://skyvia.com/case-studies/a4international), a Salesforce consulting partner, needed to migrate data from external systems to Salesforce and establish near real-time integration. Using Skyvia, they set up hourly synchronization, ensuring customers always had up-to-date information.

Automation & scheduled data transfers

For companies needing regular Salesforce updates, Skyvia's automation features are a game-changer. [Cirrus Insight, a sales enablement platform, used Skyvia to automate daily tasks](https://skyvia.com/case-studies/cirrus), including merging Salesforce accounts and synchronizing data across multiple systems – saving time and reducing manual effort.

Accurate mapping for consistency

[Echo Technology Solutions](https://skyvia.com/case-studies/echo), a consulting firm, needed to sync customer details between Salesforce and SQL Server to improve data visibility and accuracy. With Skyvia's robust mapping and transformation tools, they created ETL workflows that ensured precise, error-free updates across platforms.

Conclusion

Business needs are constantly evolving, and the market offers a variety of [solutions](https://skyvia.com/blog/best-data-pipeline-tools/) to keep up. Choosing the right data loader requires understanding the key criteria, so we've done the research and compared the options for you. And here's the best part: many tools offer a free trial, giving you a risk-free way to try them firsthand. Don't miss your chance! [Sign up for Skyvia](http://app.skyvia.com) and see for yourself how easy, powerful, and user-friendly it is.
FAQ for Data Loaders for Salesforce

What's the difference between Salesforce Data Loader and Data Import Wizard?
Salesforce Data Loader handles large-scale operations (up to 150,000,000 records), supports INSERT, UPDATE, UPSERT, DELETE, and export, and offers CLI-based automation. Data Import Wizard is UI-based and easier to use but is limited to 50K records and fewer objects.

Which data loader is best for real-time synchronization?
Dataloader.ai offers AI-powered automation for real-time updates. Skyvia and Jitterbit also support near real-time sync via API-based integration and scheduling. Most traditional loaders (e.g., Salesforce Data Loader) lack real-time capabilities.

What's better: Salesforce native tools or third-party loaders?
Salesforce native tools (Data Loader, Import Wizard) are free, secure, and well integrated but limited in automation and scalability. Third-party loaders offer advanced features such as real-time sync, scheduling, and better UX, making them ideal for complex workflows.

What criteria should I consider when choosing a data loader?
Key factors include volume capacity, supported DML operations, automation and scheduling, error handling, data transformation capabilities, integration with other tools, security compliance, and ease of use. Choose a tool that matches your business needs and technical expertise.

How do I handle lookup relationships when importing data?
Use External IDs to link records instead of manually retrieving Salesforce IDs. Tools like Skyvia offer lookup mapping, automatically resolving relationships during import. Some loaders require preloading parent records first to maintain data integrity.

Is it possible to automate data loads?
Yes! Tools like Skyvia, Jitterbit, and Dataloader.io support scheduled tasks for automated imports, exports, and updates. Salesforce Data Loader allows CLI-based scheduling (Windows only), while some basic tools require manual execution.
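The External ID approach from the FAQ above can be sketched in a few lines. The script below is an illustration only: the field name `External_Id__c`, the `Account:External_Id__c` column-header convention (used by some loaders to resolve a parent lookup via an External ID field), and the sample data are all assumptions, not tied to any specific tool. It prepares an import CSV in which each contact references its parent account by the external system's key rather than by a Salesforce record ID:

```python
import csv
import io

# Source rows from an external system: each contact knows its parent
# account only by the external system's own key, not a Salesforce ID.
contacts = [
    {"LastName": "Smith", "Email": "smith@example.com", "account_key": "ERP-001"},
    {"LastName": "Jones", "Email": "jones@example.com", "account_key": "ERP-002"},
]

def build_import_csv(rows):
    """Write a CSV whose lookup column targets an External ID field,
    so no Salesforce IDs need to be retrieved before the import."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["LastName", "Email", "Account:External_Id__c"]
    )
    writer.writeheader()
    for row in rows:
        writer.writerow({
            "LastName": row["LastName"],
            "Email": row["Email"],
            # The loader matches this value against the Account object's
            # External ID field during upsert (hypothetical field name).
            "Account:External_Id__c": row["account_key"],
        })
    return buf.getvalue()

print(build_import_csv(contacts))
```

During an upsert, a loader with lookup mapping resolves each `ERP-…` value to the matching account, which is why preloading parent accounts (with their external IDs populated) still matters for data integrity.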
[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.

Salesforce Business Analyst Guide: Insights into the Role

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), January 26, 2024
We often hear "data analyst," "product analyst," "business analyst," and so on. But who are these professionals? Does one person represent all these roles, or are they different people working on different tasks? A little spoiler: one person, usually known as a business analyst, can embody all these roles simultaneously! This article focuses on the Salesforce business analyst role, its key tasks, and its responsibilities. We also highlight the importance of Salesforce for business analysts. Finally, you'll discover other essential tools that help such professionals carry out their tasks efficiently.

Table of Contents

- What Is a Salesforce Business Analyst?
- Key Responsibilities of Every Business Analyst
- Typical Tasks of a Salesforce Business Analyst
- 5 Tools Every Business Analyst Needs for Enhancing Salesforce Efficiency
- Demand for Salesforce Business Analysts: Trends and Career Insights
- How to Become a Salesforce Business Analyst?
- Conclusion

What Is a Salesforce Business Analyst?

Before defining a Salesforce business analyst, let's first look at who an ordinary business analyst (BA) is. The official definition of the role states the following:

A BUSINESS ANALYST (BA) IS A PERSON WHO CREATES, INTERPRETS, AND DOCUMENTS BUSINESS PROCESSES, PRODUCTS, SERVICES, AND SOFTWARE REQUIREMENTS THROUGH DATA ANALYSIS.

As you can see, a BA can also handle tasks within the competence of a product and/or data analyst. How is a Salesforce business analyst different from a conventional BA?
Well, the secret is in the name itself – a Salesforce BA puts Salesforce at the center of all processes. Depending on the type of organization and the workload, this expert might also collaborate with other BAs or combine the roles of Salesforce product analyst and Salesforce data analyst.

Key Responsibilities of Every Business Analyst

Here is the list of global responsibilities of the traditional BA role:

- Requirements gathering. A BA identifies the functions needed in a system, application, or process. Involving stakeholders (project managers, customers, testers, etc.) is the best way to gather complete requirements. Techniques such as interviews and surveys are prevalent at this stage.
- Process mapping. A BA ensures that everyone involved in a process has a clear understanding of it. Once all the minor details and steps of the process are gathered, the BA creates its visual representation.
- Process optimization. This usually relies on the shadowing technique – unobtrusively observing users' behavior, noting the difficulties they experience, and highlighting points for optimization.
- Data quality and integration. Data obtained during requirements gathering or shadowing usually isn't ready for further analysis. It needs to be filtered, cleansed, and transformed beforehand, so a business analyst must use effective data preparation tools before analysis.
- Reporting and analysis. A business analyst applies statistical methods and formulas to find patterns in the data. BI tools also come in handy for creating visual dashboards and reports, which facilitate informed decision-making.
- Documentation. A large part of a BA's daily routine is documenting everything, from functional requirements to use cases. The goal is to keep a record of new information and changes and to share insights with stakeholders.
The responsibilities above are typical for any business analyst. Note that the list can be expanded further to address organizational goals and business objectives.

Typical Tasks of a Salesforce Business Analyst

Now, let's explore the daily tasks of a Salesforce BA and see how they align with the responsibilities mentioned above:

- Cooperate with business stakeholders. The aim is to find the gaps between the current technological solutions in Salesforce and business needs.
- Transform business requirements into Salesforce solutions. This task is about process mapping and translating business requirements into clear instructions so developers can successfully implement new functions and enhancements in Salesforce.
- Create and maintain Salesforce solution documentation. The BA records all process flows, data models, technical specifications, employee tasks, and system tasks in dedicated software.
- Conduct training sessions. Once new features appear in the Salesforce solution, the BA acquaints users with the new functionality via demo sessions.

Since Salesforce BAs deal with large amounts of data, they rely on technological solutions in their daily workflows to complete tasks on time.

5 Tools Every Business Analyst Needs for Enhancing Salesforce Efficiency

Digitization has produced a wealth of services and tools that push business analysis forward. Here are the lifesavers that a Salesforce BA can use regularly.

Microsoft Power BI (Business Intelligence)

Power BI is Microsoft's product for business intelligence, analysis, and data visualization. It contains a powerful feature set and connects smoothly to other Microsoft products as well as third-party services such as Salesforce. The value of Power BI for business analysts is that it helps to improve [Salesforce data quality](https://skyvia.com/blog/enhance-salesforce-data-quality-and-cleaning-with-skyvia/) and generate reports and dashboards.
Along with that, Power BI has other key features to assist BA specialists:

- Data transformation. Instruments and mechanisms for cleansing, filtering, and transforming imported Salesforce data.
- Data visualization. A great number of charts, graphs, maps, and tables to illustrate Salesforce data appropriately.
- DAX. A formula language that allows users to create custom calculations.
- Plain-language queries. Power BI lets users ask questions about their Salesforce data in supported natural languages.

There are both desktop and mobile versions of the application. Power BI Desktop is free; paid Power BI plans start from $10 per user per month.

Jira (Project Management)

Jira is the best-known issue-tracking tool used by software development and quality assurance teams. It's a highly customizable product that allows project managers, BAs, and other team members to collaborate effectively. Jira also helps Salesforce business analysts plan, track, and manage tasks.

Key features of Jira:

- Issue tracking. Based on gathered requirements, BAs can create user stories describing the requested Salesforce functionality, create tasks associated with each story, and assign them to the corresponding team members.
- Agile approach. Jira provides functionality for agile workflow management, such as Scrum and Kanban, so Salesforce BAs and project managers can plan sprints and control their implementation.
- Customizability. Jira is a highly customizable solution that lets BAs set up workflows for their specific needs.
- Integration. The platform easily integrates with multiple time-tracking tools, version control systems, etc.
- Security. Admins can easily configure permissions to certain areas of Jira and limit access to sensitive information.

Jira is corporate software, so the cost depends on the number of users in the system. The tool is free of charge for fewer than 10 users.
For 10+ users, the price starts from $8.15 per user monthly.

Skyvia (ETL and Data Integration)

One of the core responsibilities of a Salesforce business analyst is to ensure data quality and perform [Salesforce data integration](https://skyvia.com/blog/salesforce-integration-tools/). Skyvia offers a solution named [Salesforce Data Loader](https://skyvia.com/data-integration/salesforce-data-loader) for that. BAs can use it to automate the extraction, transformation, and loading of data between Salesforce and other systems.

The main features of Skyvia's Salesforce Data Loader are:

- Data import. Move Salesforce data to another cloud app, data warehouse, or database, and apply the needed mapping settings to match the target data. Skyvia also offers a range of data transformation options upon extraction from Salesforce.
- Data export. Skyvia extracts Salesforce data into CSV files, which can be saved on file storage platforms such as Box or Google Drive, or directly on a computer for later use in Excel or other applications.

Skyvia is a universal cloud data platform, so it has other features that can benefit Salesforce data:

- Data replication. Using a data warehouse as the centralized platform for data consolidation and analysis, BAs can take advantage of Skyvia's Replication scenario. Skyvia copies Salesforce data to a chosen DWH and performs incremental updates regularly.
- Data backup. Protect your Salesforce data with automatic scheduled backups and restore it later whenever necessary.
- Data querying. BAs can perform simple, fast analysis with Skyvia Query by executing SQL statements against Salesforce data.
- Reporting. Skyvia's Import, Export, and Data Flow tools can run ready-made reports from Salesforce and use their outputs in integrations.
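To make the data querying and replication points concrete: once Salesforce data sits in a relational store, a BA can answer typical business questions with plain SQL. The sketch below is purely illustrative – it uses an in-memory SQLite table as a stand-in for replicated Salesforce data (the `Opportunity` table and its columns mirror standard Salesforce field names, but the sample rows are invented, and Skyvia's own Query tool runs such statements directly without this setup):

```python
import sqlite3

# In-memory stand-in for a replicated Salesforce Opportunity table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Opportunity (Name TEXT, StageName TEXT, Amount REAL)")
conn.executemany(
    "INSERT INTO Opportunity VALUES (?, ?, ?)",
    [
        ("Acme renewal", "Closed Won", 12000.0),
        ("Globex upsell", "Closed Won", 8000.0),
        ("Initech pilot", "Prospecting", 3000.0),
    ],
)

# A typical BA question: how many deals, and how much revenue, per stage?
query = """
    SELECT StageName, COUNT(*) AS deals, SUM(Amount) AS total
    FROM Opportunity
    GROUP BY StageName
    ORDER BY total DESC
"""
for stage, deals, total in conn.execute(query):
    print(f"{stage}: {deals} deal(s), {total:.2f}")
```

The same aggregate-and-group pattern applies whether the statement runs in a data warehouse after replication or in a query tool pointed straight at Salesforce.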
A [competitive advantage of Skyvia is its user-friendliness](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1): it doesn't require any advanced coding knowledge to perform regular BA tasks. Moreover, the service is accessible from any browser at any time, so no desktop installations or additional setup is necessary. Skyvia suits everyone thanks to its ample feature set and flexible pricing model. The free version has all the necessary features, with the only limitation being the number and frequency of data integrations. Businesses can also benefit from the paid plans, which vary in the allowed number and frequency of data integrations and add advanced integration scenarios and mapping functions.

Microsoft Excel (Data Analysis)

Excel first appeared in the 1980s, around the same time Microsoft's operating systems emerged. Today, it is the most popular spreadsheet program, using a grid interface of rows and columns. It allows Salesforce business analysts to organize and manipulate data, perform data analysis, and create simple visualizations.

Note that the [Skyvia Query add-in for Excel](https://skyvia.com/excel-add-in/salesforce) helps BAs extract data from Salesforce. To install the add-in, take the following steps:

1. [Sign up](https://app.skyvia.com/register) for Skyvia if you don't have an account yet.
2. Go to [Microsoft AppSource](https://appsource.microsoft.com/en-GB/product/office/WA200001764?tab=Overview).
3. Click Get it now. You will be redirected to another page.
4. Click Open in Excel.
5. Perform the other required settings in Excel.

To start querying Salesforce data in Excel, [see the detailed instructions here](https://docs.skyvia.com/skyvia-query-excel-add-in/). Also check other ways to [export data from Salesforce to Excel](https://skyvia.com/blog/5-ways-to-export-data-from-salesforce-to-excel/).
Confluence (Documentation and Collaboration)

Since one of the core goals of any BA is to create clear documentation, a professional tool is needed for that. Confluence is exactly that solution, allowing Salesforce BAs to create documents and share them with others. The tool is extremely helpful for documenting requirements, project plans, and other critical information related to Salesforce configurations and processes. As Confluence also belongs to Atlassian, its pricing model is similar to Jira's: it's free for systems with fewer than 10 users; beyond that, the price starts from $6.05 per user monthly.

Demand for Salesforce Business Analysts: Trends and Career Insights

When most companies started introducing IT into their business processes, programmers spent a lot of time rewriting code because the delivered functionality conflicted with business needs. As a result, considerable money and time were wasted. This triggered a sharp need for professionals who could build a bridge between business users and software developers. The spike in demand for business analysts became notable in the 1990s, and it continues to grow. The same goes for Salesforce BAs, since more than [150,000 enterprises](https://www.linkedin.com/pulse/how-many-companies-use-salesforce-comprehensive-analysis-jacob-mathew-vxgye/) use Salesforce worldwide. What's more, Salesforce has been ranked the [#1 CRM](https://www.salesforce.com/news/stories/idc-crm-market-share-ranking-2023/) for 10 consecutive years. Experts such as Salesforce BAs are therefore in demand to ensure the successful development and implementation of Salesforce-related projects. As businesses increasingly integrate IT, the role of Salesforce Business Analysts becomes vital: they bridge the gap between business needs and technical solutions.
[Automated Salesforce testing](https://www.functionize.com/automated-testing/salesforce-testing) is key here, ensuring solutions are not just technically sound but also aligned with business goals. Such testing saves time and boosts solution accuracy, highlighting the importance of analysts in adapting business processes to a fast-changing market.

Average salary rates for Salesforce business analysts vary by geographical region and qualifications. In 2023, the average salary of a Salesforce BA was around [$112k per year in the US](https://www.talent.com/salary?job=salesforce+business+analyst) and almost [£43k per year in the UK](https://uk.talent.com/salary?job=business+analyst).

How to Become a Salesforce Business Analyst?

Despite the increasing demand, it's hard to find universities that offer degree courses for becoming a Salesforce BA. This can be explained by the pioneering and constantly evolving nature of the role. As a result, some people get promoted to the Salesforce business analyst role from other positions, while others take extra courses to switch to the business analyst career path. Even though there are no dedicated university degrees, an analysis of hundreds of Salesforce BA profiles reveals certain patterns in educational background and in hard and soft skills.

Education

A bachelor's degree in Computer Science, Information Systems Management, Business Administration, or a related discipline is highly recommended.

Key Skills

Most Salesforce BA job descriptions request the following key skills:

- Excellent knowledge of and practical experience in Sales Cloud, Service Cloud, Marketing Cloud, and Salesforce CPQ.
- Experience in Salesforce administration.
- Understanding of the software development lifecycle and how Salesforce functionality is implemented across it.
- Critical and analytical thinking for extracting valuable information from data.
- Communication skills for interacting with both technical and non-technical audiences.
- Experience with Atlassian tools (Jira and Confluence), software for designing use cases, [data integration tools](https://skyvia.com/blog/data-integration-tools/), etc.

As a Salesforce BA has to constantly adapt to an evolving field and maintain an innovative outlook, it's worth taking [courses and certifications](https://www.salesforce.com/blog/become-salesforce-admin-certified/) organized by Salesforce and partner organizations. They help business analysts advance their Salesforce knowledge and enhance other skills essential for this role.

### Conclusion

If you're dreaming about the professional path of the Salesforce BA, the right time to start is now! Being a Salesforce BA grants you a chance to work with today's most popular and feature-rich CRM. The role guarantees interaction with different audiences, the design of new approaches for workflow enhancement, the discovery of innovative technological solutions, and a wealth of experience. The importance of IT tools for a Salesforce BA can't be overestimated: Atlassian's Jira and Confluence and Microsoft's Excel and Power BI make up the basis of the Salesforce BA's toolkit. The cherry on top is Skyvia, [the best Salesforce data loader](https://skyvia.com/blog/salesforce-best-data-loaders/) for data integration and preparation.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) has nearly a decade of experience in technical writing and specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

---

## Salesforce Connect for Your Business

By [Sergey Bykov](https://skyvia.com/blog/author/sergeyb/) | [Data Connectivity](https://skyvia.com/blog/category/data-connectivity/) | March 23, 2023
*Editor's note: This article has been updated for accuracy and comprehensiveness in January 2024.*

Salesforce is one of the most widely used CRMs in the world. Many organizations use it to store and process their critical business data. However, it's rarely the only system where company data is stored. Salesforce offers its own solutions for integrating data from different sources, one of them being Salesforce Connect. This article describes what Salesforce Connect is, what features it offers, and how to configure and use it.

### Table of Contents

- What is Salesforce (Lightning) Connect?
- Exploring the Functionality of Salesforce Connect
- Salesforce Connect Pros: When to Use Salesforce Connect
- How to Set Up Salesforce Connect
- 5 Salesforce Connect Alternatives
- Conclusion

### What is Salesforce (Lightning) Connect?

[Salesforce Connect](https://help.salesforce.com/s/articleView?id=sf.platform_connect_about.htm&type=5) is a native Salesforce integration tool for connecting legacy databases, ERP systems, and other data sources to Salesforce. It lets you work with data from these sources from within the Salesforce interface, in the same way you work with Salesforce data. However, this data isn't actually imported into Salesforce: it stays in the connected data source, and you work with it there directly. When Salesforce Connect was first introduced, it was called Lightning Connect. It was later renamed Salesforce Connect, and it's available in both Salesforce Lightning and Salesforce Classic.

### Exploring the Functionality of Salesforce Connect

Salesforce Connect allows the creation of external objects in Salesforce, which, except for a few rather minor differences, work like the usual custom and predefined Salesforce objects.
You can add tabs and customize displayed columns to work with external object data, access external objects via the Salesforce API and development tools, etc. Salesforce Connect supports both read-only and read-write data sources. It connects to data sources through several kinds of adapters:

- Cross-org adapter: connects data from another Salesforce org to the current one.
- OData 2.0 and OData 4.0 adapters: connect data from sources that allow access via the corresponding version of the OData protocol.
- Custom Apex adapters: custom adapters developed via the Apex Connector Framework for data sources that aren't supported directly.
- Adapter for Amazon DynamoDB: connects Amazon DynamoDB data.
- Adapter for Amazon Athena: connects to data stored on Amazon S3 in different formats, as well as a number of other sources.

### Salesforce Connect Pros: When to Use Salesforce Connect

Salesforce Connect offers the following benefits:

- Direct connection. You work directly with external data from within the Salesforce interface.
- Low API overhead. Salesforce Connect [queries only the necessary data](https://skyvia.com/blog/salesforce-connect/) from the data source.
- No Salesforce storage used. Storing large amounts of data in Salesforce can become quite costly.

These benefits make Salesforce Connect a perfect fit when you need direct, real-time access to data in another data source, or when an external data source holds a large amount of data. If you have a lot of legacy, rarely accessed data in Salesforce, it can even be more cost-efficient to export it to a database and access it via Salesforce Connect instead of storing it in Salesforce. However, keep in mind that Salesforce Connect itself is not a free feature: it comes at an additional cost, fixed per connection to an external data source.
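To make the "low API overhead" point concrete: when an external object backed by an OData adapter is queried, the request is translated into an OData URL with system query options such as `$select`, `$filter`, and `$top`, so only the needed fields and rows travel over the wire. Below is a minimal Python sketch of how such a URL is composed; the endpoint and field names are hypothetical, and this is an illustration of the OData query syntax, not Salesforce's actual implementation.

```python
def odata_query_url(endpoint, entity_set, select=None, filter_expr=None, top=None):
    """Compose an OData query URL with $select/$filter/$top options."""
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter_expr:
        # Percent-encode spaces; OData filters use operators like 'gt', 'eq'.
        parts.append("$filter=" + filter_expr.replace(" ", "%20"))
    if top is not None:
        parts.append("$top=" + str(top))
    url = endpoint.rstrip("/") + "/" + entity_set
    return url + ("?" + "&".join(parts) if parts else "")

# Hypothetical endpoint: fetch only two fields of the 20 largest recent orders.
url = odata_query_url(
    "https://example.com/odata",
    "Orders",
    select=["OrderId", "Amount"],
    filter_expr="Amount gt 100",
    top=20,
)
```

Because the filter and projection are pushed down to the external source, Salesforce never has to pull the full table.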
### How to Set Up Salesforce Connect

If you have the corresponding Salesforce Connect add-on license, setting up Salesforce Connect is pretty straightforward. Let's connect Salesforce to an OData endpoint using Salesforce Connect.

#### Creating an External Data Source in Salesforce

First, you need to create an external data source in Salesforce:

1. In the top right corner of the page, click Setup, and then click the Setup menu item.
2. In the menu on the left, under Platform Tools, click Integrations, and then click External Data Sources.
3. Click New External Data Source.
4. Specify a user-friendly name for External Data Source and a unique external data source identifier for the Name.
5. In the Type list, select Salesforce Connect: OData 2.0 or Salesforce Connect: OData 4.0, depending on which OData version your endpoint uses.
6. Specify your OData endpoint URL.
7. Configure the endpoint Authentication.
8. Select the Writable External Objects checkbox if the endpoint points to a writable data source.
9. Optionally configure other endpoint settings, such as High Data Volume or Use Free-Text Search Expressions.
10. Click Save.

#### Creating External Objects

After you create an external data source in Salesforce, creating its external objects is very simple. Just click Validate and Sync, select the endpoint tables you want to sync, and click Sync. Salesforce automatically creates the necessary external objects; optionally, you can configure naming for them. After this, you can work with these external objects in the same way as with usual Salesforce objects. Note that external objects have the suffix "__x" in Salesforce.

Let's create tabs for these objects. In the menu on the left, under Platform Tools, click User Interface, and then click Tabs. In the Custom Object Tabs pane, click New. Select the required Object, set the Tab Style and Description, and click Next.
Specify the tab visibility settings and click Next, then configure the tab availability for custom apps and click Save. Please note that, by default, the list view on this tab does not display any useful fields. You can edit this view and select which fields from the external object it displays.

### How to Connect Databases and Cloud Apps That Don't Support OData

If the data source you want to access from Salesforce doesn't support OData, you have several options:

- Create a custom Salesforce Connect adapter for your data source.
- Develop an OData server yourself that connects to your data source directly and publishes its data as an OData endpoint.
- Use third-party software to publish your data as an OData endpoint.

The first two options mean that you need to have a developer or be one. If you don't want to code, it's better to go for the third option. As for third-party software, [Skyvia Connect](https://skyvia.com/connect/salesforce-connect) would be a good choice. It's a web-API-as-a-service solution, which means you won't need your own hosting or server, and you won't need to handle maintenance and administration. All you need to do is configure an endpoint visually, in a convenient wizard, in a few simple steps:

1. Connect Skyvia to your data source.
2. Select the data you want to publish. You may select tables and separate fields to publish or hide.
3. Configure an additional security layer for your endpoint. You can create users with passwords for basic authentication and limit access to the endpoint by IP. Note that if you add authentication to the endpoint, you need to use Named Principal authentication in the Salesforce Connect external data source.
4. Specify the default endpoint protocol version and whether the endpoint is writable.

That's all; after this, you can copy the resulting endpoint link and paste it into Salesforce.
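Whichever way the endpoint is published, an OData 4.0 consumer such as Salesforce Connect ultimately receives table rows as a JSON document with an `@odata.context` annotation and a `value` array. The Python sketch below shows that response shape; the service URL and field names are made up for illustration, and real endpoints add further annotations (paging links, ETags, etc.).

```python
import json

def odata_response(context_url, rows):
    """Wrap table rows in the JSON envelope defined by the OData 4.0 JSON format:
    an @odata.context metadata reference plus a 'value' array of entities."""
    return json.dumps({"@odata.context": context_url, "value": rows})

# Hypothetical service: two rows of an 'Orders' entity set.
payload = odata_response(
    "https://example.com/odata/$metadata#Orders",
    [{"OrderId": 1, "Amount": 250.0}, {"OrderId": 2, "Amount": 99.5}],
)
```

Salesforce Connect maps each object in `value` to one external-object record.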
Skyvia Connect endpoints support a wide range of OData features, more than Salesforce Connect can use. [Skyvia](https://skyvia.com/) supports over 200 cloud apps and databases. [Skyvia connectors](https://skyvia.com/connectors/) include the [SQL Server connector](https://skyvia.com/blog/connect-salesforce-to-sql-server/), as well as MySQL, Oracle, Amazon Redshift, and Google BigQuery connectors, letting you connect these data sources to Salesforce via Salesforce Connect in just a few minutes.

### 5 Salesforce Connect Alternatives

Salesforce Connect isn't the only integration solution from Salesforce. There are a number of others, for example, Salesforce External Services, but the true alternative to connecting to a data source directly is loading external data into Salesforce. Loading data from other sources into Salesforce can be preferable to Salesforce Connect if you need to work with this data in Salesforce all the time, rather than just accessing small amounts of it occasionally. Salesforce offers more native solutions for data integration, from free Import and Export wizards for manual data import and export to the paid (and pricey) MuleSoft platform. There are also many third-party solutions for [Salesforce data integration](https://skyvia.com/blog/salesforce-integration-tools/) and reporting on the market. Let's see what other connectors can fine-tune a Salesforce strategy for business.

#### Skyvia: Universal Cloud Data Platform

Skyvia also provides its own solution: Data Integration for [easy integrations](https://skyvia.com/data-integration/) of Salesforce with other apps and databases via data loading. It includes different tools, from simple wizard-based [Salesforce Data Loader](https://skyvia.com/solutions/salesforce-solutions) tools to advanced solutions for designing flows of data between multiple data sources on diagrams.
These tools can help you automate different Salesforce integration scenarios: [Jira integration](https://skyvia.com/blog/jira-salesforce-integration/), [connecting to Shopify](https://skyvia.com/blog/shopify-salesforce-integration/), and even [Salesforce to Salesforce](https://skyvia.com/blog/salesforce-to-salesforce-integration/) connections. On top of that, you can create Salesforce reports in Excel and Google Sheets using the [Skyvia Query Excel Add-in](https://skyvia.com/excel-add-in/) and [Skyvia Query Google Sheets Add-in](https://skyvia.com/google-sheets-addon/).

**Pros**

- Robust data integration features.
- Versatile data transformation capabilities.
- One of the [easiest-to-use ETL tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) on the market.
- A [4.8 customer satisfaction rating on G2 Crowd](https://www.g2.com/products/skyvia/reviews#reviews) and a [4.9 rating on Gartner Peer Insights](https://www.gartner.com/reviews/market/data-integration-tools/vendor/devart/product/skyvia).
- A no-code solution perfect for non-technical users.

**Cons**

- Not enough video tutorials.

Skyvia's Data Integration has a flexible pricing system based on the amount of data loaded and the features used, including a free plan for simple scenarios and a trial period for paid plans.

#### Tableau

Tableau is a data analysis and visualization service that provides lots of advanced features tailored to your Salesforce data needs. Its seamless integration with Salesforce allows users to directly access Salesforce data, simplifying the creation of insightful visualizations. Tableau boasts a user-friendly interface, catering to users of all skill levels.

**Pros**

- Advanced data visualization features.
- Direct connection to Salesforce.
- High-level customization capabilities.

**Cons**

- It can be costly, particularly for larger enterprises.
- The learning curve is steep, especially for non-technical and new users.
- It has limitations in data transformation.

#### Smartsheet

Smartsheet, a cloud-based platform, excels in collaboration, reporting, and management. Its integration with Salesforce elevates the data analysis process.

**Pros**

- Flexible, customizable reports that can be tailored to fit business requirements.
- Collaborative environment: multiple users can work together simultaneously.
- Built-in Gantt chart functionality to visualize project timelines and dependencies.

**Cons**

- Limited data transformations.
- Advanced reporting comes with a steep learning curve.

#### G-Connector

G-Connector acts as a virtual bridge, seamlessly connecting Salesforce and Google Sheets. It enables two-way synchronization, letting you effortlessly share Salesforce reports and [import data into Salesforce](https://skyvia.com/blog/importing-data-into-salesforce/), even with users outside your organization.

**Pros**

- Effortless bidirectional data flow.
- Both automatic and manual data exchange options.
- User-friendly drag-and-drop interface.

**Cons**

- Limited advanced reporting and customization features.

#### Klipfolio

Klipfolio is a cloud-based dashboard service for data reporting. Integrated with Salesforce, it provides organizations with a versatile and user-friendly solution to enhance data visualization, analysis, and sharing capabilities.

**Pros**

- Customization capabilities to tailor reporting to business needs.
- Easy integration with Salesforce and various other services.
- Scheduled reporting and automated report distribution via email and shared links.

**Cons**

- Limited data integration capabilities.
- Challenging learning process, especially for non-technical users.

[Read more about the Best Salesforce Integration Tools](https://skyvia.com/blog/salesforce-integration-tools/)

### Conclusion

In this article, we've explained what Salesforce Connect is and what features it provides.
Salesforce Connect can be very useful when you need to access data in external data sources directly from Salesforce, especially when the data source stores a large amount of data and only small portions of it are needed at any time. In other cases, when you need to move external data into Salesforce, you can turn to data integration solutions. Whichever option you select, Skyvia can help, both as a data integration alternative to Salesforce Connect and as a handy tool for connecting databases and cloud apps via Salesforce Connect.

[Sergey Bykov](https://skyvia.com/blog/author/sergeyb/) combines years of experience in technical writing with a deep understanding of data integration, cloud platforms, and emerging technologies. Known for making technical subjects approachable, he helps readers navigate complex tools and trends with confidence.
---

## Choosing the Right Salesforce Data Tool

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) | [Data Loader](https://skyvia.com/blog/category/data-loader/) | May 7, 2025

As a modern, client-centric platform, Salesforce understands the demands of its users and does its best to meet them. In particular, it provides organizations with a suite of tools to help onboard their data onto the platform. Among these, the Data Import Wizard and Data Loader stand out as two powerful, built-in options for migrating business information into the Salesforce ecosystem. While both tools are native to Salesforce, they serve different needs and user profiles. In this blog post, we'll explore their features and compare their benefits and limitations to help you make the right choice.
### Table of Contents

- Understanding Salesforce's Data Import Wizard
- Understanding Salesforce's Data Loader
- Salesforce Data Import Wizard vs. Data Loader
- How to Import Contacts with Data Import Wizard
- How to Use Data Loader for Bulk Updates
- New Approach to Salesforce Data Management with Skyvia
- Real-World Use Cases: Applications of Salesforce Data Tools
- Conclusion

### Understanding Salesforce's Data Import Wizard

The [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard is a user-friendly tool included with every Salesforce subscription. Accessible directly from the Salesforce interface, it requires no additional setup, which is a big advantage in terms of time and cost savings. With support for Insert, Update, and Upsert operations, the Data Import Wizard covers the core data importing and management needs within Salesforce. It works with a number of standard and custom objects and can handle up to 50,000 records in a single batch, a great fit for small and mid-size businesses. In addition, the Data Import Wizard provides essential field mapping features, including:

- Mapping CSV columns to Salesforce fields.
- Auto-matching columns to standard fields based on header names.
- Manual mapping to custom fields.
- Remembering previous mappings for repeat imports.

The tool also respects data validation rules during the import process, blocking the transfer of records that fail to meet the applied criteria. This helps maintain the quality of the information that enters Salesforce.

#### Data Import Wizard Key Features

- No software installation required: available directly within Salesforce.
- Friendly for non-technical users: designed with a step-by-step interface accessible via the Salesforce Setup menu.
- Supports the CSV format: allows importing data from CSV files.
- Offers pre-built object templates for common data types: streamlines import flows for standard objects like Leads, Contacts, Accounts, Solutions, and Campaign Members, as well as custom objects.
- Built-in deduplication and validation checks: identifies potential duplicate records and enforces Salesforce validation rules before completing the import.
- Field mapping options: enables automatic and manual field mapping.

#### Data Import Wizard Limitations

- Maximum of 50,000 records at once: not suitable for large enterprise data loads because of the per-import limit.
- No automation support: imports must be launched manually each time.
- No data export functionality: supports data transfer in one direction only; [exporting data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) requires other tools.
- Does not support certain standard objects: objects with complex relationships, such as Opportunities, Cases, and Campaigns, are not supported by the Wizard.
- Basic error reporting only: displays error messages after import runs but lacks the detailed logs and retry options available in Data Loader.

### Understanding Salesforce's Data Loader

The Salesforce Data Loader is a desktop-based tool built for bulk data tasks. Compared to the Data Import Wizard, it lets you interact with your data in more ways, including support for Delete and Export operations. With the capacity to process up to 5,000,000 records at once, it is a practical solution for businesses working with large datasets. You can run Data Loader with a user interface (UI) or in command-line (CLI) mode, which makes it suitable for both non-technical and advanced users.
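The Wizard's deduplication check described above can be pictured as matching incoming rows against existing records on a key field. The Python sketch below is a simplified illustration only: the email-based, case-insensitive matching rule is our assumption for the example, while the Wizard's actual matching options depend on the object (e.g., name, email, or ID).

```python
def split_duplicates(incoming, existing, key="Email"):
    """Separate incoming records into new ones and likely duplicates,
    matching case-insensitively on a key field (here: Email)."""
    seen = {rec[key].strip().lower() for rec in existing if rec.get(key)}
    new, dupes = [], []
    for rec in incoming:
        value = (rec.get(key) or "").strip().lower()
        (dupes if value and value in seen else new).append(rec)
    return new, dupes

# Hypothetical data: one genuinely new contact, one duplicate that differs only in case.
new, dupes = split_duplicates(
    incoming=[{"Email": "ann@example.com"}, {"Email": "BOB@example.com"}],
    existing=[{"Email": "bob@example.com"}],
)
```

Normalizing the key before comparison is what keeps "BOB@example.com" from slipping past the check as a "new" record.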
The CLI also provides the foundation for automation capabilities, including:

- Insert, Update, Upsert, Delete, and Export / Export All operations.
- Scheduling jobs using operating system tools.
- [Batch processing](https://skyvia.com/blog/batch-etl-processing/) with mapping files.
- Integration into larger automation pipelines.

Unlike the Data Import Wizard, which offers simplified, UI-based mapping, Data Loader gives you full control over field mappings, catering to more complex scenarios. In particular, it supports mapping files: pre-defined instructions that specify how CSV columns map to Salesforce fields. Although these files are created manually, they can be reused across multiple imports.

#### Data Loader Key Features

- Handles large data volumes: supports up to 5 million records per operation.
- Supports multiple object types and complex relationships: including standard and custom objects and lookup fields.
- Enables scheduled tasks and data loading automation: via the command-line interface and OS-level scheduling tools.
- Offers flexibility with an interactive UI or CLI: suitable for both business users and advanced technical users.
- Supports all core data operations: Insert, Update, Upsert, Delete, Export, and Export All.
- Reusable field mapping files: define once, reuse for consistent and efficient imports.

#### Data Loader Limitations

- No native scheduling UI: Data Loader doesn't have a built-in interface for scheduling; automation is only possible via the CLI and OS-level tools.
- macOS CLI limitation: since the CLI is maintained only for Windows, automating and scheduling data operations on macOS may require workarounds or third-party tools.
- Not cloud-based: requires installation and lacks the accessibility of cloud-based solutions.
- Manual setup for automation: configuring automated tasks via the CLI requires scripting skills, which may be a barrier for non-technical users.
- No metadata migration options: since Data Loader works with data only, you cannot use it to migrate metadata such as custom objects, Apex code, or validation rules.
- Batch-only processing: doesn't support real-time or event-driven data operations.

### Salesforce Data Import Wizard vs. Data Loader

Now that we've reviewed both tools in detail, let's wrap up with a summary table comparing their key features.

| Capability | Data Import Wizard | Data Loader |
|---|---|---|
| Admin access | yes | yes |
| User access | yes | no |
| Data export | no | yes |
| Scheduled data loads | no | yes |
| Data deletion | no | yes |
| Records supported | up to 50,000 | up to 5,000,000 |
| Deduplication | yes | no |
| User interface | yes | yes |
| Error handling | no | yes |
| Import/update custom objects | yes | yes |
| Import/update Accounts | yes | yes |
| Import/update Contacts | yes | yes |
| Import/update Leads | yes | yes |
| Import/update Person Accounts | yes | yes |
| Import/update Campaign Members | yes | yes |
| Import/update Opportunities | no | yes |
| Import/update Cases | no | yes |
| Import/update Solutions | yes | yes |
| Toggle workflow rules off | yes | no |
| Validation rules fire | yes | yes |

#### The Reasons for the Choice

Of course, choosing the right tool is a careful process that shouldn't be rushed. Still, it's always helpful to have a quick reference guide to point you in the right direction.

Choose the Data Import Wizard if:

- You are okay with manual imports and don't need automation.
- The 50,000-record limit is enough to meet your business needs.
- The Wizard supports the objects you need to import.
- You have basic import needs and work with relatively small datasets.
- You prefer a user-friendly interface with minimal setup effort.

Choose Data Loader if:

- Your workloads require processing more than 50,000 records per batch.
- You rely on regularly scheduled large data loads, such as nightly imports.
- You occasionally need to [delete data from Salesforce](https://skyvia.com/blog/how-to-mass-delete-records-from-salesforce-using-sql-or-data-loader/).
- Your data imports involve complex scenarios, such as data transformations and advanced field mappings.
- You need to export and offload data from Salesforce.

### How to Import Contacts with Data Import Wizard

#### Things to Consider Before You Start

The most common reasons for import failures are poor CSV formatting, unmapped required fields, and other data issues. Many of these errors can be avoided with proper preparation. Salesforce [emphasizes the importance of preliminary steps](https://salesforce.vidyard.com/watch/ARIjWm2qrDkJVJxEhReFug) before importing your data and recommends the following:

1. Clean up your data.
2. Prepare Salesforce. Compare your data to each Salesforce object; if you're tracking information that isn't yet captured in Salesforce, create custom fields where the new records will go.
3. Prepare an import file. Match the names of your columns to the fields in Salesforce, add a column for the record owner, and add a column for the parent record if you are importing Opportunities, Contacts, or other objects that have parent records.

#### Step-by-Step Guide

As mentioned earlier, the Data Import Wizard is accessed directly through the Salesforce UI.

1. In your Salesforce account, go to Setup.
2. Locate the Data Import Wizard using the Quick Find search box.
3. On the Data Import Wizard page, click Launch Wizard!
4. Choose the type of data you want to import (e.g., Accounts, Contacts, Leads).
5. Select the desired action: add new records, update existing records, or add and update records simultaneously.
6. Upload your CSV file as the data source for the import.
7. Map your source fields to Salesforce fields. Note: the Data Import Wizard automatically maps most fields. If any fields remain unmapped, you must match them manually; otherwise, they won't be imported.
8. Review your settings and start the import.
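The file-preparation advice above can be turned into a quick pre-flight check run before uploading the CSV: verify that required columns are present, that no required value is blank, and that the file stays within the Wizard's 50,000-record cap. A minimal Python sketch, where the required column names are illustrative:

```python
import csv
import io

WIZARD_LIMIT = 50_000  # Data Import Wizard's per-import record cap

def preflight_check(csv_text, required_columns):
    """Return a list of problems found in an import file; empty means it looks OK."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    problems = []
    header = rows[0].keys() if rows else []
    for col in required_columns:
        if col not in header:
            problems.append(f"missing required column: {col}")
    for line_no, row in enumerate(rows, start=2):  # line 1 is the header
        for col in required_columns:
            if col in row and not (row[col] or "").strip():
                problems.append(f"line {line_no}: blank value in {col}")
    if len(rows) > WIZARD_LIMIT:
        problems.append(f"{len(rows)} records exceed the {WIZARD_LIMIT} limit")
    return problems

# Hypothetical contact file with one blank Last Name on line 3.
sample = "Last Name,Email\nSmith,ann@example.com\n,bob@example.com\n"
issues = preflight_check(sample, ["Last Name", "Email"])
```

Catching a blank required field here is much cheaper than discovering it in the Wizard's post-import error messages.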
### How to Use Data Loader for Bulk Updates

#### Prerequisites

- Complete the preliminary preparation steps outlined above.
- [Download and install](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/loader_install_mac.htm) Data Loader on your system.
- Ensure you have the necessary Salesforce permissions for the operations you're performing.

#### Step-by-Step Guide

1. Create a CSV import file containing the records you want to update. Note: since you are updating existing records, the import file must include their record IDs along with the new values for the fields being updated. You can export record IDs with Data Loader.
2. Ensure the data is clean and correctly formatted to match Salesforce field requirements.
3. Launch Data Loader and click Update.
4. Log in with your Salesforce credentials.
5. Choose the Salesforce object you want to update (e.g., Account, Contact, Opportunity).
6. Click Browse and select the prepared CSV file from your system.
7. Click Create or Edit a Map to map CSV columns to Salesforce fields. Use Auto-Match Fields to Columns to map fields automatically, or map them manually as needed.
8. Review the settings, click Next, and then Finish to start the update process.
9. Monitor the progress and review the success and error logs generated after the operation.

### New Approach to Salesforce Data Management with Skyvia

[Skyvia Salesforce Data Loader](https://skyvia.com/data-integration/salesforce-data-loader) is an industry-leading solution that offers a number of advantages over other data loading tools. To start, it supports the full spectrum of DML operations within your Salesforce org:

- Insert new records into Salesforce.
- Update existing records.
- Delete unwanted or outdated information.

Additionally, as part of the broader Skyvia platform, it can be integrated into more complex scenarios.
With Skyvia Import, you can synchronize data across 200+ sources, leveraging advanced platform features such as: ETL, ELT, and reverse ETL : broad [data processing options](https://skyvia.com/blog/best-data-processing-tools/) to meet diverse integration needs. Automated data sync : customize integration scenarios based on your business logic. Data replication : create and maintain up to date a full copy of your dataset. Bi-directional data sync : enable effortless two-way data transfers between different apps and platforms. Flexible data export : extract Salesforce data to CSV files, cloud storage, or FTP servers. Error checking : built-in error handling for accurate data processing. Secure connection via Azure cloud : benefit from safe and reliable data transfers within the Azure cloud infrastructure. The short video below demonstrates how to import data from Google Drive into the Salesforce Contact object in just four steps. Advantages of Using Skyvia for Salesforce Data Management Automated and scheduled data import : you can schedule tasks for importing data into Salesforce on a regular basis. This is especially useful for businesses requiring periodic database updates with minimal manual effort. Advanced data handling : Skyvia supports all core operations \u2013 Insert, Update, Delete, and Upsert in Salesforce, ensuring the most efficient data management on the platform. Strong filtering and export options : with advanced filtering settings, Skyvia enables fine-grained data export from Salesforce. You can also create custom SQL queries for even more precise data extraction. Robust data mapping and transformation : you can transform and map incoming data into Salesforce-acceptable formats, reducing the risk of errors during imports. Detailed logging and error reporting : Skyvia provides per-record error logs, activity monitoring and email notifications, helping users timely address and resolve import issues. Skyvia Salesforce Data Loader vs. 
Salesforce Data Loader

| Parameter | Skyvia Salesforce Data Loader | Salesforce Data Loader |
| --- | --- | --- |
| Pricing | Free tier; flexible subscription plans; trial option with full access to all features | Free |
| Cloud or desktop | Cloud | Desktop |
| Number of records | Unlimited | Up to 5 million |
| User interface | Yes | Yes |
| CLI | No | Yes |
| Support for scheduled tasks | Yes, via UI | Yes, via CLI scripting |
| Support for all DMLs (including UPSERT) | Yes | Yes |
| Support for all Salesforce editions | Yes | Yes |
| Advanced mapping features | Yes | Limited |
| Data format support | Multiple file formats + connections to cloud apps and databases | CSV only |
| Connectivity to file storages | Yes | No |
| Support for related objects and attachments | Yes | Yes |
| Error handling | Yes | Yes |
| Email notifications | Yes | No |
| Number of Salesforce connections | Unlimited | Limited to one org at a time |
| [Salesforce to Salesforce](https://skyvia.com/blog/salesforce-to-salesforce-integration/) integration | Yes | No |
| Additional support for data sources | Yes | No |
| Support for other data integration scenarios | Yes | No |

## Real-World Use Cases: Applications of Salesforce Data Tools

For companies working with large volumes of customer, sales, or operational data, the importance of tools like Data Import Wizard and Data Loader is clear: they are an integral part of daily business operations. As native parts of the Salesforce ecosystem, they are not only convenient but also cost-effective. For nearly all organizations, using these tools means less manual data entry, fewer errors, and more accurate, reliable data. The use cases below illustrate how businesses across a range of industries benefit from incorporating Data Import Wizard and Data Loader into their workflows.

### Retail & E-commerce

Migrating from an outdated CRM to Salesforce is a common scenario in retail. Data Import Wizard can simplify this process by enabling easy transfer of customer contacts and leads into Salesforce with minimal setup.
### Event Marketing

Salesforce's ability to automate marketing tasks and synchronize with popular email platforms makes it a top choice for marketing teams. Event-driven companies can use Data Import Wizard to import attendee lists and arrange timely follow-ups for the most efficient campaigns.

### Real Estate & Property Management

Similarly, real estate agencies can use Data Loader to update client records and property information in bulk, streamlining operations.

### Financial Services

Organizations with complex data needs, like financial institutions, benefit from Data Loader's advanced features to:

- Automatically sync Salesforce with external systems like [accounting](https://skyvia.com/blog/salesforce-quickbooks-integration/) or [ERP platforms](https://skyvia.com/blog/salesforce-and-erp-integration/);
- Maintain data relationships between objects, provided that Salesforce record IDs for parent objects are included.

## Conclusion

In this article, we explored two native Salesforce tools designed to help SFDC customers import data into the platform. The Data Import Wizard is a great choice for small to mid-sized businesses, provided that the 50,000-record limit fits your needs. It's simple, visual, and requires no installation. Data Loader, by contrast, is designed for more complex scenarios. While it does require manual setup, once configured, it can handle large volumes of data promptly and reliably.

So, which tool should you choose? No need to overthink it: you can use both for different types of tasks. However, if you still run into limitations, consider giving [Skyvia a try](https://app.skyvia.com/). It's free to start, and the two-week trial with full feature access lets you explore its capabilities to the fullest.

## F.A.Q. for Choosing the Right Salesforce Data Tool

**What is the main difference between Data Import Wizard and Data Loader?**

Data Import Wizard is a browser-based tool for importing up to 50,000 records, ideal for less technical users.
Data Loader is a desktop application suited for importing, updating, deleting, or exporting large volumes of data, up to 5 million records.

**Can I use Data Import Wizard to delete records?**

No, Data Import Wizard does not support deleting records. To delete records in bulk, use Data Loader, which allows insert, update, upsert, delete, and export operations.

**Which tool should I use for importing data into custom objects?**

Both Data Import Wizard and Data Loader support importing data into custom objects. Choose Data Import Wizard for smaller, simpler imports, and Data Loader for larger volumes or more complex data manipulation.

[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/)

With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.
For instance, a company uses Salesforce, Shopify, HubSpot, and SAP Logistics. In such a case, data loaders help build a single source of truth.

**Migration**. This is a one-time process of fueling a new or enhanced Salesforce environment with existing data. For example, loaders can transfer customer records or sales histories when transitioning from legacy systems.

**Backups**. Businesses can regularly extract and store copies of SF data, such as lead records or transaction logs, to recover it in case of accidental deletions or system failures.

This guide provides step-by-step instructions and detailed comparisons to help Salesforce administrators, developers, and business analysts navigate the landscape of available data loaders. We will explore the most popular tools, including Salesforce Data Loader, Skyvia, Dataloader.io, Data Import Wizard, and Workbench.

Table of Contents

- Salesforce Data Loader
  - Importing Data to Salesforce
  - Exporting Data from Salesforce
  - Automating Salesforce Data Loader
- Skyvia
  - Importing Data to Salesforce
  - Exporting Data from Salesforce
  - Mass Deleting Salesforce Data
- Dataloader.io
  - Importing Data to Salesforce
  - Exporting Data from Salesforce
- Data Import Wizard
  - Importing Data to Salesforce
- Workbench
  - Exporting Salesforce Data via SOQL Queries
  - Bulk Insert Salesforce Data
- Use Cases for Salesforce Data Loading
- Comparing Salesforce Data Loaders & Choosing the Right One
- Common Errors & Troubleshooting
- Conclusion: Choosing the Right Data Loader for Your Needs

## Salesforce Data Loader

[Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader) is a go-to tool for moving large amounts of data in and out of the CRM system. It's a native desktop application that you install directly on your computer. You can use it at no extra cost, and it is automatically updated with each new release.

### Best For

- Data teams in marketing, sales, or operations where bulk operations are part of the daily grind.
- Salesforce admins who need a free and powerful tool for large-scale imports, updates, or exports.
- Advanced users who want more control and flexibility than the basic Data Import Wizard provides.
- Command-line enthusiasts who want to automate tasks via scripts (Windows users only).

### Key Features

- Imports and exports records by executing insert, update, upsert, and delete operations.
- Handles large volumes: processes up to 5 million records per load.
- Works with a wide variety of standard and custom objects.
- Supports field mapping.
- Performs updates across multiple objects in one run.
- Detailed logs help you identify and fix issues.

### Cons

- Learning curve: it's not the most beginner-friendly tool.
- Scheduling limitations: no native scheduling in the user interface. If you want automated jobs, you'll need the CLI (Windows only).
- Formatting matters: your data needs to be properly formatted before loading.
- Performance hiccups: expect occasional timeouts, bugs, or crashes; lowering the batch size can help.

### Importing Data to Salesforce

Let's break it down with real-life examples. We'll walk through how to use Salesforce Data Loader for importing new records, exporting information for analysis, and automating routine tasks, so you can see how to apply it in your day-to-day operations. First, we will show an example of importing data into Salesforce. We will import Contacts from a CSV file.

#### Step-by-Step Guide

**Step 1.** Install Data Loader on your computer. You may need to install the Zulu JDK first; for details, follow the official [Documentation](https://help.salesforce.com/s/articleView?id=000383107&type=1).

**Step 2.** Open Salesforce Data Loader.

- As we already have contacts in Salesforce, we choose **Upsert**.
- Log in to your Salesforce account.
- Select an object to insert the data into. In our case, it's Contact.
- Upload a CSV file and click **Next**. The Data Loader will show the preliminary information.
- Next, select the field with unique identifiers to match contacts.
- Check matches to related objects and mapping.
- Start the process. When it's done, you can check the list of errors and download it to a CSV file.

#### Best Practices

- Ensure CSV headers match Salesforce fields.
- If records already exist, use the Upsert command to prevent duplicates.
- Review error logs after processing for any issues.
- When importing large datasets, adjust the batch size (default: 200 records per batch) to balance speed and Salesforce API limits.
- Test complex imports in a Sandbox environment first.

### Exporting Data from Salesforce

Regularly [exporting data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) helps maintain business continuity and access to historical records. In this example, we'll show how to export an account list for reliable reporting, audits, and long-term retention.

#### Step-by-Step Guide

1. Open Salesforce Data Loader and click **Export**.
2. Select the Account object and the folder to save the file to.
3. Select fields to export and optionally use SOQL queries to filter records.
4. Proceed to finish the process. When it's ready, click **View Extraction** to see the resulting CSV file and download it.

#### Best Practices

- Use SOQL filters to export relevant data only. This will help reduce processing time and file size.

### Automating Salesforce Data Loader

To automate tasks in Salesforce Data Loader, you must use the command-line interface (CLI). CLI automation is ideal for large-scale, repetitive tasks without manual intervention, but it requires more technical setup and configuration.

#### Step-by-Step Guide

1. Generate the encryption key.
2. Encrypt the password for your Salesforce username.
3. Set up the field mapping file.
4. Create a process-conf.xml file with import configuration settings.
5. Run the import.
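The process-conf.xml file from step 4 is a Spring-style bean definition. The sketch below is only an illustration: the entry keys follow the Data Loader CLI documentation, but every value (endpoint, credentials, file paths, bean id, external Id field) is a placeholder, and the exact schema can vary between Data Loader versions, so verify against the official docs before running.

```xml
<!-- Hypothetical process-conf.xml bean for a contact upsert job.
     All values below are placeholders to replace with your own. -->
<beans>
  <bean id="contactUpsert"
        class="com.salesforce.dataloader.process.ProcessRunner"
        singleton="false">
    <property name="name" value="contactUpsert"/>
    <property name="configOverrideMap">
      <map>
        <entry key="sfdc.endpoint" value="https://login.salesforce.com"/>
        <entry key="sfdc.username" value="user@example.com"/>
        <!-- password encrypted with the key generated in step 1 -->
        <entry key="sfdc.password" value="ENCRYPTED_PASSWORD"/>
        <entry key="process.encryptionKeyFile" value="C:\dataloader\key.txt"/>
        <entry key="sfdc.entity" value="Contact"/>
        <entry key="process.operation" value="upsert"/>
        <entry key="sfdc.externalIdField" value="Email"/>
        <entry key="process.mappingFile" value="C:\dataloader\contactMap.sdl"/>
        <entry key="dataAccess.type" value="csvRead"/>
        <entry key="dataAccess.name" value="C:\dataloader\contacts.csv"/>
      </map>
    </property>
  </bean>
</beans>
```

Once the bean is defined, the job can be invoked from the CLI runner by name, which is what makes scheduled, unattended loads possible.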
For a detailed, step-by-step guide on using the Salesforce Data Loader CLI, refer to the [official documentation](https://developer.salesforce.com/docs/atlas.en-us.dataLoader.meta/dataLoader/command_line_intro.htm).

#### Best Practices

- Before working with the CLI, familiarize yourself with importing and exporting data through the user interface; this helps you better understand CLI functionality.
- Explore the sample configuration files located in the samples\conf directory (e.g., C:\Users\{userName}\dataloader\version\samples\).
- Use the CLI to automate operations during off-peak hours, optimizing resource usage.

## Skyvia

[Skyvia](https://skyvia.com/) is a cloud-based integration and automation platform that allows users to import, export, sync, and delete Salesforce data without installation. With support for 200+ connectors, including databases, cloud storage services, and popular SaaS applications, Skyvia simplifies complex data tasks. Its intuitive no-code interface makes it accessible to both technical and non-technical users.

### Best For

- Businesses looking to load Salesforce data without technical complexity, especially those with minimal IT support.
- Teams requiring a user-friendly solution for scheduled, automatic, high-volume imports and updates into CRM.
- Companies needing an all-in-one platform for syncing Salesforce data with other systems.

### Key Features

- 200+ connectors that allow for direct Salesforce integration with other cloud apps, databases, data warehouses, and data storage.
- Scheduled import/export of any Salesforce objects, replication, and bidirectional data sync.
- Cloud-based, no installation required.
- Advanced mapping for data transformations.
- Backup and automation.
- A free tier, plus unlimited connections on paid plans.

### Cons

- The free plan has usage limits.

### Importing Data to Salesforce

Loading Salesforce data doesn't have to involve complex scripts or manual steps.
In this section, we'll show how to automate and simplify common tasks like importing new records, exporting for reporting, and bulk deleting using Skyvia. Let's begin by showing you how to import external CSV files into Salesforce, using the example of importing a contact list.

#### Step-by-Step Guide

**Step 1. Create a connection to Salesforce**

- Log in to [Skyvia](https://app.skyvia.com/) and click **Create new** > **Connection**.
- Find Salesforce in the list.
- Click **Sign in with Salesforce** and enter your credentials.

**Step 2. Create an Import task**

- Click **Create New** > **Import**.
- Select your Data Source: **CSV upload manually**.
- Select the Salesforce connection as a target and click **Add New**.
- Drop your CSV file into the Source area and check the CSV separator.
- Next, select the Contact object and choose the **Upsert** operation. This updates only the changed records, avoiding duplicates.
- Map the required fields between your CSV and Salesforce. Fields with matching names are mapped automatically. You can map directly to a column, set a constant value, or use expressions and target lookup.
- Save the task. Click **Run** to execute immediately, or **Schedule** for recurring imports.

#### Best Practices

- Validate data quality before import.
- Schedule regular imports to automate updates.
- Verify how fields are mapped to maintain data integrity.

### Exporting Data from Salesforce

Let's show an example of exporting SF data into CSV files. Similarly, Skyvia can export records to external targets such as SQL databases or cloud storage services.

#### Step-by-Step Guide

**Step 1. Create a connection to Salesforce**

- Log in to [Skyvia](https://app.skyvia.com/) and click **Create new** > **Connection**.
- Find Salesforce in the list and set up a connection as shown above.

**Step 2. Create an Export task**

- Click **Create New** > **Export**.
- Select the Salesforce connection as a source.
- Select the Target type: **CSV upload manually**. Click **Add New**.
- Choose the Account object, select columns to export, and apply filters if needed.
- If you want to automate the uploads, select the template for the file name.
- Run immediately, or Schedule automatic exports.

#### Best Practices

- Automate exports for regular backups.
- Apply filters to export only necessary records.

### Mass Deleting Salesforce Data

Here you'll learn how to mass delete Salesforce records safely with Skyvia. We will cover the case of deleting a lead list with a CSV file.

#### Step-by-Step Guide

**Step 1. Create a connection to Salesforce**

- Log in to [Skyvia](https://app.skyvia.com/) and click **Create new** > **Connection**.
- Find Salesforce in the list and set up a connection as described above.

**Step 2. Create an Import task**

- Prepare a CSV file with the leads you want to delete. The file should contain the records' unique identifiers.
- Click **Create New** > **Import**.
- Select your Data Source: **CSV upload manually**.
- Select the Salesforce connection as a target and click **Add New**.
- Drop your CSV file into the Source area and check the CSV separator.
- Next, select the Lead object and the **Delete** operation.
- Map the ID column from your file to the corresponding field in Salesforce.
- Save the task. Click **Run** to start the deletion.
- Check the results on the Monitor and Log tabs.

#### Best Practices

- Always back up your information before deleting it.
- To get a CSV file with record IDs, you can export it with Skyvia using filters or complex queries, as described above.
- Carefully review the CSV file to avoid accidental deletions.

## Dataloader.io

Dataloader.io is a cloud tool developed by MuleSoft for importing, exporting, and deleting data. It offers a simple web interface that allows users to map fields, schedule jobs, and automate data tasks without installing software. It also supports OAuth authentication and direct integrations with cloud storage services.

### Best For

- Small businesses needing to perform quick bulk operations.
- Users who need a no-install solution.
- Teams looking for simple scheduled automation.

### Key Features

- Web-based; no installation required.
- Supports Import, Export, and Delete operations with basic field mapping.
- Scheduled automation capabilities.
- Direct cloud storage integrations (Dropbox, Box).
- Secure OAuth authentication: users can log in with their Salesforce credentials.

### Cons

- Limited features in the free plan.
- No advanced transformations or complex data processing.
- No connectors beyond Salesforce and a few cloud storage services.

### Importing Data to Salesforce

In this guide, we'll walk you through two essential use cases with Dataloader.io: importing from a CSV file into Salesforce (using the Contacts object as an example) and exporting data, such as account records, for backup or business intelligence tasks. These scenarios reflect common needs like syncing leads from marketing campaigns or generating external reports for stakeholders. Let's start by importing a contact list.

#### Step-by-Step Guide

1. Log in to Dataloader.io and start a new **Import Task**.
2. Select the operation type and the object you want to import data into.
3. Upload your CSV file.
4. Map fields between your CSV and the Salesforce object.
5. Review your task summary and select either the Bulk API or Batch API based on your needs.
6. Finally, click **Save** to keep your task for later or schedule it, or **Save and Run** to execute it immediately.

#### Best Practices

- Verify and clean the CSV before importing.
- Choose the right API mode: use Bulk API for large datasets and Batch API for lower-volume tasks.
- Perform a dry run to verify that all mappings are correct and data formats align.

### Exporting Data from Salesforce

Let's see how to export records from Salesforce to a CSV file using Dataloader.io. We'll use the Accounts object as an example.

#### Step-by-Step Guide

1. Log in to Dataloader.io and start a new **Export Task**.
2. Select the Account object, check the columns to export, and apply filters if needed.
3. Select the cloud connection to save the file to, if needed.
4. Click **Save** to keep your task for later or schedule it, or **Save and Run** to execute it immediately.
#### Best Practices

- Automate exports to ensure regular data backups.

## Data Import Wizard

The [Data Import Wizard](https://help.salesforce.com/s/articleView?id=xcloud.data_import_wizard.htm&type=5) is a built-in tool that provides a user-friendly approach for importing certain information directly within Salesforce. It is primarily suited for importing standard objects like Leads, Accounts, and Contacts, plus selected custom objects. It provides a guided, step-by-step process for uploading CSV files and mapping fields, and supports uploads of up to 50,000 records per import.

### Best For

- Users who need a simple and guided import process with no additional software.
- SF admins handling occasional bulk imports.
- Small to mid-sized businesses with moderate data import needs.

### Key Features

- Built directly into Salesforce.
- Supports standard objects and select custom objects.
- Simple, guided field-mapping process and duplicate detection.
- No installation or setup required.
- Handles up to 50,000 records per import.

### Cons

- Limited to 50,000 records per import.
- No scheduling or automation options.
- Lacks advanced transformation and ETL capabilities.

### Importing Data to Salesforce

Let's see how to import accounts and contacts using the Data Import Wizard.

#### Step-by-Step Guide

1. Log in to Salesforce and type "Data Import Wizard" in the search bar. Click **Launch Wizard**.
2. Choose the Accounts and Contacts object and the way to match the fields.
3. Upload your CSV file and select a value separator.
4. The wizard will auto-map fields between your CSV file and Salesforce. Review the mappings and make adjustments if necessary.
5. Once field mapping is complete, review the summary. If everything looks good, click **Next**, then **Start Import** to begin the process.
6. Once the import is complete, check the status to ensure all records were imported successfully. The system will display any errors or skipped records so you can address any issues.
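The auto-mapping step essentially matches CSV headers to Salesforce field names by normalized spelling. A rough standard-library approximation is below; the normalization rule (case-insensitive, ignoring spaces and underscores) is my own assumption for illustration, not the wizard's documented heuristic, but running something like it against your file lets you spot headers that will need manual mapping before you upload.

```python
def auto_match(csv_headers, sf_fields):
    """Match CSV headers to Salesforce field names case-insensitively,
    ignoring spaces and underscores (a wizard-style auto-mapping sketch).

    Returns (header -> field mapping, headers with no obvious match)."""
    def norm(s):
        return s.lower().replace(" ", "").replace("_", "")

    by_norm = {norm(f): f for f in sf_fields}
    mapping, unmatched = {}, []
    for header in csv_headers:
        field = by_norm.get(norm(header))
        if field:
            mapping[header] = field
        else:
            unmatched.append(header)   # will need manual mapping in the UI
    return mapping, unmatched
```

Anything landing in `unmatched` is a column you should either rename in the CSV or map by hand on the review screen.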
#### Best Practices

- Follow the [recommendations](https://help.salesforce.com/s/articleView?id=xcloud.import_prepare.htm&type=5) to prepare the data.
- Don't run several import jobs simultaneously, even if they are in different browser windows.

## Workbench

Workbench is a web-based tool used by Salesforce developers and administrators for advanced data operations. It offers a flexible way to manage data, troubleshoot, and test processes. It connects to Salesforce using the REST and SOAP APIs and can run complex SOQL queries, perform bulk operations (insert, update, upsert, delete), execute Apex code, and manage metadata.

### Best For

- Developers and admins who need API-level access to SF data.
- Users comfortable with SOQL queries and bulk operations.

### Key Features

- Web-based tool for API access and data management.
- Supports Bulk API operations: Insert, Update, Upsert, Delete, and Export.
- Advanced SOQL and SOSL query execution.
- Metadata API access for managing objects and fields.
- Free to use.

### Cons

- No user-friendly interface compared to other tools like Skyvia.
- Workbench is not officially supported by Salesforce.
- Requires familiarity with Salesforce APIs and SOQL.

### Exporting Salesforce Data via SOQL Queries

In this section, we will show how to use Workbench to perform an export from Salesforce using SOQL queries.

#### Step-by-Step Guide

1. Log in to Workbench.
2. Navigate to **Jump to** > **SOQL Query** and select the object to export data from.
3. Write and execute your custom query (e.g., `SELECT Name FROM Account`).
4. Sort and filter records, if needed.
5. Select **Bulk CSV** and click **Query**.
6. Download the results as a CSV file by clicking on its name.

#### Best Practices

- Troubleshoot issues with the Debug Logs feature.
- Use the Metadata API to create, update, or delete custom objects and fields.
- Use the Apex Execute feature to run Apex scripts for automating tasks and processes.

### Bulk Insert Salesforce Data

Now let's show how to use Workbench to import a contact list into Salesforce.
#### Step-by-Step Guide

1. Log in to Workbench.
2. Select the Insert action and a Salesforce object.
3. Select a correctly formatted CSV file.
4. Map fields, verify mappings, and click **Confirm Insert**.

#### Best Practices

- Always preview field mappings carefully.
- Consider using Bulk API mode for large-volume operations.

## Use Cases for Salesforce Data Loading

Knowing what a tool does is only half the story; seeing how it transforms raw data into business intelligence is where the magic happens. After exploring the ins and outs of [Salesforce data loaders](https://skyvia.com/blog/salesforce-best-data-loaders/), let's walk through real-world scenarios where they move the needle for teams and organizations.

### 1. Data Migration

Moving large datasets between Salesforce instances or from other apps or databases is part of mergers, acquisitions, or org consolidations.

- Cirrus Insight used Skyvia for a [Salesforce to Salesforce migration](https://skyvia.com/case-studies/cirrus) to consolidate operations into a single organization while preserving customer data.
- A4 International [migrated client data](https://skyvia.com/case-studies/a4international) from an external SQL Server-based system (Streamics) into Salesforce, providing a smooth transition.

### 2. Routine Data Synchronization

Keeping Salesforce in sync with ERP, accounting, marketing, or data warehouses is key to operational efficiency.

- A real estate company [established a two-way sync](https://skyvia.com/case-studies/real-estate-company) between Salesforce, MS SQL, and Mailchimp, ensuring up-to-date records across all platforms.
- A4 International [set up hourly syncs](https://skyvia.com/case-studies/a4international) between Streamics (SQL Server) and Salesforce, giving the sales team real-time insights.

### 3. Mass Data Updates

Bulk updates improve operations like adjusting pricing across a product catalog or modifying lead assignments.
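Bulk updates like these also have to respect API limits: Data Loader, for example, defaults to 200 records per batch, and lowering the batch size is a common fix for timeouts. Pre-splitting a large file into smaller chunks can be scripted in a few lines; the helper names below are illustrative, not part of any of the tools discussed.

```python
import csv
import io


def split_into_batches(rows, batch_size=200):
    """Split records into batches no larger than batch_size.

    200 mirrors Data Loader's default batch size; lowering it can help
    with timeouts, while raising it consumes API calls faster."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]


def batches_to_csv(batches, fieldnames):
    """Render each batch as its own CSV document, ready to save to disk."""
    docs = []
    for batch in batches:
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(batch)
        docs.append(buf.getvalue())
    return docs
```

Loading the resulting files one at a time also makes failures easier to localize: a rejected batch points you at 200 records instead of the whole dataset.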
Cirrus Insight [automated data transfers](https://skyvia.com/case-studies/cirrus) from [Stripe to Salesforce](https://skyvia.com/blog/stripe-salesforce-integration/) to ensure always-updated customer and subscription records.

### 4. Data Cleanup & Mass Deletion

Removing obsolete or duplicate records enhances data integrity and storage efficiency.

### 5. Automated Backups & Archiving

Regularly exporting your information to external storage ensures secure backups and disaster recovery. Dale Carnegie, a global leadership training company, [used Skyvia](https://skyvia.com/case-studies/dalecarnegie) to replace Salesforce's internal backup system as their business expanded.

### 6. Recurring Imports & Exports

Automating routine data imports, such as daily lead uploads, and exports, like nightly report generation, reduces manual work and keeps the business running smoothly. GOintegro [integrated MySQL with Salesforce](https://skyvia.com/case-studies/gointegro) using Skyvia to improve internal processes.

## Comparing Salesforce Data Loaders & Choosing the Right One

| Criteria | Salesforce Data Loader | Skyvia | Dataloader.io | Data Import Wizard | Workbench |
| --- | --- | --- | --- | --- | --- |
| Installation | Desktop App | Cloud-Based | Web-Based | Built-in | Web-Based |
| Pricing | Free | Free & Paid Plans (per account) | Free & Paid Plans (per user) | Free | Free |
| Max Number of Records | 5 Million | 10,000/month (Free), Unlimited (Paid) | 10,000/month (Free), Unlimited (Paid) | 50,000 per import | Limited by API |
| Bulk Processing | ✅ Yes | ✅ Yes | ✅ Yes | ❌ Limited | ✅ Yes |
| Automation | ✅ CLI Scripts | ✅ Built-in Scheduler | ✅ Scheduled Jobs | ❌ No | ✅ Via Apex |
| Connectors (Beyond CSV) | ❌ No | ✅ 200+ (Cloud Apps, File Storages, Databases, etc.) | ✅ Cloud Storage | ❌ No | ✅ SOQL Query |
| Number of Salesforce Connections | 1 | ✅ Unlimited | 1 (Basic & Pro), 10 (Enterprise) | 1 | 1 |
| Error Handling | Logs | Automatic Handling & Logs | UI-Based Reports | Basic Errors | Manual Review |
| Best For | Admins & Devs handling bulk data | Both non-tech & tech users needing integrations | Small businesses & web-based imports | Basic imports for business users | Devs & Admins using SOQL & APIs |

## Common Errors & Troubleshooting

Data loading is like a moving day: organized chaos is still chaos. Even the best tools can stumble over a lost field or hit a locked door. Here are the usual suspects behind failed data loads, with smart fixes:

- **Invalid Field Mappings**: check field names in your CSV so they match Salesforce.
- **Duplicate Records**: use deduplication rules or an Upsert operation to avoid clones.
- **API Limits Exceeded**: check your API usage and consider [batch processing](https://skyvia.com/blog/batch-etl-processing/).
- **Permission Errors**: verify user permissions and object access in Salesforce.
- **Bulk API Failures**: if Bulk API jobs fail, try switching to Batch API for better control.

## Conclusion: Choosing the Right Data Loader for Your Needs

As businesses grow, so do their data challenges. The right data loader not only simplifies current tasks but also scales with your needs in the long run. We compared the most popular tools; each has its strengths, from the simplicity of the Data Import Wizard to the advanced automation of Skyvia. To choose the best data loader, assess your data complexity, volume, integration needs, and automation requirements.

- **Salesforce Data Loader**: best for performing bulk data operations and CLI-based automation.
- **Skyvia**: ideal for integrations, automated data tasks, and multi-user environments.
- **Dataloader.io**: suitable for simple web-based imports and exports.
- **Data Import Wizard**: quick, occasional data uploads directly within Salesforce.
- **Workbench**: preferred by developers and advanced admins needing powerful SOQL queries and Bulk API operations.

## F.A.Q. for Salesforce Data Loaders

**Which Salesforce data loading tool is best suited for beginners or simple tasks?**

The native method: the [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard, as it's web-based, user-friendly, and ideal for importing up to 50,000 records without technical setup. Other options: third-party platforms like Skyvia, Talend, or Hevo, which also provide no-code, drag-and-drop solutions.

**What are the main advantages of using tools like Salesforce Data Loader or Skyvia compared to the built-in Data Import Wizard?**

Salesforce Data Loader and Skyvia handle much larger data volumes (up to 5 million records), support more operations (update, upsert, delete, export), offer advanced mapping, and can automate or schedule recurring tasks, none of which is available in the basic Data Import Wizard.

**Can I use these tools for complex operations like mass updates or deletions, and are there specific tool recommendations for these actions?**

Yes, both Salesforce Data Loader and Skyvia support complex operations like mass updates and deletions. They allow you to select records by ID or criteria and perform bulk changes efficiently.

**What are the crucial data preparation steps I should take before using any Salesforce data loader to ensure a smooth process?**

Prepare a clean, well-formatted CSV file, ensure accurate field mapping to Salesforce objects, check for duplicates or data errors, and verify that required fields (like IDs for updates/deletes) are included.

**Which of the mentioned data loaders offer automation or scheduling features for recurring data tasks?**

Salesforce Data Loader offers scheduling via its command-line interface, while Skyvia provides user-friendly, cloud-based scheduling for automated imports and exports without requiring coding or server setup.
**How do tools like Workbench differ from others like Salesforce Data Loader or Skyvia, and when should I consider using it?**

Workbench is a web-based tool focused on SOQL/SOSL queries, metadata exploration, and ad hoc data management. It's ideal for developers or admins who need to run queries, explore metadata, or perform one-time data tasks, while Data Loader and Skyvia are better for bulk, recurring, or automated data operations.

By [Liudmyla Mykolenko](https://skyvia.com/blog/author/liudmyla-mykolenko/)

A dedicated technical writer, Liudmyla brings extensive experience in creating and managing diverse learning materials. Passionate about user-centered documentation, she thrives on enhancing user experiences through clear, engaging, and accessible content. With a keen analytical mindset and a collaborative approach, Liudmyla excels at bridging information gaps and simplifying complex concepts.
# Salesforce Data Migration Best Practices

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/) · February 14, 2025

Over the last two decades, Salesforce has evolved from a cloud-based CRM into a dynamic hub that connects key business operations across sales, customer service, marketing, e-commerce, and analytics. Its centralized, scalable platform helps businesses overcome challenges like fragmented systems, inconsistent input quality, and managing massive data volumes. However, the journey to migrate into Salesforce is rarely straightforward. This blog post highlights the key challenges of Salesforce data migration and offers practical guidance to support you every step of the way.
From preliminary activities and selecting the right migration tools to post-transition strategies for continuous improvement, we've got you covered.

**Table of contents**

- What is Salesforce Data Migration?
- Why Migrate to Salesforce?
- Types of Salesforce Data Migration
- Phases of Salesforce Data Migration
  - Phase 1: Preparation
  - Phase 2: Migration
  - Phase 3: Quality Assurance
- Salesforce Data Migration Checklist
- Challenges in Salesforce Data Migration and How to Overcome Them
- Salesforce Data Migration Tools
  - Salesforce Data Import Wizard
  - Salesforce Data Loader
  - Skyvia
  - Jitterbit
  - Dataloader.io
  - Salesforce Inspector
  - SFDX Data Move Utility (SFDMU)
- Top Salesforce Migration Tools Compared
- How to Migrate Data to Salesforce with Skyvia
- Post-Migration: Ensuring Continuous Improvement
- Conclusion

## What is Salesforce Data Migration?

Generally, it is the process of transferring information – records, details, entries, statistics – from external systems, such as legacy databases or other platforms, into Salesforce. In practice, it goes far beyond this: it is a complex operation shaped by the size, format, and accuracy of the source records. A fair share of its success depends on the preparation stage before the actual migration: how accurate and consistent the data is, and how well it is structured to fit within Salesforce's framework.

## Why Migrate to Salesforce?

There are a number of reasons why companies may undertake a shift to this platform. Let's briefly outline the most common of them:

- **Consolidation of Systems.** The functionality of the other system has been replicated within Salesforce, and there's no need to use and maintain two separate platforms.
- **Adoption as a New CRM.** The business is undergoing a Salesforce CRM migration, transitioning from another CRM solution, and needs to bring its existing data into the new system.
[Check out this case study](https://skyvia.com/case-studies/exclaimer) to learn how shifting to Salesforce helped Exclaimer boost its sales and marketing opportunities.

- **Collaboration with Non-Salesforce Users.** A business may collaborate with a company that doesn't use Salesforce but requires periodic synchronization between their systems.

On the other side of the coin, a business may need to move data between two or more different Salesforce orgs. Reasons for doing this include:

- **Multi-Org Operations.** A larger, multi-departmental business operates multiple SF orgs in conjunction with one another and needs to shift information between them.
- **Scalability Issues.** The current Salesforce org's data model was not designed to be scalable, forcing the business to rebuild a more robust and scalable org – just as it happened in the [Cirrus Insight case](https://skyvia.com/case-studies/cirrus).

### Benefits of Migrating to Salesforce

As a modern, constantly evolving platform, Salesforce provides an ecosystem of tools finely tuned to meet diverse business needs. It offers built-in mechanisms for easy integration with other systems, efficiently handles large volumes of data, and delivers advanced analytics and AI capabilities. Unsurprisingly, transitioning to Salesforce can significantly improve overall business processes. It provides the following benefits:

- **Unified view of customer information** through the consolidation of data from various sources.
- **Reduced manual effort and increased operational efficiency** enabled by robust automation tools.
- **Scalability to support growth**, accommodating increasing data volumes and user demands.
- **Improved marketing effectiveness and decision-making** powered by advanced analytics and reporting capabilities.
- **Confidentiality and compliance of sensitive information**, thanks to top-tier native security measures.
- **Extensive customization options** with a broad range of native, partner, and third-party integrations available on AppExchange, including BigQuery, Snowflake, Power BI, and Skyvia.

## Types of Salesforce Data Migration

Companies often migrate to Salesforce from a variety of tools and platforms to achieve their long-term goals, such as centralizing business data and improving its accessibility. Let's take a closer look at the most common types of SF migration:

| Type | Tools Involved | Processes |
|---|---|---|
| Database migration | Relational databases, such as [Microsoft SQL Server](https://skyvia.com/blog/5-ways-to-connect-salesforce-to-sql-server), [Oracle](https://skyvia.com/blog/oracle-and-salesforce-integration/), and MySQL; NoSQL databases, such as MongoDB and Cassandra | Mapping tables and fields to Salesforce objects, ensuring data integrity during the transfer; aligning unstructured content with Salesforce's schema. |
| Application migration | Legacy CRM systems like Siebel, SAP CRM, or Microsoft Dynamics; industry-specific apps, such as Fiserv (finance & banking), Epic (healthcare), [Shopify](https://skyvia.com/blog/shopify-salesforce-integration/), Magento (e-commerce) | Transition of extensive customer details and the replication of critical functionalities within Salesforce. |
| Salesforce Classic to Lightning migration | | Reconfiguring data, updating customizations, and migrating UI-centric features like Visualforce pages, workflows, or dashboards to align with Lightning's enhanced features. |
| Storage migration | Document management systems like SharePoint or Google Drive; on-premises file servers | Linking Salesforce's cloud storage to external storage systems through integrations or third-party tools. |
| Cloud migration | Other cloud-based CRMs like [HubSpot](https://skyvia.com/blog/hubspot-salesforce-integration/) or Zoho CRM; ERP systems, such as SAP or Oracle ERP | Unifying customer management under a single platform. |

## Phases of Salesforce Data Migration

### Phase 1: Preparation

Thorough preparation is the foundation of a successful outcome. This stage covers both business and technical requirements, so investing the time to plan it carefully will pay off in the long run.

**Planning Questions**

🔹 **What kind of data do you want to migrate?** This is the starting point for all your planning. How well is the data cleansed? How complex is it, and how does it align with the Salesforce model? Understanding these factors will help you shape a solid [data migration](https://skyvia.com/blog/connect-salesforce-to-sql-server/) strategy.

🔹 **What is the migration order?** Plan the sequence carefully to avoid failures, especially when dealing with parent-child relationships (e.g., Master-Detail relationships). Ensure dependencies are met – if parent records aren't inserted first, child records will fail.

🔹 **What is the migration scope?** Define exactly what data needs to be migrated. Will it be a full shift or just a subset? Will it impact the entire organization or only a single team? Larger migrations require significant time and effort – ensure all stakeholders are informed and prepared for the upcoming changes.

🔹 **What is the migration timeline?** Once the scope is set, establish a timeline with clear milestones. Users need to know precisely when the cutover will happen and which system they should rely on for their 'business as usual' tasks during the transition.

🔹 **Are there any security concerns?** Data security and compliance should never be an afterthought. To protect sensitive information, ensure your migration plan aligns with governance policies, regulatory requirements, and security best practices.

🔹 **What about a data governance plan?** There's no point in migrating your information if it won't be maintained properly afterward. Define how it should be managed post-migration – what validation rules will be in place? Who will be responsible for input quality? How will ongoing data hygiene be enforced? These should all be part of your governance plan.

🔹 **What tool will be used?** Choose your migration tool wisely. Consider the limitations of each tool and compare them against the volume and complexity of the records you need to move. The right tool can make a huge difference in efficiency and success.

**Data Selection**

This step is exactly what it sounds like – identifying what data to migrate and determining its source of truth.

- **Collection.** Start by listing all the systems and repositories where your data currently resides, such as legacy systems, CRMs, spreadsheets, or third-party applications. Identify ownership for each dataset – this will facilitate collaboration during the migration process.
- **Evaluation.** Assess the quality and relevance of existing data. Identify redundant or obsolete information – unless it is really needed, migrating historical records is often just a waste of time and resources.
- **Reconsider the data model.** The beauty of migrating to a new system is that you have a fresh canvas to build on. Has your data model grown overly complex over time? Take this opportunity to simplify structures and [clean them up](https://skyvia.com/blog/enhance-salesforce-data-quality-and-cleaning-with-skyvia). A well-thought-out model ensures the scalability of Salesforce orgs, paving the way for long-term business success.
- **Naming convention.** When building out the data model, ensure you have a consistent naming convention for your objects' and fields' API names. API names are used when writing Apex or building declarative automations with tools like Flow Builder. Salesforce's best practice is to keep the Labels and API names aligned where possible.
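The migration-order question from the planning list above (parents inserted before children) can be answered mechanically: declare which objects depend on which parents, then topologically sort. Here is a sketch using Python's standard `graphlib`; the objects and dependencies shown are assumed examples, not your org's actual model:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Assumed example dependencies: each object maps to the parent objects
# whose records must exist before its own records can be loaded.
DEPENDS_ON = {
    "Account": set(),
    "Product2": set(),
    "Contact": {"Account"},
    "Opportunity": {"Account"},
    "OpportunityLineItem": {"Opportunity", "Product2"},
}

def load_order(deps):
    """Return a load order in which every parent precedes its children.

    Raises graphlib.CycleError for circular dependencies (e.g. mutual
    lookups), which then need two passes: insert first, update the
    lookup fields afterward.
    """
    return list(TopologicalSorter(deps).static_order())
```

For the example map above, `load_order` always places `Account` before `Contact` and `Opportunity`, and both `Opportunity` and `Product2` before `OpportunityLineItem`.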
**Data Mapping**

The goal of this step is to define how entries from a [data source](https://skyvia.com/blog/best-data-pipeline-tools/) system align with Salesforce's structure. The greater the differences between these two data models, the more complex the mapping process will be.

- **Assign unique IDs.** To ensure integrity and accuracy, each record in your source system should have a unique identifier. *Note:* This is crucial for tracing relationships between objects: data points in one system must correspond correctly to related data points in another. For example, when migrating backend customer details along with past contract information from a previous CRM, each contract record must reference the correct customer ID from the backend system.
- **Gather metadata samples.** This helps you understand the structure of the existing source data. Knowing the exact field names, data types, and relationships allows users to avoid guesswork and ensures fields are mapped correctly to the corresponding SF objects. Metadata samples also help prevent discrepancies caused by differences in naming conventions, field lengths, or data formats between a source system and Salesforce. With these samples, you can easily identify fields that require transformation, such as converting date formats or standardizing picklist values.
- **Map source to target.** Once you've analyzed your data, begin mapping tables, fields, and values from your source system to the appropriate SF objects, fields, and values to ensure that everything migrates to the correct locations. As a best practice, create a detailed mapping document that outlines exactly how each element from the source maps to the target in Salesforce. This will serve as a key reference throughout the migration process and help prevent errors.

**Data Cleansing**

Clean your content before migration – plan for where it's going, not just where it's coming from. Quality issues from your old systems should not follow you into Salesforce.
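A mapping document like the one described above can also live next to the migration scripts as code, so transformations such as date reformatting and picklist standardization are applied consistently. A minimal Python sketch; the source columns (`cust_email`, `signup_date`, `city_code`), target fields, date format, and picklist values are all hypothetical examples:

```python
from datetime import datetime

# Hypothetical picklist standardization: source codes -> Salesforce values.
PICKLIST = {"nyc": "New York", "sf": "San Francisco"}

def to_iso(value):
    """Convert an assumed MM/DD/YYYY source date to ISO 8601."""
    return datetime.strptime(value, "%m/%d/%Y").date().isoformat()

# Mapping document as code: source column -> (Salesforce field, transform).
FIELD_MAP = {
    "cust_email": ("Email", lambda v: v),
    "signup_date": ("CustomerSince__c", to_iso),
    "city_code": ("City__c", lambda v: PICKLIST.get(v.lower(), v)),
}

def map_record(source_row):
    """Apply the mapping document to a single source record."""
    return {sf_field: transform(source_row[src])
            for src, (sf_field, transform) in FIELD_MAP.items()}
```

Keeping the map in one place means the same rules run identically during the pilot migration and the full load, and the mapping document stays in sync with what the scripts actually do.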
Gather all relevant data from the various sources intended for moving and inspect it for inconsistencies, duplicates, missing values, and anomalies. Identify and eliminate redundant records, standardize formats, and make sure that all required fields are populated before attempting to import records.

### Phase 2: Migration

This is the pivotal stage of the entire process, where the actual data movement happens. Before initiating it, make sure that:

- All source data is thoroughly backed up.
- Only high-quality, clean content makes it into Salesforce.
- The chosen migration tools are configured according to the predefined mapping and transformation rules.

Before this, it makes sense to run a pilot migration using a subset of your data. This allows you to identify potential issues and make necessary adjustments before the full-scale transition. Now, take a deep breath – it's go time. Proceed with the complete migration, keeping a close eye on the process to quickly resolve any errors or interruptions. Implement robust error-handling mechanisms to capture and log migration errors, including validation failures and system errors. Utilize logging frameworks or Salesforce debug logs to record migration activities and monitor progress in real time.

### Phase 3: Quality Assurance

Once the transition is complete, shift your focus to post-migration activities to ensure everything is in place and functioning as expected.

- **Validation.** Check the data for completeness, correctness, and proper relationships between entities. This way, you can ensure it has retained its integrity, especially for key pieces of information such as contact details (email addresses, phone numbers, etc.).
- **Cleanup.** Surely enough, you have already cleaned your data BEFORE, but you'll likely need to do some cleanup afterward. Often, during a migration, you'll find you have duplicates due to merging more than one source.
Consolidate and tidy them up to ensure your users have the best experience on the SF platform.

- **End-user engagement.** Invite your end-users to interact with the migrated data. Their feedback is crucial in identifying any remaining issues and finalizing adjustments before full deployment.

## Salesforce Data Migration Checklist

Now that we've examined the migration phases in detail, let's summarize them with a visual checklist to wrap things up.

## Challenges in Salesforce Data Migration and How to Overcome Them

Transitioning to a new system always brings some instability. Before you can celebrate success and enjoy all the benefits, you'll need to rebuild workflows, adapt processes, and prepare your team for inevitable disruptions – including changes to their daily routine and potential downtime. And let's be honest – any operational hiccup can directly impact revenue. Read on to discover the key business challenges of Salesforce data migration and, more importantly, how to overcome them.

**1. Data relevance and quality**

*Challenge:* Not all legacy information is useful. Businesses must decide what is essential for their operations and regulatory compliance.

*Solution:* Define a clear selection strategy involving stakeholders from sales, marketing, and customer service. Conduct an audit to determine what should be migrated, archived, or discarded.

**2. Business continuity and downtime**

*Challenge:* Data movement can disrupt daily operations, especially if users cannot access critical information during the transition.

*Solution:* Develop a phased migration plan to minimize downtime. Consider running Salesforce in parallel with legacy systems during the transition, ensuring teams can continue operations smoothly.

**3. Change management & user adoption**

*Challenge:* Migrating to Salesforce often involves changes in workflows, processes, and user habits. Employees may resist adopting the new system if they don't see the value or struggle with the transition.
*Solution:* Clearly communicate the migration to all stakeholders. Provide comprehensive training, and develop a post-migration support plan to ensure users understand and embrace the new system.

**4. Compliance and regulatory risks**

*Challenge:* Businesses operating in regulated industries (e.g., healthcare, finance, legal) must ensure that migrated data complies with privacy laws (GDPR, HIPAA, CCPA, etc.).

*Solution:* Involve legal and compliance teams in planning. Implement data encryption, access controls, and audit trails to ensure compliance during and after migration.

**5. Cost management & budget overruns**

*Challenge:* Unexpected challenges, such as data quality issues, integration complexities, or extended project timelines, can lead to higher migration costs than originally planned.

*Solution:* Define a realistic budget, factoring in source cleanup, tools, integration efforts, and training. Consider automation and third-party tools to reduce manual labor.

**6. Stakeholder alignment & decision-making**

*Challenge:* Different departments (IT, sales, customer service, finance) have different priorities for what data should be migrated and how it should be structured. Misalignment can cause delays.

*Solution:* Establish a cross-functional team with representatives from all impacted departments. Make sure the migration goals are clearly communicated and understood by all parties.

## Salesforce Data Migration Tools

### Salesforce Data Import Wizard

The [Salesforce Data Import Wizard](https://help.salesforce.com/s/articleView?id=xcloud.data_import_wizard.htm&type=5) is a built-in import tool that operates within the SF web app – there's no need to download any software to use it. It works with a number of standard objects as well as custom objects and allows up to 50k records to be uploaded at once.

**Pros**

- **User-friendly:** Intuitive interface suitable for users with minimal technical expertise.
- **No installation required:** Accessible directly within Salesforce; no need for external installations.
- **Duplicate management:** Offers basic duplicate detection during import.

**Cons**

- **Record limit:** Can import up to 50,000 records at a time.
- **Limited functionality:** Primarily supports importing data; lacks export capabilities.
- **Object support:** Not all standard and custom objects are supported.

**Best for**

- One-time import tasks where automation is not required.
- Imports of up to 50,000 records.
- Straightforward import processes without complex transformations or scheduling requirements.

### Salesforce Data Loader

The [Salesforce Data Loader](https://help.salesforce.com/s/articleView?id=xcloud.data_loader_about.htm&type=5) is a desktop-based tool designed for handling bulk data tasks, such as importing, exporting, and deleting records in Salesforce. It's built to manage large volumes of information efficiently, making it a go-to solution for businesses working with extensive datasets.

**Pros**

- **High volume handling:** Capable of importing, exporting, updating, and deleting up to 5 million records.
- **Broad object support:** Supports a wide range of standard and custom objects.
- **Advanced features:** Includes functionalities like field mapping and [batch processing](https://skyvia.com/blog/batch-etl-processing/).

**Cons**

- **Technical complexity:** May present a steeper learning curve for non-technical users.
- **Installation required:** Must be installed locally on your machine.
- **Scheduling limitations:** Doesn't support scheduling of data loads natively.

**Best for**

- Cases where managing large amounts of customer data is critical, in areas like marketing and sales.

### Skyvia

Skyvia's powerful cloud-based platform supports all types of [Salesforce data migration](https://skyvia.com/connectors/salesforce), providing a dedicated connector.
Even if your data is in CSV format, you can easily import it using [Skyvia's data loader](https://skyvia.com/data-integration/salesforce-data-loader). With Skyvia Import, you can connect Salesforce to other cloud apps or databases and migrate their content into your CRM environment with little to no effort. Additionally, Skyvia supports transferring data between multiple orgs, facilitating [Salesforce to Salesforce](https://skyvia.com/blog/salesforce-to-salesforce-integration/) migration. Beyond this, [Skyvia Backup](https://skyvia.com/backup/salesforce-backup) lets you back up your SF org before major changes, providing an added layer of security.

**Pros**

- **Cloud-based:** No installation required; accessible from any browser.
- **Versatile integration:** Supports [connection to over 200 data sources and destinations](https://skyvia.com/connectors).
- **Scheduling:** Offers automated scheduling of migration tasks.
- **Ease of use:** Skyvia is ranked among the most user-friendly [ETL tools](https://skyvia.com/blog/etl-tools/) on the market [by G2 Crowd](https://www.g2.com/products/skyvia/reviews).

**Cons**

- **Subscription-based:** Advanced features may require a paid subscription.
- **Lack of hands-on guides:** The platform's extensive functionality is not fully covered by video tutorials, which some users may find complex.

**Best for**

- Businesses needing automated and scheduled data import processes.
- Scenarios requiring DML operations with UPSERT functionality.
- Complex integration tasks that involve data relations and splitting.
- Regular updates and synchronization of Salesforce with other systems.

### Jitterbit

[Jitterbit Data Loader](https://www.jitterbit.com/application/salesforce-data-loader/) is a free, open-source tool that simplifies the import and export of data between flat files, databases, and Salesforce.
It offers a user-friendly, wizard-based interface, supports the main DML operations and bulk load, and includes features such as automatic backups and cloud management for efficient data handling.

**Pros**

- **Open-source:** Free to use, with a supportive community.
- **Automation:** Allows scheduling of data migration tasks.
- **Easy-to-use interface:** Intuitive design simplifies the migration process.

**Cons**

- **Limited support:** Being open-source, official support may be limited.
- **Resource intensive:** Can consume significant system resources during operation.

**Best for**

- Medium-to-large migrations or frequent data integrations that require minimal technical expertise.
- Bulk data operations across Salesforce and databases.

### Dataloader.io

Developed by MuleSoft, [Dataloader.io](http://dataloader.io) is a cloud-based application that allows users to securely import, export, and delete unlimited amounts of data in Salesforce. It requires no installation and offers features like intelligent mapping, scheduling, and integration with cloud storage platforms like Box, Dropbox, and Google Drive.

**Pros**

- **Web-based:** No installation required; operates entirely in the cloud.
- **Ease of use:** Simple interface with drag-and-drop functionality.
- **Scheduling:** Supports scheduling of data loads.

**Cons**

- **Record limitations:** The free version limits the number of records per import.
- **Feature restrictions:** Advanced features are available only in paid versions.

**Best for**

- Non-technical users who want a simple, cloud-based tool for scheduling automated data imports and exports.

### Salesforce Inspector

Salesforce Inspector is a lightweight browser extension for Chrome and Firefox that adds advanced data exploration tools to the Salesforce interface. It allows users to view, edit, import, and export data directly within the platform. With real-time access to field metadata and record details, Salesforce Inspector is a perfect choice for administrators and developers needing an on-the-fly tool for managing data in real time.
**Pros**

- **Browser extension:** Quick access without the need for separate applications.
- **Real-time data:** Allows viewing and editing directly within the SF platform.

**Cons**

- **Limited scope:** Best suited for small-scale tasks rather than large migrations.
- **Security considerations:** As a third-party extension, data permissions must be carefully handled.

**Best for**

- Small-scale operations, quick edits, metadata exploration, and efficient troubleshooting of records within Salesforce.

### SFDX Data Move Utility (SFDMU)

[SFDMU](https://help.sfdmu.com/) is a developer-focused, open-source plugin for the Salesforce CLI. Its built-in functionality allows you to perform complex migrations across SF environments, including scratch orgs and sandboxes. The utility supports object sets, enabling the execution of multiple migration jobs in one run. Its command-line interface is designed to handle complex data models with multiple dependencies, allowing you to modify specific records or fields across interconnected objects. It's especially effective for development teams working on CI/CD pipelines or transferring data between multiple environments in highly customized Salesforce setups.

**Pros**

- **Command-line tool:** Offers advanced control over migration processes.
- **Complex data handling:** Capable of managing complex data relationships and dependencies.
- **Automation:** Supports scripting for automated migration tasks.

**Cons**

- **Technical expertise required:** Designed for users comfortable with command-line interfaces.
- **Setup complexity:** Initial configuration can be challenging for some users.

**Best for**

- Developers performing complex migrations, moving relational data, or managing large, iterative projects involving Salesforce scratch orgs or sandboxes.

## Top Salesforce Migration Tools Compared

The table below provides a brief outline of each tool's capabilities, helping you choose the most suitable option for your migration needs.
| Tool | Key features | Cost | Ease of use | Customer support |
|---|---|---|---|---|
| [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard | Built-in tool within Salesforce; supports importing data into standard and custom objects; user-friendly wizard interface; handles up to 50,000 records per import. | Included with Salesforce subscription. | Intuitive and straightforward for non-technical users; accessible directly within the Salesforce interface. | Standard Salesforce support channels; extensive documentation available. |
| Salesforce Data Loader | Desktop application by Salesforce; supports INSERT, UPDATE, UPSERT, DELETE, and export operations; handles up to 5 million records per batch; command-line interface for automation. | Free. | Requires installation; more complex interface suitable for experienced users; command-line interface available for advanced tasks. | Standard Salesforce support channels; comprehensive documentation and community forums. |
| Skyvia | Cloud-based data integration platform; supports data import, export, replication, and synchronization; connects with various data sources, including databases and cloud services. | Freemium (free tier available; paid plans for advanced features). | User-friendly web interface; suitable for users with varying technical expertise. | Email and chat support for paid plans; detailed online documentation and tutorials. |
| Jitterbit | Cloud-based tool; supports INSERT, UPDATE, UPSERT, DELETE, query, and bulk load operations; intuitive data mapping and transformation features. | Free. | Modern and intuitive interface; designed for quick setup and use. | Community support; access to Jitterbit's knowledge base. |
| Dataloader.io | Web-based application; supports import, export, and delete operations; allows scheduling tasks; auto-detection of relationships and intelligent mapping. | Freemium (free tier available; paid plans for advanced features). | Clean and intuitive user interface; no installation required; suitable for users of all levels. | Email support for paid plans; comprehensive online resources and community forums. |
| Salesforce Inspector | Browser extension; provides quick access to view and edit Salesforce data; supports data import and export operations. | Free. | Highly accessible within the browser; minimalistic interface; best suited for quick, ad-hoc tasks. | Community-driven support; limited official support channels. |
| SFDX Data Move Utility (SFDMU) | Command-line tool; facilitates complex data migrations; supports migrating data with relationships intact; ideal for sandbox seeding and org-to-org data transfers. | Free. | Requires command-line proficiency; suitable for advanced users and developers. | Community support through forums and GitHub; limited official support. |

## How to Migrate Data to Salesforce with Skyvia

Skyvia supports all types of Salesforce data migration, including connection to legacy systems. Watch these use case videos to learn how the cloud platform can facilitate Salesforce integration with other systems:

- Salesforce and SQL Server Integration
- MS Dynamics 365 Business Central and Salesforce Integration

You can find more videos of interest on our [YouTube channel](https://www.youtube.com/@SkyviaPlatform).

## Post-Migration: Ensuring Continuous Improvement

Effective post-migration strategies are essential for the long-term success and sustainability of your Salesforce implementation.
Beyond the technical aspects of migration, focusing on user adoption and ongoing management is crucial for maximizing the value of your investment and driving business outcomes.

**Strategies for user adoption and ongoing management**

- Offer ongoing education and support resources, such as training materials, documentation, and video tutorials, to reinforce learning and address any new feature updates or changes.
- Create friendly competitions or challenges to motivate users to explore and leverage Salesforce features to their fullest potential.
- Implement regular quality checks and audits to identify and address any data issues or inconsistencies.
- Stay informed about Salesforce releases, updates, and new features to leverage the latest capabilities for the benefit of your organization.

## Conclusion

Migration is a challenging but necessary step for business growth. While the risks involved can be significant, the rewards far outweigh them. As you've seen, the success of the operation is directly tied to the effort you put in – approach it wisely and stay prepared for any challenges along the way. The right migration tool can make all the difference. Consider using Skyvia for a seamless migration experience. Its advanced functionality supports backing up your data beforehand, scheduling ongoing updates, and even integrating non-programmable legacy systems, resolving compatibility issues. Take the next step with confidence – migrate data to Salesforce with Skyvia.

## FAQ for Salesforce Data Migration

**What is Salesforce data migration?**

It's the process of transferring data from other systems – such as databases, legacy CRMs, or cloud storage – into Salesforce. This process ensures that businesses can consolidate and leverage their data effectively within the SF ecosystem.

**What are the types of data migration in Salesforce?**
Companies migrate to Salesforce from various platforms, such as:
- Databases (SQL, NoSQL, or other data repositories)
- Applications (legacy CRMs or industry-specific apps)
- Storage (moving files and documents)
- Cloud (data transfer from on-prem or other cloud services)

How do I prepare for a successful Salesforce data migration?

Prioritize planning, testing, and collaboration. Here's how to prepare:
- Establish a cross-functional team to oversee the process.
- Define a project plan for scope, timelines, order, and budget.
- Choose the right tool based on data complexity and volume.
- Follow a migration checklist before execution.

What are the challenges of Salesforce data migration?

Salesforce migration challenges fall into two categories:
- Business-related: strategic, operational, and financial concerns, such as business continuity, compliance, and stakeholder alignment.
- Technical: data mapping issues, handling large data volumes, error logging, and system integrations.

Can I automate Salesforce data migration?

Yes. Many tools offer automation features for scheduling data loads and keeping Salesforce in sync with other systems. Some popular options include:
- Skyvia
- Jitterbit Data Loader
- Dataloader.io
- SFDMU

How do I choose the right data migration tool for Salesforce?

When selecting a migration tool, consider:
- Data volume limits.
- Integration needs: ongoing synchronization vs. one-time import.
- Cost: compare free vs. paid options.
- Ease of use: does the tool require coding skills?
- Customer support: what level of support is available?

How does Skyvia facilitate migrating to Salesforce?

Skyvia simplifies SF migration through its advanced automation and integration capabilities. It's ideal for ongoing data updates and sync with other systems.
With our Salesforce Data Loader, you can easily connect to your SF org and perform a full range of actions: INSERT, UPDATE, and DELETE.

What is Salesforce org-to-org migration?

It is the transfer of data between different SF orgs. This is necessary when:
- Syncing multiple orgs within an enterprise.
- Rebuilding a new SF org due to limitations in the existing data model.
- Merging SF environments after acquisitions.

[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/)

With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.
Salesforce Integration Best Practices in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | March 30, 2024

Nowadays, the market is flexible and changes fast. Businesses aiming to stay competitive must provide exceptional customer experiences, make informed decisions based on accurate data, and achieve operational efficiency. Data integration platforms like Salesforce help solve many critical challenges along the way. In this article, we'll discuss why integrating data into Salesforce is so important, what benefits it may bring to users, and go through the most successful practices and integration patterns.
Table of Contents
- Understanding Salesforce Integration and Its Importance
- Real-life Examples of Integrating Salesforce with Additional Platforms
- Salesforce Integration Best Practices
- What are the Common Integration Patterns?
- Conclusion

Understanding Salesforce Integration and Its Importance

Integrating data with Salesforce means connecting it with other data sources and systems, like ERP, HR, finance, and marketing platforms, and automatically exchanging data between them. Companies use many services, which makes this integration quite a challenge. So why should businesses integrate data into Salesforce? Consider it if you need to enhance business operations in your organization, or if your company wants to elevate its customer experience. Let's discover more reasons why data integration is essential.

360-Degree Customer View

Integration consolidates customer data from multiple touchpoints into Salesforce, offering a complete view of customer interactions to personalize:
- Customer service.
- Targeted marketing.
- Sales strategies tailored to individual customer needs.

For instance, businesses can synchronize customer data between Salesforce and an ERP system like [QuickBooks](https://skyvia.com/connectors/quickbooks) or [Oracle NetSuite](https://skyvia.com/connectors/netsuite). This ensures sales teams can access up-to-date information on customer orders, inventory levels, and payment statuses. Integration tools can also automate the flow of support ticket information into Salesforce, helping sales and service teams collaborate more effectively and resolve issues faster. Watch the video on how to revolutionize your Salesforce experience and learn how you can automate your business processes.

Streamlined Business Processes

Connecting Salesforce with other systems automates workflows and data exchange, eliminating manual data entry, reducing errors, and saving time.
For example, integrating Salesforce with an email marketing platform such as Mailchimp via [data integration tools](https://skyvia.com/blog/data-integration-tools/) allows for automatically updating email lists based on Salesforce contacts, facilitating targeted marketing campaigns directly from CRM data. You can also use a [Salesforce SMS integration](https://www.heymarket.com/integrations/salesforce/) such as Heymarket's to track all of your text messages in your CRM and trigger texting flows based on your Salesforce data.

Read more on these cases in our articles:
- [Stripe to Salesforce Integration: Easy-to-Follow Guide](https://skyvia.com/blog/stripe-salesforce-integration/)
- [Salesforce and ERP Integration](https://skyvia.com/blog/salesforce-and-erp-integration/)
- [How to connect Intercom to Salesforce](https://skyvia.com/blog/salesforce-intercom-integration/)
- [Marketo Salesforce Integration: Tutorial Guide](https://skyvia.com/blog/marketo-salesforce-integration/)
- [Shopify Salesforce Integration: 3 Best Methods](https://skyvia.com/blog/shopify-salesforce-integration-3-best-methods/)

Improved Data Quality

Data integration improves the accuracy and quality of data because:
- There is no need to enter data into several systems.
- Data isn't duplicated.
- Human error is removed from the process.

Teams can reduce the risk of data silos, duplications, and discrepancies, ensuring that the business operates on reliable data. For instance, data integration solutions like Skyvia reduce the complexity of connecting Salesforce with other systems. Skyvia offers [200+](https://skyvia.com/connectors/) pre-built connectors and templates that simplify the integration process, allowing companies to quickly set up and automate data flows without extensive IT resources, boosting productivity.

Scalability and Flexibility

As business needs evolve, the integration setup can be easily adjusted or expanded to include new platforms or data sources.
Integrating Salesforce with additional third-party platforms means a more connected and advanced business ecosystem. It amplifies Salesforce capabilities, leveraging its CRM strengths while enhancing functionality, scalability, and flexibility, and saving time. Honest pay-as-you-go [pricing](https://skyvia.com/pricing/) is also a substantial benefit in this case.

Data Security and Compliance

Integrating systems through secure platforms ensures that data exchange complies with data protection regulations. Data integration platforms offer secure data transmission channels and adhere to best practices in data security, helping businesses maintain the confidentiality and integrity of their data.

Real-life Examples of Integrating Salesforce with Additional Platforms

Here are real user stories showing how integrating Salesforce with other platforms helps businesses in different cases.

Exclaimer Improved Salesforce Integration and Transformed Data Connectivity

Background: Exclaimer is a UK-based SaaS company that develops email signature management software. It selected Salesforce as a central data system and the single source of truth for all company departments. However, the fast pace of data generation and the integration between [ERP and CRM systems](https://skyvia.com/blog/comprehensive-guide-to-erp-and-crm-integration/) became challenging.

Solution: They found Skyvia, a no-code system providing a drag-and-drop UI and simplifying data transfer without additional staff training.

Result: Analysts can connect Salesforce with the local billing system, enabling a smooth flow of billing data, AR, MRR, and subscription details, [with a 25-30% benefit.](https://skyvia.com/case-studies/exclaimer)

Cirrus Insight Enhanced Salesforce-QuickBooks Integration

Background: Cirrus Insight is a CRM platform integrating Salesforce with third-party solutions, such as Microsoft's Office 365 and Gmail. They sought to integrate Salesforce with QuickBooks to enhance productivity and reduce costs.
Solution: Cirrus Insight selected Skyvia to help with Salesforce-to-Salesforce and Stripe-to-Salesforce integration and to connect with QuickBooks.

Result: This integration improved operational efficiency, streamlined financial reporting, and saved time and resources, allowing the team to focus on critical business goals.

Salesforce Integration Best Practices

Integrating Salesforce effectively into your business processes and systems landscape depends on:
- Business goals.
- Approaches.
- Best practices selection.

Let's walk through the practice checklist and select the items suitable for your business.

Conducting Comprehensive Planning and Analysis

Dive deep into your current and future business needs, the data landscape, and system dependencies to identify the integration goals. This step informs the integration project's scope, requirements, and potential challenges.

Prioritizing Data Security and Compliance

Ensure that data handled within and through Salesforce complies with laws like GDPR and standards such as ISO. Use secure data transmission methods and implement access controls to protect sensitive information.

Utilizing Robust Integration Tools and Platforms

Use integration platforms like [Skyvia](https://skyvia.com/), [MuleSoft](https://skyvia.com/etl-tools-comparison/#Mulesoft), or [Dell Boomi](https://skyvia.com/etl-tools-comparison/#Boomi), which simplify complex integration by offering extensive Salesforce connectivity options, providing pre-built templates, and supporting custom workflows. [Go here to see a comparison of capabilities](https://skyvia.com/etl-tools-comparison/).

Implementing Data Mapping and Transformation

Accurately map data fields between Salesforce and other systems, transforming data into the appropriate formats and structures required for seamless integration and functionality.
Establishing Real-Time Synchronization

For critical business operations, implement real-time or near-real-time data synchronization to ensure that changes in one system are immediately reflected in Salesforce, maintaining data coherence and operational responsiveness.

Enabling User Training and Adoption

Tailor training to different user roles, emphasizing the benefits and changes the Salesforce integration brings. Foster user adoption through comprehensive training programs and support resources.

Monitoring and Maintenance

Monitor the integration for performance and errors. Establish maintenance routines to update integrations based on new Salesforce releases or changes in connected systems.

Iterative Testing and Optimization

Adopt an iterative approach to testing, starting small and expanding scope as confidence grows. Use feedback to optimize and refine the integration, ensuring it meets evolving business needs.

Embracing Scalability and Future-Proofing

Design your integration architecture with scalability in mind, anticipating future business growth, data volumes, and new integration points to avoid future rework.

Seeking Expert Guidance and Support

Don't hesitate to consult Salesforce integration experts or partners for complex integrations. Their experience can provide invaluable insights, best practices, and troubleshooting support.

Creating a Data/Process Integrity Checklist

Develop a checklist to assess the integrity of data and processes affected by the integration, ensuring ongoing accuracy and reliability.

Reviewing and Refining the Project Plan

Regularly check your project plan, refining objectives, timelines, and resources in response to challenges encountered and lessons learned during implementation.

Choosing a Tool to Help Manage the Process

Use project management tools like Jira, Trello, or Asana to keep track of tasks, milestones, and documentation, ensuring a coordinated approach across teams and stakeholders.

What are the Common Integration Patterns?
Integration patterns offer distinct advantages and trade-offs, and their suitability depends on factors such as performance requirements, data volume, latency tolerance, and system architecture. You may combine them to address diverse integration needs within your enterprise ecosystem. Let's review the most common ones and see how Salesforce uses them.

Request and Reply

In this pattern, one system sends a request to another and waits for a response before proceeding. It's often used in scenarios requiring immediate feedback or a specific response. Examples include API calls, RPC (Remote Procedure Call), and synchronous web service interactions. While this pattern offers real-time responsiveness, it may cause latency and potential bottlenecks, especially in distributed systems. Salesforce provides REST and SOAP APIs that allow external systems to send synchronous requests to access, manipulate, and query Salesforce data. For example, a mobile application may use the Salesforce REST API to fetch customer records or update sales opportunities in real time.

Fire and Forget

In contrast to Request and Reply, the Fire and Forget pattern means sending a message or request to a destination without waiting for a response. Once the message is sent, the sender doesn't expect any response from the recipient. This pattern is helpful for scenarios where an immediate response is unnecessary, such as logging, event notification, or non-critical data updates. It reduces coupling between systems and can improve scalability and fault tolerance. The Salesforce Platform Events feature provides asynchronous communication between Salesforce and external systems, allowing them to publish and subscribe to events. External systems can publish events to Salesforce, such as notifications about inventory updates or customer interactions, without waiting for a response.
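A toy sketch of the two patterns above in plain Python (not actual Salesforce APIs; the record store, event shape, and worker here are all hypothetical): a request-and-reply caller blocks on the lookup result, while a fire-and-forget publisher enqueues an event and moves on without waiting.

```python
import queue
import threading

# Request and Reply: the caller blocks until the callee returns a result.
def lookup_account(account_id: str, store: dict) -> dict:
    # Synchronous call: the caller cannot proceed without the response.
    return store[account_id]

# Fire and Forget: the caller enqueues an event and returns immediately;
# a background worker picks it up later, and no response is expected.
event_bus = queue.Queue()
processed = []

def publish(event):
    event_bus.put(event)  # returns at once, no reply expected

def worker():
    while True:
        event = event_bus.get()
        if event is None:  # sentinel that stops the worker
            break
        processed.append(event)

store = {"001": {"name": "Acme", "tier": "gold"}}

# Request and Reply: the result is available as soon as the call returns.
account = lookup_account("001", store)

# Fire and Forget: publish an inventory update without waiting for anyone.
t = threading.Thread(target=worker)
t.start()
publish({"type": "InventoryUpdate", "sku": "A-42", "qty": 7})
publish(None)  # shut the worker down
t.join()
```

The queue is what decouples the two sides: the publisher never learns whether, when, or how the event was handled, which is exactly the trade-off the Fire and Forget pattern accepts in exchange for lower coupling.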
Batch Data Synchronization

Batch data synchronization periodically synchronizes data between systems in bulk, typically at scheduled intervals. Instead of processing individual data changes in real time, batch synchronization collects and processes large data volumes in batches. This pattern is suitable for scenarios with high data volume, limited bandwidth, or where real-time synchronization is not required. It helps optimize resource usage, reduce network overhead, and improve system performance. Salesforce offers tools like [Data Loader](https://skyvia.com/solutions/salesforce-solutions) to facilitate batch data synchronization between Salesforce and external databases, ERP systems, or legacy apps. Alternatively, you can use more advanced third-party tools like [Skyvia Data Integration](https://skyvia.com/data-integration/). Companies can schedule data imports or exports to synchronize large volumes of data between Salesforce and external systems at scheduled intervals.

Remote Call-In

In this pattern, a system invokes functionality or services hosted on a remote server or platform. It is commonly used in distributed architectures, where components must communicate across network boundaries. Remote Call-In can involve synchronous (e.g., RPC, RMI) or asynchronous (e.g., message queuing) communication mechanisms. It allows using external capabilities or resources without replicating functions locally, promoting code reusability and modular design. Salesforce supports remote call-in mechanisms through various integration methods, like Apex callouts, Apex REST services, and External Services. External applications can invoke Salesforce functionality or retrieve data remotely, for example by integrating third-party services for address validation or payment processing.

Data Virtualization

This pattern allows apps to access and manipulate data from multiple sources, like databases, web services, and files, as if they resided in a single, unified data source.
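The core of the Batch Data Synchronization pattern described above can be sketched in a few lines of plain Python (the record shape and batch size here are hypothetical; real bulk APIs, such as Salesforce's, set their own batch-size limits): a scheduled job splits the changes accumulated since the last run into fixed-size chunks before loading.

```python
from typing import Iterator

def batches(records: list, size: int) -> Iterator[list]:
    """Split a record set into fixed-size batches for a scheduled bulk load."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Hypothetical changed rows collected since the last scheduled run.
rows = [{"Id": f"REC-{i:03d}", "Amount": i * 100} for i in range(25)]

# With a batch size of 10, 25 rows become 3 bulk requests instead of 25 single calls.
chunks = list(batches(rows, 10))
```

Fewer, larger requests are the whole point of the pattern: network and per-call overhead are paid once per batch rather than once per record.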
Instead of physically moving or replicating data, data virtualization provides a layer of abstraction that dynamically integrates and presents data in real time. This pattern facilitates data integration, federation, and abstraction without the need for complex ETL processes or data movement. Data virtualization promotes agility, reduces data duplication, and simplifies data access for applications and users. Salesforce External Objects and [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/) virtualize external data sources within Salesforce without physically replicating the data. Companies can access and query data from external systems like ERP or legacy databases directly within Salesforce, leveraging real-time data without the need for data replication.

Conclusion

As your business grows, its integration needs also evolve. To keep pace, companies need to consider strategic planning, scalability, flexibility, security, and more. The Salesforce integration strategies of 2024 call for a balanced approach using various models, like API, asynchronous communication, and hybrid integration. Prioritize robust data management, scalability optimization, and strong security measures to ensure seamless connectivity, compliance, and value maximization across integrated systems.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support.
With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

Salesforce middleware integration tools

By [Brenna Schewing](https://skyvia.com/blog/author/brenna-schewing/) | April 21, 2023

Salesforce is a popular CRM platform used worldwide for tracking data, collaborating, and storing important information. Though Salesforce is an amazing tool, it can become even more powerful with integration. Salesforce integration tools make it possible to connect Salesforce with two or more systems. As a result, your organization can increase business efficiency by sharing data and automating business processes. Moreover, by integrating Salesforce with other systems and applications, businesses can avoid information silos.
This occurs when useful data is generated or recorded in one system but never integrated or used to facilitate business efficiency. Below, we have compiled a list of the top Salesforce integration tools. Keep reading to learn more.

Table of Contents
- Top 11 Integration Tools
- Skyvia
- RapidiOnline
- Commercient
- Boomi
- Jitterbit
- MuleSoft
- TIBCO
- Apache Airflow
- Talend
- ZoomInfo
- Conga
- Conclusion

Skyvia

[Skyvia Data Integration](https://skyvia.com/) makes it simple to connect Salesforce with all major cloud applications and on-premise data sources. As a result, it's easy to automate business data flows and processes, gain valuable insights from data analytics, and set up data warehousing and business data replication. Skyvia is an easy-to-use, cloud-based ETL solution that requires no coding knowledge. The platform seamlessly links with both local and cloud-based services, enabling the creation of ETL, ELT, and reverse ETL processes. Equipped with a range of data analysis features, the service aims to simplify and enhance business capabilities. The platform delivers a top-notch data service suitable for businesses of all kinds and requires no maintenance. Skyvia also dynamically adjusts data load in real time and offers adaptable pricing plans. All this makes it simple for Salesforce administrators to perform data integration.

Pros:
- Easy-to-use, no-code solution
- Allows direct data loading between cloud apps and databases
- Performs bi-directional synchronization
- CSV import/export
- Offers advanced integration features such as powerful field mapping and complex transformation expressions

Pricing:
- Free trial
- Five different pricing plans, starting from $15 per month

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Users' private data stored by Skyvia is encrypted at rest using AES 256-bit encryption, one of the strongest ciphers available. Read more about data storage [here](https://skyvia.com/security). |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Free plan available. Paid plans start from $15 per month |
| Support Availability | 24/7 (Support Portal, Live Chat, and more) |

RapidiOnline

[RapidiOnline](https://www.rapidionline.com/) specializes in Salesforce and Microsoft Dynamics integrations. Common integrations include Salesforce with MS Dynamics NAV, AX, GP, 365 FO, and 365 BC. However, RapidiOnline also supports a wide variety of other solutions, APIs, and databases. RapidiOnline is simple to use, with pre-configured templates, ERP insight Lightning components, and no programming knowledge required. Furthermore, RapidiOnline is flexible, working with all Microsoft Dynamics versions and supporting on-premise, hosted, and cloud deployments.

Pros:
- No-code solution
- Performs bi-directional synchronization
- Strong security features, including no data storage
- Minimal implementation experience required

Cons:
- High pricing
- No live chat; you must submit a ticket to get in touch with an integration expert

Pricing:
- Starts at $315 per month

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | Customers can choose one direction or bi-directional |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Salesforce AppExchange Partner certified. All transmissions via the internet are SSL encrypted, and all data is processed end-point to end-point in memory, with no intermediate storage. No customer data is stored on any Rapidi server. Any sensitive configuration data is stored only in encrypted form. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Starts at $315 per month |
| Support Availability | Email and documentation |

Commercient

[Commercient](https://www.commercient.com/) is an ERP to Salesforce integration solution that promises a quick turnaround.
What's more, Commercient is also a no-code solution that requires only a simple download. Commercient can synchronize Salesforce and ERP data as frequently as required. You can configure the tool to perform a sync once a day, every hour, or even in real time.

Pros:
- No-code solution
- Performs bi-directional synchronization
- Minimal implementation experience required
- Fast: Commercient promises a 24-hour turnaround

Cons:
- Limited customer support
- Some users complain that the implementation process is longer than promised

Pricing:
- $15 per user per month

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Offers several security features, including Role-Based Access Control, Encryption, Multi-Factor Authentication, Data Backup, Recovery, and Compliance. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Starts at $15 per user per month |
| Support Availability | Email and documentation |

Boomi

[Boomi's AtomSphere](https://boomi.com/platform/) Salesforce integration solution is a no-code tool with an intuitive visual interface. Boomi offers users a library of pre-existing connectors, making it simple to quickly set up a Salesforce integration. However, if your Salesforce integration project is more complex, Boomi's cons could quickly outweigh its pros. You will likely need to pay extra for unique connectors or add on additional development costs.
Pros:
- Easy-to-use drag-and-drop UI, a library of connectors, pre-built integration processes, and reusable components
- Automated data mapping and other pervasive intelligence features
- Boomi's Master Data Hub enables bi-directional synchronization

Cons:
- Users report that the initial setup can be quite complex
- Limited functionality with Salesforce platform events
- Limited debugging updates to Boomi AtomSphere
- Not ideal for high-volume integrations
- Complex integrations require further support at extra cost, including a development budget
- No testing framework
- Limited customer support

Pricing:
- Starts at $50 per month with the first 90 days free
- Pricing is usage-based, so costs could mount quickly

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES (however, requires coding knowledge) |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO (for basic integrations only) |
| Security | Data processed in a local Atom never enters the Boomi data center. Data storage is behind the firewall on a customer server where the Atom is deployed. The data is transported directly to either the SaaS or local application through a connector configured by the user. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Free trial for 90 days, followed by $50 per month |
| Support Availability | Email and documentation |

Jitterbit

[Jitterbit's Harmony](https://www.jitterbit.com/harmony/) platform facilitates business efficiency by integrating Salesforce with any software-as-a-service, on-premise data source, or cloud application. To do this, Jitterbit Harmony offers built-in integration templates that make it easy to connect Salesforce with thousands of applications. Moreover, the Harmony Connection Builder makes it possible for users to build reusable application connectors.
Pros:
- Click-based solution
- Easy-to-use visual interface with drag-and-drop functionality
- Strong security features

Cons:
- Some coding knowledge required
- Must use Jitterbit Data Loader
- Limited customer support

Pricing:
- Free trial available before purchase
- Most basic plan starts at $1,000 per month

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | NO |
| Automation Capabilities | NO |
| Coding Knowledge Required | YES |
| Security | Offers several security features, including Role-Based Access Control, Encryption, Multi-Factor Authentication, Data Backup, Recovery, and Compliance. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Experience required |
| Pricing | Starts at $1,000 per month |
| Support Availability | Email and documentation |

MuleSoft

[MuleSoft](https://www.mulesoft.com/) is a data integration platform owned and operated by Salesforce. Therefore, it has excellent functionality when performing a Salesforce integration. MuleSoft offers no-code solutions to automate repetitive tasks with robotic process automation. Even better, MuleSoft can connect data from any system with ease and speed.

Pros:
- Reduces redundancies and increases productivity through automation
- Improves the customer service experience and reduces data silos
- Backed by Salesforce, so highly efficient for Salesforce integrations
- Great option for enterprise clients

Cons:
- Requires expert-level experience to implement and maintain
- Coding knowledge is required
- High prices could deter small companies

Pricing:
- The most basic tier of MuleSoft pricing begins at $80,000 per year

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | YES |
| Security | MuleSoft follows the strongest control criteria, both within the IT infrastructure and the production platform. MuleSoft complies with ISO 27001, SOC 1, SOC 2, PCI DSS, and HIPAA. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Experience required |
| Pricing | Starts at $80,000 per year |
| Support Availability | Email, documentation, and dedicated support staff |

TIBCO

[TIBCO Scribe](https://www.tibco.com/products/cloud-integration) facilitates Salesforce integrations through an intuitive drag-and-drop user interface. Users can quickly perform a no-code Salesforce integration and consequently streamline business processes. TIBCO Scribe also executes easy mapping transformations such as XML/JSON and XPath. Moreover, web service development (REST and SOAP) is made simple.

Pros:
- Trusted solution that has been around for 14 years
- Offers pre-built integration solutions
- Supports all integration styles, including:
  - APIs: RESTful, GraphQL, AsyncAPI, and gRPC
  - Event-driven integration flows
  - Messaging
  - File-based integration
  - Legacy integrations, including SOAP, XML, and mainframe

Cons:
- TIBCO Scribe is only effective for simple Salesforce integrations
- User reviews mention occasional glitches and errors that necessitate debugging
- Implementation experience required
- Minimal community support and documentation

Pricing:
- Free trial for 30 days
- The most basic plan is $400 per month, while the premium plan is $1,500 per month

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Offers several security features, including Role-Based Access Control, Encryption, Authentication, Audit Trails, Secure Messaging, and Threat Detection. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Experience required (for basic integrations) |
| Pricing | Free trial, followed by $400 per month for the basic plan and $1,500 for the premium plan |
| Support Availability | Email and documentation |

Apache Airflow

[Apache Airflow](https://airflow.apache.org/) is a tool that allows users to create, schedule, and oversee Salesforce workflows.
The Salesforce provider package uses directed acyclic graphs (DAGs) to define these workflows, making it possible to integrate with various systems from this single platform.

Pros:

- Free, open-source ETL tool
- Strong documentation and community support

Cons:

- No personalized customer support
- Time-consuming to use should any problems arise
- Not designed for bi-directional synchronization

Pricing:

- Free

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | NO |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO (for basic integrations only) |
| Security | Offers several security features, including Role-Based Access Control (RBAC), Authentication and Authorization, Transport Layer Security (TLS), Secret Management, Auditing, Logging, and Containerization. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Free |
| Support Availability | Community support and documentation |

Talend

[Talend Open Studio](https://ua.talend.com/products/talend-open-studio/) is an ETL tool that integrates and synchronizes Salesforce data with other applications. As an ETL tool, Talend extracts, transforms, and then loads data from one system to another. First, data is extracted from a variety of sources. During the transformation stage, the data is organized, cleaned, and sorted. It is then loaded into the appropriate destination, such as a data warehouse.
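The extract-transform-load flow just described can be sketched in a few lines of Python. This is a minimal illustration of the ETL pattern only, not Talend code; the record fields and cleanup rules are invented for the example, and a real job would read from Salesforce and write to a warehouse rather than Python lists.

```python
# Minimal sketch of the extract-transform-load (ETL) pattern.
# Field names and values are made up for illustration.

def extract():
    # Extract: pull raw records from a source system.
    return [
        {"Name": "  acme corp ", "AnnualRevenue": "1200000"},
        {"Name": "Globex", "AnnualRevenue": None},
    ]

def transform(records):
    # Transform: clean, normalize, and sort the raw records.
    cleaned = []
    for rec in records:
        cleaned.append({
            "Name": rec["Name"].strip().title(),
            "AnnualRevenue": int(rec["AnnualRevenue"] or 0),
        })
    return sorted(cleaned, key=lambda r: r["Name"])

def load(records, warehouse):
    # Load: write the transformed records to the destination.
    warehouse.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse[0]["Name"])  # prints "Acme Corp"
```

Tools like Talend implement the same three stages visually, with connectors standing in for the `extract` and `load` functions.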
Pros:

- Free, open-source tool
- Strong community support

Cons:

- Some basic coding knowledge required
- Limited customer service for the free version
- Not ideal for large data volumes
- Limited training documentation for administrators
- No bi-directional synchronization

Pricing:

- Free

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | NO |
| Automation Capabilities | YES |
| Coding Knowledge Required | YES |
| Security | Offers several security features, including User Authentication and Authorization, Encryption and Secure Data Transfer, Secure Password Management, Data Masking and Redaction, Role-Based Access Control (RBAC), and an Audit Trail. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required (for basic integrations) |
| Pricing | Free (open-source ETL tool) |
| Support Availability | Community support and documentation |

ZoomInfo

[ZoomInfo](https://www.zoominfo.com/) is a sales intelligence platform that houses a huge database of company and contact information. ZoomInfo can be integrated with Salesforce to enrich new leads, contacts, and accounts. It uses a bi-directional sync, so any change made in ZoomInfo or Salesforce is automatically reflected in the other platform.

Pros:

- Users can easily filter data from the Account, Contact, Lead, and Opportunity objects to set up helpful automated alerts and run improved searches
- Simple to implement

Cons:

- Cost might be prohibitive for smaller companies
- Some users find the user interface counterintuitive

Pricing:

- $14,995 per year for the most basic package

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Offers several security features, including Data Encryption, Secure Network Architecture, Access Controls, Third-Party Audits, Data Privacy, and Incident Response. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required |
| Pricing | Starts at $14,995 per year |
| Support Availability | Phone, email, and chat support |

Conga

[Conga](https://conga.com/) offers software tools for managing documents and contracts and for sales automation. It's quite simple to connect Conga with Salesforce via the automation solutions available on the AppExchange. As a result, users can easily automate document creation, contract management, and more.

Pros:

- Automation of revenue processes increases business efficiency
- Simple to perform the Conga Salesforce integration
- Automated document creation minimizes errors

Cons:

- Ease of use depends on project complexity
- Users report that customer service options are quite minimal
- Limited troubleshooting documentation and how-to videos
- The most basic support package doesn't offer much one-on-one time with an agent; users have to pay more for advanced customer support if needed

Pricing:

- Starts at $25 per month; however, the available tools are not combined in one bundle, so users have to purchase separate tools depending on their needs

| Criteria | Result |
| --- | --- |
| Bi-directional Synchronization | YES |
| Automation Capabilities | YES |
| Coding Knowledge Required | NO |
| Security | Offers several security features, including Secure Data Storage, Access Controls, Audit Trails, Compliance, and Two-Factor Authentication. |
| Cloud Storage Connectivity | YES |
| Required Implementation Experience | Minimal experience required (for basic integrations) |
| Pricing | Starts at $25 per month, but separate tools cost extra |
| Support Availability | Email and documentation |

Conclusion

Salesforce integration tools are essential for any data-driven organization. They enhance and sync data, helping organizations gain valuable data-driven insights. However, choosing the best Salesforce integration tool can be difficult: there is an overabundance of possible solutions on the market.
We have provided this list of top Salesforce integration tools to help you select the best one for your organization. We highly recommend Skyvia for its ease of use, low cost, and stellar customer support. Start your free trial of Skyvia Data Integration today.

By [Brenna Schewing](https://skyvia.com/blog/author/brenna-schewing/)

[Data Integration](https://skyvia.com/blog/category/data-integration/)

How to connect Intercom to Salesforce

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/) | February 16, 2023
Strong customer relationships are more important than ever for successful businesses of any size and scope. You can have the best products or services in the world, but if you don't treat your customers with the respect they deserve, they won't buy from you. Almost half (44%) of respondents to the [Harvard Business Review Analytic Services 2022 survey](https://www.intercom.com/resources/guides/future-proofing-businesses-with-modern-customer-engagement) say a lack of collaboration and siloed efforts are among the top impediments to successful customer engagement.

Simply put, if you want to improve your sales efficiency and increase revenue, you need to be good at building customer relationships, regardless of industry. And since communication plays a big part in sales, integrated CRM and customer communication tools are a must-have for connecting with customers. This article is a must-read for anyone who wants to get more out of business data. If you want to know about Intercom and Salesforce integration, you've come to the right place. In this guide, you'll learn two ways to integrate Intercom with Salesforce to help you better manage your customer relationships: manual and automatic.

Table of Contents

- About Intercom
- About Salesforce
- Challenges of Intercom and Salesforce Integration
- Benefits of Intercom and Salesforce Integration
- Integration via Intercom manual connection
- Integration via Skyvia
- Setting up Skyvia Import scenario for data integration
- Setting up Skyvia Data Flow scenario for data integration
- Conclusion
- More articles about Salesforce integrations for Sales and Marketing Teams

About Intercom

[Intercom](https://skyvia.com/connectors/intercom) is a leading provider of modern communication and customer engagement solutions for digital businesses.
It\u2019s an all-in-one customer communications platform trusted by sales, marketing, and support teams from 25k+ companies globally. Key features Reducing response times and resolving issues immediately with powerful AI chatbots and/or custom bots. Resolving customer inquiries with a fast and efficient inbox. Saving team time by solving customers\u2019 problems before they contact the business. Delivering conversational support seamlessly across various channels: messengers, emails, SMS, etc. Getting all of the tools, data, reporting, and apps working together. About Salesforce [Salesforce](https://skyvia.com/connectors/salesforce) is a popular CRM platform for any sized business: from small startups to large enterprises. Because of its ability to add custom fields, it\u2019s a great choice for any organization. This service gives a deep insight into every interaction between customers and businesses. Salesforce provides the full marketing stack to help companies manage their customers and marketing channels. Key features Opportunity Management. Get a holistic picture of all key information for sale, including the stage, products, and other important details. Account management. The service tracks all customer interactions (via email, chat, and phone) for future reference. Easy-to-use interface. It has a drag-and-drop feature to automate business processes visually. Challenges of Intercom and Salesforce Integration Data integration can be an uphill battle. There are always new challenges to overcome; however, some are unchanged. Manual data entry processes . It\u2019s important to have all the project/customer/account data unified and in one place. However, implementing the manual processes increases the chance of mistakes, complicates communication, and overall is time-consuming. Additionally, they lead to miss-placed orders, revenue discrepancies, and lengthy periods closed. Lack of data visibility between departments . 
If there\u2019s poor data visibility between departments, its impact will be felt, leading to more missed orders, higher lost sales, and increased customer dissatisfaction. Whereas unified and transparent data helps with building a successful communication strategy, eliminating mistakes, and allowing the team to build trust and cohesion. Benefits of Intercom and Salesforce Integration The Salesforce Intercom integration helps companies meet their sales goals, customer service expectations and increases productivity by providing a powerful lead management solution. This integration allows you to access all the relevant financial and operational data faster, in one place, and without doubts about data quality. Overall, the end-users benefit from making informed business decisions and improving performance. Unifying business data . Many businesses have a huge amount of data that needs updating not from time to time but in almost real-time. Integrating Salesforce and Intercom allows for seamless communication and data transfer between the two platforms, reducing the need for manual data entry and minimizing errors. By combining data, businesses also gain a complete view of their customers\u2019 insights, including their interactions with the company, purchase history, and support requests. Better tracking and reporting . With data from multiple sources, tracking and analyzing customer behavior and performance becomes much easier, allowing teams to make more informed decisions and improve their overall strategy. Increased conversion rate, sales, and customer satisfaction . Using the combined data, sales and customer service teams can identify and prioritize the most valuable and risky cases, resolve customer issues faster, and improve customer satisfaction. Integration via Intercom manual connection Intercom offers [native integration with Salesforce](https://www.intercom.com/integrations/salesforce) with the help of its built-in application. 
It can be found in the Intercom App Store within the Sales category.

Note! This app cannot be installed from the Salesforce AppExchange.

The native integration allows you to:

- [Keep your data in sync with regular updates in the direction you choose: Intercom to Salesforce or Salesforce to Intercom](https://www.intercom.com/help/en/articles/4497943-install-and-configure-the-salesforce-app#h_b3af4ce8c8).
- [Automatically assign conversations to the correct lead owner in Intercom](https://www.intercom.com/help/en/articles/4497943-install-and-configure-the-salesforce-app#h_df079f4292).
- [See the latest Salesforce data in your inbox and in contacts and use it to target and filter content](https://www.intercom.com/help/en/articles/4497943-install-and-configure-the-salesforce-app#h_df079f4292).
- [See basic live data from within Salesforce (separate from mapped attributes)](https://www.intercom.com/help/en/articles/5203865-using-the-salesforce-app-in-intercom).
- [Automatically log conversations, notes, and messages as Salesforce tasks, so you have a complete interaction history with each lead/contact](https://www.intercom.com/help/en/articles/4497943-install-and-configure-the-salesforce-app#h_df079f4292).
- [Create cases in Salesforce from Intercom conversations](https://www.intercom.com/help/en/articles/4497943-install-and-configure-the-salesforce-app#h_df079f4292).

Prerequisites

Before starting the integration, the user needs the following accounts and permissions:

- Admin permissions for both services: Salesforce and Intercom.
- API access for the Salesforce account (this requires either the Enterprise or Unlimited edition).

Intercom also requires access to the following Salesforce objects and their fields:

- Lead (Email, LastName, FirstName, Company, Phone, Status (aka Lead Status), LeadSource, OwnerId).
- Contact (Email).
- Account (OwnerId).
- Task (Description (aka Comments), CreatedById (aka Created By), ActivityDate (aka Due Date), Subject, Type, Status, Priority, OwnerId (aka Assigned To)).

Integration process

1. Install the app from the App Store and authorize it with your Salesforce account.

Note! You can test the integration by installing the app into a test workspace and the Salesforce Sandbox environment. However, this is only available on certain Intercom pricing plans.

2. Although the data is mapped by default after installation and is ready for synchronizing, it's better to configure data mapping preferences manually. That way, you can avoid errors and discrepancies in the data flow. The app also allows you to define the direction of the sync process (from Intercom to Salesforce or from Salesforce to Intercom).

Note! Before data mapping, read [How is data synced between Intercom and Salesforce](https://www.intercom.com/help/en/articles/1047665-salesforce-integration-troubleshooting-and-f-a-q#h_ee50436db6) to be aware of the important details.

Important! In case of duplicated objects, Intercom Users are prioritized over Intercom Leads. In Salesforce, Contacts are prioritized over Leads. When there are two matching users or two matching leads, the most recently updated one is matched.

3. When you've finished the mapping process, the application starts data synchronization. This process takes time; the delay may be up to 5 minutes.

Though this native integration method is easy to set up, as it's part of the Intercom service, it has its limits. Among the problems you may encounter are the following:

- The relationship between Lead/User and Company in Intercom and the relationship between Lead/Contact and Account in Salesforce doesn't sync. This [relationship needs to be managed on the respective platforms separately](https://www.intercom.com/help/en/articles/1047665-salesforce-integration-troubleshooting-and-f-a-q).
- Intercom (via the native integration app) handles updating Salesforce tasks for Intercom conversations in two ways: automatically or manually. When done automatically, the task is updated when the conversation is first created and when it's closed. When done manually, Intercom only sends a snippet of the conversation in the task and includes a link to the conversation if you want to read more. If you want [the full conversation in Salesforce, this isn't possible with the manual method](https://forum.intercom.com/s/question/0D52G00004tQVI4SAO/can-i-create-salesforce-tasks-from-intercom-conversations-automatically).

Integration via Skyvia

[Skyvia](https://skyvia.com/) makes your data work for you by offering a wide variety of easy-to-use data solutions. The service is a cloud-based platform that connects to local and cloud-based applications, so you can create [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) solutions. But it's more than that: it also helps you analyze, process, and visualize data. Skyvia's cloud-based solution provides seamless integration and maintenance and helps you manage and scale data at any load.

Key Skyvia advantages

- The platform provides rich functionality and wide opportunities for implementing complex integrations.
- The service implements complex logic, applying parallel processing, error interception, etc.
- No deployment or installation is needed, as the service is cloud-based and browser-based.
- The interface is user-friendly and convenient. Business users can configure data flows visually with the no-code wizard.
- Skyvia has an advantage when it comes to debugging workflows. With its Data Flow and Control Flow features, users can observe task execution in real time, monitor the data count, and get detailed information about errors.
In this article, we show how to easily and quickly configure automated data integration (as a periodic update) between Salesforce and Intercom with the help of Skyvia. Since the Skyvia platform is very flexible and has many capabilities, we demonstrate several possible scenarios for data integration.

Business case description

- Sales and customer support alignment. By integrating Salesforce and Intercom, sales and customer support teams can easily access and share customer information, leading to more efficient and effective communication and collaboration.
- An up-to-date, unified data source for the sales and support teams. It's a cost-effective practice that reduces the need for searching and cleaning data about Leads, Users/Contacts, and Companies/Accounts.
- Creation of Salesforce Tasks/Cases from Intercom Conversations. Sending conversations automatically makes it easy to keep track of your customers' activity.
- Data analysis and reporting. Integrating Salesforce and Intercom enables businesses to analyze and report on customer interactions and data, helping them identify trends, improve processes, and make data-driven decisions.

Before implementing the integration scenarios, establish connections between the two services and Skyvia.

Note! You must [create a Skyvia account](https://app.skyvia.com/login) if you don't have one yet. You can also try Skyvia with a free trial.

First, create connections between [Intercom and Skyvia](https://docs.skyvia.com/connectors/cloud-sources/intercom_connections.html) as well as between [Salesforce and Skyvia](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html):

1. Sign in to Skyvia and click +NEW in the top menu, then click the Connection button in the left menu.
2. In the opened window, select the service you want to connect: in this case, either Salesforce or Intercom.
3. Sign in to the chosen service, enter the credentials for your account, and create the connection.
Note! Always choose a unique name for your new connection, since the default name is always Untitled.

Note! More details on setting up [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Intercom](https://docs.skyvia.com/connectors/cloud-sources/intercom_connections.html) connections can be found in the Skyvia documentation.

Setting up Skyvia Import scenario for data integration

This simple scenario is designed for data integration between two data sources. In this case, we import Leads and Contacts from Salesforce to Intercom and back.

1. Click +NEW in the top menu and select Import in the Integration category.
2. In the opened window, you can find detailed instructions on the right and the workspace with the Source and Target data flows on the left. Select the data connections (those created in the step above).
3. Define the Source Type and apply additional options if needed.
4. Use the Add new button on the right to create a new data import task.
5. Specify the Source settings. Select a source file, cloud app object, or database table/view to import data from and, if necessary, configure data filtering. Alternatively, you can select the Execute Command or Execute Query action to query data (in advanced mode).
6. Specify the Target settings. Select the target object (table) or multiple objects (tables) to import data to and the operation to apply when loading data.
7. Configure the mapping of target columns to source columns, expressions, constants, lookups, etc.
8. Schedule the Import package for automatic execution by clicking Schedule in the top left menu.
9. Once you've finished setting up your Import package, click Create to continue. Now you can run this package and get your data integrated.

Important! In the mapping table, there is a required field, LastName, that has no corresponding field in the Intercom Lead object. The solution is to use Skyvia's mapping by expression.
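To make the idea of such an expression mapping concrete, the logic can be illustrated in plain Python. Note that this only mirrors the idea (derive a required LastName value from the data that is available); it is not Skyvia's actual expression syntax, and the Intercom `name` field and the `"Unknown"` fallback are assumptions made for the example.

```python
# Illustration only: mirrors the LOGIC of an expression mapping,
# not Skyvia's expression syntax. The "name" field and the
# "Unknown" fallback are assumptions for this example.

def derive_last_name(intercom_record):
    """Derive a Salesforce-style LastName from an Intercom lead's full name."""
    full_name = (intercom_record.get("name") or "").strip()
    if not full_name:
        return "Unknown"  # a required target field must never be empty
    # Take the final whitespace-separated token as the last name.
    return full_name.split()[-1]

print(derive_last_name({"name": "Ada Lovelace"}))  # prints "Lovelace"
print(derive_last_name({"name": ""}))              # prints "Unknown"
```

In Skyvia itself, you would express this as an expression on the target LastName column instead of mapping it to a source column directly.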
Advantages of the Import integration scenario

The Import scenario works best for companies that don't need to update data very often and can select the required amount of data for migration using filters. However, it isn't designed for keeping data continuously up to date: it's a simple loading process, and each subsequent run overwrites the previously loaded data. If this integration scenario isn't enough, Skyvia has [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) packages for more advanced integrations.

Setting up Skyvia Data Flow scenario for data integration

This complex integration scenario is designed to help businesses of any size integrate multiple data sources. Data Flow is a powerful, easy-to-use solution that helps manage all data transformations in a single unified location. It can synchronize information about Leads, Users/Contacts, and Companies/Accounts, create Salesforce Tasks/Cases from Intercom Conversations, and handle other scenarios with more complex logic.

Note! When you open Data Flow for the first time, it offers a Fast Guide to the main features, which is very useful for newcomers.

In this case, we show how to update Salesforce data (the Lead table) by enriching it with data from Intercom (the Lead table).

1. Add the first component to the diagram: Source. (All components can be added to your data flow by dragging them from the components list in the left-side menu.)
2. Configure the Source details: select an Intercom connection to get data from, select and configure the action to perform to obtain data, and map its parameters if needed.
3. Add the Lookup block to match input records with records from another data source and add columns of the matched records to the scope. Specify the Behavior, Scope, and Property fields and select a Salesforce connection to get data from.
4. Add a destination point for your updated data: the Target block. It stores input records in a data source and writes data to logs. Configure it the same way as the Source component (a connection to load data into, actions to perform, and parameters if needed).
5. Add Row Count components to the data flow to count the input rows and write the result to a data flow variable.
6. Click on the top menu and add Variables to hold the row counts, then select the relevant variable in each Row Count block.
7. Add the Target block again to write data to a log. Connect the necessary component output to it and select Log as its Connection. The only action available for the Log connection is Write, so select it in the Action list.
8. Select the log Table to write data to and map its columns to the input columns using the Mapping Editor.

Note! You can download the logs by opening the run details on the data flow Monitor or Log tab, selecting the necessary run, and using the Download link near the corresponding log name in History Details.

9. Schedule the package for automatic execution by clicking Schedule in the top left menu.
10. Once you've finished setting up your Data Flow package, click Save to continue. Now you can run this package and get your data integrated.

Advantages of the Data Flow scenario

The Data Flow integration scenario is well-suited for companies that need constant and regular data updates. In conjunction with Control Flow, it allows complex automation of business processes across many tools and platforms. Using Data Flow, you can customize how Lead/User/Contact/Account data is loaded depending on different parameters and use multiple final destinations for the data, such as CRM and marketing tools, simultaneously.

Conclusion

With the growth of Internet sales, there have been tremendous opportunities for businesses to develop strong customer connections online.
However, the scale and nature of online business, combined with rising customer expectations, make it very difficult to build and maintain those relationships. Businesses need to be able to track the performance of every marketing, sales, and support activity. This allows companies to optimize their marketing efforts, increase customer satisfaction, and make better sales decisions. Salesforce and Intercom are tools that let you collect valuable data about your clients and then use that information to increase your sales and market share.

This Intercom Salesforce integration guide covers both native integration and seamless integration with Skyvia. We also demonstrate real-life examples of solving business cases using Skyvia's integration scenarios. You can try out Skyvia to connect your business data from anywhere right now!

More articles about Salesforce integrations for Sales and Marketing Teams

- [How to connect Hubspot and Salesforce easily](https://skyvia.com/blog/hubspot-salesforce-integration)
- [Asana and Salesforce Integration: 2 easiest methods](https://skyvia.com/blog/asana-salesforce-integration)
- [How to connect Shopify and Salesforce](https://skyvia.com/blog/shopify-salesforce-integration)
- [Salesforce QuickBooks Integration: 3 Different Ways](https://skyvia.com/blog/salesforce-quickbooks-integration/)
- [Simple ways to connect Jira and Salesforce](https://skyvia.com/blog/jira-salesforce-integration)

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)

[Data Loader](https://skyvia.com/blog/category/data-loader/)

Top 5 Salesforce Query Tools in 2025

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | April 10, 2025

Salesforce is a mega-popular CRM due to its high degree of customizability, which makes it suitable for complementing customer records with specific details. But how do you avoid getting lost in the ocean of customer data? You need a Salesforce query tool to get the necessary information quickly. Such tools help you extract Salesforce data based on specific criteria. For instance, you might need a list of clients who haven't made any purchases in the last six months so you can send them a promotional email.
Or, you\u2019d like to find all Accounts with no associated Contacts with addresses. A Salesforce query tool can also help you calculate an average deal size for clients from the selected industry. In fact, Salesforce query tools carry out many other significant functions that organizations can benefit from. This article lists efficient solutions for querying Salesforce data and explains their functionality in detail. Table of Contents Understanding Salesforce Query Tools Top 5 Salesforce Query Tools in 2025 Finding the Right Salesforce Query Tool for Your Requirements Salesforce Query Tools: Final Thoughts Understanding Salesforce Query Tools What is a Salesforce Query Tool? A Salesforce query tool is usually a standalone software application designed for data query and export. However, it can also be a part of another, more sophisticated platform. For instance, [Workbench for Salesforce](https://skyvia.com/blog/salesforce-workbench-alternatives/) has a query module along with metadata management and API exploring features. Another embedded solution example is [Skyvia Query](https://skyvia.com/query) , which is a part of the data management platform. Depending on the tool, [SQL or SOQL](https://skyvia.com/blog/soql-vs-sql-best-practices-to-query-salesforce-database/) query language is used to interact with Salesforce data. SOQL is a native Salesforce query API, primarily designed to work with Salesforce data. A commonly used SOQL query is similar to the following structure: SELECT list_of_fields [subquery]\nFROM object_name\n[WHERE condition_expression]\n[GROUP BY list_of_fields]\n [HAVING condition_expression] \n[ORDER BY list_of_fields {ASC|DESC} [NULLS {FIRST|LAST}] ]\n[LIMIT count_of_rows_to_return]\n[OFFSET count_of_rows_to_ignore] SOQL allows you to create an advanced query with joins, conditions using LIKE, etc. 
Modern Salesforce query tools also provide visual Salesforce query builders, where you can drag and drop Salesforce fields and specify filtering options. The query results are available for viewing and downloading.

Why You Need Salesforce Query Tools

Data stored in Salesforce doesn't bring much value when standing still. It's not enough just to accumulate more and more information about customers. Data needs to work for you and reveal insights for your business development. Querying Salesforce data helps to extract value from it in the following ways:

- Perform complex reporting and analysis.
- Take advantage of filtering capabilities that go beyond the standard ones in Salesforce.
- Explore relationships between your data.
- Get an overview of data from different objects' perspectives.
- Ensure enhanced data management.
- Ask questions and get exact answers to them: What types of customers buy your products? Where are most of your clients located? What is the best-performing source of new leads? How much money was earned over a certain period of time?

For instance, you can select the clients that match a certain criterion indicated in the WHERE clause, or even mass-update them:

```sql
UPDATE Contact
SET FirstName = 'John'
WHERE CustomerCity = 'Buenos Aires'
```

You can also use query tools to group Salesforce records, order them, and perform a range of other operations on data. Such solutions provide a number of benefits:

- Efficiency. Queries save a lot of time in gathering the required information and enhance your data-related workflows.
- Accuracy. Salesforce query tools let you specify filters, increasing the accuracy and precision of search results.
- Clear vision. Formulate specific queries related to your customers, leads, or deals and analyze the obtained query results.

Top 5 Salesforce Query Tools in 2025

We are here to reveal effective tools for querying Salesforce data.
Some rely on standard SQL, which is widespread in the tech world. Others rely on SOQL, which is typical within Salesforce-centered environments.

Tool 1: Skyvia

[Skyvia](https://skyvia.com/) is a universal cloud data platform used for a wide variety of data-related tasks:

- Data querying
- Data integration
- Workflow automation
- OData and SQL endpoint creation
- SaaS backup

Let's focus on the [Skyvia Query](https://skyvia.com/query) product, an online SQL client and visual query builder. Skyvia Query allows users to:

- Write SQL statements in the query editor or use drag-and-drop functionality to build requests in a visual Salesforce query builder.
- View the query results in the browser and export them to a PDF or CSV file.
- Perform [mass updates](https://skyvia.com/blog/mass-update-salesforce-records/) of Salesforce data.
- Query data from 200+ other popular applications and databases.
- Obtain the query results directly in spreadsheet programs with an [Excel add-in](https://skyvia.com/excel-add-in) or [Google Sheets add-on](https://skyvia.com/google-sheets-addon). That way, you can instantly apply reporting and analytics functions in Excel or Google Sheets and share the obtained results with stakeholders.

Pros:
1. Visual query editor
2. SQL language support
3. Results are available for download in CSV or PDF
4. Query Salesforce data directly in Excel or Google Sheets

Cons:
1. Only up to 5 queries per day are available for free

As you can see, Skyvia is an easy-to-use Salesforce query tool that enables data extraction even without SOQL or SQL skills. This can be a great advantage for anyone interacting with Salesforce data. Try how Skyvia Query works for free. Given that Skyvia is a universal cloud data platform, you can also perform a range of other data-related tasks, such as:

- [Enrich Salesforce data](https://skyvia.com/blog/best-crm-data-enrichment-tools/) by integrating additional details about clients from datasets or other tools.
- [Back up Salesforce data](https://skyvia.com/blog/salesforce-backup-solutions-and-tools/) to ensure its integrity.
- [Send your Salesforce data to a data warehouse](https://skyvia.com/blog/data-warehouse-best-practices/) and combine it with data from other tools to perform advanced analytics.

When to Use It Best

Skyvia Query is best for database professionals who know SQL but don't know Salesforce SOQL. Thanks to its visual query builder, it also suits people who know neither of the languages. It's a great tool when you need to quickly create a small report and view it in a browser or export a larger report to a CSV file. Additionally, Skyvia Query is great for performing mass updates or deletes using SQL DML statements. Skyvia does not require specifying Salesforce IDs for mass updates; you can specify SQL WHERE conditions to select the records for updating or deleting. The third optimal scenario is querying Salesforce data directly from Excel or Google Sheets via the corresponding add-ons.

Tool 2: Salesforce Developer Console

Salesforce Developer Console is a native Salesforce development environment that includes a number of tools, including the Salesforce [Query Editor](https://help.salesforce.com/s/articleView?id=platform.code_dev_console_tab_query_editor.htm&language=en_US&type=5). It allows users to run SOQL queries and SOSL searches from the browser. Unlike the other tools discussed here, it does not allow visual query configuration and requires you to enter queries manually. With Salesforce Developer Console Query Editor, you can:

- Run SOQL queries
- Run SOSL searches
- View the query results in the browser
- Edit the query results in the browser

Pros:
1. Native Salesforce solution, always available
2. Allows viewing and editing data in the browser
3. Supports SOSL
4. Free

Cons:
1. Requires knowledge of SOQL or SOSL
2. Has no visual query configuration

When to Use It Best

Salesforce Developer Console is available to all Salesforce users with sufficient privileges. As its name suggests, it is intended for developers. This tool requires SOQL and/or SOSL knowledge and does not provide a visual query builder or any other means of visual query configuration. It is useful when you want not only to execute queries against Salesforce but also to immediately edit their results. Note that you need to include record IDs in the query in order to edit data. This tool also does not provide a way to export results to a CSV file, so it isn't suitable for generating CSV reports.

Tool 3: SOQL (Salesforce Object Query Language) Builder

[SOQL Builder](https://marketplace.visualstudio.com/items?itemName=salesforce.salesforcedx-vscode-expanded) is a tool from the Salesforce Extension Pack. Developers and administrators use it to build queries in a visual editor. The query results can be saved to a CSV or JSON file.

Pros:
1. Native Salesforce solution
2. Visual query editor
3. Query results can be downloaded

Cons:
1. Requires knowledge of SOQL
2. A maximum of 2,000 records can be retrieved per SOQL query

When to Use It Best

SOQL Builder is intended more for developers, as it is part of a Visual Studio Code extension pack. People with or without SOQL knowledge can use it, as it allows visual query configuration, but only for simpler single-object queries. Because of the tool's limitations, you can use it to create short CSV reports of up to 2,000 records.

Tool 4: Workbench

[Workbench](https://workbench.developerforce.com/login.php) is a web-based Salesforce query tool suitable for administrators, developers, and business analysts. It's also a popular solution for exploring Salesforce metadata, resetting user passwords in the organizational account, and working with Force.com APIs.
Workbench uses the [SOQL query language](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm), which is similar to standard SQL but with some limitations on the SELECT statement. It's also possible to build queries in the visual builder using filtering and sorting parameters.

Pros:
1. Has a number of additional functions
2. Free
3. Visual query builder

Cons:
1. The latest version of this service can be unstable

When to Use It Best

People with and without SOQL knowledge can use Workbench. It can present results in the browser and export them to CSV and XML. While most tools described here can export to CSV, Workbench is especially useful when you need to export your reports to XML. Additionally, Workbench can be used to modify data. It can perform single-record or mass updates/deletes, but you must specify the corresponding data in a CSV file. Workbench can also be used to import data from CSV files.

Tool 5: Data Loader

[Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader) is a Windows desktop application available for download on the official Salesforce portals. It's designed by Salesforce to help users create, update, and delete records in bulk with a visual wizard instead of SOQL queries. Salesforce Data Loader allows you to select specific data and download it via the visual wizard. Just click the Export button, select the Salesforce object and fields, apply filters if needed, and there you go!

Pros:
1. Salesforce native solution
2. Has a visual query builder
3. Free

Cons:
1. Available only for Windows users

When to Use It Best

You can use Salesforce Data Loader if you prefer a locally installed solution to a cloud one. This tool is oriented more toward importing and exporting CSV files to and from Salesforce objects, but it also allows exporting the results of a custom query.
Salesforce Data Loader lets you configure queries visually, so SOQL knowledge is not required. But if you are a SOQL professional, you can edit the configured query.

Finding the Right Salesforce Query Tool for Your Requirements

The table below can help you decide which Salesforce query tool would suit you best.

| Tool | Query language | Query builder | Export of query results |
| --- | --- | --- | --- |
| Skyvia | SQL | Yes | PDF or CSV files; additionally, you can query Salesforce data directly in Google Sheets and Excel |
| Salesforce Developer Console | SOQL, SOSL | No | Text, XML, JavaScript, or CSS |
| SOQL Builder | SOQL | Yes | CSV or JSON file |
| Workbench | SOQL | Yes | CSV or XML file |
| Data Loader | No querying | Yes | CSV file |

SOQL Builder and Workbench can perform specific queries to extract the data of interest. All requests are made in the SOQL language; if you know the specifics of this query language, these tools might be a good choice. Data Loader isn't a standard query tool, though it allows users to select Salesforce data via a visual builder, apply filtering, and export it to a CSV file. This tool is convenient to use, though it's suitable only for Windows computers. Skyvia combines most of the features and advantages of the above-mentioned tools and adds other valuable ones. You can construct queries in a visual builder not only for Salesforce data but also for other popular apps and databases. At the same time, Skyvia can enrich Salesforce data with additional details about your clients, back it up, and integrate it with other sources.

Salesforce Query Tools: Final Thoughts

Building and maintaining customer relationships is crucial for any business. Salesforce query tools are among those solutions that help companies manage and improve their understanding of target audiences, sales lifecycles, and other aspects. Skyvia allows you not only to query Salesforce data but also to perform a range of other operations on it.
In fact, Skyvia is an all-in-one platform suitable for multiple data-related operations. You can try it all now with the [free version of Skyvia](https://app.skyvia.com/register?) and upgrade at any time.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/): With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
Salesforce and QuickBooks Integration: 4 Methods & Best Practices

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) | January 14, 2025

Summary

- Salesforce and QuickBooks serve different business purposes and have distinct data structures, creating challenges for data exchange between sales and accounting teams.
- Salesforce QuickBooks integration enables automated data exchange, reducing manual data entry and improving accuracy and efficiency.
- There are four primary methods for Salesforce QuickBooks integration: third-party tools, custom integration, middleware platforms, and manual import/export.
- Benefits of integration include improved accuracy, efficiency, collaboration, customization, scalability, and a unified source of truth for business data.
- Choosing the right integration method depends on business needs, costs, complexity, and the size of the organization.

CRM and accounting services are designed for different purposes and solve various business tasks. Such software flagships as Salesforce for sales and QuickBooks for accounting have completely different data structures but are often used by different teams in the same business. These differences create additional difficulties for data exchange between sales and accounting teams. Salesforce QuickBooks integration is a way to establish a smooth handshake between these systems. It helps to build a solid bridge for data exchange and collaboration between sales and accounting teams. In this article, you will learn what QuickBooks and Salesforce integration is, what benefits it brings to businesses, and how to set the integration up. There are four available methods of Salesforce QuickBooks integration: third-party tools, middleware platforms, custom integration, and manual export/import. We will explore all of them, compare their features, and show you several examples of real data integration scenarios.

Table of Contents
- What is Salesforce QuickBooks Integration?
- Benefits of Salesforce QuickBooks Integration
- Methods of Salesforce QuickBooks Integration
  - Method 1: Using Third-Party Integration Tools
    - Top 5 Third-Party Tools for Salesforce QuickBooks Integration
    - Pros and Cons of Using Third-Party Integration Tools
    - How to Integrate Salesforce with QuickBooks Using Skyvia Import
  - Method 2: Custom Integration Development
    - REST API Integration
    - Webhooks and Triggers
    - Pros and Cons of Using Custom Development
  - Method 3: Middleware Platforms
    - Top 5 Middleware Solutions for Salesforce QuickBooks Integration
    - Pros and Cons of Using Middleware Platforms
    - How to Integrate Salesforce with QuickBooks Using Skyvia Data Flow
  - Method 4: Manual CSV Import/Export
    - Export Data from QuickBooks
    - Import CSV into Salesforce
    - Pros and Cons of Manual CSV Import and Export
- Best Practices for Salesforce QuickBooks Integration
- Conclusion

What is Salesforce QuickBooks Integration?

Accounting, sales, and other teams use Salesforce and QuickBooks to automate business processes and resolve day-to-day business tasks. [Salesforce](https://www.salesforce.com/) is a cloud CRM platform for sales, marketing, analytics, and reporting. Its solutions cover the most critical business needs in one place. [QuickBooks](https://quickbooks.intuit.com/) is an accounting platform for managing and controlling finance-related processes. It helps to monitor company cash flow, control profit and expenses, manage analytics and reporting, and assist with critical financial decision-making. Integration is the process of connecting Salesforce and QuickBooks to establish a smooth exchange of data between sales and accounting teams. Integrating the two platforms eliminates manual data entry by automating processes. For example, you can transfer Salesforce opportunities, accounts, or contacts to QuickBooks for billing and invoicing, or load QuickBooks invoices, payments, or customer records back into Salesforce for sales and customer management.
Benefits of Salesforce QuickBooks Integration

Salesforce QuickBooks integration helps companies handle various data-related tasks. Here is a list of the benefits of integrating these two services.

Accuracy

Running two separate systems requires constant manual reconciliation and updates. Manual operations may cause errors such as duplications, mismatched information, or data discrepancies. Salesforce and QuickBooks sync helps to avoid human errors by automating routine processes and eliminating manual data entry, providing consistent data exchange across both platforms.

Efficiency

Integration can perform such routine tasks as generating invoices, updating payment statuses, or syncing account balances. This saves time and resources, allows employees to focus on high-value activities, and helps to avoid unwanted delays in processes.

Collaboration

Data integration ensures better communication and transparency of data exchange between Salesforce and QuickBooks. For example, sales teams gain direct access to financial data, such as outstanding balances or payment history, from within Salesforce, while accounting teams can process sales in near real time.

Customization

Integration enables setting up complex logical data flows across platforms, so businesses can tailor the integration to their specific needs. For example, a company may load only data matching certain conditions or execute different integration tasks in a particular order.

Scalability

Integration grows with your business, handling increasing data volumes and more complex workflows without requiring manual updates.

Single Source of Truth

Unified data from Salesforce and QuickBooks provides comprehensive insights into customer behavior and financial performance. Companies can provide personalized and informed service by having all customer interactions and financial data in one place.
Integration also enables centralized reporting and analytics, providing comprehensive information.

Methods of Salesforce QuickBooks Integration

There are several methods of Salesforce QuickBooks integration, and choosing the right one takes work. To make a reasonable decision, you have to define the business needs, evaluate the costs of integration, and analyze the existing integration methods and tools. We compared the existing integration methods by complexity, cost, scalability, and business size.

| Method | Ease of use | Cost | Scalability | Business size |
| --- | --- | --- | --- | --- |
| Third-Party Services | High | Moderate | High | SMBs, Enterprises |
| Custom API Integrations | Low | High | High | Enterprises |
| Middleware Platforms | Moderate | High | High | Enterprises |
| Manual Import/Export | High | Low | Low | Small businesses |

We will describe each method in detail below.

Method 1: Using Third-Party Integration Tools

Third-party tools are one of the most popular methods for integrating QuickBooks and Salesforce. Most of them require no coding and provide wizard-based interfaces. The [QuickBooks App Store](https://quickbooks.intuit.com/app/apps/search?searchTerm=Salesforce&queryID=48ffc276998b1452a76f6748224bbe20&filterLanguage=English&filterRating=0) and [Salesforce AppExchange](https://appexchange.salesforce.com/appxSearchKeywordResults?keywords=QuickBooks) are marketplaces that may become starting points for exploring existing [data integration tools](https://skyvia.com/blog/data-integration-tools/). Below, we describe five tools for Salesforce QuickBooks integration.

Best for: Small to medium businesses whose use cases are not covered by built-in data integration tools.

Top 5 Third-Party Tools for Salesforce QuickBooks Integration

There is a variety of third-party tools that help to sync QuickBooks and Salesforce. We chose the top 5 solutions and explored their features, advantages, and disadvantages.
Skyvia (Import)

[Skyvia](https://skyvia.com/) is a versatile, no-code, cloud-based integration platform that connects Salesforce with QuickBooks Online or Desktop. It can solve a range of data-related tasks and supports various cloud apps and databases. Skyvia [Import](https://skyvia.com/data-integration/import) is a wizard-based ETL tool that provides one-way data loading from a data source to a cloud app or database, applying data transformations via mapping capabilities.

Best suited for: SMBs to enterprise businesses that need an affordable, no-code platform for Salesforce, QuickBooks, and other app integrations. Skyvia supports both QuickBooks Online and Desktop and offers powerful data transformation capabilities.

Features
- No-code integration for Salesforce and QuickBooks Online and Desktop.
- Wizard-based and designer-based tools.
- Real-time and scheduled integration options.

Pros
- Easy to use; requires minimal technical expertise.
- Flexible scheduling options provide near real-time integration.
- Multi-purpose tool for different data integration scenarios.
- Supports both QuickBooks Online and Desktop, plus 200+ other cloud apps and databases.
- Advanced transformation features.

Cons
- Advanced scenarios and customizations may require basic SQL knowledge to build complex queries and expressions or write custom SQL commands.

Pricing
Free for basic tasks. Paid plans start at $19/month for small data volumes, with scaling based on features and usage.

DBSync

[DBSync](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N300000016bTHEAY&tab=d) is a cloud-based solution for Salesforce and QuickBooks integration. It supports both QuickBooks environments: Online and Desktop. With DBSync, you can sync data like customer information, invoices, and payments.
Best suited for: Mid-sized to enterprise businesses looking for advanced integration capabilities such as bi-directional sync and advanced customization features.

Features
- Pre-built integration templates.
- Real-time and scheduled data sync.
- Bidirectional integration.
- QuickBooks Online and Desktop support.

Pros
- Flexible and customizable integration templates.
- QuickBooks Desktop support.
- Real-time updates.

Cons
- Steep learning curve for non-technical users.
- Essential limitations on the free plan.
- Customized integrations require setup and maintenance effort.
- Advanced features like real-time integration are available only on the Enterprise plan.

Pricing
Price varies depending on the complexity of workflows, available features, and data volume. A free plan is available, with limits of up to 10,000 records and three integrated objects.

Breadwinner

[Breadwinner](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N30000000q7fhEAA) is a native Salesforce tool designed to integrate Salesforce with financial services, such as QuickBooks Online and others.

Best suited for: Small to medium-sized businesses working in QuickBooks Online that are looking for real-time financial data sync to operate simple, direct accounting workflows without additional complexity.

Features
- Real-time sync of financial data such as invoices, payments, and customer data.
- Support for multi-currency and multi-entity setups.
- Automatic invoice generation and updates based on Salesforce actions.

Pros
- Easy to install and use with minimal configuration.
- Fully embedded in Salesforce.
- Real-time sync.

Cons
- QuickBooks Desktop is not supported.
- Limited customization opportunities.
- A limited number of integrated objects.

Pricing
Pricing is based on user count and additional features. The service offers a free trial.

Zapier

[Zapier](https://zapier.com/) is a no-code integration and automation solution that connects Salesforce and QuickBooks.
Best suited for: Small businesses setting up basic integration workflows or trigger-based automations between Salesforce, QuickBooks, and other apps.

Features
- Automates specific data-related tasks, like creating invoices in QuickBooks from Salesforce opportunities.
- Pre-built workflows for common integration use cases.
- Connects with thousands of other apps alongside Salesforce and QuickBooks.

Pros
- Affordable and accessible for small businesses.
- User-friendly interface with a drag-and-drop workflow designer.
- Pre-built templates gallery.

Cons
- Limited capabilities for enterprise businesses.

Pricing
Zapier offers a free plan for basic workflows. Paid plans start at $19.99/month.

Workato

[Workato](https://www.workato.com/) is a cloud solution for data integration and business process automation.

Best suited for: Medium to large enterprises planning to integrate multiple systems and handle enterprise-grade processes using advanced automation features and scalability for complex workflows.

Features
- Pre-built templates called recipes.
- Advanced data transformation features.
- Multiple supported connectors.

Pros
- Scalability and customization.
- Automation capabilities.
- Support for both QuickBooks Online and Desktop environments.

Cons
- Steep learning curve.
- Expensive for small businesses.

Pricing
Starts at $10,000/year, depending on usage and features.

To summarize, here is a table comparing all the mentioned tools by different criteria.
| Tool | Supported versions | Customization | Cost | Ease of use | Best suited for |
| --- | --- | --- | --- | --- | --- |
| Skyvia | QuickBooks Online & Desktop | High | Free plan; paid plans vary by product and start from $79/month | High | Small to enterprise businesses |
| DBSync | QuickBooks Online & Desktop | High | Contact sales for pricing | Moderate | Mid-sized to enterprise businesses |
| Breadwinner | QuickBooks Online | Moderate | Contact sales for pricing | High | Small to medium-sized businesses |
| Zapier | QuickBooks Online | Low to moderate | Free plan; paid plans vary by feature and start from $19.99/month | High | Small businesses |
| Workato | QuickBooks Online & Desktop | High | Contact sales for pricing | Moderate | Enterprises with complex workflows |

Pros and Cons of Using Third-Party Integration Tools

Pros
- Ease of use. Most third-party tools offer wizard-based interfaces and ready templates to build integrations in a few clicks.
- Cost efficiency. Third-party tools are less expensive than middleware platforms and custom integrations.
- Compatibility. Most third-party tools support both the online and desktop versions of QuickBooks. They also support other services and applications, allowing you to integrate multiple sources.

Cons
- Limited customization capabilities. Advanced customizations for unique or highly specific integrations require more resources, costs, and time.
- Performance dependency. Integration performance depends on the third-party tool's architecture and infrastructure. You have no control over unexpected delays or downtimes.

Step-by-Step Guide: Integrating Salesforce with QuickBooks Using Skyvia

Skyvia is a no-code cloud data integration platform that solves various data-related tasks, such as one-way and bi-directional data sync, ETL, ELT, Reverse ETL, and others. Skyvia supports [200+ data sources](https://skyvia.com/connectors), including Salesforce and QuickBooks.
Skyvia offers several solutions for QuickBooks Salesforce integration:

- Import is a wizard-based ETL tool for one-way data loading from a data source to a cloud app or database.
- Synchronization is designed for bi-directional integration.
- Data Flow and Control Flow enable more advanced integration scenarios for complicated data pipelines. These tools allow you to create complex flows with data splits, conditions, and other data manipulation options.
- Automation provides a trigger-action solution that monitors a specific event or condition in the chosen data source.

We will walk through a simple bi-directional Salesforce QuickBooks integration use case using Import: inserting new records into QuickBooks Customers from Salesforce Contacts data and vice versa. To set up the integration, you need an active Skyvia account. [Sign up](https://app.skyvia.com/register) with Skyvia for free if you don't have one. To connect Salesforce and QuickBooks, perform the following steps:

1. Create connections to both tools.
2. Create the Import integration to sync data.
3. Launch the Import.
4. Check the integration results.

STEP 1: Create Connections to Salesforce and QuickBooks

1. Click +Create New -> Connection in the top menu and search for QuickBooks in the connectors' list.
2. On the Connection Editor page, name the connection and select the environment. Click Connect to QuickBooks, enter your QuickBooks credentials, and click the Sign In button. Select a QuickBooks company, wait until the token is generated, and click Create Connection.
3. Click +Create New -> Connection and select Salesforce.
4. Select the Salesforce environment and authentication type. Click Sign In with Salesforce, enter your Salesforce credentials, and click Log In. Optionally, specify the advanced settings.

STEP 2: Create the Import Integration

1. Click +Create New and select Import in the top menu.
2. Select the Source type.
3. Set your Salesforce connection as the Source and the QuickBooks connection as the Target.
4. Click Add task on the right to create the Import task.
5. On the Source Definition tab, select the Simple task editor mode. Select the Contact object and specify filters if necessary. You can use Advanced mode to run custom queries or commands. Find more details in the [Import task creation manual](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html).
6. On the Target Definition tab, select the Customers object as the target and choose the action to perform against the target table.
7. Set up the mapping. Skyvia offers different mapping types: you can map column to column directly or transform data with the other available [mapping types](https://docs.skyvia.com/data-integration/common-package-features/mapping/).
8. After the mapping is completed, save the task and the integration.

STEP 3: Launch the Integration

[Schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) the integration to run automatically. You can set the schedule to run weekly or daily at a specific time or interval.

STEP 4: Check the Integration Results

After the run, you can check the result in the integration run history on the Monitor and Log tabs. If needed, click a specific run to check the results and view the per-record log. To perform the import in the opposite direction, from QuickBooks Customers to Salesforce, create another import following the steps above: select QuickBooks as the Source and Salesforce as the Target, add the Import task, map the needed fields, and run the integration. We recommend avoiding overlapping schedules if both integrations process the same objects. That's all. We have performed a bi-directional integration of Salesforce Contacts and QuickBooks Customers.
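As a side note on the Advanced mode mentioned in the source definition step: instead of picking an object and filters, you can supply a custom query. A minimal sketch of such a source query, assuming Skyvia's SQL-over-Salesforce syntax with standard Contact fields (the filter value is purely illustrative), could look like this:

```sql
SELECT Id, FirstName, LastName, Email
FROM Contact
WHERE MailingCountry = 'Argentina'
```

Only the rows returned by this query would then be mapped and loaded into QuickBooks Customers.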
Method 2: Custom Integration Development. If you have programming skills and technical knowledge, you can create a custom app that integrates [Salesforce](https://developer.salesforce.com/) and [QuickBooks](https://developer.intuit.com/app/developer/homepage) via API. This method is best suited for businesses with complex integration needs or custom workflows. There are two approaches to implementing a custom integration between Salesforce and QuickBooks: REST API integration and webhooks. With the REST API, you create an app that sends requests to the Salesforce and QuickBooks APIs and processes their responses. With webhooks, Salesforce or QuickBooks automatically notifies your app about changes in data. Let's look at both options in more detail. Best for: Enterprises with complex use cases that are ready to invest in a full data integration implementation. Integrate Salesforce and QuickBooks using REST API. To implement this method, use the REST API to access QuickBooks Online and Salesforce data. Connecting to the QuickBooks Online Accounting API. The QuickBooks Online Accounting API lets any app that supports REST exchange data with QuickBooks in real time. To set up integration with QuickBooks using the REST API, perform the following steps: Create a developer account in QuickBooks. A [sandbox company](https://developer.intuit.com/app/developer/qbo/docs/develop/sandboxes/manage-your-sandboxes) is created automatically in this case. Create an app and define scopes. [Scopes](https://developer.intuit.com/app/developer/qbo/docs/learn/scopes) help you manage access to your accounting data. When the app is created, it gets unique credentials: a [Client ID and Client Secret](https://developer.intuit.com/app/developer/qbo/docs/get-started/get-client-id-and-client-secret). 
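With the Client ID in hand, the first leg of the OAuth 2.0 flow is redirecting the user to Intuit's authorization endpoint. A minimal Python sketch of building that URL (the endpoint and parameter names follow Intuit's OAuth 2.0 documentation; the client ID, redirect URI, and state values are placeholders):

```python
from urllib.parse import urlencode

# Intuit's OAuth 2.0 authorization endpoint; the client_id comes from the
# app created in the QuickBooks developer portal.
AUTH_ENDPOINT = "https://appcenter.intuit.com/connect/oauth2"

def build_authorization_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the URL the user is sent to in order to grant the app access."""
    params = {
        "client_id": client_id,
        "response_type": "code",          # request an authorization code
        "scope": "com.intuit.quickbooks.accounting",
        "redirect_uri": redirect_uri,     # must match the URI registered for the app
        "state": state,                   # CSRF protection; echoed back to you
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"
```

After the user approves, Intuit redirects back to `redirect_uri` with the authorization code, which your app then exchanges for tokens as described below.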
The app needs to obtain an [access token](https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization/oauth-2.0) to exchange data with QuickBooks Online. If the user grants permission, the QuickBooks OAuth 2.0 server sends an authorization code back to your app, which is then exchanged for access tokens. These tokens are tied to the QuickBooks Online company the user has authorized. Before performing any transaction via the API, you have to create the common [basic predefined entities](https://developer.intuit.com/app/developer/qbo/docs/develop/basic-implementations/basic-invoicing-implementation) in QuickBooks, such as accounts, tax codes, and customers. Now, you can perform API calls to QuickBooks and make integration actions as per the [API Reference](https://developer.intuit.com/app/developer/qbo/docs/api/accounting/all-entities/account). You can also refer to the commonly used [use cases](https://developer.intuit.com/app/developer/qbo/docs/workflows) documented by QuickBooks. Connecting to the Salesforce REST API. Salesforce offers several web interface options for accessing its data, and the REST API is one of them. To access your Salesforce data using the REST API, you need a Salesforce org with API access and the API Enabled user permission within that org. To connect to Salesforce data via the API, do the following: Sign up for the [Developer edition](https://developer.salesforce.com/signup). The Developer edition provides you with a Salesforce org for development and testing. If you already have a sandbox or development org, you can use it for Salesforce REST API integration. Confirm that your user profile has the API Enabled permission by following the instructions in [User Permissions](https://help.salesforce.com/s/articleView?id=sf.admin_userperms.htm&type=5) in Salesforce Help. For the Developer Edition, this permission is assigned by default. Create a Connected App. 
Sign in to Salesforce, click Setup -> App Manager -> New Connected App, and enter the basic information about your app according to the [instructions](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_basics.htm&type=5). Then, [enable OAuth Settings](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_api_integration.htm&type=5) for API Integration. Get a Consumer Key and a Consumer Secret. In the Salesforce Setup menu, enter App Manager in the Quick Find box, and then select App Manager. Click the dropdown menu for the connected app that you created and select View. Copy the Consumer Key and Consumer Secret values and save them for later use. Set up authorization. Send a request to the Salesforce OAuth endpoint following the [recommendations](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart_oauth.htm). Now, you can perform API calls to Salesforce and make integration actions per the [API Reference](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_list.htm). You can also refer to a commonly used [example](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_user_tasks.htm) documented by Salesforce. Integrate Salesforce with QuickBooks using Webhooks and Triggers. This approach enables real-time, one-directional integration using webhooks. A webhook works as follows: when data changes, it sends a notification about those changes. To implement a webhook, you configure the entity and the event to be notified about and establish a URL on the receiving end to accept the data. To use webhooks in QuickBooks, you need to configure an endpoint that QuickBooks servers can call whenever notifications are triggered. You have to configure webhooks separately for sandbox and production environments. 
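Once notifications start arriving at that endpoint, each payload should be verified before it is trusted. Per Intuit's webhook documentation, QuickBooks signs the raw request body with HMAC-SHA256 using your app's verifier token and sends the base64-encoded result in the intuit-signature header. A minimal verification sketch (the function name is illustrative):

```python
import base64
import hashlib
import hmac

def is_valid_webhook(payload: bytes, intuit_signature: str, verifier_token: str) -> bool:
    """Check the intuit-signature header QuickBooks sends with each webhook.

    The signature is a base64-encoded HMAC-SHA256 of the raw request body,
    keyed with the verifier token from the app's webhook settings.
    """
    expected = base64.b64encode(
        hmac.new(verifier_token.encode(), payload, hashlib.sha256).digest()
    ).decode()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, intuit_signature)
```

Requests that fail this check should be rejected without processing, since anyone who discovers the endpoint URL can POST to it.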
The details on how to use webhooks in QuickBooks are available [in the QuickBooks documentation](https://developer.intuit.com/app/developer/qbo/docs/develop/webhooks/). Salesforce also supports tracking and notification of entity changes through a built-in mechanism called Apex Triggers. Apex triggers enable you to perform custom actions before or after changes to Salesforce records, such as insertions, updates, or deletions. You can learn more about Salesforce triggers in the [Salesforce developer](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_triggers.htm) docs. Pros and Cons of Custom Integration Development. Pros: Flexibility. You can access any available object and perform any possible data-related operation. You fully manage your integration and decide what web service to use and how to set the integration up. You can develop your own integration app to meet your specific requirements. Real-time integration is supported using webhooks. Cons: You need a developer account in QuickBooks and a Developer Edition in Salesforce to use this method. Requires programming skills. It is complicated and requires a lot of time and resources for development and testing. Requires constant maintenance. Method 3: Middleware Platforms. Middleware platforms represent another method of Salesforce QuickBooks integration. They can handle complex integrations for enterprise-sized businesses. Middleware platforms enable the integration of various services and the application of complex workflows. They provide extensive customization options, cross-service logic, and advanced transformations. In contrast to third-party tools, middleware platforms are usually more expensive and complicated. Let's look at the top five middleware tools. Best for: Enterprises with complex use cases that need outsourced assistance with data integration implementation. 
Top 5 Middleware Solutions for Salesforce QuickBooks Integration. MuleSoft. [MuleSoft](https://www.mulesoft.com/) is an enterprise-grade middleware platform owned by Salesforce, offering robust integration capabilities for connecting applications, data, and devices. It enables seamless data flow between [Salesforce and QuickBooks](https://www.salesforce.com/solutions/small-business-solutions/integrations/quickbooks-crm-integration/), with extensive customization options. Best suited for: Large enterprises with complex integration needs and the resources for handling complex workflows. Features: API designer. Real-time integrations and advanced analytics. Prebuilt templates. Pros: Complex use case support. Multiple app support. Native Salesforce support. Advanced analytics and monitoring. Cons: Expensive for small and mid-sized businesses. Technical knowledge is required. Steep learning curve. Pricing: Pricing varies depending on available features. Skyvia Data Flow. [Skyvia Data Flow](https://docs.skyvia.com/data-integration/data-flow/) is a no-code, cloud-based data pipeline designer for data integration and synchronization. It helps build custom flows and apply advanced transformations to integrate Salesforce, QuickBooks, and 200+ other supported data sources. Best suited for: Both SMBs and enterprises looking for easy-to-use data sync and process automation solutions for various apps and services. Features: Pre-built connectors for Salesforce and QuickBooks (Online & Desktop). Intuitive, no-code design. Real-time and batch synchronization options. Data backup and transformation capabilities. Pros: Affordable and easy to set up. Supports both QuickBooks Online and Desktop. Suitable for small businesses with straightforward needs. Cons: Limited scalability for large enterprises. Less suitable for highly complex workflows. Pricing: Free for basic use; paid plans start at $159/month. 
Boomi. [Boomi](https://boomi.com/) is a cloud integration platform (formerly part of Dell) that helps connect various services. It offers low-code tools for different data integration use cases, including QuickBooks Salesforce sync. Best suited for: Mid-sized and large businesses implementing complex integrations and data management. Features: Pre-built connectors and a user-friendly interface. AI-guided integrations. Real-time integration and batch data processing support. Pros: Low-code tools. Flexible pricing for businesses of various sizes. Strong community and support network. Cons: May require additional investment for advanced customizations. Steep learning curve for complex workflows. Pricing: Pricing varies depending on available features. Jitterbit. [Jitterbit](https://www.jitterbit.com/) is an integration platform that focuses on simplifying connectivity between Salesforce, QuickBooks, and other applications. Its intuitive tools allow users to build custom integrations quickly. Best suited for: Large businesses looking to integrate data across multiple services. Features: Pre-built templates for Salesforce-QuickBooks workflows. Low-code design for integration creation. Real-time and batch data sync options. API manager for creating custom APIs. Pros: Pre-built templates. Flexible options for data integration and API management. Strong customer support and documentation. Cons: May require technical knowledge. Expensive for small businesses. Pricing: Pricing varies depending on available features. SnapLogic. [SnapLogic](https://www.snaplogic.com/) is an intelligent platform offering AI-driven solutions to connect Salesforce, QuickBooks, and other services. It specializes in automating data flow across cloud and on-premise applications. Best suited for: Businesses looking for complex data integration and automation solutions. Features: AI-powered automation capabilities. Real-time integration. On-premise and cloud connectivity. User-friendly interface. 
Pros: AI-driven recommendations simplify integration processes. Scalability. Cons: Expensive for small businesses. Steep learning curve. Pricing: Pricing varies depending on available features. Pros and Cons of Using Middleware Platforms. Pros: Advanced customization. Middleware platforms offer customization capabilities for unique or highly specific integrations. Scalability. Middleware platforms are designed to handle large data volumes without compromising performance. Middleware solutions enable complex integration workflows and advanced process automation. Cons: Costs. Middleware platforms are more expensive compared with third-party tools and manual integration via built-in tools. Complexity. Integration setup requires significant technical expertise or the involvement of an expert. Maintenance. Middleware platforms require continuous maintenance and regular monitoring. Step-by-Step Guide: Integrating Salesforce with QuickBooks Using Skyvia Data Flow. With Skyvia [Data Flow](https://docs.skyvia.com/data-integration/data-flow/), you can implement complex scenarios using advanced transformations. We will show you how to create QuickBooks customers from Salesforce contacts and accounts. To start building a data flow, you need an active Skyvia account and valid connections to the data sources. To build a custom integration scenario, perform the following steps: Create a new Data Flow. Add the components. Run the integration. Let's explore each step in detail below. STEP 1: Create a new Data Flow. STEP 2: Add the components. Drag the Source component to the diagram and select the connection. Select the desired action. Data Flow allows you to build queries in a visual editor, write custom commands, and even run Salesforce reports. We selected the Execute Query action. Select the Contact object or its specific fields to query. Set filters if needed. Add the Lookup component to the diagram and connect it. 
It will help you fetch the company name and website from the Account object. Select the connection, command, object, lookup key, and result fields. Map the lookup key. In our case, we map Account.Id to Contact.AccountId. Add the Target component to the diagram. Select the connection, action, and object. Complete the mapping in the Parameters section. STEP 3: Run the integration. [Schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) the integration to run automatically or run it manually. You can set the schedule to run weekly or daily at a specific time or at a particular interval. We showed you a simple example of Salesforce QuickBooks sync using Skyvia Data Flow. The visual Data Flow designer and its various components allow you to tailor any integration to your business needs. With Data Flow, you can integrate various data sources in a single integration, load data in any direction, and apply advanced data transformations. Method 4: Manual CSV Import/Export. There is one more method to sync Salesforce and QuickBooks. It involves exporting data to a CSV file on one side and importing that CSV file on the other. You can export Salesforce data to CSV and then import the [CSV to QuickBooks](https://skyvia.com/blog/how-to-import-csv-into-quickbooks-online-and-desktop/), or vice versa, using tools such as Skyvia Export and Import, Salesforce Data Loader, and QuickBooks' built-in import/export. Best for: Small businesses implementing sporadic, simple data integration use cases like bulk data import, without involving third-party tools or outsourced expertise. Step-by-Step Guide: Export Data from QuickBooks. To export QuickBooks Online data manually, perform the following steps: Go to QuickBooks, click the gear icon, and select Export data in the TOOLS section. Choose the object type to export: Reports or Lists. Enable the object toggle to include an object in the export. 
Click Export to Excel to retrieve QuickBooks data to a local file. As a result, you get a Zip archive with a separate CSV file for each object. You can use these files as sources for import to Salesforce. Step-by-Step Guide: Import CSV into Salesforce. The easiest way to import CSV files into Salesforce is to use the built-in Salesforce tool [Data Import Wizard](https://help.salesforce.com/s/articleView?id=sf.data_import_wizard.htm&language=en_US&type=5). To do that, perform the following actions: In Salesforce, click the gear icon and select Setup. In the Quick Find box, type Data Import and select Data Import Wizard. Scroll down and click Launch Wizard. Select the target object and choose what to do with your data. Define the field mapping and start the import. As a result, your CSV file content is imported into Salesforce. Pros and Cons of Using Manual CSV Import and Export. Pros: Free or low-cost. Easy to use. Great for ad hoc data transfers without requiring a fixed integration setup. Cons: This method is time-consuming. It requires repeated manual activities. High risk of human error. No automatic updates. Scalability issues: unsuitable for large data volumes or frequent integration needs. Best Practices for Salesforce QuickBooks Integration. Now that you are aware of all the methods of Salesforce QuickBooks integration, you can proceed to the next steps. Before you start implementing your scenario, let's look at several hints to help you build the integration successfully. Pay Attention to Planning. Think of the objectives of your integration and the use case logic. Plan what objects you will integrate, what data you will exchange, what actions you will perform, and what result you expect. Evaluate risks: analyze what resources the integration takes and what risks it introduces. Choose the Method. Choose the integration method based on the information above, picking the approach that best fits your requirements and technical environment. 
Evaluate Changes. Think about how QuickBooks and Salesforce business processes will change after the integration is implemented, and evaluate the processes' compatibility. Ensure Compatibility. Ensure your Salesforce and QuickBooks subscriptions and versions are compatible with the planned integration. QuickBooks offers both Online and Desktop versions, each with distinct integration capabilities. Evaluate ROI. Evaluate the potential ROI of the integration by analyzing costs, time savings, error reduction, and improved data accuracy. All these points will help you prepare to implement data integration in your company. Conclusion. Integration between Salesforce and QuickBooks is a great way to reduce manual routine operations and mitigate the risk of human error. There are several methods to integrate Salesforce and QuickBooks. Carefully analyze your business needs, workflows, architecture, and other specifics to decide which method is right for your organization. We compared the top 5 most popular third-party solutions and the top 5 middleware platforms. Each tool has specific features and particular pros and cons. We explored each method of Salesforce QuickBooks sync in detail and learned how to build Salesforce QuickBooks integrations using different approaches, including Skyvia solutions. Skyvia offers versatile no-code tools for businesses of all sizes. It can handle simple integration scenarios as well as complex workflows over large volumes of data with advanced transformations. You can find more details about Skyvia data integration opportunities [here](https://docs.skyvia.com/). You can [register](https://app.skyvia.com/register) with Skyvia for free and start building your Salesforce and QuickBooks integration right now. FAQ for Salesforce and QuickBooks. How much does it cost to integrate Salesforce with QuickBooks? 
The integration cost may range from free to tens of thousands of dollars monthly, depending on features and usage. The CSV Import/Export method is free and included in QuickBooks and Salesforce subscriptions. Some third-party tools offer free plans. Middleware solutions are usually more expensive; they offer custom prices based on the features or usage needed. What about Salesforce QuickBooks integration security? Most third-party tools and middleware solutions comply with industry standards such as GDPR, CCPA, and others. Some of them use multi-factor authentication and encryption. You must ensure data security yourself if you use manual CSV integration or custom API development. Can I customize the integration to meet my business needs? It depends on the selected integration method. The custom API development method offers the best customization opportunities, while manual CSV Import/Export is the least customizable. Third-party tools and middleware platforms offer various customization options, such as mapping, custom workflows, and others. Is coding or technical expertise required for the integration? It depends on the chosen integration method. The simplest option, manual CSV Import/Export, doesn't require special skills. Most third-party tools are also no-code: they offer visual editors and wizards and provide pre-built integration templates. Custom API integration requires programming skills or developers' assistance, especially when implementing complex logic. Can this integration handle large volumes of data? It depends on the solution's capacity, architecture, and subscription. Middleware solutions and some third-party tools, such as Skyvia, can manage large datasets. They provide bulk operations, [batch processing](https://skyvia.com/blog/batch-etl-processing/), and other performance optimization features. Which method is best for my business size? 
Simple one-time integrations in small businesses or startups can be handled by manual CSV Import/Export tools. Third-party tools are the most suitable for small and medium-sized businesses because they are easy to use and cost-effective. Middleware platforms are the best choice for enterprises that operate on large data volumes and have complex customization requirements. [Olena Romanchuk](https://skyvia.com/blog/author/olenar/). Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication to create engaging and accessible content. With a diverse professional background, Olena excels in breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences. 
[Data Loader](https://skyvia.com/blog/category/data-loader/) 10 Salesforce Security Best Practices for 2025: Full Guide. By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), April 29, 2024. Over [150,000 organizations](https://www.salesforce.com/campaign/worlds-number-one-crm/) use Salesforce to keep customer records, sales pipelines, and other business-critical information. This data is a treasure for any business. At the same time, it attracts hostile gold miners, competitors, and intruders craving sensitive information. That's why each company must act to protect its Salesforce organizational account. To help you with that, we've prepared a Salesforce security guide. In it, you'll learn Salesforce security best practices for safeguarding data. 
Table of Contents: Basics in Salesforce Security. 10 Salesforce Security Best Practices For Your Business. The Role of Skyvia in Salesforce Data Security Best Practices. A Continuous Journey of Salesforce Security. Wrapping Up. Basics in Salesforce Security. Cloud data has always been vulnerable to cyberattacks and intruder invasions. However, employees can also contribute to insider Salesforce data leaks and integrity violations. To address potential threats in Salesforce, take a look at the core components of Salesforce security first: Authorization outlines the permissions for data access and system configuration for each user based on their profile. Visibility determines which data a user can view and manipulate. At the organization-wide level, there are three visibility levels: Public Read/Write. All users can view and edit all data. Public Read Only. All users can view all data; only content owners can edit it. Private. Content owners can view and edit only their own data. Sharing rules provide access to users depending on the record type, content owner, etc. Authentication verifies the user's identity before logging into the system. Salesforce supports the standard username-password access type, multi-factor authentication (MFA), and single sign-on (SSO). As for data protection, Salesforce adopts a four-tier security model: The organization level specifies who has access to the Salesforce organizational account by setting the login IP range and hours. The object level grants a user permission to access a Salesforce object. The field level controls whether a user can see, edit, or delete fields in an object. The record level determines which records in an object can be viewed and edited by which users (depending on their profile). Understanding the basics of the Salesforce security model helps you take the necessary measures to protect organizational data from internal and external threats. You may decide who needs access to which information, what restrictions to set, etc. 
10 Salesforce Security Best Practices For Your Business. 1. Enable Multi-Factor Authentication (MFA). Username and password credentials are commonly used across the web to access accounts. However, this approach isn't secure enough, because modern hackers quickly discover secret combinations. Check [the most popular passwords](https://www.reddit.com/r/coolguides/comments/qy9lrd/most_common_passwords_found_in_10_million/#lightbox) and avoid using them for organizational accounts. MFA was introduced to provide better login security. It implies a two-step login: first, a user enters their credentials; then, the user confirms the login action on their personal smartphone. 2. Restrict access by using IP ranges. Allow access to a Salesforce account only from the [trusted IP range](https://help.salesforce.com/s/articleView?id=sf.login_ip_ranges.htm&type=5) (corporate network or VPN). Users logging into Salesforce from external IP addresses must verify their identity. 3. Apply granular access control using profiles and permission sets. In 2024, Salesforce implemented [enhanced permissions](https://help.salesforce.com/s/articleView?id=000388827&type=1) that redesigned data-sharing habits. They grant granular access control via four access levels to documents: Full Access provides the right to share, edit, and delete a document. It also allows a user to change sharing settings by adding or removing thread members, editing access levels, and adding or removing folders. Can Edit enables users to edit a document, view sharing links, and check collaborators. Can Comment allows users to see link sharing and edit history, send new messages and edit requests, and leave comments. Can View allows users to view link sharing and send new edit requests. 4. Establish secure password management policies. Many online services specify a list of password requirements for user account creation. 
Salesforce isn\u2019t an exception: it requires a password to be at least eight characters long and contain digits and characters. Salesforce also sets a security question on user registration. Experts recommend using at least a 15-character password for a Salesforce user account. So, encourage your employees to establish strong passwords for their Salesforce accounts. Administrators can use [Workbench for Salesforce](https://skyvia.com/blog/salesforce-workbench-alternatives/) to force reset weak passwords. 5. Run checks on your organization\u2019s security status Use the built-in [Health Check](https://help.salesforce.com/s/articleView?id=sf.security_health_check.htm&type=5) procedure to identify any white spots in current security settings. It calculates the organization\u2019s security score based on the [Salesforce Baseline Standard](https://help.salesforce.com/s/articleView?id=sf.security_health_check_score.htm&type=5) index. Organizations requiring extra security levels may consider creating a [custom baseline](https://help.salesforce.com/s/articleView?id=sf.security_custom_baseline_create.htm&type=5) . It allows administrators to specify the number of login attempts, password complexity, etc. 6. Evaluate the use of Salesforce Shield Another way to advance security is the [Salesforce Shield](https://help.salesforce.com/s/articleView?id=sf.salesforce_shield.htm&type=5) solution. It can be of great value to governmental, healthcare, and financial organizations. Salesforce Shield comprises three security tools for building extra levels of protection: Shield Platform Encryption helps to encrypt Salesforce fields containing sensitive data and protect it from unauthorized access. It also ensures that your data corresponds to external and internal compliance policies. Real-time Event Monitoring provides detailed information about performance, security, and usage data. It shows who accesses business-critical data at what time and where. 
Field Audit Trail ensures that you're aware of the value and state of your Salesforce data. Another tool, [Einstein Data Detect](https://help.salesforce.com/s/articleView?id=sf.einstein_data_detect.htm&type=5), scans your data, identifies sensitive information, and recommends measures to protect it. 7. Carry out regular audits. [Salesforce auditing features](https://help.salesforce.com/s/articleView?id=sf.security_overview_auditing.htm&type=5) provide information about system use, revealing potential security issues. They don't secure a user account from existing threats but keep you on the right track in safeguarding organizational data. To make sure the system is secure, pay attention to the following usage trends during auditing: Login history. Record modification fields. The setup audit trail, showing logs of modifications to your organizational Salesforce account configuration. 8. Beware of phishing emails. Phishing emails look very real and seem to come from a legitimate organization, but they don't. Their primary purpose is to steal confidential information (usernames, passwords, and credit card details). Phishers may send emails with a malicious link from a force.com domain, for example. To recognize and detect the malicious intent of phishing emails in time, check whether the email asks you to do one of the following: Click a link. Open an attachment. Log in to your account. Validate a password. If any of these options are present, it's most likely a phishing email. In such a case, report the suspicious activity to the Salesforce Security team at security@salesforce.com. At the same time, [Salesforce recommends](https://help.salesforce.com/s/articleView?id=sf.security_overview_trust.htm&type=5) taking the following actions to decrease the risk of phishing emails: Ensure that MFA is implemented for all Salesforce user accounts. Activate IP range restrictions to allow users to access Salesforce only from the corporate network or VPN. 
Inform your employees about what phishing emails usually look like and instruct them not to open suspicious emails. Use tools for advanced protection that tend to block phishing emails. Implement [DMARC and DKIM](https://powerdmarc.com/dmarc-vs-dkim/) authentication to prevent unauthorized senders from spoofing your domain. 9. Ensure connection security Another best practice is to require secure HTTPS connections for any third-party domain. For that, proceed with the following steps: Click on the cog icon and go to Setup in your Salesforce organizational account. In the search box, find Session Settings and click on it. Select Require secure connections (HTTPS) for all third-party domains. 10. Make regular backups A backup and recovery strategy is among the best practices for preserving Salesforce organizational data. Decide which data needs to be backed up and how often. Then, in case of unexpected data corruption or hacker attacks, you can restore data anytime. The Role of Skyvia in Salesforce Data Security Best Practices Salesforce has native data backup and recovery solutions, but they are rather basic. That's why Salesforce encourages users to consider third-party solutions with a more comprehensive approach and set of features for backup and recovery. To help you implement Salesforce data security best practices, we'd like to introduce Skyvia. This is a universal cloud data platform containing the following products: Data Integration Query Backup Automation Connect Here, we'd like to focus on the Backup product, which enhances the security of Salesforce data. [Skyvia Backup](https://skyvia.com/backup/) is designed to back up data from supported [cloud applications](https://docs.skyvia.com/connectors/cloud-sources/) either automatically on a schedule or manually. This tool helps you organize your Salesforce data backups according to the planned schedule.
Skyvia Backup allows you to select which data to back up, whether sensitive information or anything else you prefer. The backed-up data can then be viewed, exported, or restored directly to a data source on demand. All these operations are done via a user-friendly Skyvia interface in a web browser. It's important to note that Skyvia stores the backed-up data in an encrypted form in a secure Azure environment. Moreover, our tool uses a unique encryption key for each user, so no one except the user has access to the backed-up data. Skyvia Backup can be used completely for free if the total amount of the backed-up data doesn't exceed 1 GB. Once more storage space is required, you may switch to [the paid solutions at affordable prices](https://skyvia.com/pricing/). A Continuous Journey of Salesforce Security Monitor the most recent trends and regulations on [Salesforce Security's](https://security.salesforce.com/) official website. There, you'll find security-related news and announcements of webinars with expert recommendations. If you want to explore security enhancement practices deeply, we recommend referring to the official [Salesforce security guide](https://developer.salesforce.com/docs/atlas.en-us.securityImplGuide.meta/securityImplGuide/salesforce_security_guide.htm). It covers all the security-related aspects and explains how to implement them. Wrapping Up Salesforce data has always been attractive to parties with malicious intentions. Therefore, it's necessary to implement the best practices for Salesforce security within your organization. Multi-factor authentication, strong passwords, granular access control, and regular audits significantly enhance Salesforce accounts' internal security. When data needs to travel outside of Salesforce, Skyvia protects it by offering secured channels for data transfer and safe backup solutions.
[Data Integration](https://skyvia.com/blog/category/data-integration/) How to Connect Salesforce to Redshift: A Comprehensive Guide By [Sergey Bykov](https://skyvia.com/blog/author/sergeyb/) April 8, 2025
Disconnected systems are one of the biggest hidden costs in business. When sales, marketing, and product data live in separate tools, teams spend hours stitching together reports, chasing metrics, and working from incomplete information. It slows everything down and makes real-time decision-making nearly impossible. That's where Salesforce to Redshift integration comes in. You can sync customer data directly into a data warehouse to eliminate silos, reduce manual work, and give every team access to consistent, up-to-date analytics. It's a smarter, faster way to turn data into decisions, without waiting for reports or relying on guesswork. In this article, we will learn how to connect Salesforce to Redshift using different methods. Table of contents Key Features of Salesforce Key Features of AWS Redshift Why Do You Need to Connect Salesforce to Redshift? Salesforce and Redshift Integration Options Method 1: Native Integration Method 2: Third-Party Integration Tools Method 3: Integration Using APIs Conclusion Key Features of Salesforce Salesforce is the most popular cloud CRM on the market. It has already become the standard for customer relationship management and holds a [significant 21.7% of the market share](https://cyntexa.com/blog/salesforce-statistics/). The key benefits of Salesforce include: Improved Customer Relationship Management : Track the interactions, preferences, and purchase history of customers, which leads to better customer service. Increased Sales Productivity : Many business processes can be automated.
Your reps can focus on more important tasks instead. Always Available : Accessible from anywhere with an internet connection, on many different devices. Scalability : Can be scaled to meet the needs of the business. Customizable : Highly customizable and can be tailored to many specific requirements. Besides customer relationship management, Salesforce offers a unified platform with a number of additional tools for marketing (Salesforce Marketing Cloud), data analysis (Salesforce Wave Analytics), customer service, and many more. Functionality can also be extended with custom or third-party applications. Key Features of AWS Redshift Amazon Redshift is a part of the Amazon Web Services (AWS) platform. It is a cloud-based data warehouse designed for large-scale data analytics and business intelligence. Key benefits include: Massively Parallel Processing (MPP) : Splits data processing tasks and runs them simultaneously on multiple processors, providing exceptional performance for complex analytical queries over huge volumes of data. Data Warehousing : Allows storing and processing data on a petabyte scale. Cost-Effective : Redshift offers relatively cheap data storage per gigabyte compared to Salesforce. Why Do You Need to Connect Salesforce to Redshift? Salesforce holds critical data about leads, sales, customer interactions, and support tickets. But what about financial data, marketing performance, website traffic, and product usage? That information often lives in other systems. With Salesforce to Redshift integration, businesses can [get all their data in one place](https://skyvia.com/learn/single-source-of-true) to see the full customer journey and make decisions faster. Salesforce offers reporting, but it's not designed for complex data analysis at scale. Large datasets can slow down reports, and certain insights (like trend forecasting or AI-powered recommendations) require advanced data crunching.
Redshift can help you run complex queries on millions of records in seconds. Your business will be able to unlock the power of machine learning models to predict sales and risks, as well as to tailor customer interactions. Storing large amounts of [data in Salesforce can get expensive](https://www.salesforce.com/marketing/data/pricing/). Instead of paying high storage costs, businesses can move rarely used but legally required records to Amazon Redshift to save money. Benefits & Use Cases Marketing Analytics : Track campaign performance, customer segmentation, and lead scoring. Sales Analytics : Analyze sales pipelines, forecast revenue, and identify top-performing reps. Customer Service Analytics : Monitor customer satisfaction and identify service bottlenecks. Business Intelligence : Create comprehensive dashboards and reports for executives. Salesforce and Redshift Integration Options There are several options to connect Salesforce to Redshift. Below is a quick overview of each method:

| Method | Best For | Skill Level | Customizability |
| --- | --- | --- | --- |
| Native Tools | Budget-conscious, occasional use | Medium | Low |
| Third-Party Tools | Easy, hands-off automation | Low | High |
| Salesforce API | Dev teams needing full control | High | High |

Method 1: Native Integration Salesforce offers several native ways and tools for connecting Salesforce to Redshift: MuleSoft AnyPoint Platform . This is a [Salesforce-owned solution](https://www.salesforce.com/news/press-releases/2018/03/20/salesforce-signs-definitive-agreement-to-acquire-mulesoft/) that automates data integration between Salesforce and Redshift, as well as other apps and systems. It comes with an additional cost, depending on data sources and use cases, and [there is no well-defined pricing published](https://www.salesforce.com/products/integration/pricing/anypoint-platform/); you need to contact them for a quote. CSV Export and Import . A manual but cost-free alternative.
You can use Data Loader or [Salesforce Data Export](https://skyvia.com/blog/export-data-from-salesforce/) wizard to get CSV files of your objects that will be manually imported to Redshift. This method is available for Salesforce Enterprise, Performance, Unlimited, or Developer editions. In this article, we will take a closer look at the second method. Best For Ideal for organizations on tight budgets that need occasional data transfer and can manage manual processes. Option 1: Exporting Data via Salesforce Data Loader [Salesforce Data Loader](https://skyvia.com/blog/salesforce-data-import-wizard-vs-data-loader/) is a client application for bulk import and export, available in the Enterprise, Performance, and Unlimited editions. It is a downloadable tool that supports up to 5 million records at a time. Before we start, [download Data Loader](https://developer.salesforce.com/tools/data-loader) and install it on your computer. Make sure that you have installed [JRE](https://www.java.com/en/download/manual.jsp) and [JDK](https://www.oracle.com/java/technologies/downloads/) : you can\u2019t run Data Loader without them! Then, perform the following steps: Step 1 In the Data Loader window, click Export . Step 2 A wizard will open, prompting you to sign in. Choose your environment from the drop-down menu and click Log In . Follow the authentication instructions on the screen to proceed. Step 3 Select a Salesforce object to export data from and specify the CSV file name. Click Next . Data Loader allows you to export both predefined and custom objects. You can also change the location where the result file will be saved by clicking Browse . Step 4 On the next wizard page, you can configure a SOQL query to use for exporting data. We only need to select the fields that we want to export, but if you want to filter data, you can also configure WHERE clauses. For our purposes, we can just click Select all fields , and then click Finish . 
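Behind the wizard page described above, Data Loader simply runs a SOQL query against the selected object. A minimal sketch of how such an export query is assembled (the field list below is a small subset of standard Account fields, chosen for illustration):

```python
# Sketch: building a SOQL SELECT like the one Data Loader's export wizard
# generates. Field names are standard Account fields; adjust the list and
# the optional WHERE filter to match your org's objects.

def build_export_query(sobject, fields, where=None):
    """Return a SOQL SELECT statement for the given object and fields."""
    query = f"SELECT {', '.join(fields)} FROM {sobject}"
    if where:
        query += f" WHERE {where}"
    return query

account_fields = ["Id", "Name", "Industry", "AnnualRevenue", "CreatedDate"]

# Export every account (the "Select all fields" path in the wizard)...
full_export = build_export_query("Account", account_fields)

# ...or only recently modified records, mirroring a WHERE clause
# configured on the wizard page.
incremental = build_export_query(
    "Account", account_fields,
    where="LastModifiedDate >= 2024-01-01T00:00:00Z",
)

print(full_export)
print(incremental)
```

Narrowing the query with a WHERE clause keeps exports small, which matters once you approach Data Loader's per-run record limits.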
Step 5 Finally, click Yes to start the process. The data will be exported to the specified file. Let's go over one more option to export data before we bring it into Redshift. Option 2: Exporting Data via Data Export Wizard To export data to CSV using the Salesforce Export Data feature, perform the following steps: Step 1 Open the Salesforce Setup by clicking the Quick Setup gear icon in the upper-right corner. Step 2 Type "Data Export" in the Quick Find box and, under Data, click Data Export . On the next screen, select Export Now . Step 3 Select Salesforce objects to export and click Start Export . You can export both predefined and custom objects. The data export will start, and after some time, you will receive an email with the link to the exported data. Step 4 Click this link to open the page where you can download the export results. We are now ready to move to the importing process. Importing Data to Redshift If you haven't prepared the tables for the data in Redshift, you need to create them first.
Step 1 To create a table in Redshift for Salesforce Accounts, use the statement below:

```sql
CREATE TABLE "Account" (
    "Id" VARCHAR(18) NOT NULL,
    "IsDeleted" BOOLEAN NOT NULL,
    "MasterRecordId" VARCHAR(18),
    "Name" VARCHAR(255) NOT NULL,
    "Type" VARCHAR(255),
    "ParentId" VARCHAR(18),
    "BillingStreet" VARCHAR(255),
    "BillingCity" VARCHAR(40),
    "BillingState" VARCHAR(80),
    "BillingPostalCode" VARCHAR(20),
    "BillingCountry" VARCHAR(80),
    "BillingLatitude" DOUBLE PRECISION,
    "BillingLongitude" DOUBLE PRECISION,
    "BillingGeocodeAccuracy" VARCHAR(255),
    "BillingAddress" TEXT,
    "ShippingStreet" VARCHAR(255),
    "ShippingCity" VARCHAR(40),
    "ShippingState" VARCHAR(80),
    "ShippingPostalCode" VARCHAR(20),
    "ShippingCountry" VARCHAR(80),
    "ShippingLatitude" DOUBLE PRECISION,
    "ShippingLongitude" DOUBLE PRECISION,
    "ShippingGeocodeAccuracy" VARCHAR(255),
    "ShippingAddress" TEXT,
    "Phone" VARCHAR(40),
    "Fax" VARCHAR(40),
    "AccountNumber" VARCHAR(40),
    "Website" VARCHAR(255),
    "PhotoUrl" VARCHAR(255),
    "Sic" VARCHAR(20),
    "Industry" VARCHAR(255),
    "AnnualRevenue" DOUBLE PRECISION,
    "NumberOfEmployees" INTEGER,
    "Ownership" VARCHAR(255),
    "TickerSymbol" VARCHAR(20),
    "Description" TEXT,
    "Rating" VARCHAR(255),
    "Site" VARCHAR(80),
    "OwnerId" VARCHAR(18) NOT NULL,
    "CreatedDate" TIMESTAMP NOT NULL,
    "CreatedById" VARCHAR(18) NOT NULL,
    "LastModifiedDate" TIMESTAMP NOT NULL,
    "LastModifiedById" VARCHAR(18) NOT NULL,
    "SystemModstamp" TIMESTAMP NOT NULL,
    "LastActivityDate" DATE,
    "LastViewedDate" TIMESTAMP,
    "LastReferencedDate" TIMESTAMP,
    "Jigsaw" VARCHAR(20),
    "JigsawCompanyId" VARCHAR(20),
    "AccountSource" VARCHAR(255),
    "SicDesc" VARCHAR(80),
    CONSTRAINT "PK_Account" PRIMARY KEY ("Id")
);
```

Note: your Account object might contain custom fields that are not included in this statement.
Step 2 After creating the necessary tables, you need to upload the CSV files to an Amazon S3 bucket. This is the easiest way of importing CSV files to Amazon Redshift manually. To learn how to upload files to the bucket, refer to the [Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html). Step 3 The uploaded CSV files can be imported into the Redshift table with the COPY command. The COPY command has the following syntax:

```sql
COPY table_name [ column_list ]
FROM data_source
CREDENTIALS access_credentials
[options]
```

For example:

```sql
copy Account
from 's3://awssampledbuswest2/tickit/Account.csv'
iam_role 'arn:aws:iam:::role/'
CSV
IGNOREHEADER 1;
```

If the columns in the CSV file and the table are the same, we can omit the column list. Note that Data Loader exports CSV files with columns sorted alphabetically, so we may need to specify the column list for it:

```sql
copy Account(AccountNumber, AccountSource, AnnualRevenue, BillingAddress,
    BillingCity, BillingCountry, BillingGeocodeAccuracy, BillingLatitude,
    BillingLongitude, BillingPostalCode, BillingState, BillingStreet,
    ChannelProgramLevelName, ChannelProgramName, CreatedById, CreatedDate,
    Id, Industry, IsCustomerPortal, IsDeleted, IsPartner, Jigsaw,
    JigsawCompanyId, LastActivityDate, LastModifiedById, LastModifiedDate,
    LastReferencedDate, LastViewedDate, MasterRecordId, Name,
    NumberOfEmployees, OwnerId, Ownership, ParentId, Phone, PhotoUrl,
    Rating, ShippingAddress, ShippingCity, ShippingCountry,
    ShippingGeocodeAccuracy, ShippingLatitude, ShippingLongitude,
    ShippingPostalCode, ShippingState, ShippingStreet, Sic, SicDesc, Site,
    SystemModstamp)
from 's3://awssampledbuswest2/tickit/Account.csv'
iam_role 'arn:aws:iam:::role/'
CSV
IGNOREHEADER 1;
```

These examples use an AWS Identity and Access Management (IAM) role to access the specified bucket. This is the recommended way to access it.
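Instead of typing out the long column list, you can also rearrange the exported CSV itself so that its columns match the table's column order. A minimal sketch with the standard library (the column names and sample row are illustrative):

```python
import csv
import io

# Sketch: reorder the columns of a Data Loader CSV export (alphabetical
# column order) to match the column order of the target Redshift table,
# so COPY can be run without an explicit column list.

def reorder_csv(csv_text, target_columns):
    """Return CSV text with columns rearranged into target_columns order.

    Columns missing from the export are written as empty strings.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=target_columns)
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row.get(col, "") for col in target_columns})
    return out.getvalue()

# Data Loader output: columns sorted alphabetically (sample data).
exported = "AnnualRevenue,Id,Name\r\n150000,001xx0001,Acme\r\n"

# Target order: Id first, as in the Redshift table definition.
reordered = reorder_csv(exported, ["Id", "Name", "AnnualRevenue"])
print(reordered)
```

Run this over each export before uploading to S3, and the single-line `copy Account from ...` form works as-is.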
You can also use authentication via your access key ID and secret access key, like in the example of the credentials clause below, but this is not recommended:

```sql
credentials 'aws_access_key_id=;aws_secret_access_key='
```

We need to add the CSV and IGNOREHEADER options because we are importing a comma-separated file with one header row. After the import, you can remove the CSV files from the Amazon S3 bucket, or even remove the bucket itself, to reduce Amazon S3 storage costs. Pros This method has the following advantages: Flexibility : You have direct control over the entire process and can manually edit the file structure and the appearance of your final table in Redshift. Familiarity : Native Salesforce tools look familiar, and the learning curve to use them might be lower compared to other solutions. This solution will work great if you need to transfer records from Salesforce to Redshift once in a while and don't need a perfect pipeline. Cons This method has some limitations that you should keep in mind: Error-Prone : Transforming data during manual transfer can be labor-intensive, and it only gets worse with large or complex datasets. There's a higher risk of errors during manual transformation, and detecting these errors might require restarting the entire process. Limited Export Amount : Salesforce Data Export and Data Loader are not available in all plans, and even in the most expensive plans, the [Data Export Wizard only allows up to one export per week](https://help.salesforce.com/s/articleView?id=xcloud.admin_exportdata.htm&type=5). Other solutions usually offer better conditions. Monitoring : You are responsible for monitoring the replication process to ensure successful execution. This can become an issue as your business grows. Let's consider a better way of building a Salesforce-to-Redshift data pipeline, this time using third-party solutions.
Method 2: Third-Party Integration Tools There are a number of third-party tools that can be used for [Salesforce and Redshift integration](https://skyvia.com/data-integration/replicate-salesforce-to-redshift). For this method, we will review Skyvia. Businesses often choose integration tools as their go-to method: this approach is simple, doesn't require heavy development, and offers reliable automation. It allows teams to get up and running quickly, reduce manual effort, and keep data in sync. Using [Skyvia](https://skyvia.com/data-integration/replication), you can easily transfer your records into the cloud data warehouse, and it can be set up in less than 15 minutes! Skyvia also has a [free plan](https://skyvia.com/pricing) to try out the platform and see if it works for you. After configuring replication, you can schedule it: the platform will automatically import updated Salesforce records into Redshift. Only modified data will be loaded, saving you time and money. Best For Best suited for businesses that want a hassle-free, continuously automated solution with minimal effort. Creating Connections Let's describe how to configure replication in Skyvia. First, you need to [register a Skyvia account](https://app.skyvia.com). Then, create connections to your data sources: Salesforce and Redshift. Step 1 To create a connection, select + Create New > Connection on the left. Then select the respective connector. Besides Redshift and Salesforce, Skyvia supports a large number of connectors. Step 2 Signing in via OAuth is the fastest way, and it is selected by default. To create a connection to Salesforce, click Sign In with Salesforce . Then you can enter the connection name and save the connection. Alternatively, you can select the Username & Password authentication method. You will need to specify your username, password, and security token . Step 3 To create a connection to Redshift, click + Create New > Connection and find Redshift.
Fill in the fields required for this connection: Server, Port, User ID, Password, Database, and Schema . Don't forget to also click Advanced Settings and enter the parameters for Amazon S3 storage. After you're done, click Create Connection . Creating Replication Packages Now, let's move forward with building a Salesforce to Redshift replication pipeline. Step 1 Select + Create New and then, under Integration, select Replication . Step 2 Choose Salesforce as Source and Redshift as Target connection. Step 3 After selecting connections, all we need to do is select the Salesforce objects to replicate. Then, you can also click Schedule and set automatic replication. After this, [Salesforce to Redshift replication](https://skyvia.com/data-integration/replicate-salesforce-to-redshift) will export data automatically and keep the data in Redshift up-to-date without any user interaction. It can be used with any Salesforce edition. As you can see, automating Salesforce and Redshift integration with Skyvia is very simple and requires just a few minutes. However, it's often necessary not just to load data from Salesforce to Redshift, but also to get it back. For example, this can be important when performing [Salesforce data archiving](https://skyvia.com/blog/how-to-archive-data-in-salesforce-and-reduce-storage-costs): you load legacy Salesforce data to Amazon Redshift to save on Salesforce storage costs but still want to be able to access it from time to time from Salesforce. Let's see how Skyvia can help you integrate Amazon Redshift with Salesforce. Real-time connectivity: Salesforce to Redshift To allow your sales reps to make informed decisions based on processed information, you might need to load data back from Redshift to Salesforce.
You can use [Skyvia Import](https://skyvia.com/data-integration/import) or [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) for these purposes: these [ETL tools](https://skyvia.com/blog/etl-tools/) can extract data from a data source, transform it if necessary, and load it to the target. However, in the case of [Salesforce data archiving](https://skyvia.com/blog/how-to-archive-data-in-salesforce-and-reduce-storage-costs/), loading data back to Salesforce is not always suitable. We will consider a different scenario: connecting Redshift to Salesforce as an external data source via [Salesforce Connect](https://help.salesforce.com/s/articleView?id=platform.platform_connect_about.htm&type=5). [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/) comes at an additional cost and is available only in the Enterprise, Performance, and Unlimited editions. In the Developer edition, it is available for free. You can use [Skyvia Connect](https://skyvia.com/connect) to view Redshift data in Salesforce without moving it. Skyvia Connect exposes your Redshift data through a special web link called an OData endpoint. Salesforce then uses that link to access the data. This way, you can work with your Redshift data inside Salesforce as if it were already part of it, even though the data stays in Redshift. Suppose we have loaded legacy accounts and contacts to Redshift. Let's see how we can link them back via Salesforce Connect and Skyvia Connect. Creating OData Endpoint in Skyvia Connect Step 1 Click + Create New and then, under Connect, select OData Endpoint . Click Simple mode . Step 2 Select your Redshift connection. Step 3 Select the tables with data you would like to access in Salesforce. In our case, we select the account and contact tables. Step 4 Let's change the generated entity names and add the 'legacy_' prefix to them.
We do this so that the Salesforce external objects have distinct, easily distinguishable names. To do it, click the pencil icon near the table name. In this window, you can edit the resulting entity and entity set names, as well as the names of entity properties. You can also hide certain properties to remove access to them. Here, we need to add the 'legacy_' prefix to the values of the Name and Entity Set boxes. We need to do this for both the account and contact tables. After this, click Next step . Step 5 Now, we need to configure security settings. You can add an authentication method to protect your data. To add authentication, click Public and then + Add new . Come up with a username and a password . In a similar way, you can add IP address ranges to allow access only from these ranges. After you finish configuring endpoint security, click Next step . Step 6 Finally, specify the endpoint name and configure the remaining settings. You can allow editing of your archived records by selecting the Read/Write endpoint mode. That's all. You can now confirm your settings by clicking Save endpoint . Copy the Endpoint URL ; we will need it in the next steps. Creating External Data Sources in Salesforce To create an external data source in Salesforce, we need to perform the following steps: Step 1 In the Salesforce Setup, type "External" in the Quick Find box and, under Develop, click External Data Sources . Step 2 Click New External Data Source . Step 3 Make sure to perform the steps below: Enter values for the External Data Source (a user-friendly name) and Name (unique external data source identifier) fields. In the Type list, select Salesforce Connect: OData 4.0 . Paste the endpoint URL into the URL field. If you want to allow changes in archived data, check the Writable External Objects checkbox. Optionally, configure other settings, like High Data Volume , depending on your data. Step 4 Next, you need to set up authentication.
If you didn't create any login credentials for the Skyvia endpoint, you can skip this step. If you did set up a username and password for the endpoint, you'll need to enter those: Choose Named Principal as the Identity Type . Set the Authentication Protocol to Password Authentication . Then, type in the Username and Password you created for the endpoint. Then, click Save . Step 5 On the opened page, select the tables you want to access and click Sync . This will create the necessary external objects automatically. Creating Tabs for External Objects Although external objects are now available in Salesforce, they are not directly visible to users through the interface. We need to create tabs to make these objects accessible from Salesforce. Step 1 In the Salesforce Setup, type "Tabs" in the Quick Find box and click Tabs . Step 2 Click New . Select the required Object , then set the Tab Style and Description . Click Next . Step 3 On the next screen, specify the tab visibility settings and click Next . Configure the tab availability for custom apps and click Save . Pros This method has the following advantages: Minimal Effort : The platform offers a highly intuitive, no-code interface that enables Salesforce to Redshift replication in under 15 minutes, even for non-technical users. Automated Syncs : Data can be configured to sync between the platforms automatically to keep the information up to date. Scalability : Besides Redshift, Skyvia supports over 200 connectors, making it easy to expand integrations as business needs evolve. Bi-directional Flow : Using Skyvia Connect, you can access your Redshift data straight from Salesforce. This solution will work great if you're looking for a hands-off, scalable integration that keeps Salesforce and Redshift data in sync with minimal setup and ongoing maintenance.
Cons This method has some cons that you should keep in mind: Additional Costs : Some features, like Salesforce Connect, are not available in all plans. This may add some additional fees. Method 3: Integration Using APIs We've already explored both manual and automatic ways of integrating Salesforce with Redshift. However, there's still one more way to bring different tools together, a traditional but still popular one: APIs, and Salesforce offers a large number of them. Here are the most popular and widely used APIs provided by Salesforce: REST API SOAP API Bulk API Such a variety gives developers room to customize the integration of Salesforce data into other apps to the desired level. Best For Perfect for teams with substantial development resources looking for a highly customizable, automated integration workflow. Prepare Your Salesforce Data for Amazon Redshift To move data from Salesforce to Amazon Redshift, you first need to plan how that data will be stored in Redshift. Since Redshift is a database, you'll need to create tables with the right columns and data types to match the structure of your Salesforce data. One important thing to know: Redshift can't take data straight from Salesforce. Instead, the data has to go through a "middle stop", like Amazon S3 or another service that Redshift works with. After that, extra tools are used to move the data from that middle stop into Redshift. Here are some services that can act as a middle stop: [Amazon S3](https://aws.amazon.com/s3/) [DynamoDB](https://aws.amazon.com/dynamodb/) [Kinesis Firehose](https://aws.amazon.com/firehose/) Before we start, you will need to do a few things: Obtain a [Salesforce security token](https://help.salesforce.com/s/articleView?language=en_US&id=sf.user_security_token.htm). You will need it to connect the platforms.
Define a [custom platform event](https://developer.salesforce.com/docs/atlas.en-us.234.0.platform_events.meta/platform_events/platform_events_define.htm) that defines the exact data to be streamed out of Salesforce. Create a [connected app](https://help.salesforce.com/s/articleView?id=sf.admin_virtual_care_connect_amazon_to_salesforce.htm&type=5). This will allow AWS to communicate with your Salesforce org. Create an [EventBridge account](https://aws.amazon.com/eventbridge/) or log into an existing one. It catches events from Salesforce (via the platform event) and knows where to send them. Enable an [Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide//Welcome.html) for EventBridge. S3 is where your records will land before loading into Redshift. Load Data from Salesforce to Amazon Redshift Here, you will find detailed instructions on the process, including minor preparatory and intermediary procedures. Establish Connection to Salesforce API Step 1 Open the [Amazon EventBridge console](https://console.aws.amazon.com/events/). Under Integration, select Connections -> Create connection . Step 2 Perform the following steps: Provide the name and description of the connection. Select Use partner template and choose Salesforce for Partner Destinations. In the Authorization endpoint field, type [https://MyDomainName.my.salesforce.com/services/oauth2/token](https://mydomainname.my.salesforce.com./services/oauth2/token), replacing the placeholder with your domain name. NOTE: The URL [will be different](https://help.salesforce.com/s/articleView?id=xcloud.remoteaccess_oauth_endpoints.htm&type=5) if you are using a sandbox, with or without enhanced domains. Step 3 Select POST from the HTTP method drop-down list. Enter the client ID and client secret associated with your Salesforce-connected app. After you're done, click Create . Create API Destination Step 1 Open the [Amazon EventBridge console](https://console.aws.amazon.com/events/).
Go to API destinations -> Create API destination.

Step 2. Provide the name and description of the API destination. In the API destination endpoint field, paste the following link: https://MyDomainName.my.salesforce.com/services/data/v54.0/sobjects/MyEvent__e, where "MyEvent__e" is the platform event to which you want to send information.

Step 3. Select POST from the HTTP method drop-down list. In the Invocation rate limit field, type 300. Select the Use an existing connection option and specify the Salesforce connection created in the previous step. Click Create.

Create a Rule

This step organizes the sending of events to Salesforce once an Amazon S3 object is created.

Step 1. Open the [Amazon EventBridge console](https://console.aws.amazon.com/events/). Go to Rules -> Create rule.

Step 2. Provide the name and description of the rule. Set Default for the Event bus option and select Rule with an event pattern in the Rule type field. Click Next.

Step 3. Select Other in the Event source field. In the Event pattern field, insert this code:

{
  "source": ["aws.s3"]
}

Click Next.

Step 4. Choose EventBridge API destination in the Target types and specify the existing API destination for Salesforce created in the previous step. In the Execution role, select Create a new role for this specific resource. Click Next.

Step 5. Review your settings and click Create rule.

To test your rule, create an [Amazon S3 object](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) by uploading a file to an EventBridge-enabled bucket. The information about the created object will be sent to the Salesforce platform event.

Load Data From S3 to Redshift

At this point, you have already set up a flow that responds to new objects in S3 storage; now it's time to move the data into Redshift. There's no one-size-fits-all approach here: you get to choose the level of complexity and automation that fits your use case.
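The usual way to move staged objects from S3 into Redshift is Redshift's COPY command. Below is a minimal sketch that composes such a statement; the table, bucket, and IAM role names are hypothetical placeholders, not values from this guide.

```python
def build_copy_statement(table: str, s3_uri: str, iam_role: str) -> str:
    """Compose a Redshift COPY statement that loads JSON records from S3.

    COPY reads directly from the bucket; FORMAT AS JSON 'auto' maps
    matching JSON keys to table columns.
    """
    return (
        f"COPY {table}\n"
        f"FROM '{s3_uri}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        f"FORMAT AS JSON 'auto';"
    )

# Hypothetical names: substitute your own table, bucket, and role ARN.
sql = build_copy_statement(
    "salesforce_contacts",
    "s3://my-sf-staging-bucket/contacts/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
print(sql)
```

In practice, you would run this statement through a Redshift client (for example, a PostgreSQL driver pointed at your cluster or the Redshift Data API), with the IAM role granted read access to the staging bucket.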
For that, please check the [official documentation for transferring data from Amazon S3 to Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-data.html).

As you can see, Salesforce to Redshift integration with APIs isn't simple. This method is definitely more sophisticated than manual or automated data transfer with third-party tools. However, integration with APIs has its benefits: it offers a degree of customization that isn't available with other approaches.

Pros

This method has the following advantages:

- High Customization: APIs allow precise tailoring of the process to your business workflow.
- Real-time Sync: EventBridge can be set up to trigger data transfers in real time when specific actions happen.
- Security: You can enforce your own authentication, authorization, and data-handling protocols.

Cons

This method has some disadvantages:

- Complex Setup: Configuring Salesforce, S3, EventBridge, and Redshift requires a deep understanding of all these systems.
- Harder to Maintain: Unlike third-party tools, API integrations require manual updates and can break with version changes.

Conclusion

In this guide, you learned how to integrate Salesforce and Redshift, including how to manually load data from Salesforce to Redshift using the Salesforce Data Loader Export wizard.

Connecting Salesforce to Redshift will maximize the value of your business data. This integration can help eliminate silos, enable real-time analytics, and support smarter, faster decisions. For the most efficient integration, choose a solution that offers automation, ease of use, and scalability, skipping the need for complex coding or maintenance. A cloud-based, no-code platform like [Skyvia](https://skyvia.com) handles the heavy lifting, letting your team focus on what matters.

FAQ for Salesforce to Redshift

Why should I connect Salesforce to Redshift?
Connecting these platforms helps you combine CRM data with other sources for advanced analytics, better reporting, and a complete view of your business and customers.

Can I automate the data transfer process between Salesforce and AWS Redshift?

Yes. Using ETL tools like Skyvia or APIs, you can automate regular syncing of data between Salesforce and Redshift.

What are the benefits of using an ETL tool for this connection?

ETL tools simplify integration and reduce manual effort. You can easily schedule and transform your data transfers, and error-handling features will warn you if something goes wrong.

How can I ensure the security of my data during the transfer process?

For API or native integrations, use encrypted connections (HTTPS, IAM roles), secure access credentials, and limit access via authentication and IP whitelisting. For third-party integrations, review the provider's security features.

What are some common challenges in connecting Salesforce to AWS Redshift?

You can run into data format mismatches, API limitations, handling large datasets, and maintaining sync accuracy over time. A third-party integration tool can take care of these before problems occur.

[Sergey Bykov](https://skyvia.com/blog/author/sergeyb/)

Sergey combines years of experience in technical writing with a deep understanding of data integration, cloud platforms, and emerging technologies.
Known for making technical subjects approachable, he helps readers navigate complex tools and trends with confidence.

Salesforce and Amazon S3 Integration: Implementation Guide

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), March 14, 2025

Processing huge volumes of information across different systems is tedious and time-consuming, especially when companies keep CRM data in Salesforce and storage-heavy files in AWS S3. Manually transferring records while keeping everything consistent and synchronized is inefficient and error-prone.
This is where an AWS Salesforce connector comes to the rescue. By connecting these two platforms, companies can store, support, and analyze insights seamlessly while eliminating manual exports and storage limitations. Whether the organization wants to offload CRM attachments to S3, back up Salesforce records, or analyze customer information at scale, such an integration can optimize workflows, cut costs, and enhance accessibility.

This guide covers why businesses need Salesforce to AWS S3 integration, its key benefits, and step-by-step methods to set it up. We'll also explore different approaches, best practices, and common use cases to help you choose the right solution.

Table of Contents

- What is Salesforce?
- What is Amazon S3?
- The Benefits of Salesforce to Amazon S3 Integration
- How to Set Up Salesforce to AWS S3 Integration
- Using Amazon AppFlow
- Using Cloud Data Import Tool: Skyvia
- Use Cases for AWS Salesforce Connector with Amazon S3
- Conclusion

What is Salesforce?

[Salesforce](https://www.salesforce.com/) is a popular cloud-based Customer Relationship Management (CRM) platform that helps businesses manage customer interactions, track sales, and automate workflows in one place. Whatever the company size, from a small startup to a global enterprise, Salesforce makes it easy to store customer insights, analyze them, and improve team collaboration.

With its customizable dashboards, automation tools, and AI-powered analytics, organizations can streamline sales, marketing, and customer service processes, boosting efficiency and customer satisfaction. Since everything is cloud-based, teams can access critical information anytime, anywhere, ensuring seamless collaboration. [Pricing](https://www.salesforce.com/pricing/) is tailored to different business needs.

What is Amazon S3?
[Amazon S3](https://aws.amazon.com/s3/) is a scalable, cloud-based storage platform for storing, managing, and retrieving large amounts of information. It provides secure, reliable, cost-effective storage for all kinds of digital assets, from backup files and documents to big data analytics. With its pay-as-you-go [pricing](https://aws.amazon.com/s3/pricing/), businesses pay only for the storage they use, which keeps the service flexible and cost-efficient.

Companies use S3 for information archiving, content distribution, and as a centralized repository for structured and unstructured data. They rely on it because it integrates seamlessly with other AWS services, ensures high durability and security, and makes data accessible from anywhere. Whatever the organization's size, from a small startup to a large enterprise, S3 simplifies storage management while keeping information secure and accessible.

The Benefits of Salesforce to Amazon S3 Integration

Salesforce to AWS S3 integration is a boon for businesses handling large amounts of CRM data, files, and attachments. There's no need to overload Salesforce storage or deal with manual data exports: just transfer and manage records in S3 while keeping everything accessible when required. This integration is about:

- Enhancing scalability.
- Ensuring compliance.
- Optimizing costs.

So it's a strong choice for businesses looking to improve information handling and operational performance. Below are some of the key benefits of connecting Salesforce with Amazon S3.

Scalability and Integration

Salesforce storage is limited, and upgrades get expensive as data volume grows. Businesses can integrate it with Amazon S3 to avoid this bottleneck and offload large files, attachments, and historical information. This keeps Salesforce lightweight and fast while maintaining seamless access to essential records.
S3 is designed for virtually unlimited storage and scalability, so companies never need to worry about running out of space or hitting limits as their data expands. The integration also ensures smooth connectivity with other AWS services, making it easy to extend data workflows and analytics beyond Salesforce.

Compliance and Data Retention

Industries like finance, healthcare, and legal services must meet data retention and compliance regulations. They could keep all historical records in Salesforce, but that's costly and inefficient, so moving them to S3 provides long-term storage that meets regulatory requirements.

Amazon S3 offers [encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html), [access controls](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-management.html), and [versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html), making it easier to protect sensitive data and maintain compliance. Businesses can also set automated [lifecycle policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) that archive or delete old or unnecessary information based on retention rules.

Cost Optimization and Storage Efficiency

As data grows, so do the bills, and [Salesforce](https://help.salesforce.com/s/articleView?id=xcloud.overview_storage.htm&type=5) charges high fees for additional storage. Keeping files and records in [Amazon S3](https://help.salesforce.com/s/articleView?id=analytics.bi_integrate_connectors_S3.htm&type=5) instead helps reduce storage fees significantly while keeping the data accessible when needed. You can cut costs further by moving less frequently accessed data to lower-cost storage classes, such as [S3 Glacier](https://docs.aws.amazon.com/AmazonS3/latest/userguide/glacier-storage-classes.html), for long-term archiving, optimizing spend with S3's tiered storage options.
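A lifecycle policy of the kind described above is just a set of JSON rules attached to a bucket. Here is a hedged sketch of one rule that transitions objects to S3 Glacier after 90 days and deletes them after roughly seven years; the prefix and retention periods are illustrative assumptions, and in practice you would attach the resulting dict to a bucket (for example, via boto3's put_bucket_lifecycle_configuration).

```python
def archive_rule(prefix: str, glacier_after_days: int, expire_after_days: int) -> dict:
    """Build one S3 lifecycle rule: transition to Glacier, then expire."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": glacier_after_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_after_days},
    }

# Hypothetical retention policy: Glacier after 90 days, delete after ~7 years.
lifecycle = {"Rules": [archive_rule("salesforce-exports/", 90, 7 * 365)]}
```

Retention periods like these should come from your own compliance requirements, not from this sketch.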
So organizations pay only for the storage they actually use, and data management becomes more efficient and budget-friendly. By integrating Salesforce with Amazon S3, users can improve scalability, reduce costs, and maintain compliance without compromising data accessibility.

How to Set Up Salesforce to AWS S3 Integration

This section considers two popular options for connecting Salesforce with Amazon S3. Amazon AppFlow automates data transfers without writing code, while Skyvia's Cloud Data Import Tool provides flexibility in data migration, transformation, and scheduling. Each approach has benefits, depending on your need for automation, customization, or real-time syncing. Below we break down both methods so organizations can choose the best solution for their business and start syncing Salesforce with S3 in no time.

Using Amazon AppFlow

Amazon AppFlow is AWS's native method of transferring data from Salesforce to S3. It offers an integration platform for moving data from third-party applications to Amazon S3.

Note: To perform a transfer using AppFlow, create a Salesforce developer account connected to the organization from which you want to transfer the data. Then perform the following steps:

- Create an S3 bucket to hold the data from Salesforce.
- Open Amazon AppFlow from the AWS console and create a new flow.
- Provide the flow details and select the data encryption options and flow tags.
- Select Salesforce as the data source, click Connect, specify the Salesforce environment type and connection name, and log into the Salesforce developer account.
- Choose either Salesforce objects or events and select the desired option from the list.
- Select Amazon S3 as the destination, specify the destination bucket, and configure additional settings as required.
- Choose the flow trigger, either manual or based on a schedule, then go to the next page.
- Configure the mapped data fields by selecting Map all fields directly or by choosing source fields manually in the Choose source fields drop-down menu. Specify any data validations, then continue to the next page.
- Finally, create any necessary filters, then continue to review and create the flow.

Note: Read more in the [AWS documentation](https://docs.aws.amazon.com/appflow/latest/userguide/flow-tutorial-salesforce-s3.html) on transferring data from Salesforce to Amazon S3 using Amazon AppFlow.

Using Cloud Data Import Tool: Skyvia

Another popular approach to moving data from Salesforce to S3 is a convenient, no-code, user-friendly cloud data integration tool like [Skyvia](https://skyvia.com/data-integration). Beyond cloud platforms, it supports hybrid environments and legacy systems, offering flexibility for businesses with complex infrastructures. The platform provides [200+](https://skyvia.com/connectors) pre-built connectors for popular cloud services, APIs, endpoints, ODBC, and other integration methods, ensuring seamless data exchange across modern cloud applications, on-premises databases, and custom-built systems.

Skyvia offers [ETL](https://skyvia.com/learn/what-is-elt), [ELT](https://skyvia.com/blog/elt-vs-etl/), [reverse ETL](https://skyvia.com/learn/what-is-reverse-etl), replication, data warehousing, [synchronization](https://skyvia.com/data-integration/synchronization), and complex data flows involving more than two data sources. [Automated](https://www.youtube.com/watch?v=ld6JdIk8HA4&t=6s) data pipelines, transformations, and scheduling simplify data migration while maintaining security and reliability.

Let's review a common scenario of data export from Salesforce to AWS S3 using Skyvia:

- Go to the [Skyvia sign-up](https://app.skyvia.com/register?) and create a new Salesforce connection: specify the environment, choose OAuth 2.0 as the authentication method, set the cache settings, click Sign In with Salesforce, and click Create Connection.
- Create a new Amazon S3 connection, providing your IAM access keys, which can be found in the IAM Management Console within the AWS Console. Choose the correct region, type in the name of the destination bucket, and click Create Connection again.
- Create a new export integration package. Select the Salesforce connector from the connection list. Change the target type to CSV To storage service, then select the Amazon S3 connector created earlier. Specify the desired folder, code page, and options if required, then add a new task.
- In the Task Editor menu, select the desired object and properties from Salesforce; define a file name, filters, order-by rules, and compression settings; then save the task.
- Finally, save the export package and run the export as desired, or schedule the run times via the package schedule menu.

Why Choose Skyvia Over Amazon AppFlow?

Comparing Skyvia's cloud data integration capabilities with Amazon AppFlow, Skyvia is the more flexible and powerful option for integrating Salesforce with Amazon S3. It's one of the most popular no-code [ETL tools](https://skyvia.com/blog/etl-tools/) thanks to its usability and simplicity, even for users without technical skills. The platform:

- Allows customized data transfers.
- Supports multiple Salesforce objects in a single export.
- Provides advanced filtering, SQL-based queries, built-in file compression, and reusable connections.

These capabilities make Skyvia the better choice for businesses needing more control, efficiency, and customization in their data integration.

Use Cases for AWS Salesforce Connector with Amazon S3

Offloading Attachments and Large Files from Salesforce. Salesforce storage costs can rise quickly, especially with large attachments, contracts, and media files.
However, companies can reduce storage expenses by offloading these files to Amazon S3 while keeping the data accessible through Salesforce when needed.

Automated Backups and Data Archiving. Companies need to regularly back up Salesforce information to prevent data loss and meet compliance requirements. With Amazon S3, businesses can automate backups, store historical records securely, and ensure long-term data retention without Salesforce's storage limits.

Analytics and Business Intelligence. Salesforce's built-in reporting works well for basic insights, but businesses often need deeper analysis and large-scale data processing. By syncing Salesforce data to Amazon S3, companies can use AWS analytics tools like Athena, Redshift, and QuickSight to uncover trends, predict outcomes, and gain more advanced business insights.

Customer Data Synchronization Across Platforms. For companies using multiple cloud applications, syncing Salesforce data to S3 enables centralized data storage. This integration helps businesses merge Salesforce data with ERP, finance, or marketing platforms, ensuring a single source of truth across all departments.

Compliance and Regulatory Data Storage. Finance, healthcare, legal services, and similar industries require strict data retention policies. Keeping Salesforce records in Amazon S3 ensures secure, long-term storage, supports audit trails, and helps businesses comply with GDPR, HIPAA, and other regulations.

Conclusion

If you're looking to optimize storage, automate data management, and improve scalability, integrating Salesforce with Amazon S3 is a smart move. Whether you need to offload attachments, back up critical records, or enhance analytics, this integration ensures cost-effective and secure data handling. Tools like Amazon AppFlow offer a quick, no-code setup, while Skyvia's cloud data integration capabilities provide greater flexibility and customization for complex data needs.
By syncing Salesforce data with S3, companies can:

- Reduce storage costs.
- Improve accessibility.
- Maintain compliance with industry regulations.

No matter the approach, this integration helps businesses streamline workflows, ensure data consistency, and unlock deeper insights.

FAQ for Salesforce and Amazon S3

Why should I integrate Salesforce with Amazon S3?

Integrating Salesforce with Amazon S3 helps businesses offload large files, automate backups, and reduce Salesforce storage costs. It also enables secure long-term data storage, advanced analytics, and compliance with industry regulations.

What are the best methods for integrating Salesforce with Amazon S3?

Two common approaches are Amazon AppFlow, a no-code automation tool, and Skyvia's Cloud Data Import Tool, which offers more customization, transformation options, and scheduling flexibility.

Can I automate data transfers between Salesforce and Amazon S3?

Yes. Amazon AppFlow and Skyvia allow scheduled or event-triggered data transfers, ensuring data moves automatically without manual intervention.

How does this integration help with compliance and data retention?

Amazon S3 provides secure, long-term storage with encryption, audit trails, and lifecycle policies, helping businesses meet GDPR, HIPAA, and other regulatory requirements while keeping historical Salesforce data accessible.

Will this integration impact Salesforce performance?

Yes, in a positive way. Businesses can free up Salesforce storage by offloading large files and historical data to S3, improving system performance and cost efficiency.

Can I use this integration for analytics and reporting?

Absolutely. Storing Salesforce data in Amazon S3 allows integration with AWS analytics tools like Athena, Redshift, and QuickSight, enabling advanced reporting, trend analysis, and forecasting.

How do I choose the right integration method?

If you need a simple, no-code solution, Amazon AppFlow is a great choice.
If you require more flexibility, customization, and control over data transformation, Skyvia's Cloud Data Import Tool is a better option.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

PostgreSQL Salesforce Integration: The Complete Guide

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), April 2, 2025
Do you need to sync PostgreSQL with Salesforce? If so, you're leveling up your business with tailor-made reports, automation, and analytics, or simply saving on Salesforce storage costs. Your Salesforce edition may limit what you can do with the raw information available; even with Salesforce Unlimited, you may need reports and functionality that aren't available out of the box. If someone on your team knows PostgreSQL and Salesforce, this guide will walk them through all the available methods.

Here's what we're going to discuss:

Table of Contents

- Quick Overview of Salesforce
- Quick Overview of PostgreSQL
- What is PostgreSQL and Salesforce Integration
- What Sort of Data Can Salesforce Share With PostgreSQL?
- Considerations Before Data Integration
- Summary of Methods to Connect Salesforce and PostgreSQL
- Method 1. Integration of PostgreSQL and Salesforce Using Third-Party Tools
- Method 2. Integrate PostgreSQL and Salesforce Using Custom Coding and Libraries
- Method 3. Native Salesforce Features
- Conclusion

Quick Overview of Salesforce

[Salesforce](https://www.salesforce.com/) is the #1 Customer Relationship Management (CRM) system, good for small businesses and big enterprises alike. You can handle prospective and existing customers with confidence using the various Salesforce products. It started in 1999, and today more than 150,000 companies trust it for marketing, sales, service, and more.

On the technical side, Salesforce has an open architecture. Using Application Programming Interfaces (APIs), you can share its data with other applications.
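As a taste of that open architecture, the Salesforce REST API exposes a query endpoint that accepts a URL-encoded SOQL statement. The sketch below only composes the request URL; the org domain, API version, and query are illustrative assumptions, and a real request would also carry an OAuth bearer token.

```python
from urllib.parse import quote


def soql_query_url(instance: str, api_version: str, soql: str) -> str:
    """Build the REST API endpoint for a SOQL query against a Salesforce org."""
    return f"https://{instance}/services/data/v{api_version}/query?q={quote(soql)}"


# Hypothetical org domain and query; send the result with an OAuth bearer token.
url = soql_query_url(
    "mycompany.my.salesforce.com",
    "60.0",
    "SELECT Id, Name FROM Account LIMIT 10",
)
```

Libraries such as simple-salesforce wrap this same endpoint, handling authentication and pagination for you.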
Quick Overview of PostgreSQL

[PostgreSQL](https://www.postgresql.org/) is a powerful, open-source object-relational database that has been in active development for over 35 years. You can use it alongside your app for storage or as a [data warehouse](https://skyvia.com/learn/what-is-data-data-warehouse). It's a favorite among developers, according to the [Stack Overflow Survey 2024](https://survey.stackoverflow.co/2024/technology#most-popular-technologies-database), and among the top 5 databases in the [March 2025 DB-Engines](https://db-engines.com/en/) rankings.

PostgreSQL runs on all major operating systems and is also offered as a managed database service by major cloud providers. Querying with Structured Query Language (SQL) is its core feature, but its extensibility is what makes it stand out among other databases.

What is PostgreSQL and Salesforce Integration

PostgreSQL and Salesforce integration is the process of linking a PostgreSQL database and Salesforce CRM to share data and extend functionality. You can [export data from Salesforce](https://skyvia.com/blog/export-data-from-salesforce/) CRM to a PostgreSQL database, or Salesforce can import data from a PostgreSQL database. You may be wondering when this will be applicable to you.

7 Reasons to Integrate PostgreSQL and Salesforce

Here are seven compelling reasons to integrate the two.

Centralized Data for Analytics & Reporting. Salesforce holds valuable customer and sales data, but reporting within it can be limited. By syncing data to PostgreSQL, businesses can use BI tools like Tableau, Power BI, or custom SQL queries to get deeper insights.

Merging Salesforce Data with Other Business Systems. Companies often store data in multiple systems: ERP, marketing platforms, or financial software. This is a major gap in Salesforce reporting, but a PostgreSQL database can act as a central hub, making it easier to join Salesforce data with other sources for a complete business view.
Historical Data Storage & Compliance. Salesforce may not retain long-term historical records due to storage limits and costs. Offloading older data to PostgreSQL eases these limits. It also helps with compliance (GDPR, HIPAA) and long-term analytics while keeping Salesforce lean.

Real-Time Data Synchronization for Automation. Some business processes require real-time updates. Syncing Salesforce data with PostgreSQL enables faster automation, such as triggering actions in external apps based on Salesforce updates.

Scalability for High-Volume Transactions. PostgreSQL is built for handling large datasets efficiently. For businesses with high transaction volumes (e.g., e-commerce, finance), offloading data from Salesforce to PostgreSQL improves performance and scalability.

Data Backup & Disaster Recovery. Salesforce has its own backup options, but they can be expensive and limited. A PostgreSQL integration allows companies to maintain a separate, regularly updated backup for disaster recovery.

Reducing Salesforce Storage Costs. Salesforce storage can be costly, and data archiving is often necessary. By transferring older or less frequently used records to PostgreSQL, companies save on Salesforce storage fees while still retaining access to important data.

While the list above is an exciting path to extending Salesforce, you need to know what Salesforce can actually share with PostgreSQL. Let's discuss that next.

What Sort of Data Can Salesforce Share With PostgreSQL?

If you're wondering what you can move from Salesforce to PostgreSQL and vice versa, here are some of the main categories:

- Standard and Custom Objects: Objects like Contacts, Accounts, and Leads. If you link Salesforce with QuickBooks, you can store invoices and other financial information in Salesforce custom objects, and you can integrate with those custom objects as well.
- Audit and Log Data: Login History (who accessed Salesforce and when), field history tracking (changes made to key records), and event logs.
- Activities and Interaction Data: Emails, calls, and meetings (logged activities from Salesforce) and tasks (pending follow-ups and completed actions).
- Sales & Marketing Data: Campaigns (marketing efforts and responses), Products and Price Books (items, pricing, and discounts), and Quotes and Orders (sales transactions and fulfillment).
- User and Role-Based Data: Users & Roles (Salesforce users and their permissions) and Sharing Rules (access control details).
- Integration & External Data: Connected Apps (data from third-party tools linked to Salesforce, e.g., QuickBooks) and Custom Reports & Dashboards (data pulled from multiple objects for deeper insights).

Considerations Before Data Integration

Things can go wrong during Salesforce to PostgreSQL integration, so it's recommended that you consider factors like data compatibility, mapping, error handling, performance, and security. These and more are covered in the following sections.

1. Data Selection & Scope

Choosing what data to integrate takes a few considerations:

- What data do you need? Standard objects (Accounts, Contacts) or custom objects (Invoices, Orders)?
- How much data? A few thousand records, millions, or more?
- How often does it change? Real-time updates or daily batch sync?

Tip: Avoid syncing unnecessary data to reduce storage costs and complexity.

2. Data Compatibility

Salesforce native tools accept data from CSV, XML, JSON, and Parquet files. Third-party, no-code solutions extend this list even further and allow integration with PostgreSQL with little or no code. Just make sure you map the correct Salesforce object field to the correct PostgreSQL table column: the data type and size should be compatible.
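One way to keep that mapping honest is to write it down explicitly before generating target tables. Below is an illustrative sketch; the type names and mappings are assumptions for demonstration, not a complete Salesforce type catalog.

```python
# Illustrative mapping from common Salesforce field types to PostgreSQL types.
SF_TO_PG = {
    "id": "varchar(18)",       # Salesforce IDs are 15/18-character strings
    "checkbox": "boolean",
    "number": "numeric",       # numeric, not integer: values like 34500.01 must fit
    "currency": "numeric(18,2)",
    "date": "date",
    "datetime": "timestamptz",
    "text": "text",
}


def pg_column(sf_field: str, sf_type: str) -> str:
    """Render one column definition, failing loudly on an unmapped type."""
    if sf_type not in SF_TO_PG:
        raise ValueError(f"No PostgreSQL mapping defined for Salesforce type '{sf_type}'")
    return f"{sf_field.lower()} {SF_TO_PG[sf_type]}"
```

Raising on unknown types, rather than guessing a default, surfaces mapping gaps at design time instead of as load failures later.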
Unlike CSV data, PostgreSQL imposes strict column data types and sizes. For example, a Salesforce value of 34500.01 won't fit in a PostgreSQL integer column, and text data is not compatible with numeric columns at all.

Tip: Review the target data types and sizes for your [data mapping](https://skyvia.com/learn/what-is-data-mapping). Make sure whole numbers go to integer columns, text to text columns, and so forth.

3. Data Storage & Performance in PostgreSQL

When a PostgreSQL database is your target and reports and other functionality will come from this database, consider the following:

- Schema Design: Should you mirror Salesforce tables or normalize the structure for better queries? Note that normalizing the structure requires database design skills.
- Indexes: Add indexes on frequently queried fields to speed up lookups.
- Partitioning: Helps with large tables to improve performance.
- Incremental Updates: Should you update only changed records or overwrite the entire dataset?

Tip: Use timestamps or record IDs to track changes efficiently. Better yet, use [Change Data Capture](https://skyvia.com/learn/what-is-change-data-capture) (CDC) if your tool supports it. This handles changes efficiently and improves performance.

4. Data Security & Compliance

Personal or sensitive data moving in or out of Salesforce needs special handling to keep it away from prying eyes. Consider the following security and compliance requirements:

- Encryption: Protect sensitive data like customer details.
- Access Controls: Limit who can read/write data in PostgreSQL.
- GDPR & HIPAA Compliance: Ensure regulations are followed if handling sensitive data.
- OAuth & API Authentication: Use secure tokens for Salesforce API access.

Tip: Enable Salesforce Field-Level Security to control what data gets exported.

5. Handling Data Changes & Deletions

Changes in Salesforce include updated and deleted data.
Know how you will handle them by considering the factors below:

Do you need to export deleted Salesforce records into PostgreSQL?

- Soft Deletes: Salesforce marks records as deleted (isDeleted=true).
- Hard Deletes: Records are permanently removed (may cause sync issues).

How will updates be handled?

- UPSERT (Update or Insert): Keeps PostgreSQL in sync with new and modified records.
- Full Refresh or Full Load: Drops and reloads all data (costly but ensures accuracy).

Tip: If using Change Data Capture (CDC), you can track only modified records.

6. Error Handling & Logging

Whether you're coding or not, anticipate that syncs can fail. It's better to handle failures than to ignore them. Here are a few suggestions:

- Create an error log table in PostgreSQL.
- Capture API errors and retry failed records.
- Set up alerts for failures (e.g., missing records, API rate limits).

Tip: Use Salesforce Debug Logs and PostgreSQL logs to troubleshoot sync issues.

7. Cost Considerations

Costs depend on what you already have. Consider the following:

- Salesforce API Limits: Salesforce restricts daily API calls based on your edition. If API access is not included, integration will be difficult; consider upgrading your Salesforce edition or buying API access. Otherwise, you are limited to manual exports to CSV files, followed by scripts to import the data into PostgreSQL.
- PostgreSQL Storage Costs: Large datasets may require optimized indexing and partitioning. Indexes consume storage too, adding to your storage cost whether your PostgreSQL instance is on-premises or deployed in the cloud.
- Developer Effort vs. Integration Tools: Is it cheaper to use an ETL tool instead of custom coding? [ETL tools](https://skyvia.com/blog/etl-tools/) may require subscriptions, but development is faster. Coding takes longer, adding to your cost per hour or per month per developer.
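Taken together, the UPSERT and error-logging suggestions above can be sketched in a few lines of Python. This is a hypothetical illustration: the `contacts` table and its columns mirror the middleware example later in this guide, and in a real run `execute` would be a psycopg2 cursor's `execute` while `log_error` would insert into your own error-log table.

```python
# Sketch: incremental UPSERT of Salesforce Contacts into PostgreSQL,
# with per-record error logging so one bad row doesn't abort the batch.
# Table/column names are placeholders; adapt to your own schema.

UPSERT_SQL = """
INSERT INTO contacts (salesforce_id, first_name, last_name, email, phone)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (salesforce_id) DO UPDATE
   SET first_name = EXCLUDED.first_name,
       last_name  = EXCLUDED.last_name,
       email      = EXCLUDED.email,
       phone      = EXCLUDED.phone;
"""

def incremental_soql(last_sync_iso: str) -> str:
    """CDC-style filter: fetch only Contacts modified since the last sync."""
    return ("SELECT Id, FirstName, LastName, Email, Phone FROM Contact "
            f"WHERE LastModifiedDate > {last_sync_iso}")

def to_row(record: dict) -> tuple:
    """Map one Salesforce REST record (a dict) to UPSERT parameters."""
    return (record["Id"], record.get("FirstName"), record["LastName"],
            record.get("Email"), record.get("Phone"))

def sync(records, execute, log_error):
    """Upsert each record; on failure, log the record ID instead of aborting."""
    ok = 0
    for record in records:
        try:
            execute(UPSERT_SQL, to_row(record))
            ok += 1
        except Exception as exc:  # in practice, catch psycopg2.Error
            log_error(record["Id"], str(exc))
    return ok
```

Dropping the `WHERE` clause turns this into a full load; keeping the timestamp filter makes each run incremental, in the spirit of CDC.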
Tip: Estimate Salesforce API usage before deciding on a sync frequency. Plan for storage, and compare the cost of custom coding vs. integration tools.

Aside from the list above, you also need to consider which integration method to use for your project. The following sections will help you choose.

Summary of Methods to Connect Salesforce and PostgreSQL

Each integration method has its good and not-so-good sides; pick the one that fits your needs. To give you a quick idea of the different integration methods, below is a comparison table:

| Method Group | Method | Best For | Skill Level | Customizability |
| --- | --- | --- | --- | --- |
| Using 3rd-Party Tools | No-code integration tools (Skyvia, Matillion, Hevo, Apache NiFi) | Quick implementations and organizations lacking dedicated developers | Low to Medium | Medium to High |
| Custom Coding | Scripts, libraries, and APIs (REST, SOAP, Bulk) | Organizations needing full control of the integration process, with developers skilled in languages and Salesforce libraries (e.g., Python / simple-salesforce) | High | Very High |
| Custom Coding | Apex coding | Organizations with skilled Apex developers needing complete control of the integration process | High | Very High |
| Salesforce Native Tools | [Salesforce Connect](https://skyvia.com/blog/salesforce-connect-guide/) | Organizations with small datasets to link and those avoiding Salesforce storage costs | Medium | Low |

Let's explore each of the methods in the following sections.

Method 1. Integration of PostgreSQL and Salesforce Using Third-Party Tools

Third-party tools offer a balance between ease of use and flexibility. Some have graphical user interfaces with components that users drag and drop onto a canvas, sometimes extended with a bit of scripting or formulas inside those components. Some examples of third-party integration tools are MuleSoft, Matillion, Hevo Data, and Skyvia.
Step-by-step Guide Using Skyvia

In this section, we use Skyvia as our third-party, no-code solution for Salesforce to PostgreSQL integration and vice versa. Skyvia is a cloud-first data platform that offers easy integration through a graphical user interface. It can integrate hundreds of data sources, including Salesforce and PostgreSQL. You can import, export, migrate, back up, replicate, and more with Skyvia. Skyvia is also rated one of the easiest ETL tools by users, [according to G2](https://www.g2.com/products/skyvia/reviews#reviews). As you will see, integrations in Skyvia take only a few steps with very straightforward interfaces.

To [integrate PostgreSQL and Salesforce with Skyvia](https://skyvia.com/data-integration/integrate-salesforce-postgresql), you need two connections: one for Salesforce and the other for PostgreSQL. Then, depending on your objective, choose either an Import or a [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) integration. If you need fine control involving several Data Flows, you can use a [Control Flow](https://docs.skyvia.com/data-integration/control-flow/).

How to Create Skyvia Connections

Skyvia connections are like keys to enter a data source. To create a connection, sign in to your Skyvia account and follow the steps below:

1. Click the + Create New button at the top of the page, then click Connection.
2. Select from the list of connectors. Filter the list by typing 'sales' or 'salesforce' into the filter box. Then, click Salesforce.
3. Fill out the connection form and sign in to Salesforce to get an OAuth token.
4. Click Test Connection. You will see "Connection is successful" in the upper right once the test passes.
5. Rename the Untitled connection with your desired connection name (e.g., Salesforce prod, Salesforce test, etc.).
Below is my Salesforce connection:

6. Finally, click the Create Connection button to save your new connection. (Click Save Connection when modifying an existing connection.)
7. Repeat steps 1 to 6, but choose the PostgreSQL connector and fill out the connection form. Have your valid PostgreSQL credentials ready.

Below is my successful Supabase cloud PostgreSQL connection:

This is an easy configuration, but if you have connection problems, consult your system administrator.

How to Import Data from PostgreSQL to Salesforce Using Skyvia

Skyvia Import is a simple way to get data from a source to a destination. In this case, we're going to read some data from PostgreSQL and write it to Salesforce. The steps are simple:

1. Click + Create New, and under INTEGRATION, select Import.
2. Choose Data Source as the Source Type.
3. Select the PostgreSQL connection we created earlier (Supabase.com) from the dropdown list as the Source.
4. Choose the Salesforce connection we created earlier (salesforce-developer) as the Target.
5. Click Add New to add a new task, then choose the PostgreSQL contacts table on the next page.
6. Choose a State Filter: All, Inserted, or Updated. All is the default and means all rows in the source table.
7. Click Next step and select the target Salesforce object from the dropdown list. In this case, we will use Contacts.
8. Choose the Operation (Insert, Update, Upsert, Delete).
9. Map the source columns (PostgreSQL table) to the target columns (Salesforce Contacts). Source columns with the same names as target columns map automatically. See a sample below:
10. Click Save Task. Repeat steps 5 to 10 for more PostgreSQL tables and Salesforce objects, if applicable. In our sample, we only have one table, so it ends after saving the first task.
11. Give your Skyvia Import a name and click Create to save it.
See the final configuration below:

If you want to run this import regularly, create a schedule by clicking Schedule in the upper-left corner of the page. Otherwise, run the Import integration by clicking Run in the upper-right corner of the page. Below is a sample of setting up a midnight schedule that runs daily.

How to Export Data from Salesforce to PostgreSQL Using Skyvia

We're going to export the Contacts data from Salesforce into the PostgreSQL contacts table. Logically, you could use the Skyvia Import and reverse the source and target connections, so the Source becomes Salesforce and the Target becomes PostgreSQL. However, in our next example, we will use Skyvia Data Flow. Check out the following steps:

1. Click + Create New and, under INTEGRATION, select Data Flow. You will be redirected to a blank canvas.
2. Drag the Source component to the canvas. Configure the source by giving it a name, setting the connection to Salesforce, and choosing the Execute Query action. See a sample below:
3. Click Open Query Builder to set the Salesforce object to Contacts and select the fields. You can optionally set filters and sort order here too. Then, click Apply. See the sample below:
4. Drag the Target component onto the canvas and connect the Source to the Target. To connect, click the small circle below the Source component and drag it onto the Target component.
5. Configure the Target by specifying the name, connection, action, and table. See a sample below:
6. Under Parameters, click the pencil icon to map the source columns to the target columns. Another window will open. Click the Auto Mapping icon to map same-name columns, or set the mapping manually by clicking a PostgreSQL column and then clicking a Salesforce column under Properties. Do this for all PostgreSQL and Salesforce columns. Then, click Apply. See a sample below:
7. Click Save and name your Data Flow integration.
You can also set a schedule by clicking the Schedule icon in the upper-right corner of the page, setting up your preferred schedule, and clicking Save. The schedule configuration is the same as for the Skyvia Import schedule.

This is a simple example of a Data Flow. Consult the [documentation](https://docs.skyvia.com/data-integration/data-flow/) for more information.

Pros of Using Third-Party Tools

- Ease of Use. Third-party tools commonly use a graphical user interface; some add drag-and-drop visual components to diagram the data flow. Pre-built connectors, templates, and components reduce development time compared to custom coding.
- Error Handling and Logging. Tools let you handle errors and logging through visual components, making it easier to troubleshoot failed syncs.
- Scalability. Advanced tools can handle increasing data volumes as your business grows.
- Advanced Scheduling. Third-party tools have advanced scheduling for running batch processes unattended, reducing manual work and human errors.

Cons of Using Third-Party Tools

- Cost: Tools with high subscription fees are not a good fit for businesses with limited budgets.
- Learning Curve: You may be unfamiliar with the tool at first, so training is required.
- Customization Limits: Tool features may not cover every unique business requirement, so users may resort to workarounds that are not always efficient.
- Vendor Dependency: Third-party vendors have their own update schedules, support policies, and downtime that may disrupt your operations from time to time.

Method 2. Integrate PostgreSQL and Salesforce Using Custom Coding and Libraries

Another method to integrate PostgreSQL with Salesforce is custom coding. You can use either Apex from Salesforce or another programming language like Python. We're going to use Visual Studio Code alongside PostgreSQL.
So, you will need the following installed on your computer:

- [Visual Studio Code](https://code.visualstudio.com/)
- [Salesforce CLI](https://developer.salesforce.com/tools/sfdxcli)
- [PostgreSQL](https://www.postgresql.org/download/) database engine

Then, you need the following VS Code extensions and tools:

- Salesforce Extension Pack
- A PostgreSQL tool like pgAdmin or [dbForge Studio for PostgreSQL](https://www.devart.com/dbforge/postgresql/studio/main-overview.html)

Note that we will also need a PostgreSQL table to hold the exported Salesforce Contacts. Below is a sample using dbForge Studio for PostgreSQL:

The table needs the Salesforce Contact ID, last name, first name, email, and phone. Once everything is installed, we are ready to code. Let's start with Apex.

Using Apex to Integrate PostgreSQL and Salesforce

In addition to the above VS Code extensions, you also need to install the Apex VS Code extension. Then, follow the steps below:

1. Set Up the Project in Visual Studio Code

Create a folder where you will place the Apex code, then open it in Visual Studio Code. Open a Terminal and run the following command:

```shell
sfdx force:project:create -n SalesforcePostgresIntegration
```

This creates the Apex project named SalesforcePostgresIntegration. See a sample from my Terminal below:

Then, change to your new Apex project folder and authenticate with Salesforce. From the Terminal:

```shell
cd SalesforcePostgresIntegration
sfdx force:auth:web:login
```

A browser will open to let you log in to Salesforce with your credentials. Now, retrieve your Salesforce metadata using the following Terminal command:

```shell
sfdx force:source:pull
```

2. Create the Apex Class

Apex cannot connect directly to PostgreSQL, so you need a middleware API or web service, plus the API URL the Apex class will call. To start creating the class, from the VS Code Explorer, right-click force-app/main/default/classes and select New File.
Name the file PostgresContactSync.cls. See the illustration below:

The logic flow in the code goes like this:

1. Query the Salesforce Contacts using SOQL, retrieving the Id, FirstName, LastName, Email, and Phone.
2. Store the results in a List.
3. Serialize the List as JSON for web service transport.
4. Using an HTTP request, send the list to the web service, which writes the Contacts to PostgreSQL.
5. Wait for the result and display it using the HTTP response.

Below is the sample code in Visual Studio Code:

Don't forget to replace the POSTGRES_API_URL with the correct URL from your web service configuration.

Below is a sample middleware API using Node.js. Note that this is a separate project from the Apex one.

```javascript
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const pool = new Pool({
  user: 'salesforce_user',
  host: 'localhost',
  database: 'testdb',
  password: 'StrongPassword123',
  port: 5434
});

app.post('/syncContacts', async (req, res) => {
  try {
    const contacts = req.body;
    const client = await pool.connect();

    for (const contact of contacts) {
      await client.query(
        `INSERT INTO contacts (salesforce_id, first_name, last_name, email, phone)
         VALUES ($1, $2, $3, $4, $5)
         ON CONFLICT (salesforce_id) DO UPDATE
         SET first_name = EXCLUDED.first_name,
             last_name = EXCLUDED.last_name,
             email = EXCLUDED.email,
             phone = EXCLUDED.phone;`,
        [contact.salesforce_id, contact.first_name, contact.last_name, contact.email, contact.phone]
      );
    }

    client.release();
    res.status(200).json({ message: 'Contacts Synced' });

  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Error syncing contacts' });
  }
});

app.listen(3000, () => console.log('Server running on port 3000'));
```

The pool object includes the PostgreSQL credentials; replace them with your own credentials for your PostgreSQL database.
The code above retrieves the body of the HTTP request, which includes the list of contacts. It then attempts to insert each row; if the salesforce_id already exists, an update is performed instead.

3. Deploy the Apex Code to Salesforce

Once the code is ready, deploy the Apex project to Salesforce. From the Terminal, issue the command:

```shell
sfdx force:source:push
```

Then, assign permissions using the following command:

```shell
sfdx force:user:permset:assign -n YourPermissionSet
```

4. Run the Middleware API and the Salesforce Apex Code

To run the API, issue the following command from the Terminal:

```shell
node server.js
```

Then, trigger the Apex callout in Salesforce. Using the Developer Console in Salesforce, run:

```apex
PostgresContactSync.sendContactsToPostgres();
```

To check the results, open your PostgreSQL tool and query the contacts table:

```sql
SELECT * FROM contacts;
```

That's how this is done with Apex and a middleware API.

Using Python Scripts and simple-salesforce to Integrate PostgreSQL and Salesforce

You can also use Python scripts with the simple-salesforce library, which wraps the Salesforce REST API. Unlike the Apex approach, you don't need middleware, because Python also has a PostgreSQL library called psycopg2. And simple-salesforce simplifies API access, so you don't have to deal with REST API formatting and sessions yourself.

First, install these libraries, if you don't have them yet, with the following Terminal command:

```shell
pip install simple-salesforce psycopg2-binary
```

This installs both simple-salesforce and psycopg2. Below are the steps:

1. Create a Folder for the Python Project and Open It in VS Code

Create a new folder where you will place this Python project. You won't see any files in the VS Code Explorer yet. Then, create an empty Python script file called export_sf_to_pg.py.

2. Create the Python Script

The Python script connects to both Salesforce and PostgreSQL and processes each record:

- Connect to Salesforce and PostgreSQL. Make sure you replace the Salesforce and PostgreSQL credentials with your own.
- Query the Salesforce Contacts and process each contact by doing an upsert into PostgreSQL.
- Finally, commit the changes and close the connections to Salesforce and PostgreSQL.

3. Run the Script

Open the Terminal in VS Code and run:

```shell
python export_sf_to_pg.py
```

This exports the Salesforce Contacts to PostgreSQL.

We discussed two code examples for integrating Salesforce with PostgreSQL, using Apex and Python. You can also use other programming languages, such as C#, Java, or JavaScript.

Pros of Custom Coding

The most notable advantages of custom coding are:

- Very high customization. You are free to define the logic and flow of the integration, unlike a third-party tool's preset steps and templates.
- The solution is tailored to your unique business needs.
- Scalable and extendable for future use cases.

Cons of Custom Coding

Extensibility and customizability have a cost. Consider the following disadvantages before diving into custom coding:

- Requires coding skills in Apex or another programming language (e.g., Python).
- Custom coding takes longer to finish than using third-party tools.
- Requires maintenance and updates when Salesforce changes its API rules and structure.
- API access may not be available in lower Salesforce editions. Consider upgrading to Salesforce Enterprise or Unlimited editions.

Method 3. Native Salesforce Features

Salesforce has native integration tools catering to different integration scenarios, and you may opt for these rather than looking elsewhere for integration solutions. One such solution is Salesforce Connect, a feature that lets Salesforce access and display external data in real time without storing it inside Salesforce.
You need [OData](https://www.odata.org/) to let Salesforce Connect communicate with PostgreSQL. However, PostgreSQL does not natively support OData, so you need additional software to make it OData-compatible. CData and [Skyvia](https://skyvia.com/connect/odata-endpoint) can help you expose your PostgreSQL data via OData.

How to Configure Salesforce Connect for PostgreSQL

Follow the steps below to make your PostgreSQL tables appear in Salesforce.

1. Set Up OData for Your PostgreSQL Tables

Use a tool like Skyvia or CData to expose your PostgreSQL table(s) through an OData endpoint. Without this, your PostgreSQL data won't integrate with Salesforce Connect.

2. Create an External Data Source in Salesforce

Create an External Data Source that uses your OData-compatible PostgreSQL data:

- Log in to Salesforce.
- Click the Gear Icon (⚙️) → Setup.
- In the Quick Find box, search for "External Data Sources".
- Under Integrations, click External Data Sources.
- Click New External Data Source. See a sample below:

3. Configure the External Data Source

Configure the External Data Source after creating a blank one:

- Name: PostgreSQL External Data
- Type: Salesforce Connect: OData 2.0 or OData 4.0 (depending on your PostgreSQL OData provider).
- URL: The OData service URL of your PostgreSQL database. If you don't have an OData service for PostgreSQL, you'll need a middleware API (e.g., PostgREST, Teiid, or OData.NET).
- Identity Type: Anonymous or Named Principal (for authentication).
- Click Save, then Validate & Sync to create the External Objects.

4. Access External Data in Salesforce

You can access the new external data by following the steps below:

- Go to App Launcher (🔎) → find External Objects. You should see the tables from PostgreSQL (if your OData endpoint is correctly configured).
- Create custom reports or list views using this data.
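Before relying on Validate & Sync, it can save time to confirm that your OData endpoint responds at all. Below is a small sketch (the service URL is a placeholder, and this check is not part of Salesforce's own tooling; it simply fetches the standard $metadata document that every OData service exposes):

```python
from urllib.request import urlopen

def metadata_url(service_root: str) -> str:
    """Every OData service publishes its schema at <service root>/$metadata."""
    return service_root.rstrip("/") + "/$metadata"

def looks_like_odata(body: str) -> bool:
    """$metadata is an EDMX (XML) document; this is a crude sanity check."""
    return "edmx" in body.lower()

def check_odata_endpoint(service_root: str, timeout: float = 10.0) -> bool:
    """Fetch $metadata and verify it resembles an OData schema document."""
    with urlopen(metadata_url(service_root), timeout=timeout) as resp:
        body = resp.read(65536).decode("utf-8", errors="replace")
    return looks_like_odata(body)

# Example (placeholder URL):
# check_odata_endpoint("https://your-host/odata/")
```

If this returns False or raises an error, fix the endpoint (URL, authentication, firewall) before configuring the External Data Source.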
Pros of Using Salesforce Connect

The following are the advantages of Salesforce Connect:

- No Data Duplication. You don't store external data in Salesforce, so you save storage costs. Good for large databases like PostgreSQL, where storage inside Salesforce would be expensive.
- Real-Time Data Access. Fetches data on demand instead of syncing periodically, so you always see up-to-date data from PostgreSQL.
- Easy to Use with External Objects. External objects can be used in reports, list views, SOQL queries, and even relationships.
- No Heavy API Calls. Reduces API usage compared to traditional REST API syncs. Good for Salesforce editions with API limits.

Cons of Using Salesforce Connect

Below are the drawbacks of Salesforce Connect:

- Read-Only for Most Setups. Standard Salesforce Connect only supports reading external data.
- The OData Requirement Is a Barrier. Salesforce Connect doesn't support direct SQL or plain REST APIs, and PostgreSQL doesn't natively support OData, so you need middleware.
- Limited in Lower Salesforce Editions. Salesforce Essentials and Professional don't support Salesforce Connect. Developer Edition has limited external objects and only OData read access.
- Performance Issues with Large Datasets. Queries fetch live data, so slow external systems will slow down Salesforce. There's no caching: every query pulls fresh data, which can be slow.

Conclusion

We discussed PostgreSQL and Salesforce integration: what it is, when you need it, what factors to consider, and several ways to integrate. We covered third-party tools like Skyvia, custom coding with Apex and Python, and native methods like Salesforce Connect. Your integration requirements may vary; choose any of the methods presented here, considering their pros and cons.

FAQ for PostgreSQL Salesforce Integration

Can I integrate PostgreSQL with Salesforce without writing any code?
Yes. Using Skyvia, MuleSoft, or Zapier, you can set up the integration without coding.

What are the advantages of using a cloud-based integration platform like Skyvia for PostgreSQL and Salesforce?

- No coding required
- Automated data sync & scheduling
- Prebuilt connectors for easy setup
- Secure cloud-based processing

What types of data can I transfer between PostgreSQL and Salesforce?

- Leads, Contacts, Accounts
- Opportunities, Cases, Custom Objects
- Orders, Product Data

Is the data transfer between PostgreSQL and Salesforce secure when using Skyvia?

Yes. Skyvia uses encryption, OAuth authentication, and secure cloud processing.

What kind of support does Skyvia offer if I have trouble with the integration?

- Help Center & Documentation
- Email & Ticket Support
- Paid plans offer priority support

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/)

Software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.
Four Methods for Salesforce to Salesforce Integration
- Salesforce to Salesforce Connector Using the Standard Connector
- Salesforce to Salesforce Integration Using REST API
- Salesforce to Salesforce Integration Using Skyvia
- Salesforce to Salesforce Integration Using Salesforce Connect

Challenges Businesses May Face During Salesforce to Salesforce Integration
Salesforce to Salesforce Data Integration Best Practices
Salesforce to Salesforce Data Integration: Overview of Use Cases
Summary
FAQ

What Is Salesforce to Salesforce Integration, and Who May Need It?

Salesforce to Salesforce integration means connecting two (or more) Salesforce orgs to share and sync data effortlessly. It's like building a bridge between different Salesforce systems that allows businesses to sync records like leads, accounts, contacts, and opportunities in real time.

Not every business needs Salesforce to Salesforce integration, but it's essential for:

- Firms dealing with external vendors who rely on Salesforce to track orders or manage contracts and can benefit from seamless data exchange.
- Companies that have separate Salesforce instances for different business units or regions.
- Businesses in mergers and acquisitions, which often bring separate Salesforce orgs that need to be unified.
- Companies that work with partners or resellers who also use Salesforce. Integrating data enables smoother communication and collaboration.

Click here to see the detailed overview of the use cases.

Why is Salesforce to Salesforce Connection Crucial?

Without [integration](https://skyvia.com/blog/salesforce-to-salesforce-integration/), different departments, partners, or regions would operate with their own data that doesn't easily flow between systems, facing duplicated effort, outdated information, and data errors. A Salesforce to Salesforce connection creates a unified view of data across all teams and partners.
It's crucial because it:

- Reduces costs significantly, considering the impact of improved efficiency, reduced manual work, and more accurate data.
- Improves collaboration and communication across the organization.
- Provides real-time insights by keeping data up-to-date and accessible.
- Reduces duplication of effort and minimizes manual data entry.

4 Methods for Salesforce to Salesforce Integration

There are several ways to integrate Salesforce orgs, depending on a business's needs, budget, and [data](https://skyvia.com/blog/export-data-from-salesforce-to-excel/) complexity. These methods offer different features and levels of customization. Let's briefly review them before describing each one in detail.

1. Standard Salesforce to Salesforce Connector

Salesforce's out-of-the-box solution for connecting two Salesforce orgs allows users to share records like leads, accounts, and opportunities between orgs. The standard connector is easy to set up and works well for basic data sharing. Best for companies that only need to share a basic subset of data, like Accounts or Contacts, between two Salesforce orgs without frequent updates. Since the Standard Connector has limited customization, it's also best for companies whose shared data relies on standard Salesforce fields and objects. Click here for a detailed method overview.

2. REST/SOAP API Integration

Salesforce provides robust REST, Bulk, Metadata, and SOAP APIs (including Apex) for custom [integration](https://skyvia.com/blog/salesforce-quickbooks-integration/) between Salesforce orgs. You can use these APIs to build custom integrations, sync specific data points, automate processes, and create highly tailored workflows. Apex REST and SOAP APIs provide real-time integration, allowing data synchronization and logic processing between orgs.
Best for businesses that need data updates to occur almost instantly between Salesforce orgs, such as large organizations or enterprises with multiple Salesforce instances that need a scalable integration approach supporting various departments or regions. Click here for a detailed method overview.

3. Middleware/Third-Party Integration Tools

Middleware solutions like Skyvia, MuleSoft, Jitterbit, Dell Boomi, and Zapier act as connectors between multiple Salesforce orgs (and other systems). These tools offer pre-built connectors, automation, and data transformation, making it easy to integrate without custom coding. Best for organizations looking for ease of use and the flexibility to handle complex, customized data tasks across multiple Salesforce environments. Click here for a detailed method overview.

4. Salesforce Connect with External Objects

Salesforce Connect allows businesses to access data in another Salesforce org (or even external systems) without moving the data itself. You create external objects that act like a “window” into another org’s data, letting you view and use the data without importing it. Best for companies that need real-time access to data across multiple systems without the hassle of moving or duplicating that data into Salesforce. It’s perfect for keeping Salesforce lightweight while still giving users access to important external information. Click here for a detailed method overview.

Let’s consider the most popular methods in detail.

Salesforce to Salesforce Connector Using the Standard Connector

Salesforce built its own connector to make it easy for businesses to connect and share records between multiple Salesforce orgs. Unfortunately, this functionality has not been well maintained in recent years; you must configure it within the Salesforce Classic UI, as it is not available in Lightning. With that said, let’s review how this method works.
Key Benefits

Simple Setup. The Standard Connector allows two Salesforce orgs to connect and share data without extensive configuration, making it accessible for users with minimal technical expertise.

Scheduled Data Sync. Users can set scheduled data synchronizations, allowing periodic updates between the two Salesforce orgs.

Basic Field Mapping. Basic field mapping allows users to align standard fields across shared objects in both organizations, although customization options are limited.

Data Ownership Control. Each Salesforce organization retains control of its own data, and only specified records are shared with the connected organization.

Manual Maintenance. Users can manually update field mappings and object relationships to adjust to any changes in data requirements.

Configuring Salesforce to Salesforce Integration Using the Standard Connector

As mentioned above, you must ensure you have switched to the Salesforce Classic UI to configure the Standard Connector. Ultimately, four key areas need to be configured when using the Standard Salesforce to Salesforce Connector:

1. Turning the Salesforce to Salesforce Connector On
2. Creating the Salesforce to Salesforce Connection
3. Publishing Relevant Objects and Fields
4. Subscribing to Relevant Objects and Fields

To read more about the guidelines and considerations when using the Standard Connector, see [Salesforce’s official documentation](https://help.salesforce.com/s/articleView?id=sf.business_network_intro.htm&type=5); otherwise, you can read a subset of the key considerations below.

TURNING THE SALESFORCE TO SALESFORCE CONNECTOR ON

1. After switching over to the Salesforce Classic user interface, head over to Setup.
2. Search ‘Salesforce to Salesforce’ in Quick Find.
3. Click Salesforce to Salesforce Settings.

Once you have enabled the feature, you should see a screen like the one below.
Once you\u2019ve enabled it in the first org, you must repeat the process in your second org before you can start sharing data between them. CREATING THE SALESFORCE TO SALESFORCE CONNECTION Now that both your orgs (we\u2019ll refer to them as Org 1 and Org 2) have had the Salesforce to Salesforce feature enabled, it\u2019s time to build a connection between them. The process here is not what you\u2019d expect \u2013 you need to create a Contact record and send them an invite through a new Connection record. Create the Contact record with a valid Email Address. This is where you will send the Connection Invite. Once you\u2019ve created your Contact record, go to the Connections tab and click New Connection. Ensure you\u2019ve set the new Contact record as the Contact, and also set the relevant Connection Owner on your end. Then click Save & Send Invite . Receive the email on the other side and set up the Connection record in Org 2. Congratulations, you\u2019ve built your Connection between your two Salesforce orgs! The next step is to configure what data should be sent between them. PUBLISHING RELEVANT OBJECTS AND FIELDS This step must be performed in Org 1 and Org 2. Click your new Connection record and navigate to the Published Objects Related List. This is where you\u2019ll be able to select Objects that you want to send to the other org. In this example, we will sync the Account records between orgs. To do so, check the Account object on the next page and click Save . Now, you need to select the fields that you want to send between orgs. In the Published Objects Related List , click Edit next to the Account row. Note:\u00a0 the Required Fields (Account Name and Last Name in this case, because I have Person Accounts enabled in this org) are automatically enabled and cannot be undone. You can optionally add other fields to be published to the other org. In this case, we will keep things simple, leave the default values selected, and click Save . 
Remember to repeat this in both orgs to ensure a two-way data flow.

SUBSCRIBING TO RELEVANT OBJECTS AND FIELDS

The last thing you need to do is subscribe to the published fields; once again, this should be done in both orgs.

1. Navigate one last time to the Connection record you created in Step 2 and click Subscribe/Unsubscribe on the Subscribed Objects Related List.
2. Similar to the above, check the box next to any object you want to subscribe to and click Save (in this case, the Account object).
3. Finally, click Edit next to the Account row and make sure the fields you want to pull in are selected (once again, the required fields should be populated by default).

Salesforce to Salesforce Standard Connector Considerations and Limitations

Salesforce to Salesforce integration using Salesforce’s standard Connector is a convenient way to get started, but like any tool, it has its quirks. Let’s explore some key considerations and limitations users should keep in mind before diving in.

Ownership-Based Record Sharing. While System Administrators can opt to share all records, most users will only be able to forward/share records that they or their subordinates own.

Works Only in Salesforce Classic. The Salesforce to Salesforce connector only works in Salesforce Classic. This is usually not a great sign for a feature: it has been several years since Salesforce Lightning came out, and most features that were never migrated to it have since been replaced entirely.

Impact of Hierarchical Sharing. Hierarchical sharing requires careful attention to avoid unintended consequences during synchronization:

- Clicking Stop Sharing in the External Sharing Related List halts record sharing.
- To stop sharing Case Comments or Attachments, you must set them to Private.
- Edited related records may lose sharing connections if the parent record is unshared.
- Only a maximum of 100 Tasks (Open and Closed) can be shared per related record.

Limited Object Support.
The standard Salesforce to Salesforce connector doesn’t support all Salesforce objects. This can be a limitation if you’re working with custom objects or specific types of records (like cases or tasks). Double-check which objects you need to share and ensure the standard connector supports them. If you’re dealing with more complex data models, you might need to explore custom integrations or third-party solutions.

Asynchronous Updates. Record updates are processed asynchronously, so there may sometimes be a small delay before record changes appear across orgs.

Manual Configuration. The Salesforce to Salesforce connector requires users to configure the [data](https://skyvia.com/blog/best-data-pipeline-tools/) sharing for each record type manually. This means setting up connections, field mappings, and sharing rules for every object you want to integrate. The manual setup is manageable if you’re only sharing a few objects. But if your integration needs involve many different record types, be prepared to automate parts of the process with custom development if the manual workload becomes overwhelming.

Salesforce to Salesforce Integration Using REST API

There are alternatives for sharing data between multiple Salesforce orgs if you’re not a fan of the native Salesforce to Salesforce tool. You can use Salesforce’s REST API to build a connection between two orgs (or other sources) and transfer data between them.

Key Benefits

Customizable Data Flow. The REST API offers the flexibility to define and control specific data transfer processes, allowing tailored integrations between Salesforce orgs to meet unique business needs.

Real-Time Data Sync. It enables near-instantaneous data synchronization, ensuring that both Salesforce orgs have up-to-date and consistent information for critical operations.

Enhanced Control. Granular control over which objects, fields, and records are shared allows for secure and selective data exchange.
Seamless Workflow Automation. REST API integrations can be paired with Salesforce automation tools like Process Builder or Flow to trigger workflows and streamline operations across orgs.

Scalable for Business Growth. The REST API supports dynamic data interactions, making it suitable for evolving business requirements and scalable as data needs increase.

Multi-System Integration. Beyond Salesforce-to-Salesforce connections, the REST API enables integration with external systems, creating a central hub for broader business workflows.

Setting Up Salesforce to Salesforce Integration Using REST API

There are four key steps when setting up a Salesforce to Salesforce connection using the REST API:

1. Building the REST API Endpoints (Org 1)
2. Creating the Connected App (Org 2)
3. Configuring the Auth Provider (Org 2)
4. Creating Named Credentials (Org 2)

BUILDING THE REST API ENDPOINTS (ORG 1)

You’ll need to build out an endpoint using an Apex class before you are able to connect two Salesforce orgs using the REST API. Below is an example of how to do so (mine was called ‘SFtoSFEndpoint’); build this out in the first org.
```apex
@RestResource(urlMapping='/Account/*')
global with sharing class AccountAPI {

    // Handles deletions through the API
    @HttpDelete
    global static void doDelete() {
        RestRequest req = RestContext.request;
        String accountId = req.requestURI.substring(req.requestURI.lastIndexOf('/') + 1);
        Account account = [SELECT Id FROM Account WHERE Id = :accountId];
        delete account;
    }

    // Handles queries through the API
    @HttpGet
    global static Account doGet() {
        RestRequest req = RestContext.request;
        String accountId = req.requestURI.substring(req.requestURI.lastIndexOf('/') + 1);
        return [SELECT Id, Name, Phone, Website FROM Account WHERE Id = :accountId];
    }

    // Handles inserts through the API
    @HttpPost
    global static String doPost(String name, String phone, String website) {
        Account account = new Account();
        account.Name = name;
        account.Phone = phone;
        account.Website = website;
        insert account;
        return account.Id;
    }
}
```

CREATING THE CONNECTED APP (ORG 2)

Once you’ve configured your endpoints, create a Connected App within App Manager (search ‘App Manager’ in Quick Find in the Setup menu).

1. Click New Connected App.
2. Give your Connected App a Name and API Name (I used ‘SalesforceToSalesforce’).
3. Set the Contact Email to your contact details.
4. Check the ‘Enable OAuth Settings’ box and set a callback URL (use [https://www.login.salesforce.com/services/authcallback](https://www.login.salesforce.com/services/authcallback) as a placeholder until later in the process).
5. Select your relevant OAuth scopes (in this case, I’ve enabled full, api, and refresh_token/offline_access).
6. Leave the rest as it is and click Save.
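As an aside (not part of the original walkthrough), it can help to picture what an external client of this Apex REST resource deals with: it exchanges credentials for an access token at Salesforce’s documented token endpoint, then calls the `/services/apexrest/Account` resource. The sketch below only builds the request pieces; the username-password OAuth flow shown is one documented option, and every credential value is a placeholder.

```python
import json
from urllib.parse import urlencode

# Salesforce's documented OAuth 2.0 token endpoint (production orgs).
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def password_grant_body(consumer_key: str, consumer_secret: str,
                        username: str, password_plus_token: str) -> str:
    """Form-encoded body for the OAuth 2.0 username-password flow."""
    return urlencode({
        "grant_type": "password",
        "client_id": consumer_key,
        "client_secret": consumer_secret,
        "username": username,
        "password": password_plus_token,  # password concatenated with the security token
    })

def account_endpoint(instance_url: str, account_id: str = "") -> str:
    """URL of the custom Apex REST resource (urlMapping='/Account/*')."""
    base = instance_url.rstrip("/") + "/services/apexrest/Account"
    return base + "/" + account_id if account_id else base

def post_payload(name: str, phone: str, website: str) -> str:
    """JSON body matching the doPost(name, phone, website) parameters."""
    return json.dumps({"name": name, "phone": phone, "website": website})

# Placeholder values: substitute your own Connected App credentials and org URL.
body = password_grant_body("<consumer key>", "<consumer secret>",
                           "user@example.com", "<password + security token>")
url = account_endpoint("https://yourInstance.my.salesforce.com", "001XXXXXXXXXXXXXXX")
payload = post_payload("Test Account", "61412345678", "https://example.com")
```

A real request would POST `body` to `TOKEN_URL`, read the `access_token` from the response, and send `payload` to `url` with an `Authorization: Bearer <token>` header using any HTTP client.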
Once you\u2019ve saved your Connected App , copy your Consumer Key and Consumer Secret \u2013 you will need these later (remember to keep your Secret a secret!). CONFIGURING THE AUTH PROVIDER (ORG 2) Now, from your other org, you need to create a new Auth Provider (Search \u2018Auth. Provider\u2019 in Quick Find in Setup). Click New to create a new Auth Provider, and select Salesforce as the Provider Type. This will display the full form for you to populate. The key fields you need to populate are as follows: Provider Type = Salesforce Name = I used \u2018SalesforceToSalesforce\u2019 in this example URL Suffix = for example, \u2018Salesforce-to-Salesforce\u2019 in this example Consumer Key = Paste your key from the last step Authorize Endpoint UR L = Leave blank, Salesforce will automatically set this Token Endpoint URL = Leave blank, Salesforce will automatically set this Then click Save . At this point, you will have a new Callback URL generated at the bottom of the page in the \u2018 Salesforce Configuration \u2019 section. Copy this Callback URL and update your Callback URL value in the Connected App you created in the above step. Here you can check your connection between the two orgs and test that it is working as expected so far. Use the \u2018 Test-Only Initialization URL \u2019 to make sure that everything is working as expected. If you don\u2019t see a login screen when you navigate to the Test-Only URL , you should double-check that your Callback URL was updated in the Connected App and you have waited 2\u201310 minutes for the changes to take effect. CREATING NAMED CREDENTIALS (ORG 2) Finally, you need to create a Named Credential so that the org that is making the callouts can connect to the org with the Connected App with ease. To do this, search for \u2018 Named Credentials \u2019 in Quick Find in Setup and click New . 
The key values you need to populate are as follows:

- Label and Name: I used ‘IntegrationUser’
- URL: set this according to your own Salesforce org (the same one you used in the Connected App)
- Identity Type: Named Principal
- Authentication Protocol: OAuth 2.0
- Authentication Provider: set this to the Auth. Provider you configured in the previous step
- Scope: ‘full refresh_token’ (remember, additional scope items must be separated by a space, and you can only use the scopes that you set in the Auth Provider)
- Allow Merge Fields in HTTP Header: TRUE

Leave everything else at its default and click Save.

TESTING YOUR CONNECTED APP (ORG 2)

Finally, you can create a new Apex class in Org 2 and make sure that everything is working as expected. Here’s a sample Apex class, and a few lines of anonymous Apex, that you can use to verify everything is functioning. Note that the name after ‘callout:’ must match the Named Credential created above (‘IntegrationUser’ in this example).

Class

```apex
public with sharing class AccountWebService {

    public static Http http = new Http();
    public static HTTPResponse response;
    public static HttpRequest request;

    public class NewAccountRequestWrapper {
        public String name {get; set;}
        public String phone {get; set;}
    }

    // Test querying an Account record
    public static void getAccount(Id accId) {
        request = new HttpRequest();
        request.setMethod('GET');
        request.setEndpoint('callout:IntegrationUser/services/apexrest/Account/' + accId);
        response = http.send(request);
        System.debug(response.getBody());
    }

    // Test creating an Account record
    public static void addAccount(NewAccountRequestWrapper newAccount) {
        request = new HttpRequest();
        request.setMethod('POST');
        request.setEndpoint('callout:IntegrationUser/services/apexrest/Account');
        request.setHeader('Content-Type', 'application/json;charset=UTF-8');
        request.setBody(JSON.serialize(newAccount));
        response = http.send(request);
        System.debug(response.getBody());
    }

    // Test deleting an Account record
    public static void deleteAccount(Id accId) {
        request = new HttpRequest();
        request.setMethod('DELETE');
        request.setEndpoint('callout:IntegrationUser/services/apexrest/Account/' + accId);
        response = http.send(request);
        System.debug(response.getBody());
    }
}
```

Anonymous Apex

```apex
// Add a new Account
AccountWebService.NewAccountRequestWrapper newAccount = new AccountWebService.NewAccountRequestWrapper();
newAccount.name = 'Test Account';
newAccount.phone = '61412345678';
AccountWebService.addAccount(newAccount);

// Get Account details based on Id (replace the placeholder with a real Account Id)
AccountWebService.getAccount('001XXXXXXXXXXXXXXX');

// Delete Account based on Id
AccountWebService.deleteAccount('001XXXXXXXXXXXXXXX');
```

Salesforce to Salesforce Integration Using REST API: Considerations and Limitations

Integrating Salesforce orgs using the REST API gives businesses a great deal of flexibility. However, it also comes with its own set of considerations and limitations. Let’s break them down.

Flexibility for Custom Integrations. The REST API allows for highly customizable integrations. Users can define specific endpoints, set custom workflows, and integrate unique business logic tailored to their needs. However, setting up custom REST API integrations requires significant development skills and time to design and implement properly.

Real-Time Data Syncing. The REST API allows real-time data synchronization between Salesforce orgs, meaning data can be shared and updated immediately. At the same time, real-time syncing can put pressure on system performance, especially when dealing with large volumes of data, and rate limits imposed by Salesforce may restrict how often you can make API calls.

Rate Limits. Salesforce sets API call limits based on your edition (e.g., 15,000 API calls per day in Enterprise Edition). If your integration requires frequent or large data transfers, hitting these rate limits can result in sync delays or failures.
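One common way to cope with transient failures and rate-limit errors is to wrap each callout in retry logic with exponential backoff. A generic sketch (plain Python, not from the original article; the Salesforce call is stubbed out as any function that may raise):

```python
import time

def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry `call` with exponential backoff; re-raise after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...

# Demo with a flaky stub that succeeds on the third try:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("REQUEST_LIMIT_EXCEEDED")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # no real sleeping in the demo
```

In production you would catch only retryable errors (HTTP 429/503, `REQUEST_LIMIT_EXCEEDED`) rather than every exception, and log each attempt for monitoring.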
You may need to optimize your API calls or purchase additional API capacity.

Requires Authentication and Security Setup. REST API integrations require secure authentication methods, such as OAuth 2.0, to ensure that data transfers between Salesforce orgs are protected and that only authorized users and systems can access sensitive data. Setting up authentication and managing security tokens can be tricky: you’ll need to refresh tokens regularly and keep the security protocols robust to avoid unauthorized access or failed integrations.

Scalability. While the REST API is scalable, letting you add more functionality, connect additional Salesforce orgs, or integrate external systems as needed, performance may decline when dealing with large data sets or complex queries. Optimizing API calls for efficiency is crucial to maintain system performance.

Error Handling and Monitoring. Unlike some plug-and-play tools, API-based integrations don’t come with built-in monitoring and error resolution, so users need to set up custom monitoring tools or integrate with third-party monitoring services to track API health and resolve errors.

Data Transformation. The REST API allows custom transformations or data enrichment before data is sent to the destination org. Custom data transformations add to the complexity of the integration: you’ll need to define clear rules for how data is transformed and ensure both Salesforce orgs can handle the updated data formats without issues.

One-Way or Bi-Directional Integration. The REST API can be used for one-way and bi-directional integration, allowing data to flow both ways for continuous updates. However, bi-directional syncing increases complexity and requires careful handling of potential conflicts, such as overwriting data or duplicate records, to ensure data consistency across orgs.

Maintenance and Updates. REST API integrations let users control maintenance and updates without relying on external tools.
However, regular maintenance and updates on the integration may require ongoing developer support, especially when Salesforce releases new features or API changes.

Salesforce to Salesforce Integration Using Skyvia

Using third-party [tools](https://skyvia.com/blog/connect-salesforce-to-sql-server/), like Skyvia, MuleSoft, Zapier, or Boomi, for Salesforce-to-Salesforce integration provides a more agile and reliable integration framework. These tools manage complex data mappings, transformations, and workflows with minimal custom code, enabling faster setup, simplified maintenance, and reduced development costs. Let’s walk through such an integration using [Skyvia](https://skyvia.com/) as an example. It’s no-code, provides a user-friendly interface, supports [200+](https://skyvia.com/connectors/) connectors, and offers the flexibility and control suited to more sophisticated data integration needs.

Configuring a Salesforce to Salesforce Connection Using Skyvia

To create a Salesforce to Salesforce connection with [Skyvia](https://skyvia.com/blog/salesforce-integration-best-practices/), you must first authorize and correctly set up an integration with one Salesforce org.

CREATE A CONNECTION FOR EACH SALESFORCE ORG

1. Navigate to Skyvia and click + Create New, then Connection.
2. Find the Salesforce connector in the list and configure your org details. Make sure to use OAuth 2.0 and click Sign In with Salesforce.
3. Name the Connection accordingly (in the example below, ‘Salesforce Production MASTER’). We are going to sync its data to another Salesforce org.
4. Repeat this for the other Salesforce org. (In the future, you can also connect to any other app you’d like, which may or may not be a Salesforce org; Skyvia has many connectors for different apps and features for syncing data between multiple sources.)
CREATE AN IMPORT SCENARIO TO SYNCHRONIZE DATA ONE-WAY

Once you have created a Connection for each org you want to sync, you need to create an [Import scenario](https://skyvia.com/data-integration/import) between them.

1. Click + Create New at the top of the Skyvia app, and then click [Import](https://docs.skyvia.com/data-integration/import/configuring-import.html). You’ll see a page where you can specify source and target settings.
2. Under Source, click Data Source database or cloud app. Then select the two Salesforce org Connections that you created in Step 1.
3. Once you’ve selected the two orgs to synchronize data between, you need to create a Task. Click the Add New button on the right-hand side of the page to do so.
4. Select the Account object and then click Next Step.
5. On the next page, select the Account object as a target. We also select the UPSERT operation to avoid creating duplicates. We will use the native Salesforce External Id method for this operation and select the corresponding External Id field.
6. On the next page, configure the mapping (how the fields from each side should align). This will be different for everyone, but for our example, you need to ensure the Name columns are aligned, as per the screenshot below. You must also map the selected External Id field to the corresponding field. Note: Skyvia automatically maps columns with the same name. This means that Skyvia is going to sync the fields mapped in this step (Account records, and specifically the Account Name field on those records).
7. After saving the task, validate your job by clicking the Validate button. Once validated, click Save to save your job. You can also click Run Now to perform the first sync manually.

SET UP AUTOMATIC SYNCHRONIZATION

You need to create a schedule for the Import package to allow your synchronization to occur automatically in the background. This can be found at the top-left of the Import page you created in Step 2.
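Before moving on, the UPSERT-on-External-Id behaviour that the Import task relies on can be pictured with a small sketch: records that match on the external Id update the existing row, and the rest are inserted, so re-running the sync never creates duplicates. (Plain Python illustrating the idea, not Skyvia’s actual implementation; the field name `External_Id__c` is a placeholder.)

```python
def upsert(target: dict, records: list[dict], external_id: str = "External_Id__c") -> dict:
    """Merge incoming records into `target`, keyed by the external Id field."""
    inserted, updated = 0, 0
    for rec in records:
        key = rec[external_id]
        if key in target:
            target[key].update(rec)   # match found: update in place
            updated += 1
        else:
            target[key] = dict(rec)   # no match: insert a new record
            inserted += 1
    return {"inserted": inserted, "updated": updated}

# Target org already holds one Account with external Id "A-1".
org2 = {"A-1": {"External_Id__c": "A-1", "Name": "Acme"}}
stats = upsert(org2, [
    {"External_Id__c": "A-1", "Name": "Acme Corp"},   # updates the existing record
    {"External_Id__c": "A-2", "Name": "Globex"},      # inserts a new record
])
```

Running the same batch again would report zero inserts, which is exactly why UPSERT keeps scheduled re-runs idempotent.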
Depending on the complexity of your Import package, your business rules, and your business requirements, you must configure the [Schedule](https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html) yourself. That being said, it’s not overly complicated. As you can see in the screenshot below, the schedule is set up to run every 15 minutes, starting now. Configure this page according to your own needs. If you don’t want to load all the records each time but sync only new and changed records, edit the job, edit its task(s), and select the Updated option on the source definition page of the editor.

Considerations and Limitations When Integrating Salesforce to Salesforce Using Skyvia

User-Friendly, No-Code Interface. Skyvia is designed with ease of use in mind, offering a wizard-based setup that allows non-technical users to configure data integrations without writing code.

Flexible Scheduling and Automation. Users can set how frequently data syncs run, allowing automated, hands-free data transfers between Salesforce orgs. This approach is ideal for businesses needing regular data updates.

Powerful Mapping Options. Standard (direct) mapping means fields in the source are mapped straight to fields in the target with no transformations, while data transformation mapping transforms data from one format or value to another as it moves to the target. Skyvia supports both Salesforce standard and custom objects, enabling companies to sync basic and custom Salesforce data fields seamlessly.

Support for Complex Data Workflows. With Skyvia’s visual designer, users can configure multi-step workflows with conditional logic, allowing advanced data orchestration.

Real-Time Connectivity. Skyvia Connect simplifies integrating external data into Salesforce by providing OData endpoints that Salesforce Connect can use, enabling seamless, real-time access to external data within the Salesforce environment.

Affordable, Scalable Pricing.
Skyvia\u2019s [pricing model](https://skyvia.com/pricing) offers various subscription plans, including a free tier, making it accessible for small businesses while scalable for larger organizations needing higher volumes and advanced features. Error Handling and Logging. While Skyvia provides error logs, interpreting and resolving errors, especially in complex integrations, can be challenging and may require additional expertise. API Limitations. Skyvia relies on Salesforce\u2019s API limits, meaning high-frequency data syncs or large data volumes could potentially exceed Salesforce\u2019s quotas, impacting performance and accessibility. Salesforce to Salesforce Integration Using Salesforce Connect [Salesforce Connect](https://help.salesforce.com/s/articleView?id=platform.platform_connect_about.htm&type=5) allows Salesforce to integrate with external data sources, including other Salesforce orgs, without needing to store the data inside Salesforce. Instead of importing the data, Salesforce Connect allows the creation of external objects , which act as virtual representations of the data stored in another system. So, users can view, update, and use the data in real-time within Salesforce, but the data remains in the original system. Key Benefits Real-time data access. You can work with data from other Salesforce orgs or external systems in real time. No data duplication. Since the data is accessed remotely, storing or duplicating it in multiple systems is unnecessary. Supports multiple data sources. Besides Salesforce orgs, it can connect to other external systems like databases or ERP systems. Performing Salesforce integration using Salesforce Connect Launch the Org and go to Setup > Integrations > External Data Source > New External Data Source. In the New External Data Source Screen , enter the appropriate data as shown below. Enter an External Data Source name. Select Salesforce Connect type : Cross Org. 
Select \u201c Connect To: Production URL \u201c (for Developer edition & Production orgs). Enter URL: https://login.salesforce.com. Set up Search and Writable External Objects . Choose Identify Type > Per User. Select Authentication Protocol >Password Authentication. Add\u00a0the Administration Username of the target Org Integration Use. Enter < Password >/ < Security Token >. Click \u201c Save \u201d. Open the just created External Data Source and click \u201c Validate and Sync \u201d Choose appropriate objects, like \u201cAccount,\u201d and click \u201c Sync \u201d. To access the external object, create the External Object tab. Salesforce Connect Considerations and Limitations Compatibility. Salesforce Connect is available for Classic and Lightning Experience users, offering flexibility for different Salesforce environments. Availability and Cost. Salesforce Connect is an add-on feature at an extra cost in Enterprise, Performance, and Unlimited Editions . Real-Time Data Access. Allows users to access and interact with external data in real-time without storing or duplicating it within Salesforce. This approach makes it ideal for syncing with third-party solutions and other Salesforce environments. Data Fetching Not Required. There\u2019s no need to import external data into Salesforce to read, write, view, or report on it. External objects allow interaction with remote data as if it were native to Salesforce. Search Capabilities. It offers real-time search for external data and supports free text search, record viewing , and list views for external objects. Cost of Integration. Each Salesforce Connect integration costs approximately [$4,800 monthly](https://www.salesforce.com/content/dam/web/en_au/www/documents/pricing/all-add-ons-au.pdf) , depending on the number of data sources connected. Limit on External Objects. 
You can establish up to [100 external objects](https://help.salesforce.com/s/articleView?language=en_US&id=sf.platform_connect_general_limits.htm&type=5) per organization (this can be extended to 200 upon request).

Query Limitations. A maximum of 4 joins is allowed per query across external objects and other object types, limiting complex queries.

OAuth Token Limit. OAuth tokens issued by external systems can have up to 4,000 characters, which may limit integrations that require longer token lengths.

Paging Limit. The upper limit for page size for server-driven paging is 2,000 rows, affecting how large datasets are handled during queries.

Challenges Businesses May Face During Salesforce to Salesforce Integration

Like any big project, integrating Salesforce systems can come with its own set of challenges. Let’s walk through some common hurdles businesses might face and how to tackle them.

Data Mapping Confusion

Imagine moving into a new house. You’ve got boxes of stuff (data) from the old place (Salesforce org), but things don’t always fit perfectly in the new rooms. During integration, mapping data fields between the two orgs can get tricky. Sometimes, fields in one Salesforce instance don’t have a clear match in the other, leading to misaligned or missing data.

Solution

Plan out data mapping in advance. Work with the teams to make sure fields are standardized, and if needed, create custom fields that will allow for smoother mapping between the systems.

Data Duplicates

One of the biggest challenges in Salesforce to Salesforce integration is dealing with duplicate records. If two orgs are working with the same customer or lead, you might end up with multiple versions of the same record, cluttering your database and causing confusion.

Solution

Before diving into the integration, run a data deduplication process.
You can also set up rules during the integration to prevent duplicates from being created, merging similar records automatically.

### Permission and Security Issues

Security is always a big deal, especially when sharing sensitive customer data between systems. Different Salesforce orgs might have different permission settings, leading to access issues when sharing data.

**Solution:** Ensure both orgs have aligned security and permission settings. If certain records or fields shouldn't be shared, set up sharing rules or field-level security to protect sensitive information.

### Integration Complexity

Depending on your Salesforce orgs' complexity, the integration can feel like trying to solve a giant puzzle. Multiple objects, custom fields, and different processes add up and can become overwhelming.

**Solution:** Begin by integrating a few key objects, like accounts or leads, and expand the integration once you've ironed out the initial challenges. A phased approach helps you avoid major headaches.

### Data Syncing Delays

In an ideal world, data would sync instantly between Salesforce orgs, but sometimes there can be delays. If one team makes changes in real time while another works on outdated data, it can lead to miscommunication and missed opportunities.

**Solution:** Set clear expectations with your teams about how often data will sync. Real-time syncing may be necessary only for some data types, depending on the integration setup. Schedule periodic syncs for data that doesn't require immediate updates to keep everything running smoothly.

### Managing Multiple Stakeholders

If you're working with multiple departments, regions, or even partner companies, getting everyone aligned on handling the integration can be challenging. Stakeholders may have different goals or requirements, which can complicate the process.

**Solution:** Bring everyone to the table early and ensure they're all on the same page about what the integration will look like and what they can expect.
Regular check-ins throughout the process will help avoid any surprises later on.

## Salesforce to Salesforce Data Integration Best Practices

It's important to plan out your Salesforce to Salesforce integration before connecting the orgs. Here are a few key things to take into account.

### 1. Define Object and Record Scope

Make a definitive plan for the exact scope of the integration. If only a subset of data is being synced between Salesforce orgs, or only a single team's data will be affected, this must be communicated and planned for accordingly. Larger Salesforce-to-Salesforce integrations can take a lot of time and effort, especially when cleansing larger datasets or merging data from multiple origins while ensuring no duplicate records are created. Keeping a Salesforce-to-Salesforce integration project within a set budget is important, and this budget should be defined from the outset.

### 2. Tidy Your Data

Plan for where your data is going, not where it's coming from: clean it before setting up a sync to your other Salesforce environment. Ensuring your data is clean and the data models of your two orgs are aligned before syncing keeps your Salesforce data integrity intact across both orgs, so your users can still gain value from the data.

### 3. Create an Integration Plan

There's no point spending time and effort preparing and setting up a Salesforce-to-Salesforce integration if it's going to fall apart shortly after kickoff. It's critical to create an integration plan that ensures data is validated and sync errors are resolved, so the integration keeps providing value to your business users. The integration plan should detail how data will be organized and structured in both orgs going forward.
This includes which data should be kept in which environment, how data should be validated before being stored in either org, who is responsible for data validation, and so on. All of these should form part of the integration plan.

### 4. Create a Contingency Plan

There's a high chance you'll run into some kind of roadblock while setting up your Salesforce-to-Salesforce integration. This is typically caused by data not being organized or formatted correctly, or by a validation rule being triggered inside one of your Salesforce orgs. Salesforce's best practice is to record the error you receive. This way, you can follow your original integration plan as far as possible and tidy up any leftover data at the end, rather than getting stuck at a single point.

### 5. Test the Integration in a Sandbox

Don't risk breaking anything important in your live Salesforce environment; always test the integration in a sandbox first. This approach lets you catch issues before they affect real data. Sync a small sample of records and check for errors. Once you've confirmed everything works as expected, you can confidently roll the integration out to production.

### 6. Set Up Monitoring and Alerts

Once the integration is up and running, don't just set it and forget it. Use Salesforce's built-in monitoring tools to track syncs and identify errors in real time. Set up alerts to notify your team if any sync failures or issues occur. This way, you can fix problems before they become larger headaches for users.

### 7. Plan for Ongoing Maintenance

Salesforce isn't static, and neither should your integration be. Over time, your data needs may change, or new processes may be added. Make sure you have a maintenance plan for updating the integration as needed. Regularly review the integration's performance and adjust as your business grows or changes.

### 8. Communicate With Stakeholders

A successful integration relies on more than technical know-how; it requires good communication. Keep all stakeholders, from managers to users, in the loop about the scope, timeline, and potential disruptions during the integration process.

The video below shows the best practices of the Salesforce to Salesforce connection.

## Salesforce to Salesforce Data Integration: Overview of Use Cases

Salesforce-to-Salesforce data integration means connecting different Salesforce orgs to share data seamlessly. Whether it's merging data from multiple regions, syncing sales records between partner businesses, or keeping track of customer interactions across different departments, there are plenty of real-life scenarios where Salesforce-to-Salesforce integration comes in handy.

### Connecting Multiple Salesforce Orgs for Better Collaboration

Imagine a large company where each business unit uses its own Salesforce org. One division manages product development, while another focuses on customer service. To create a seamless flow of information between these teams, Salesforce-to-Salesforce integration allows syncing relevant data (like customer details, cases, or product updates) across orgs. This way, everyone can access the latest information, and teams can collaborate more effectively without manually exchanging data.

### Partner Collaboration for Joint Sales Efforts

Many businesses work with external partners or resellers who also use Salesforce. Instead of manually sending reports or data files back and forth, Salesforce-to-Salesforce integration allows real-time sharing of leads, opportunities, and accounts. This setup helps partners keep track of shared sales efforts, ensuring both parties work with up-to-date data. For instance, if a reseller identifies a promising lead, they can instantly sync the information with your Salesforce instance so your sales team can follow up quickly.
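The partner-collaboration flow above boils down to mapping fields between orgs and upserting on a shared key so that a re-synced lead updates the existing record instead of creating a duplicate. A minimal in-memory sketch, where the field map and the `External_Id__c` key field are illustrative assumptions:

```python
# Source-field -> target-field mapping (identical here, but orgs often differ)
FIELD_MAP = {"FirstName": "FirstName", "LastName": "LastName", "Company": "Company"}

def sync_lead(source_lead, target_org):
    """Upsert a lead into target_org, keyed by a shared external ID (sketch)."""
    key = source_lead["External_Id__c"]            # assumed shared external-ID field
    mapped = {dst: source_lead[src] for src, dst in FIELD_MAP.items()}
    target_org.setdefault(key, {}).update(mapped)  # update if present, else insert
    return target_org[key]

target = {}
sync_lead({"External_Id__c": "L-1", "FirstName": "Ada",
           "LastName": "Lovelace", "Company": "Acme"}, target)
sync_lead({"External_Id__c": "L-1", "FirstName": "Ada",
           "LastName": "Lovelace", "Company": "Acme Corp"}, target)
# still one record under "L-1"; Company updated to "Acme Corp"
```

In Salesforce terms, the same idea is expressed by an upsert on an external ID field rather than an insert, which is what prevents the duplicate-record problem described earlier.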
### Mergers and Acquisitions

When companies merge or acquire other businesses, it's common to find each entity running its own Salesforce instance. Merging all that data into a single Salesforce org can be a headache without the right tools. With Salesforce-to-Salesforce integration, companies can integrate and migrate data incrementally, syncing contacts, opportunities, and cases between the two orgs while ensuring data accuracy. This use case is especially valuable during transitions when both entities need to maintain business continuity.

### Regional Salesforce Orgs Sync

Large organizations operating in multiple regions often have separate Salesforce orgs for each area. For example, a company might have different Salesforce instances for the U.S., Europe, and Asia. Data from each regional org can be synced into a central org to get a holistic view of global sales and operations. This way, leadership can monitor performance across all regions while individual teams keep their local data in sync with the global system.

### Vendor or Supplier Management

If a company relies on external vendors or suppliers who also use Salesforce, integrating their systems can make managing orders, contracts, and communications a breeze. For instance, whenever an organization updates product availability or order status, its vendor's Salesforce org is automatically updated with the latest information. This approach reduces miscommunication, ensuring all partners have accurate data.

## Summary

When your business needs to connect two Salesforce orgs for any reason, carefully consider exactly why the connection is required, what data needs to be synced between the orgs and how frequently, and craft a plan (and a contingency plan) so that you're not met with any nasty surprises while setting up the connection.
If you decide to use Skyvia Data Integration to connect your Salesforce orgs, you'll get a convenient, easy-to-use solution at an honest price that saves you time and resources. To learn more about Skyvia Data Integration, click [here](https://skyvia.com/connectors/salesforce).

## FAQ for Salesforce to Salesforce Integration

### How does Salesforce to Salesforce integration work?

Salesforce to Salesforce integration allows two separate Salesforce orgs to share data such as leads, accounts, opportunities, and cases. You set up connections between the orgs, configure which data records are shared, and ensure that updates made in one system are automatically reflected in the other. The integration works through native Salesforce tools, like Salesforce Connect, or through third-party apps and APIs.

### Can I control which data is shared between Salesforce orgs?

Yes, Salesforce to Salesforce integration gives you complete control over what data you share. You can choose to share specific records or only certain fields within records. Permissions, rules, and filters help ensure that only relevant information is shared between orgs, keeping sensitive data private while sharing useful information with your partner or other departments.

### What are the most common use cases for Salesforce to Salesforce integration?

Salesforce to Salesforce integration is commonly used for:

1. Partner collaboration: sharing leads or opportunities with external partners.
2. Mergers and acquisitions: combining data from two separate companies or business units.
3. Regional teams: syncing data across different Salesforce orgs for teams in various regions.
4. Vendor or supplier management: syncing data with vendors or suppliers using Salesforce.

### Do I need technical expertise to set up Salesforce to Salesforce integration?
Setting up a basic Salesforce to Salesforce integration requires minimal technical expertise, especially if you use the native Salesforce features. However, for more complex integrations (e.g., involving multiple orgs or advanced data synchronization), some technical knowledge or support from Salesforce admins or developers may be required. Using third-party integration tools, such as Skyvia, can simplify the process further.

### Can Salesforce to Salesforce integration handle real-time data updates?

Yes, Salesforce to Salesforce integration supports real-time data sharing. Once the integration is set up, updates made in one org are automatically reflected in the other. This ensures that both teams work with the most current data, improving collaboration and decision-making across departments or businesses.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
# Selecting the Replacement Tools for Workbench

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | April 25, 2024 | [Data Loader](https://skyvia.com/blog/category/data-loader/)

Salesforce is a popular CRM tool that allows organizations to store customer data. To effectively manage large amounts of data and understand relations between objects, administrators rely on Workbench for Salesforce to:

- Manipulate data
- Query data
- Test APIs
- Retrieve metadata

Even though it's popular, Workbench has some limitations, such as being available only under the Enterprise plan and the instability of its newest versions. These restrictions might be tangible for some businesses, so a need for similar solutions arises. This article presents the best alternatives to Workbench for Salesforce and explains how they could benefit your business.

**Table of Contents**

- What Is Salesforce Workbench?
- Pros and Cons of Using Salesforce Workbench
- Popular Ways to Use Salesforce Workbench
- Best Salesforce Workbench Alternatives
  - Skyvia
  - Data Loader
  - SoqlX
  - Salesforce Extension Pack for Visual Studio Code
  - Gearset
  - Salesforce Inspector
- Key Takeaways

## What Is Salesforce Workbench?

Workbench for Salesforce is a web-based solution designed with Salesforce administrators, business analysts, and developers in mind. It has a rather intuitive and simple interface. Workbench allows users to perform operations such as:

- Reset user passwords.
- Query and manage data using SOQL queries.
- Work with metadata.
- Perform troubleshooting and API testing.
- Interact with Force.com APIs.

There's also an extension for the Mozilla Firefox and Google Chrome web browsers. Once installed, it signs you into Workbench automatically, directly from the Salesforce interface.

## Pros and Cons of Using Salesforce Workbench

Many tools developed by Salesforce or partner organizations aim to extend Salesforce's native functionality. Workbench is one such solution that assists Salesforce developers and administrators in their daily tasks. However, it imposes certain limitations and lacks some features. Below are the advantages of Workbench for Salesforce developers and administrators, as well as the drawbacks that impede some operations.

### Pros

Here are some notable advantages of Workbench for Salesforce:

- It enhances data management by ensuring data integrity and accuracy.
- Various DML operations allow users to manipulate over 5 million data records simultaneously.
- The metadata retrieval option improves understanding of the system configuration and provides a better overview of the organizational Salesforce data structure.
- The sandbox environment empowers testing and troubleshooting, helping developers detect issues in a timely manner before implementing changes in the live Salesforce instance.
- It supports the following actions: Undelete, Deploy, REST Explorer, Apex Execute, and Retrieve.
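Bulk tools make the "millions of records" figure above practical by splitting work into fixed-size batches and submitting them one at a time. A minimal sketch of the batching step; the 10,000-record batch size mirrors the Salesforce Bulk API's per-batch limit, but treat the exact number as an assumption here:

```python
def chunk_records(records, batch_size=10_000):
    """Split a record list into fixed-size batches, as bulk DML tools do."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

# 25,000 records -> 3 batches: 10,000 + 10,000 + 5,000
batches = chunk_records(list(range(25_000)), batch_size=10_000)
```

Each batch is then submitted as one job unit, so a single bad record fails only its own batch rather than the whole load.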
### Cons

It often happens that the newest Workbench version is still under development. Official announcements recommend not using a recently released Workbench version with production data; ignoring this caution might result in data loss or unpredictable configuration failures in Salesforce.

Another drawback of Workbench is that it's easily available only under the Enterprise plan. Professional and Starter users can't use the Workbench tool for free but can add it as an extra-cost add-on.

One more thing: Salesforce supports only SOQL for data querying. Even though SOQL originates from SQL, it's still different and might cause some difficulties for users.

To sum up, the major drawbacks of Workbench for Salesforce are:

- Instability of the latest Workbench version.
- Easy access only for Enterprise plan users.

## Popular Ways to Use Salesforce Workbench

There's no doubt that Salesforce developers and administrators can benefit from Workbench. Let's look at some popular scenarios and use cases of Workbench for Salesforce.

### Password Management

It may happen that a user forgets a password and can't log into Salesforce anymore. Luckily, Workbench can force-reset passwords: you type a secret combination of your choice and then provide it to the corresponding user. If a password set or reset is required, follow these steps:

1. Go to **Utilities** in the top menu and select **Password Management** from the list.
2. Specify the User ID for which the password needs to be set.
3. Create a new password and confirm it by typing it again.
4. Click **Change Password** to apply the change.
5. Share the new password with the appropriate user.

Even though this method can be a real lifesaver in some cases, it lacks security and confidentiality. Ideally, according to the latest security regulations, a password must be available only to its user.
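Under the hood, a password set like the one Workbench performs maps to Salesforce's REST password resource on the User object. A hedged sketch of just the request construction (the API version, instance URL, and user ID are placeholder assumptions, and actually sending the request with an authenticated HTTP client is omitted):

```python
import json

API_VERSION = "v58.0"  # assumed API version; use your org's current one

def build_set_password_request(instance_url, user_id, new_password):
    """Build (url, body) for a POST to Salesforce's REST set-password resource."""
    url = f"{instance_url}/services/data/{API_VERSION}/sobjects/User/{user_id}/password"
    body = json.dumps({"NewPassword": new_password})
    return url, body

url, body = build_set_password_request(
    "https://example.my.salesforce.com", "005XXXXXXXXXXX", "nEw$ecret1"
)
# POST this url with the JSON body and an OAuth bearer token to set the password
```

This is why the confidentiality caveat above matters: whoever runs the call (in Workbench or via the API) necessarily knows the new password until the user changes it.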
### Run SOQL Queries

Salesforce relies on its own query language, [Salesforce Object Query Language](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm), or simply SOQL. It's relatively easy to use because Workbench provides a query builder, which is convenient even for those who don't know SOQL semantics. SOQL queries work well for extracting data objects based on certain conditions and filtering criteria for reports.

1. Go to the **Queries** menu in Workbench and select **SOQL Query** from the list.
2. Choose the object from the drop-down menu and specify the fields of interest.
3. Specify any filtering criteria to pick only the matching objects.
4. Select the desired option under the **View As** section to decide how the query results appear.
5. Click **Query** to get the results on the screen or in a file.

### Mass Records Create

Before creating new records in bulk, make sure you have prepared a file with all the needed information. That's usually a CSV file with a list of records for new leads. To create new records in bulk, proceed as follows:

1. Go to the **Data** menu and click **Insert**.
2. Select the needed option from the **Object Type** drop-down list.
3. Select the **From File** option and click **Choose File** to load the CSV file with records stored on your computer.
4. Click **Next** and map the fields accordingly.

You may also consider the **Single Record** option for creating a new record in Workbench rather than in Salesforce. Workbench allows adding or editing fields not visible on the Salesforce page layout.

### Mass Records Update

As with record creation, you can update Salesforce records in bulk or individually. For mass updates, you'll also have to prepare a CSV file with the necessary information for the records to update in Salesforce. To update records in bulk, proceed as follows:

1. Go to the **Data** menu and click **Update**.
2. Select the needed option from the **Object Type** drop-down list.
3. Select the **From File** option and click **Choose File** to load the CSV file with records stored on your computer.
4. Click **Next** and map the ID of the records you want to update.

### Mass Records Delete

Workbench also allows users to delete records in bulk, but with a maximum of 250 items per operation. As with mass update and creation, you must prepare a CSV file with the records for deletion beforehand. To mass delete records, proceed as follows:

1. Go to the **Data** menu and click **Delete**.
2. Select the **From File** option.
3. Map the IDs of the records between the CSV file and Salesforce.
4. Click **Map Fields** and proceed with the record deletion.

## Best Salesforce Workbench Alternatives

Given that recently released Workbench versions often show unstable performance, data management and querying may be at risk, so it's worth familiarizing yourself with alternative services. We have prepared some decent tools that offer features analogous to Salesforce Workbench's and effectively address the above-mentioned limitations.

### Skyvia

[Skyvia](https://skyvia.com/) is a multifaceted cloud data platform suitable for various data-related tasks. Its Data Integration product addresses data organization and manipulation, while Skyvia's Query product allows users to manage and query Salesforce data.

**Query.** The [Query](https://skyvia.com/query/) product is an online SQL client and query builder. It allows users to:

- Query and manage Salesforce data with SQL statements via a code editor or a visual query builder. Unlike Salesforce Workbench, Skyvia relies on standard SQL, making developers' lives easier.
- View the query result directly in the browser or export it as PDF or CSV files.
- Perform [mass updates](https://skyvia.com/blog/top-3-ways-to-mass-update-salesforce-records/#Salesforce-Mass-Update-via-Query-Using-SQL) of Salesforce data.

Overall, Skyvia Query is a decent alternative to the SOQL Query operations in Workbench.
It also offers an intuitive interface and can be used entirely for free. If you need to execute many queries per day, consider the paid version of Query, which has the same functionality with an unlimited number of queries.

**Data Integration.** Skyvia's [Data Integration](https://skyvia.com/data-integration/) product is a good solution for data management, transfer, and consolidation. For mass creation, update, and deletion of Salesforce records, it can sometimes be more efficient than Workbench or Skyvia Query. It uses CSV files as a data source or can connect to your preferred cloud tools and databases instead. Moreover, Data Integration has a range of other integration capabilities:

- You can export Salesforce data into CSV files.
- You can replicate Salesforce data into a data warehouse for further storage and analysis.
- You can synchronize Salesforce data with other cloud apps or databases.
- You can build complex ETL data pipelines to refine your data management processes.

Here's a real-life example of how companies have benefited from Skyvia:

### Data Loader

Another service for Salesforce data management is [Salesforce Data Loader](https://developer.salesforce.com/tools/data-loader). It's a desktop application that can be downloaded from the official Salesforce portals. Salesforce Data Loader is designed with large datasets in mind. Like Workbench, it can mass create, update, and delete records using CSV files. Unlike Workbench, it can also export data in bulk from Salesforce into a CSV file.

### SoqlX

If you prefer desktop applications to web-based solutions, [SoqlX](https://www.pocketsoap.com/osx/soqlx/) could be a good alternative to Workbench. It's designed only for macOS and has very similar functionality to Workbench. With SoqlX, users can build and run SOQL queries and save the resulting data in a CSV file. It can also extract metadata about Salesforce objects and system configurations.
This tool can also run anonymous Apex code with debug output available.

### Salesforce Extension Pack for Visual Studio Code

Developers who prefer Visual Studio Code in their daily work may consider the [Salesforce Extension Pack for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=salesforce.salesforcedx-vscode). These extensions enhance the development experience by providing tools for coding, debugging, and deploying Salesforce applications. Before installing this extension pack, make sure all the prerequisites are fulfilled. They are listed on the pack's download page and usually concern the installed Visual Studio Code version, a Salesforce DX project, the Salesforce CLI, etc.

### Gearset

[Gearset](https://gearset.com/) is a tool specifically designed for Salesforce deployment and release management. Its functionality differs from Workbench's, so it is an addition rather than an alternative to it. The core feature of Gearset is automating the deployment process for Salesforce metadata changes. It also provides validation and testing capabilities to ensure that a new deployment succeeds and does not cause unintended consequences.

### Salesforce Inspector

This is a [Google Chrome extension for Salesforce](https://chromewebstore.google.com/detail/salesforce-inspector/aodjmnfhjibkcdimpodiifdjnnncaafh) that performs functions similar to Workbench's, but directly inside the Salesforce org. For instance, you can create records by importing a CSV file and download metadata from Salesforce. The tool also lets users query and export Salesforce data, explore APIs, and check the org's API limits.

## Key Takeaways

Workbench is rather convenient and provides essential features for Salesforce developers and administrators. However, it allows users to mass create, update, and delete records only by uploading a CSV file.
Workbench alternatives go beyond these limitations, providing a larger spectrum of data manipulation functions as well as sources other than CSV files. Skyvia could be a decent alternative to Workbench for Salesforce because it:

- Provides the same set of core functions as Workbench.
- Is also accessible via a browser.
- Offers a user-friendly interface.
- Can be used for free.
- Uses standard SQL statements (not SOQL).
- Offers broader data-related functionality, such as replication, synchronization, and complex pipeline building.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) has nearly a decade of experience in technical writing and specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
# How to Connect Zendesk with Salesforce

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | May 23, 2025 | [Data Integration](https://skyvia.com/blog/category/data-integration/)

Sales and support teams exist in nearly every company, from small local businesses to large international organizations. Salesforce and Zendesk are tools that help those teams keep track of their daily operations. Very often, the work of sales and support managers intersects. So how do they share information if each has its own data management system? They might set up meetings or share the details of interest via chat, but when another support agent takes over, the same details need to be requested again, don't they? Zendesk Salesforce integration adds clarity to the interaction between sales and support teams and streamlines their workflows.
If you feel that such processes need to be optimized, explore the two ways to connect Zendesk and Salesforce described in this article. The first is the native method available in the Zendesk Admin Center; the second relies on a no-code integration platform, Skyvia.

**Table of contents**

- About Zendesk
- About Salesforce
- Benefits of Zendesk and Salesforce Integration
- Use Cases for Zendesk and Salesforce Integration
- Method 1: Salesforce Zendesk Integration via Zendesk Admin Center
- Method 2: Integration via Skyvia
- Comparison of Integration Methods
- Common Integration Issues
- Conclusion

## About Zendesk

[Zendesk](https://skyvia.com/connectors/zendesk) is a world-renowned CRM system for customer support, sales, and other customer communication services. With Zendesk, businesses can build a comprehensive customer support solution using features such as Tickets, Chats, a Knowledge Base, a Community Portal, and others. Recent Zendesk updates have equipped the platform with AI that offers smart suggestions to agents and even fully automates some processes. Companies employing these innovations report faster resolution times and higher customer satisfaction rates.

### Main Features of Zendesk

Zendesk provides all the necessary functionality for effective customer support management, including:

- **Ticketing system.** Allows support teams to receive, track, organize, and solve tickets within a single workspace.
- **Live chat.** Enables support agents to provide assistance in real time.
- **Help center.** Companies can create knowledge bases for customer self-service.
- **AI agents.** Virtual assistants automate interactions with customers and thus facilitate the daily workflows of support agents.
- **Analytics and reporting.** Zendesk provides a complete overview of the customer management system in a single window: you can track the average time to close a ticket, discover the most popular knowledge-base articles, or apply AI analysis to calls.
Tool integrations. This feature enables companies to connect their favorite tools with Zendesk. About Salesforce [Salesforce](https://skyvia.com/connectors/salesforce) has become one of the world's most famous CRM systems over the last decade. It fully covers the needs of sales, commerce, marketing, and service business functions. Salesforce also offers built-in opportunities for reporting, data visualization, and analytics. Some time ago, Salesforce implemented Einstein AI, a built-in intelligence layer across the platform that helps businesses automate their tasks and personalize customer experiences. Main Features of Salesforce Salesforce is a multifaceted CRM system with dozens of features for comprehensive customer management. The most popular ones are: Account & contact management. This is the core functionality of any CRM; it allows business users to access complete information about their existing or potential customers. Reporting and analysis. Salesforce has robust reporting capabilities that can shed light on individual customer preferences and purchasing behavior. Mobile version. Even if you are on a business trip or attending a conference, you can always view data, create tasks, and communicate with colleagues using the mobile version of this CRM. Sales forecast. Thanks to built-in capabilities, Salesforce can generate predictions on future sales that greatly assist managers in building future strategies. File sharing. This CRM allows you to attach files to customer profiles and share them with colleagues. Those could be agreements, contracts, and other official documents. Benefits of Zendesk and Salesforce Integration Even though each system is powerful enough on its own (Zendesk for customer support, Salesforce for sales), their integration can further benefit businesses in their initiatives. Connecting Zendesk and Salesforce brings a number of advantages for both startups and large enterprises. 
Productive Collaboration between Support and Sales Teams Support and sales departments work closely together to ensure effective business operations. Thanks to the integration, support agents and sales representatives will be on the same page. A unified view of the information about prospective customers ensures efficient assistance from the support side. Data Enrichment Zendesk Salesforce integration complements customer profiles in both systems. Such data enrichment enhances business processes and customer management as a result. Personalized Interactions Sales managers gain access to prospective customers' past conversations with the support team. This helps the sales team analyze the preferences and interests of each client and develop a personalized approach. Scalability & Flexibility With business expansion, the integrated systems can scale up to meet the growing demands. Despite the increasing volume of support and sales interactions, teams can ensure a high level of service to customers. Use Cases for Zendesk and Salesforce Integration Connecting Zendesk and Salesforce introduces a new approach to customer management across departments. The real magic of such integration is how it optimizes daily tasks. Here are some use cases that demonstrate an increase in productivity and operational efficiency from bringing together Zendesk and Salesforce. Ticket Auto-creation When a lead stalls or leaves negative feedback in Salesforce, an automated ticket can be generated in Zendesk to trigger follow-up from support. Targeted Outreach Syncing specific Zendesk tags (e.g., "pricing confusion" or "feature request") into Salesforce promotes a personalized approach in marketing or sales actions. Fresh Information When a support case is resolved in Zendesk, the corresponding Salesforce opportunity or case can be automatically updated through the integration. The same can be done in the opposite integration direction. 
Holistic Customer Profile Upon integration, it's possible to track all active Zendesk conversations, open opportunities, and support tickets in a single Salesforce dashboard. This information is essential to prevent churn and send timely updates on subscription renewals. Method 1: Salesforce Zendesk Integration via Zendesk Admin Center Let's explore how to use Zendesk Admin Center to integrate the customer support system with Salesforce. This method requires some preparation: Check your [Salesforce security](https://skyvia.com/blog/salesforce-security-best-practices/) settings. To do this, go to Salesforce Setup -> Security (the SETTINGS block in the left menu). Make sure the Lock sessions to the IP address from which they originated checkbox is NOT selected. Create a separate record type in Salesforce for your Zendesk tickets. It's not required, but it can still be helpful to distinguish Zendesk ticket records if you have several record types in Salesforce. Connect Zendesk to Salesforce. Zendesk offers four types of integration: Ticket view Data sync Ticket sync Zendesk Marketplace apps Ticket View [Ticket view](https://support.zendesk.com/hc/en-us/articles/4408834115738-Setting-up-a-ticket-view-in-Salesforce) lets users view Zendesk tickets directly from Salesforce's Account, Contact, Lead, and Opportunity pages. This method implies field mapping, sorting, filtering, and viewing of ticket information in Salesforce tables. Data Sync [Data Sync](https://support.zendesk.com/hc/en-us/articles/4408828539290) offers several scenarios for Zendesk and Salesforce integration: Synchronizing Salesforce Accounts to Zendesk Organizations Syncing Salesforce Contacts or Leads to Zendesk Users Syncing Zendesk Tickets to Salesforce The data transfer from Salesforce to Zendesk is quite simple in this case. 
You just need to select the scenario from the list, set filters, map the fields, specify other integration options, and save your integration. Ticket Sync The [Ticket sync](https://support.zendesk.com/hc/en-us/articles/4408828449050-Setting-up-Ticket-Sync-from-Zendesk-to-Salesforce) scenario is more complicated than the two described above. To use this integration flow, proceed with the following steps: Create a new Record Type in Salesforce if needed. Install the ticket sync package in Zendesk Admin Center. Configure the Zendesk tickets to Salesforce cases synchronization. Make sure to map the fields correctly, set filters, etc. Configure and activate Salesforce triggers if needed. For more detailed information, refer to the [Zendesk documentation](https://support.zendesk.com/hc/en-us/articles/4408828449050-Setting-up-Ticket-Sync-from-Zendesk-to-Salesforce) portal. Zendesk Marketplace Apps Applications and extensions on Zendesk Marketplace introduce other methods to connect to Salesforce. These are mainly third-party solutions that offer various feature sets related to the Zendesk Salesforce integration. Review the options available on the marketplace and select the one that best meets your requirements. Zendesk Salesforce Native Integration Pros Offers a quick and easy way to integrate both systems. The Ticket View scenario allows viewing data on the account, contact, lead, and opportunity pages in Salesforce without additional actions. This method supports triggers for workflow automation. Zendesk Salesforce Native Integration Cons Limited by Salesforce edition: available only in Performance, Unlimited, Enterprise, and Developer editions (or other editions with Salesforce API rights). Administrator permissions in both the Zendesk Support area and Salesforce are required to set up the integration. Native integration offers only a few scenarios. Currently, only Zendesk-to-Salesforce integration is available. 
Ticket view does not support the execution of data-related tasks. Method 2: Integration via Skyvia Another Zendesk Salesforce integration method is powered by [Skyvia](https://skyvia.com/), a no-code cloud platform that provides solutions for various data-related tasks: Data integration Cloud data backup Data management with SQL CSV import/export OData service creation Skyvia offers a range of data integration scenarios; find more details [here](https://skyvia.com/data-integration/integrate-salesforce-zendesk). As an example, we describe a simple way of integrating Zendesk Tickets data into the Salesforce Case table. Zendesk Tickets and Salesforce Cases are functional equivalents of each other, making them the most logical pair for syncing support-related data across platforms. A Zendesk Ticket represents a customer support request or issue; a Salesforce Case tracks customer problems, inquiries, or service needs. Prerequisites You need an account on the Skyvia platform to implement the integration. If you don't have one yet, feel free to [create a Skyvia account](https://app.skyvia.com/register). Another step is to create connections to both tools. For that, see the instructions on how to [create a Zendesk connection](https://docs.skyvia.com/connectors/cloud-sources/zendesk_connections.html) and how to [establish a connection to Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) in Skyvia. You can also create a record type in Salesforce to keep the records created by Skyvia separate. Creating the Import Integration Sign in to Skyvia, click +Create New, and select the Import option under the Integration section. Click Data Source database or cloud app. Select your Zendesk connection as a Source and your Salesforce connection as a Target. Don't forget to rename the scenario. 
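Since Zendesk Tickets and Salesforce Cases are functional equivalents, the import conceptually boils down to a field-by-field mapping. The sketch below is a rough Python illustration of that idea, not Skyvia's actual implementation; the field names and the placeholder RecordTypeId are simplified assumptions.

```python
# Illustrative sketch of mapping a Zendesk ticket onto a Salesforce Case.
# Field names are simplified examples, not the exact API names.

def map_ticket_to_case(ticket, contacts_by_email):
    """Build a Case-like dict from a ticket-like dict.

    contacts_by_email models "mapping by lookup": the contact is
    resolved from an existing Salesforce Contacts table.
    """
    contact = contacts_by_email.get(ticket["requester_email"])
    return {
        "Subject": ticket["subject"],           # same-name fields map 1:1
        "Description": ticket["description"],
        "Status": ticket["status"].capitalize(),
        # lookup mapping: associate the case with an existing contact
        "ContactId": contact["Id"] if contact else None,
        # constant mapping: a fixed RecordTypeId for Zendesk-born cases
        "RecordTypeId": "012000000000001AAA",   # placeholder value
    }
```

If no contact matches the requester's email, the sketch leaves the association empty, mirroring how an unmatched lookup behaves.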
Creating the Task After selecting the source and target connections, click Add new on the right to open the Task Editor. Select the Simple Task Editor mode. More details about available task editor modes are available [here](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html). On the Source definition tab, select the Zendesk Ticket table. You can also add filters or select the related objects for import. On the Target definition tab, select the Salesforce Case table. Select the action you want to perform. In our case, we select the Insert action. Proceed to the Mapping definition tab to map the source and target fields. The fields with the same names will be mapped automatically. You can map the remaining fields manually using the available [mapping types](https://docs.skyvia.com/data-integration/common-package-features/mapping/). In our case, we import the tickets, associating them with existing contacts in Salesforce. To do this, we use mapping by lookup and retrieve the contacts from the Contacts table in Salesforce. In the same way, it's possible to associate the records with an existing Salesforce account, another case, or other objects, depending on your scenario. It's not mandatory to create a separate record type in Salesforce, but if you did so for your Zendesk tickets, you have to map the RecordTypeId value by constant. If you don't know how to obtain the RecordTypeId value, refer to the Salesforce [documentation](https://help.salesforce.com/s/articleView?id=000321696&type=1). Complete the mapping and save the task. Running the Package and Observing the Results After the integration task is saved, click Create the package and proceed to its execution by clicking Run. The integration results are available in the Monitor and Log tabs on the Skyvia dashboard. Zendesk Salesforce Integration with Skyvia Pros Integration variety. 
This method supports many more integration scenarios than the native one does. Customizability. You can choose exactly which objects to integrate and which operation to perform. Bi-directional sync. It's possible to carry out the integration in both directions. Accessible to anyone. No deep technical knowledge or coding skills are required. Zendesk Salesforce Integration with Skyvia Cons No real-time data streams. Skyvia does not support real-time integration, though you can schedule data updates as often as every minute. Advanced feature use. SQL skills are needed when switching to the advanced mode in the integration task setup. No error notifications by default. If the integration isn't successful, there will be no notification. You either need to enable this in the account settings or keep an eye on the Monitor tab. Comparison of Integration Methods We have explored the two main ways to integrate Salesforce and Zendesk: using the built-in Zendesk Admin Center and a no-code solution, Skyvia. Here's a side-by-side comparison of these two approaches and their characteristics. We hope it will help you evaluate each option and understand which one could be a better choice for you.

| Feature | Zendesk | Skyvia |
| --- | --- | --- |
| Technical skills required | Basic admin knowledge | No coding needed |
| Integration direction | One-way integration | Bidirectional sync |
| Field mapping | Basic | Advanced |
| Filtering | Basic | Advanced |
| Real-time updates | Partial | No, but a 1-minute refresh interval is possible |
| Software installation | Salesforce package required | 100% cloud-based |
| Objects available for import | Limited | All standard and custom objects |

Common Integration Issues Even with the newest advancements in the data integration field, you need to put some effort into setting everything up. This might take some time at the very beginning, but it will pay off in the long run. 
What\u2019s more, some issues might take place after the initial runs, so you might consider starting the test integration first, and then switching to the production implementation. To help you overcome all those obstacles, we present some common issues that take place during the Zendesk Salesforce integration and explain how you can minimize their impact or avoid them altogether. Issue Description Remedy Data Conflicts Updates in one system overwrite more recent data in another system. \u2013 Create conflict resolution rules with conditional logic or time-based filters.- Configure synchronization direction properly (In Skyvia, define sync direction per field and not just per object).- Educate teams on source-of-truth ownership. Duplicate Records Identical tickets, user profiles, or other instances appear in the system. \u2013 Make sure that unique identifiers are present.- Define clear data mapping rules.- Choose the right DML operations (in Skyvia, INSERT adds a new record to the system if there is no ID that matches it, and UPDATE updates the existing record with new information when the ID is found).- Make regular audits to check for duplicates. Partial Data Sync Not all new fields or updates appear in the target system upon the integration run. \u2013 Double-check the access levels and permissions for each field in both systems.- When using Skyvia, check the Monitor tab to verify the integration result.- Verify that the mapping is set correctly and apply format conversion if needed.- Avoid syncing formula fields or calculated values that may not have static output. Integration doesn\u2019t run on a schedule At the expected time, no new or updated information appears in the target system. \u2013 Check the time zone indicated in the scheduling settings window.- Review the integration logs to see whether any errors have occurred during the data transfer.- Split large datasets into smaller chunks for batch integration. 
Conclusion The integration of Salesforce and Zendesk supports the optimization of company-wide processes. In particular, it enriches datasets in both systems and adds transparency to the interconnected sales and support processes. It's possible to connect the two services through different integration approaches, but we've focused on these two: Using Zendesk Admin Center as a native method Using Skyvia as a third-party data integration tool The first approach is pretty straightforward, but it performs data transfer only in one direction (from Zendesk to Salesforce) and supports a limited number of fields. Skyvia, in contrast, offers a wide variety of integration methods, supports all standard and custom fields, and enables a bidirectional Zendesk Salesforce connection. Feel free to try Skyvia! F.A.Q. for Zendesk to Salesforce How to connect Zendesk to Salesforce? There are several ways to connect these tools and exchange data between them. Two of those are presented in this article: native integration and Skyvia-based integration. How to set up a reliable Salesforce Zendesk sync without coding? To sync data between these two systems without programming, feel free to use Skyvia. It offers a range of integration scenarios for sending data from Salesforce to Zendesk and vice versa. Is Salesforce better than Zendesk? Salesforce and Zendesk share similar features, but each has its own purpose. While Salesforce is focused on sales lead management, Zendesk is primarily designed for support teams. Is Zendesk a CRM or ERP? Zendesk is considered a CRM system with a primary focus on customer service and support. 
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences." }, { "url": "https://skyvia.com/blog/shopify-and-hubspot-integration-ultimate-guide/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) How to: Shopify and Hubspot Integration By [Amanda Claymore](https://skyvia.com/blog/author/amandac/) September 28, 2023 
Modern businesses benefit from integrations between e-commerce platforms and CRM systems that boost sales and improve the customer management process. Such integrations allow sales and support teams to better understand their clients and offer personalized content and shopping experiences to increase customer loyalty. CRM-generated reports help to perform data-driven forecasts. Moreover, the integration between the two helps to automate sales stages such as lead generation and order processing to lower the effort spent on administrative work. In this article, we provide a detailed overview of the integration between two widely used systems: Shopify and Hubspot. Table of Contents About Shopify What is Hubspot? The Importance of Integrating Shopify with Hubspot Shopify and HubSpot Integration through Plugins Easy Way to Connect HubSpot and Shopify with No Coding Conclusion About Shopify Shopify is a comprehensive e-commerce platform that does pretty much everything you need to sell online. Millions of merchants use it to run their online stores. With Shopify, you can set up your store, manage inventory, process payments, and track orders all in one place. Shopify users admit that while Shopify may lack some custom functionality in certain cases, its core strength lies in its simplicity. It requires little to no technical knowledge to get things up and running. What is Hubspot? Hubspot is an all-in-one cloud-based solution that helps marketing, sales, and customer service teams attract, engage, and delight their future and existing clients. All of this is done with the help of the various tools offered by Hubspot. The Best Features of Hubspot HubSpot helps you automate marketing campaigns, social media posting, and email marketing. 
It provides personalized content to end users based on their characteristics or behavior. With HubSpot, you can create forms, landing pages, pop-ups, and other lead magnets. Afterward, you can score leads based on their level of engagement. Hubspot keeps track of all the customer data and interactions. The customer support functionality is broad and includes tickets, knowledge bases, live chats, and chatbots. The Importance of Integrating Shopify with Hubspot In the modern business world, the vast majority of decisions are data-driven. Integrating Shopify and Hubspot brings access to comprehensive analytics and reporting capabilities. When you are ready to implement your decisions, the integration between those two helps you automate most of your actions, so you can save time for future planning instead of spending it on routine work. Content personalization becomes easy when you can readily compare and group your future and existing customers. And the right customer support tools will only improve their user experience. Shopify and HubSpot Integration through Plugins You can connect a Shopify store with Hubspot by using native plugins. To do so, follow the instructions listed below: Log in to Hubspot. Click the Marketplace icon and select App Marketplace. Search for Shopify integration and click Install app. You will be asked to enter your store's URL. Enter it and click Connect app. Log in with your Shopify credentials when asked. In this way, you will be able to sync contacts, products, and orders from Shopify with deals in Hubspot. This is a one-way sync, meaning that data will only come from Shopify to Hubspot. You can enable a two-way sync for contacts. To do so, open Marketplace, click Connected apps, and select Shopify. Then, on the Contacts tab, select Sync limited HubSpot updates to Shopify. 
Limitations of Integration via Plugin While the integration process seems nice and easy, it has several drawbacks and limitations: Pricing. While the usage of the plugin is free, make sure to check the subscription plans required for Hubspot and Shopify and which integration options are included there. Not all product-related information can be transferred. Customer-related data often fails to transfer. Multiple problems with associating orders with accounts have been reported, which causes the creation of duplicate account records. Easy Way to Connect HubSpot and Shopify with No Coding As an alternative to a built-in plugin, you may use a third-party solution such as [Skyvia](https://skyvia.com/) that allows you to build integrations between more than 170 different cloud applications and databases, including Hubspot and Shopify. This method assumes that you already have a Skyvia account. You can always create a Skyvia account for free [here](https://app.skyvia.com/). Skyvia is a universal cloud-based data platform with a comprehensive set of components to implement integration scenarios: The Import component facilitates the transfer of data between two sources, utilizing powerful mapping settings for data transformations. It's designed to accommodate both ETL and reverse ETL scenarios. The Replication scenarios are applied to create and automatically update a copy of Hubspot or Shopify data. Skyvia's Replication tool aligns with the ELT scenario. Synchronization, meanwhile, manages one- or bi-directional data synchronization between data sources. It's important to note that, ideally, one of the data sources should be empty to prevent data duplication. For more complex integration tasks involving more than two data sources and advanced data mapping with multi-stage transformations, the Data Flow builder comes into play. 
Additionally, the [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) feature creates logical workflows and executes data flows based on specified conditions, enabling the design of intricate ETL data pipelines. In this article, we'll show the Import scenario, as it can be used for both one- and two-way data sync. To connect Hubspot and Shopify using Skyvia, you need to create a connection to each of them. Creating a Connection to HubSpot To create a connection to Hubspot: Click +New, and choose Connection. Select Hubspot from the list. Click Sign in with Hubspot. Log in with your Hubspot credentials. Click Create Connection. Creating a Connection to Shopify Click +New, and choose Connection. Select Shopify from the list. Specify your Shopify Store. You need to specify the full domain. For example, [my-store.myshopify.com](http://my-store.myshopify.com/). Click Sign in with Shopify. Log in with your Shopify credentials. Click Create Connection. Connecting Hubspot and Shopify Skyvia provides a variety of data integration scenarios, such as Import, Export, Replication, Synchronization, and others, that can be used with Hubspot and Shopify to meet your data-related needs. Let's check how to import data from Shopify to Hubspot: Click +New, and choose Import. Select Shopify as the Source and Hubspot as the Target. Click Add New to add a new task. Select an object you want to import data from and click Next Step. Select an object you want to import data to and click Next Step. Here, you can also choose which operation you want to perform. For example, Update or Insert. Map the required fields and click Save to save your task. You can add any number of tasks to the integration. Once your integration is ready, you can run it manually or on a schedule. 
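The Insert vs. Update choice in the steps above determines what happens when a source record does or does not already exist in the target. A small Python sketch of those semantics (keying records on email purely for illustration; the actual matching key depends on your mapping):

```python
# Sketch of Insert vs. Update semantics for an import task.
# target_by_email models the target system as a dict keyed by email.

def apply_operation(target_by_email, record, operation):
    """Insert adds the record only when no match exists;
    Update modifies a record only when a match is found."""
    key = record["email"]
    if operation == "Insert":
        if key not in target_by_email:
            target_by_email[key] = dict(record)
    elif operation == "Update":
        if key in target_by_email:
            target_by_email[key].update(record)
    return target_by_email
```

Choosing the wrong operation is a common source of the duplicate-record and partial-sync issues discussed for integrations of this kind: Insert against an already-populated target can create near-duplicates, while Update silently skips records that have no match.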
Why Skyvia is Beneficial for Your Needs While Skyvia may require a few extra steps before launching the integrations, it offers a variety of integration tools alongside automation and backup services that cover most data-related needs and provide a high level of customization and configuration. You can always create a free account with [Skyvia](https://app.skyvia.com/) and apply for a free trial of any product provided by Skyvia. The integration process is easy and requires no coding. Conclusion Nowadays, if you are using Hubspot and Shopify, the integration between them is more of a must than a question, as it brings a long list of benefits. There are several methods to integrate the two systems: a native plugin and third-party solutions such as [Skyvia](https://skyvia.com/). Which solution to choose depends on your needs and budget. We recommend using this guide and trying both methods to see which one works best for you. [Amanda Claymore](https://skyvia.com/blog/author/amandac/) Content Marketer" }, { "url": 
"https://skyvia.com/blog/shopify-quickbooks-online-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) QuickBooks and Shopify Integration: A Full Guide By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) September 20, 2023 Selling online is a benefit noticed and appreciated even by seasoned brick-and-mortar shops. The number of online stores grows each year and currently stands at [12 million](https://www.tidio.com/blog/online-shopping-statistics/) in total, with 4 million managed by [Shopify](https://www.shopify.com/). This platform is incredibly popular because it's specifically designed for the easy creation and management of online stores. To maintain the billing and transaction data of online stores, there is the [QuickBooks](https://quickbooks.intuit.com/) tool. It has millions of users all over the world due to its simplicity and ample functionality. Combined, these two tools create a perfect technological basis for small businesses. Consider Shopify and QuickBooks integration to get better control over your sales and inventory management. In this article, we explain how to set up [Shopify QuickBooks integration](https://skyvia.com/data-integration/integrate-quickbooks-shopify) with a native connector and via a 3rd-party cloud app. 
Based on that, you'll decide which method works better for your specific business needs. Table of Contents Benefits of Shopify and QuickBooks Integration Overview of QuickBooks Online and Desktop Versions How to Connect Shopify to QuickBooks Online Using Native Connector QuickBooks and Shopify Data Integration Using Cloud Tool Advanced Integration Capabilities: An Accountant's Perspective Conclusion Benefits of Shopify and QuickBooks Integration Even if you've already been using both tools for a while, you should still consider Shopify integration with QuickBooks. Merging data between these cloud-based applications isn't demanding at all and brings multiple advantages. Below, we provide some of the tangible benefits that come along with Shopify QuickBooks online integration. Streamlined accounting process. QuickBooks allows registering, sorting, and storing all the accounting data in one place. As all income, expenses, and payroll data are kept together, users obtain better control and accuracy over them. Real-time insights for sales and inventory. When all sales-related data is gathered in QuickBooks, it's easy to observe how the business is growing based on the sales metrics. Automated data synchronization. With automated integration between the two apps, users no longer need to input data manually. This saves many working hours and improves the accuracy of calculations. QuickBooks Shopify sync is easy to set up, though some parameters will depend on the versions of the software tools. Overview of QuickBooks Online and Desktop Versions With decades of history, the QuickBooks application has passed through different forms and stages of development. There's still an option to work with this accounting software as a desktop app on Windows or Mac. Alternatively, users can enjoy the cloud-based version, which also provides extra features and advantages. 
So, let's have a look at the core differences between the [online and desktop versions](https://quickbooks.intuit.com/online/move-to-online/) in the table below. Understanding them will help you set up the integration between Shopify and QuickBooks correctly.

| Parameter | Online version | Desktop version |
| --- | --- | --- |
| Cloud access | Provided by default | Requires an extra fee |
| Number of users | Up to 25 users per account | One user per device, with extra payment for additional users |
| Feature set | Full functionality | Limited functionality |
| Data backups | Automatic | Manual |
| Integrations | More than 700 apps | More than 200 apps |
| User interface | The latest version | Simplified |

Even though the desktop app is convenient to use, it's not as modern and powerful as the online version. The latter grants better integration capabilities, allows for better collaboration among team members, offers a larger feature set, and automates many processes.

How to Connect Shopify to QuickBooks Online Using Native Connector

The first way to connect Shopify and QuickBooks is by using the native integration tools. In particular, we'll look at the [Shopify connector](https://quickbooks.intuit.com/app/apps/appdetails/shopify_connector/en-us/) developed by Intuit for syncing data between the applications. Follow these step-by-step instructions to integrate data between the Shopify and QuickBooks apps using the native connector:

1. Go to [this page](https://quickbooks.intuit.com/app/apps/appdetails/shopify_connector/en-us/).
2. Click Get app now.
3. Click Connect. You are redirected to the connectors management page.
4. Select Shopify and click Connect next to it.
5. Type your Shopify store domain and click Connect to Shopify.

Even though the native application for connecting Shopify and QuickBooks has everything in place, users report some drawbacks and inconveniences with it.
For instance, this connector isn't capable of extracting correct customer data, which is essential for creating invoices and binding them to specific customers. Integration via third-party cloud apps addresses such issues.

QuickBooks and Shopify Data Integration Using Cloud Tool

Before moving data between cloud-based apps, it might make sense to migrate from the desktop to the online version of QuickBooks in advance. This procedure is not obligatory, so it's up to you to decide whether it makes sense for optimizing your business operations.

Connect QuickBooks Online and QuickBooks Desktop

If you need to move to the online version of QuickBooks, take the steps below.

1. Go to the Help menu in the top bar.
2. Select Update QuickBooks Desktop.
3. Click Update Now.
4. Go to the Company menu and select Export Your Company File to QuickBooks Online.
5. Click Get Started to set up the data migration parameters.
6. Set up all the needed parameters.

NOTE: If you transfer data from the desktop version to an existing company account online, all the data there will be replaced with the desktop app data.

Guide on Integrating QuickBooks Online with Shopify using Skyvia

Now, let's get back to the point of this article – the integration of Shopify and QuickBooks. In particular, this section looks at Skyvia as a third-party tool assisting in the data transfer process. [Skyvia](https://skyvia.com/) is a cloud-based data integration platform that enables users to move and synchronize data across multiple apps, including Shopify and QuickBooks. The service meets essential business needs in today's digital world: it is easy to set up, integrate, and maintain, and it is scalable and flexible (capable of handling the load and providing flexible pricing). Here, we'll look at two typical integration scenarios between Shopify and QuickBooks using Skyvia: data import and synchronization.
The first one is applicable when you want to move invoices, payments, and other accounting data from Shopify to QuickBooks. Synchronization is necessary to keep specific data fields consistent over time in both apps.

Setting Up Import Scenario for Data Integration

Skyvia's Import scenario is designed for simple and effective data integration between data sources. In this case, we import Shopify data (Payment Transactions in this example) into QuickBooks Online.

1. Click +NEW in the top menu and select Import scenario in the Integration category.
2. Set up the needed parameters for creating an import package. Define Shopify in the Source section. If not added yet, [see how to connect Skyvia to Shopify](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html). Define QuickBooks Online in the Target section. If not added yet, see how to connect [Skyvia with QuickBooks Online](https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html#connection).
3. Select the necessary checkboxes with the options Preserve task order, Nested Objects, and Source values in the error log.
4. Determine the batch size for the data records transfer.
5. Click Add task to define which exact data should be moved and to define the data mapping settings.
6. In the Source Definition tab, define the exact data object in Shopify to be moved. In this case, Payment Transactions is selected. Click Next Step.
7. In the Target Definition tab, select the object in QuickBooks to which the data will be loaded. In this case, Payment is selected.
8. Map the data columns if required. [See more details about mapping rules here](https://docs.skyvia.com/data-integration/common-package-features/mapping/).
9. Click Save.
10. Click Run to start the data integration immediately, or set scheduling parameters to perform the data integration regularly at a specific time.
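Conceptually, the import task configured above combines column mapping with batched inserts. The following Python sketch is purely illustrative (Skyvia performs these steps without any code, and every name below is invented for the illustration):

```python
# Conceptual sketch of an import task: rename source columns according
# to a mapping definition, then hand records to a loader in batches.
# All names here are hypothetical; this is not Skyvia code.

def run_import(source_records, column_mapping, batch_size, load_batch):
    """Map each source record's columns and load records in batches."""
    batch = []
    for record in source_records:
        # Apply the mapping definition, e.g. {"Amount": "TotalAmt"}.
        mapped = {target: record[source]
                  for source, target in column_mapping.items()}
        batch.append(mapped)
        if len(batch) == batch_size:
            load_batch(batch)   # e.g. an Insert into the target app
            batch = []
    if batch:
        load_batch(batch)       # flush the final partial batch

# Example: two Shopify-like payment records, loaded in batches of 1.
loaded = []
run_import(
    [{"Amount": 10.0}, {"Amount": 25.5}],
    {"Amount": "TotalAmt"},
    batch_size=1,
    load_batch=lambda b: loaded.append(list(b)),
)
```

A smaller batch size means more load calls but smaller payloads per call; the right value depends on the target app's API limits.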
Setting Up Synchronization Scenario

The Synchronization scenario designed by Skyvia is particularly suitable for bi-directional data sync between apps or databases. In this case, we focus on synchronizing client data between Shopify and QuickBooks to keep the customer base and contact data up to date.

NOTE: During the first sync session, all the data is copied from one source to another without checking for duplicates. The system adds only new or updated records during subsequent sync sessions. So, if you need to sync data in both directions and your systems are not empty, there is a better alternative – create two different import packages (one for importing data from Shopify to QuickBooks, and another for importing data from QuickBooks to Shopify) and run them one by one on a schedule.

1. Click +NEW in the top menu and select Synchronization scenario in the Integration category.
2. Set up the needed parameters for creating a synchronization package. Define Shopify in the Source section. If not added yet, [see how to connect Skyvia to Shopify](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html). Define QuickBooks Online in the Target section. If not added yet, see how to connect [Skyvia with QuickBooks Online](https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html#connection).
3. Click Add task to define which exact data should be moved and to define the data mapping settings.
4. In the Tables Definition tab, set the direction and data fields for sync. Click Next Step.
5. Set up the needed parameters in the Columns Definition tab.
6. Click Save.
7. Click Create in the upper panel to create the package.
8. Set scheduling parameters to perform the data integration regularly at a specific time, or click Run to start the data integration immediately.

Guide on Integrating QuickBooks Desktop with Shopify using Skyvia

Data integration scenarios for the desktop version of QuickBooks are similar to those of the online one.
The only significant difference is the [connection of QuickBooks Desktop with Skyvia](https://docs.skyvia.com/connectors/cloud-sources/quickbooksdesktop_connections.html#connection). Also, the data fields available for transfer or syncing might slightly differ.

Advanced Integration Capabilities: An Accountant's Perspective

As the integration works in both directions, sending data from QuickBooks to Shopify is also possible. In particular, accountants appreciate advanced QuickBooks features such as tax setup, the data from which can be integrated into Shopify.

Tax Setup

This QuickBooks setting determines how much tax to charge customers and which amount businesses must pay for local or state tax. Provide the necessary business details, set up plans for collecting taxes in the relevant areas, and indicate the corresponding tax agencies. To integrate this data from QuickBooks to Shopify, proceed with the import package described above, considering the following details:

1. Select QuickBooks as a source and Shopify as a target.
2. Select TaxCode in the Source Definition tab.
3. Select Countries in the Target Definition tab.
4. Map the data fields correspondingly.

Now, the amount of tax is calculated automatically when a customer is charged for an order in Shopify. Similar integration packages can be created for other advanced QuickBooks features that you can benefit from in Shopify.

Conclusion

Bringing the accounting power of QuickBooks and the selling potential of Shopify together opens up excellent growth opportunities for businesses. In particular, QuickBooks and Shopify integration streamlines the accounting process, surfaces insights for sales and inventory, and automates data synchronization. Moreover, it makes it possible to bring exclusive advanced features from QuickBooks, such as tax setup, into Shopify for correct tax charging.
To take advantage of the combination of these tools, a third-party cloud application – Skyvia – comes in handy. It's a perfect solution for transferring all the needed data between apps and mapping it correctly to boost sales and improve account management in the company. [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
" }, { "url": "https://skyvia.com/blog/shopify-salesforce-integration-3-best-methods/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Shopify Salesforce Integration: 3 Best Methods By [Brenna Scheving](https://skyvia.com/blog/author/brennasch/) September 16, 2022 The ecommerce market offers an incredible number of possibilities for entrepreneurs worldwide. An idea can easily be translated into a profitable business with the assistance of business applications like Shopify and Salesforce. What's more, the integration of these two systems increases their individual power through the capacity for immediate data exchange. When Salesforce and Shopify are integrated, data is shared across both platforms. This makes it easier to efficiently manage inventory, strengthen customer relationships, and implement more efficient workflows.
For companies that use both Salesforce CRM and Shopify, this article will help you understand how to integrate the two services. Table of Contents What is Salesforce? What is Shopify? Why Do You Need Shopify Salesforce Integration? Easiest cloud integration between Shopify and Salesforce How to Connect Shopify to Salesforce using API How to Use Native API Tools to Integrate Salesforce with Shopify Summary

What is Salesforce?

Salesforce is a powerful tool that simplifies customer relationship management for companies around the globe. The platform makes it simple for businesses to improve relationships with customers and prospects. Salesforce is also a cloud-based CRM, which means the platform is accessible to all users, regardless of location. Salesforce allows companies to manage and track customer relationships from the first conversation to ongoing customer service. The easy-to-navigate dashboard shows customizable data, which makes it simple for users to find what they need, and the platform can generate meaningful reports and analytics. As a result, Salesforce gives companies the power to strengthen customer relationships. A huge benefit of Salesforce is the ability to easily integrate with third-party tools and business applications. By utilizing the data from other applications, Salesforce allows companies to discover new insights that enhance their business. Shopify is one such third-party tool that offers great benefits when integrated with Salesforce.

What is Shopify?

Shopify is a cloud-based platform that enables users to easily launch an ecommerce business. In fact, Shopify manages and tracks not only web- and mobile-based sales but also in-person sales at brick-and-mortar locations. The Shopify platform is incredibly user-friendly, which makes it easy and fast for users to create their own ecommerce stores. Shopify also makes it simple to manage inventory.
What's more, inventory can be managed and synced between both webshops and physical stores. This makes it easier to maintain accurate knowledge of business inventory across a variety of storefronts. In addition, Shopify provides unique reporting tools for business owners. Shopify generates reports about average inventory sold per day, sell-through rate, and more. The Shopify platform allows users to quickly create an ecommerce storefront with templates that are fully customizable: users can drag and drop photos, add copy, and more. Shopify storefronts are also easy for potential customers to navigate. What's more, Shopify can be integrated with other platforms such as Salesforce.

Why do you need Shopify Salesforce Integration?

The biggest benefit of Shopify Salesforce integration is the ability to track, analyze, and manage ecommerce and customer relationship management data simultaneously. For example, data points such as orders, customers, collections, and products can be synced between Shopify and Salesforce in real time. Such synchronization facilitates inventory management. What's more, Shopify to Salesforce integration easily converts customers to accounts and contacts. Consequently, these accounts and contacts can be analyzed, tracked, and followed up on in the Salesforce CRM platform. Salespeople are able to use this data to follow up with leads and opportunities. In addition, multiple Shopify stores can be managed in the Salesforce CRM platform. This allows users to see a holistic view of data across various ecommerce storefronts, which is a huge benefit for ecommerce businesses that operate a variety of storefronts. By toggling between ecommerce storefronts in the Salesforce platform, users are able to see key data insights that can increase profitability. For example, many companies create highly specific marketing campaigns based on such data analysis. Data analysis is an important part of ecommerce.
By analyzing sales and managing customer relationships, shop owners can maximize profitability and retain customers. Shopify Salesforce integration is an essential process for managing and analyzing data across these two platforms.

Easiest cloud integration between Shopify and Salesforce

There is a simple and effective way to [integrate Shopify and Salesforce](https://skyvia.com/data-integration/integrate-salesforce-shopify). [Skyvia](https://skyvia.com/) is a cloud-based solution that helps complete various data-related tasks without coding. Skyvia offers different Shopify Salesforce integration methods to implement various use cases. Import allows loading data into cloud data sources or databases from other sources and CSV files. Export helps retrieve data from cloud sources and databases into CSV files and place these files in the available storage services for future processing. Skyvia Synchronization is a tool for the bi-directional integration of data sources with different data structures. Replication is needed for copying data from cloud sources to databases together with the data structure (schema). Skyvia also offers Data Flow and Control Flow tools for complex data integration scenarios, which allow for building complicated flows and performing conditional data transformations, splits, and other actions. In our case, we will look at the easiest method: creating Salesforce contacts from Shopify customer data using Skyvia Import. This method is simple and time-saving, and it doesn't require any specific skills. To integrate Shopify and Salesforce, you need to be registered in Skyvia. You can [sign up with Skyvia for free](https://skyvia.com/pricing/) if you don't have a subscription yet. Then log in to Skyvia and create connections to [Salesforce](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html) and [Shopify](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html).
There are several ways to connect to data sources in Skyvia; more details are available [here](https://docs.skyvia.com/connections/).

1. Create an Import package. Click +NEW and select Import at the top of the Skyvia page. Name the package so it doesn't remain untitled. Click the Data Source database or cloud app Source type on the left. Select your Shopify connection as a Source and your Salesforce connection as a Target.
2. Click Add task on the right to create an Import task.
3. Specify the Source settings. In the new Task Editor window, on the Source Definition tab, select the Simple Task Editor mode. Select the Shopify Customers table as a source and specify the filters if needed. You can also use Advanced mode. When the query or command is ready, click Next to proceed.
4. On the Target Definition tab, select the Salesforce Contacts table as a target and choose the action to perform against the target table. In our case, we perform the Insert operation.
5. Shopify and Salesforce data have different structures, so the source and target fields must be mapped for smooth integration. On the Mapping Definition tab, specify the column mapping. You can use the available mapping types to map the fields. Detailed descriptions of every available mapping type are available here.
6. After the mapping is completed, save the task and then save the package.
7. The package can be launched manually, or you can set a schedule for the package to run automatically. Skyvia offers several launch frequency options.
8. After the run is completed, you can check the result in the package run history on the Monitor and Log tabs.

With Skyvia, you can also implement the integration in the opposite direction and integrate Salesforce Leads into Shopify Customers. Repeat the steps from point 1, selecting Salesforce as a Source and Shopify as a Target instead. Proceed to the package task editor. Select the Lead table on the Source Definition tab. Use filters if needed. Select the Customers table as a Target.
Map the source and target fields, and set the schedule if needed. Then run the package and check the results. As you can see, no-code cloud integration with Skyvia is a simple and convenient method to integrate Salesforce and Shopify. You don't need to be tech-savvy, and no development experience is needed to set up such an integration. Skyvia offers free data integration opportunities, so you can start implementing your integration right now.

How to Connect Shopify to Salesforce using API

Shopify has application programming interfaces (APIs) that allow different software programs to interact with one another. Salesforce also has APIs, which can be used to create a Shopify to Salesforce integration. Below are instructions on how to combine the Shopify and Salesforce APIs to create such an integration:

1. In Shopify, navigate to Settings and click 'Apps and sales channels.' Click 'Develop apps.' Click the name of the app that you want the credentials for. Then, click API credentials. Make a note of the API key and secret key.
2. Access the list of customers with a cURL command.
3. Format the JSON into a CSV with a command-line JSON processor.
4. Next, use the Salesforce Bulk API to load the data. Create the configuration file that starts the sequence, saving it with the name login.txt.
5. Then, log in with the previous configuration file. The command response will show a session-id; make a note of it.
6. Next, create the configuration file that initiates the Bulk API, saving it as job.txt.
7. At this point, you can start the Bulk API request with the above configuration file. A job-id will appear in response to the command. Make a note of it.
8. Add a batch to the above job to insert data using the CSV file from step 3. Use the job-id from step 7.
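The article's exact commands aren't reproduced here, but steps 2 and 3 can be illustrated. As a hedged sketch (the top-level `customers` array matches the standard shape of Shopify's `customers.json` response; the chosen CSV columns are assumptions), a few lines of Python can stand in for the command-line JSON processor:

```python
import csv
import io
import json

# Sketch: flatten a Shopify customers.json-style payload into a CSV
# suitable for a Salesforce Bulk API batch. The payload below is a
# hard-coded stand-in for the response fetched via cURL in step 2.
payload = json.loads("""
{"customers": [
  {"first_name": "Ada", "last_name": "Lovelace",
   "email": "ada@example.com", "phone": "+1-555-0100"}
]}
""")

buffer = io.StringIO()
writer = csv.writer(buffer)
# Header row uses standard Salesforce Contact field names.
writer.writerow(["FirstName", "LastName", "Email", "Phone"])
for customer in payload["customers"]:
    writer.writerow([customer.get("first_name"), customer.get("last_name"),
                     customer.get("email"), customer.get("phone")])

csv_text = buffer.getvalue()
```

The resulting `csv_text` is what step 8 would upload as a batch; in a real pipeline the payload would come from the authenticated Shopify API call rather than a literal string.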
If you need to split the job into multiple batches, use the same command with multiple files to add more batches. The status of the job can then be confirmed with another cURL request.

The biggest benefit of integrating Salesforce with Shopify via APIs is direct data sharing. Business owners can use shared data to manage Shopify inventory more efficiently, analyze sales, and strengthen customer relationships. However, the biggest limitation of this method is the lack of real-time synchronization across platforms. First, it must be noted that the Shopify APIs are paginated. Pagination takes large amounts of data and chunks them into smaller pieces – not unlike how a book is composed of pages. Consequently, this method requires a programmer to develop extra logic to handle the paginated Shopify APIs. Another limitation is that much of the Shopify data must undergo transformations before it can be exported to Salesforce. Furthermore, the ongoing transfer of data between Shopify and Salesforce is only possible with additional programming; data cannot be synced in real time unless more logic is created on the programmer's end. Finally, the Salesforce Bulk APIs are unable to manage duplicates; this process must be controlled at the application level.

How to Use Native API Tools to Integrate Salesforce with Shopify

Connecting Salesforce to Shopify is also possible with the Shopify Data Export Wizard and the [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard. However, it must be noted that this method requires a manual file export from Shopify to Salesforce. Consequently, there are limitations to this method despite the availability of native tools. Below are instructions for Salesforce and Shopify integration with the Shopify Data Export Wizard and the Salesforce Data Import Wizard.
1. In Shopify, navigate to the 'Customers' tab and then click 'Export.'
2. Choose 'Plain CSV' as the file format. Click 'Export customers.' You will receive a CSV file of the selected customers in your email inbox.
3. Open Salesforce and enter Data Import Wizard in the Quick Find box, then select Data Import Wizard. Click Launch Wizard.
4. In the Standard Objects tab, under 'What kind of data are you importing?' click Accounts and Contacts. Under 'What do you want to do?' select 'Match Contact by Email.' Then, select the CSV file from Shopify to import and hit 'Next.'
5. Map the CSV file fields to the Salesforce Contact object fields. The Shopify CSV file includes first and last names, phone numbers, and email addresses, so you can map these fields directly to the corresponding fields in Salesforce. However, Shopify does not include the Salutation and Title fields, so you are not able to map those fields.

This method of using native tools to execute a Shopify integration with Salesforce is very simple. Shopify allows users to easily export data, such as customer information, and Salesforce offers the ability to quickly import data via its Data Import Wizard. However, there are some limitations. First, the method requires the administrator to manually export data from Shopify and import it into the Salesforce platform. The lack of instant automation and synchronization is a big limitation. Second, the Salesforce Data Import Wizard can only import 50,000 records at a time, so if you need to import a larger number of records, you will have to load them in multiple batches. Third, the administrator is not able to transform the Shopify data before it is imported into Salesforce, which is often necessary in organizations.
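Because the Data Import Wizard cannot transform data, teams that do need a transformation step sometimes pre-process the exported CSV before importing it. As a small sketch (the source header names below are assumptions; check the actual column headers in your Shopify export), renaming the exported headers to Salesforce Contact field names might look like this:

```python
import csv
import io

# Sketch: rename Shopify customer-export headers to Salesforce Contact
# field names before feeding the file to the Data Import Wizard.
# The source header names below are illustrative assumptions.
HEADER_MAP = {
    "First Name": "FirstName",
    "Last Name": "LastName",
    "Email": "Email",
    "Phone": "Phone",
}

# Stand-in for the CSV file received from Shopify by email.
shopify_csv = ("First Name,Last Name,Email,Phone\n"
               "Grace,Hopper,grace@example.com,555-0199\n")

reader = csv.DictReader(io.StringIO(shopify_csv))
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(HEADER_MAP.values()))
writer.writeheader()
for row in reader:
    # Keep only the mapped columns; anything unmapped is dropped.
    writer.writerow({HEADER_MAP[k]: v for k, v in row.items()
                     if k in HEADER_MAP})

salesforce_csv = out.getvalue()
```

The same pattern extends to simple value transformations (trimming whitespace, normalizing phone formats) that the wizard itself cannot perform.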
Summary

Shopify to Salesforce integration is an incredible tool that can enhance business operations. From facilitating data management to data synchronization and more, integration between Shopify and Salesforce offers a variety of benefits. As discussed, Shopify to Salesforce integration is possible via API integration or by using native tools to manually integrate data. However, these two methods have many limitations. That is where Skyvia offers unique advantages. The Skyvia cloud data platform makes it easy and user-friendly for organizations to conduct Shopify to Salesforce integration. Most importantly, Skyvia offers no-code integration, so Shopify and Salesforce can be integrated without the need to connect APIs or export files manually. What's more, Skyvia offers real-time data synchronization between Shopify and Salesforce. This immediate data synchronization is a key benefit that facilitates inventory management, streamlines business processes, and makes it easier to grow customer relationships. Furthermore, the Skyvia cloud data platform provides not only [Shopify](https://skyvia.com/connectors/shopify) and [Salesforce connectors](https://skyvia.com/connectors/salesforce) but also connectors to up to 100 cloud apps and databases.
[Brenna Scheving](https://skyvia.com/blog/author/brennasch/) Technical Writer" }, { "url": "https://skyvia.com/blog/shopify-xero-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "[Data Integration](https://skyvia.com/blog/category/data-integration/) Shopify with Xero – Direct Integration vs. Skyvia Integration By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) May 13, 2024 Small businesses make up the basis of the national economy in the United States.
However, according to the [US Bank](https://spend.usbank.com/blog/cash-flow-monitoring-strategies-for-small-businesses-tips-and-best-practices/), around 82% of small businesses fail due to poor cash flow management. The 18% of startups that survive have a common component in their success formula: accounting software. And Xero is one of the best bookkeeping tools designed for small and medium-sized companies. Businesses with online shops should also consider [Xero integration with Shopify](https://skyvia.com/data-integration/integrate-xero-shopify) to coordinate incomings and outgoings, keep inventory updated, and ensure proper taxation. This article presents the best Shopify Xero integration methods: native connection and Skyvia. It also discusses each approach's advantages and helps you decide which method works best. Table of Contents Xero Shopify Integration Preamble Direct Integration to Connect Shopify to Xero Xero Integration with Shopify Using Skyvia Setting Up Shopify and Xero Integration with Skyvia Benefits of Skyvia for Shopify Xero Integration Comparison of Methods Conclusion

Xero Shopify Integration Preamble

Each of these tools is a leader in its respective industry with a considerable market share, and combining the two solutions can reveal new business opportunities and benefits. Inventory accuracy. Synchronizing inventory data between the tools provides the actual number of products in stock and helps businesses prevent overselling. Real-time insights. Transferring data between the tools happens almost in real time, which makes the most recent data available across the different apps. Up-to-date taxation. Xero has all the necessary information about taxation in different countries; integrating this data with Shopify helps update the prices for each region accordingly. Refined financial management. Shopify daily sales sent to Xero help organizations align accounting data.
Moreover, transaction data can quickly be matched to bank accounts.

Direct Integration to Connect Shopify to Xero

A native integration app is a popular way to connect Shopify to Xero. It's available on both the [Shopify](https://apps.shopify.com/xero) and [Xero app](https://apps.xero.com/!h75Qv/app/shopify) stores; in fact, it's the same app, with the setup performed on the Xero side. Let's have a look at the native integration app's installation and configuration.

Initial Setup

1. Log into your Shopify and Xero accounts using the same browser.
2. In Shopify, click Settings and copy the store address in the upper-left corner.
3. Go to [Shopify integration by Xero](https://apps.xero.com/!h75Qv/app/shopify) and click Get this app.
4. Click Connect app.
5. Paste the store URL from Step 2 and click Connect.
6. Click Install and follow the on-screen instructions on the Shopify side.
7. Select the organization from the drop-down list if you have several organizations on Xero. Otherwise, the system automatically connects to the only organization available.
8. Click Start set up.

Integration Setup

1. Specify the account and tax rate for each transaction type. Click Continue.
2. Set the payment gateways and all the requested details related to them. Click Continue.
3. Decide whether there's a need to import Shopify invoices for the last 90 days. Click Finish & Proceed.

Note! The integration app is only available under the Xero Standard, Premium, and Ultimate pricing plans. Even though native Xero Shopify integration seems to be a seamless solution, there are certain LIMITATIONS associated with it: Many-to-one connection is NOT available: Multiple Shopify stores cannot be connected to one Xero account. Different currency sets are NOT supported: Shopify and Xero accounts using different base currencies can't be connected. NO invoices with customer data: It's not possible to integrate invoices showing customer payment information.
- NO sync for inventory data: Inventory and Cost of Goods Sold management isn't supported.

Xero Integration with Shopify Using Skyvia

Another way to connect Xero to Shopify is to use third-party [data integration tools](https://skyvia.com/blog/data-integration-tools/). This article focuses on Skyvia as a universal SaaS data platform for versatile data-related tasks without code. [Skyvia](https://skyvia.com/) offers quick and easy solutions to complex data integration tasks. It also includes products for cloud data backup, data management with SQL, creating OData services, and workflow automation. Let's focus on the [Data Integration](https://skyvia.com/data-integration/) and [Automation](https://skyvia.com/automation/) products, since they are the most suitable solutions for [Shopify integration with Xero](https://skyvia.com/data-integration/integrate-xero-shopify).

Data Integration

Below are the Data Integration tools and the use cases they suit:

- [Import](https://skyvia.com/data-integration/import). A wizard-based ETL tool for designing data integrations between tools. You can automate data loading from Xero to Shopify or vice versa.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/). A solution for designing complex ETL data pipelines with compound data transformations to correlate the data structures of different tools. You can send up-to-date data from Shopify to Xero and other targets.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/). Usually used in conjunction with Data Flow when integrations must execute in a specific order under specific conditions. For example, you can perform the inventory update after importing sales transactions.

With Skyvia tools, you can integrate invoices, processed sales, contacts, items, and many other kinds of data. The full list of objects available for transfer appears when setting up an integration.
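Control Flow itself is configured visually in Skyvia, but the underlying idea of ordered, conditional execution can be sketched in a few lines of plain Python. The step functions below are hypothetical stand-ins, not Skyvia API calls:

```python
# Toy sketch of Control Flow-style orchestration: run integration steps
# in a fixed order and stop the flow if one of them fails.
# The step functions are invented placeholders for illustration only.
def import_sales_transactions():
    return {"ok": True, "imported": 42}

def update_inventory():
    return {"ok": True, "updated": 15}

def run_flow(steps):
    results = []
    for step in steps:
        out = step()
        results.append((step.__name__, out))
        if not out.get("ok"):  # a failed step halts the remaining ones
            break
    return results

# The inventory update runs only after the sales import succeeds.
results = run_flow([import_sales_transactions, update_inventory])
print([name for name, _ in results])
```

In Skyvia, the same ordering and failure handling are set up in the visual designer rather than in code.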
Note that the above-mentioned operations are also available when [integrating Shopify with other bookkeeping software](https://skyvia.com/blog/shopify-quickbooks-online-integration/). Skyvia supports [QuickBooks, FreshBooks, and Zoho Books](https://skyvia.com/connectors/#accounting).

Automation

[Automation](https://docs.skyvia.com/automation/) connects your favorite apps and builds complex workflows to automate repetitive manual tasks. The starting point of each automation is a trigger that defines when the process starts. It's followed by an action component that actually makes the automation happen: it executes an operation over data from the chosen data source, and each source has its own set of available actions.

Sample automation use cases involving Xero and Shopify:

- Once a new purchase is completed in Shopify, send the payment transaction data to Xero.
- Transfer the inventory data from Shopify to Xero every hour.
- Discover whether there are any purchases from a specific country.

Setting Up Shopify and Xero Integration with Skyvia

Now, let's walk through an example of setting up [Xero and Shopify integration](https://skyvia.com/data-integration/integrate-xero-shopify) with Skyvia Data Integration. We'd like to demonstrate how to send Shopify data to Xero. In our example, we import items and transactions from Shopify to Xero, so that Xero has all the payments for online orders and the actual inventory state. Depending on your objectives for Shopify Xero integration, you may send any other objects. That way, data integration with Skyvia can turn Xero into an accounting source of truth.

Before setting up the data transfer options, check the following prerequisites:

- The [Xero connector](https://docs.skyvia.com/connectors/cloud-sources/xero_connections.html) is set up.
- The [Shopify connector](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html) is set up.
In fact, configuring everything will take less than 2 minutes, as both connectors require only login credentials for authentication. Then, proceed with the following steps to set up the Xero Shopify integration with Skyvia:

1. Log into your [Skyvia account](https://app.skyvia.com/) or create a new one. Note that no credit card is required for that.
2. In the top menu, select +Create new -> Import.
3. Choose Data Source as the source type and Shopify under Connection.
4. Select Xero as a target.
5. Click Add task to set up the import of payment transaction data from Shopify to Xero.
6. Select the PaymentTransaction source object and click Next Step. Depending on the use case, you can select any other source object of interest.
7. Select Payment from the list of target objects.
8. Configure the [mapping settings](https://docs.skyvia.com/data-integration/common-package-features/mapping/) for the integration. This is necessary to import data correctly. Make sure to map the corresponding options on the source side to the required target objects. In our example, the date format differs between the source and target sides, so we use the [Expression mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/expression-mapping.html) functionality to convert the date format used in Shopify to the format needed in Xero. Note that Expression mapping offers multiple conversion possibilities and powerful data transformations. This feature, like many other advanced features, is available only under paid plans.
9. Click Save task.
10. Click Add task to set up the import of inventory items from Shopify to Xero.
11. Select the InventoryItems source object and Item from the list of target objects.
12. Set up mapping and perform any data transformations if needed.
13. Save the task.
14. To automate the payment and inventory data transfer to Xero, click Schedule in the upper-left corner.
15. Set Recurring and specify the frequency at which data needs to be integrated. Click Save.
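To illustrate the kind of conversion an expression mapping performs in a step like this, here is a minimal Python sketch. It assumes the source emits ISO 8601 timestamps (the style Shopify APIs return) and that the target expects a plain YYYY-MM-DD date; the function name and formats are illustrative, not Skyvia expressions:

```python
from datetime import datetime

def shopify_to_xero_date(ts: str) -> str:
    """Convert an ISO 8601 timestamp with a UTC offset to a plain
    YYYY-MM-DD date string. Purely illustrative of the mapping step;
    the actual target format depends on your Xero object."""
    dt = datetime.fromisoformat(ts)
    return dt.date().isoformat()

print(shopify_to_xero_date("2024-02-07T13:45:30-05:00"))  # 2024-02-07
```

In Skyvia, the equivalent conversion is written as an expression inside the mapping editor instead of application code.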
In the upper-right corner, click Create to save the import integration.

As you can see, it's possible to create several tasks in the same import scenario, each for a particular object. Note that the free Skyvia version allows you to automate imports only once per day. If you need to transfer Shopify data to Xero more often, consider one of the [paid plans](https://skyvia.com/pricing/).

Benefits of Skyvia for Shopify Xero Integration

Using Skyvia for connecting Xero with Shopify simplifies the data integration process and brings other advantages for businesses:

- Setting up complex integration scenarios with no coding.
- Using advanced features, such as complex transformations and mappings, within an intuitive visual wizard.
- The possibility to include multiple data sources and destinations in the integration scenario.
- Scheduling data import to automate data transfer.
- Web-based access for instant login to Skyvia from anywhere.
- Flexible plans available for any business.
- A broad set of connectors.
- Multiple integration scenarios.

See how Skyvia has helped numerous companies synchronize accounting software with various apps. Here are some real-world case studies:

- [United Home Experts uses Skyvia for custom QuickBooks Online reporting](https://skyvia.com/case-studies/united-home-experts)
- [Fortium Partners leverages MySQL and QuickBooks integration](https://skyvia.com/case-studies/fortiumpartners)

And finally, let's recall some common benefits companies obtain with the Xero Shopify integration:

- Streamlined financial reporting and administration.
- Modernization and switching away from spreadsheets.
- Better tax management.
- Near real-time financial and accounting data synchronization.
- Cash flow management.
- Refined inventory management.

Comparison of Methods

It's time to revisit the drawbacks of the native integration between Xero and Shopify and see how Skyvia bypasses them.
| # | Limitation of the native method | Skyvia response |
|---|---|---|
| 1 | Not possible to connect multiple Shopify stores to the same Xero account. | You can create a separate Shopify connector for each online store and a dedicated integration scenario for each Shopify source with the same Xero target. |
| 2 | Different currency sets aren't supported. | You can use expression mapping to align currencies between Shopify and Xero. |
| 3 | Inventory and Cost of Goods Sold management isn't supported. | It's possible to send all inventory items from Shopify to Xero. |

Still, the native integration app might work well if you need to:

- Automatically sync Shopify sales with Xero on a daily basis.
- Track pending sales from Shopify payments across all invoices.
- Transfer processed sales on Shopify to the corresponding nominated bank accounts on Xero.
- Import Shopify sales transactions for the last 90 days.
- Handle multiple sales tax rates for the New Zealand, Australia, and UK regions.

Skyvia can do all of that as well, without the 90-day limit on historical data and without restrictions on the number of data transfer operations per day. You can import any historical data needed and send Shopify data to Xero every 15 minutes with Skyvia.

Conclusion

Aligning online shop financial data with accounting software is a common use case across modern businesses. Proper financial management ensures businesses' longevity and success. Consider Skyvia for Xero and Shopify integration if you use both of these tools in your business operations. Skyvia helps set up the integration to enhance cash flow management and overall financial performance.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Skyvia Achieves Google Cloud Ready – BigQuery Designation

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), September 4, 2023
For many years, Skyvia has offered its clients a convenient and easy way to integrate data within Google Cloud BigQuery with no coding. Since Skyvia is dedicated to delivering top-notch integration solutions, our team has recently received the Google Cloud Ready – BigQuery designation.

Why is the Google Cloud Ready – BigQuery designation important for our customers? By graduating as part of the initiative, Skyvia has proven that its platform meets all operational and compatibility requirements of Google Cloud BigQuery. This designation assures our customers that Skyvia operates harmoniously with BigQuery.

[Google Cloud Ready – BigQuery](https://cloud.google.com/bigquery/docs/bigquery-ready-overview) is a partner integration validation program intended to increase customer confidence in partner integrations with [BigQuery](https://cloud.google.com/bigquery). During this program, Google Cloud engineering teams validate partner integrations with BigQuery in a three-phase process:

- run a series of data integration tests and compare results against benchmarks;
- work closely with partners to fill any gaps;
- refine documentation for our mutual customers.

As part of the program, Skyvia gets more opportunities to collaborate closely with Google Cloud partner engineering and BigQuery teams.
“The Google Cloud Ready – BigQuery designation gives customers confidence that solutions have gone through a formal certification process and will deliver the best possible performance with BigQuery,” said Ritika Suri, Director of Technology Partnerships at Google Cloud. “With Skyvia, customers can connect all of their data and metrics with BigQuery to more easily optimize their business performance.”

To learn more about Skyvia's expertise with Google Cloud, visit [https://skyvia.com/connectors/google-bigquery](https://skyvia.com/connectors/google-bigquery). To learn more about the Google Cloud Ready – BigQuery designation and its benefits, visit [https://cloud.google.com/bigquery/docs/bigquery-ready-overview](https://cloud.google.com/bigquery/docs/bigquery-ready-overview).

[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)
With years of experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Skyvia Enhances Collaboration with Google Cloud Ready – AlloyDB and Cloud SQL Designation

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), September 5, 2023

Skyvia has offered its customers a no-code cloud solution for integrating Google Cloud SQL and AlloyDB data for several years.
Being dedicated to providing exceptional solutions, the Skyvia team has recently achieved the Google Cloud Ready – AlloyDB and Google Cloud Ready – Cloud SQL designations.

Why are the Google Cloud Ready – AlloyDB and Cloud SQL designations important for our customers? By earning these designations, Skyvia has proven that it meets a core set of functional and interoperability requirements when integrating with AlloyDB and Cloud SQL. Our team has closely collaborated with Google Cloud to tune the existing functionality and become validated by Google Cloud engineering teams. This designation assures our customers that Skyvia operates harmoniously with Google Cloud products.

[Google Cloud Ready – AlloyDB](https://cloud.google.com/alloydb/docs/cloud-ready/overview) is a designation for solutions of Google Cloud's technology partners that integrate with AlloyDB, while [Google Cloud Ready – Cloud SQL](https://cloud.google.com/sql/docs/cloud-ready/overview) is a new designation for partner solutions that integrate with Cloud SQL. These partner integration validation programs intend to increase customer confidence in Skyvia integrations. As part of the programs, Skyvia gets more opportunities to collaborate closely with Google partner engineering teams to develop joint roadmaps for better customer experiences.

“The Google Cloud Ready – AlloyDB designation gives customers confidence that solutions have gone through a formal certification process and will deliver the best possible performance with AlloyDB,” said Ritika Suri, Director of Technology Partnerships at Google Cloud.
“With Skyvia, customers can save time on evaluating new tools and focus on building solutions using partner products that have been proven through a rigorous validation process to work optimally with Cloud SQL.”

To learn more about Skyvia's expertise with Google Cloud SQL, visit [https://skyvia.com/connectors/google-cloud-sql-mysql](https://skyvia.com/connectors/google-cloud-sql-mysql), and for AlloyDB, visit [https://skyvia.com/connectors/alloydb](https://skyvia.com/connectors/alloydb). To learn more about the Google Cloud Ready – AlloyDB designation and its benefits, visit [https://cloud.google.com/alloydb/docs/cloud-ready/overview](https://cloud.google.com/alloydb/docs/cloud-ready/overview). To learn more about the Google Cloud Ready – Cloud SQL designation and its benefits, visit [https://cloud.google.com/sql/docs/cloud-ready/overview](https://cloud.google.com/sql/docs/cloud-ready/overview).
[Data Loader](https://skyvia.com/blog/category/data-loader/)

Snowflake and Amazon S3 Integration

By [Amanda Claymore](https://skyvia.com/blog/author/amandac/), December 8, 2022

There is a high demand for storing and analyzing large volumes of data. This demand is being satisfied by numerous data storage and data warehouse providers. In this article, we talk about two of the most popular services, Amazon S3 and Snowflake. We discuss why people choose them, highlight their benefits, and check their native connectivity options and possible alternatives.

Table of Contents
- What is Amazon S3?
- Amazon S3 Benefits
- What is Snowflake?
- Snowflake Benefits
- Amazon S3 Snowflake Integration
- Manual Connection of Amazon S3 to Snowflake
- Integration of Amazon S3 with Snowflake Using Skyvia
- Conclusion

What is Amazon S3?

Amazon S3 is one of the most popular cloud storage services. It stores data as objects within buckets.
S3 earned its popularity due to extremely high scalability and availability.

Amazon S3 Benefits

Amazon S3 is a complex system that provides numerous benefits. We recommend checking the [S3 official documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) to learn about them in detail. For this overview, we highlight the most prominent ones.

Scalability

In Amazon S3, you can store an almost unlimited number and volume of objects. This allows you to scale storage on the go according to your current needs.

Availability

Amazon [states](https://aws.amazon.com/s3/faqs/) that the service provides 99.9% availability for the data you store. That's an extremely high number, and according to our user experience and overall feedback across the internet, that statement is pretty close to reality.

Cost Efficiency

With Amazon S3, you don't need to guess how much storage you will need in the future. You can store as much data as you want and access it when needed. You only pay for the storage you use.

Security

Amazon S3 ensures you don't lose valuable data by automatically making copies of your objects on multiple devices across multiple facilities. It also lets you preserve, retrieve, and restore every version of every object in an Amazon S3 bucket, so you can easily recover data if users accidentally delete something or an application fails. Bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points help you precisely control access to your objects and buckets.

What is Snowflake?

Snowflake is a cloud-based data warehouse. Like all data warehouses, Snowflake collects and aggregates data from one or many sources so the data can be used to produce business insights. Complex queries usually require pausing database updates for the duration of query execution, which is not always possible on a production database.
That's why businesses use Snowflake as a repository for grouping data and executing queries on it, separately from the main database.

Snowflake Benefits

Snowflake is known for the [biggest software IPO in history](https://edition.cnn.com/2020/09/16/investing/snowflake-ipo/index.html) in 2020, and for good reason: the company managed to communicate the benefits Snowflake provides to a wide audience. Here are the main ones.

Effective Data Sharing

Snowflake's architecture enables smooth data sharing between users. You can provide access to your data in Snowflake to users who aren't Snowflake customers by creating client accounts for them. You can also use [Snowflake ETL platforms](https://skyvia.com/blog/snowflake-etl/) for that.

Accessibility and Performance

There are no concurrency-related delays or failures. The multi-cluster architecture makes sure all your concurrent queries run smoothly.

Security

Snowflake is [compliant](https://www.snowflake.com/product/security-and-trust-center/) with HIPAA, HITRUST CSF, PCI DSS, FedRAMP, and IRAP Protected and is trusted by many governmental, health, and banking institutions.

Simplicity

It requires no on-premises installation or management and provides several built-in features such as speed optimization, security, easy data restoration, and others.

Amazon S3 Snowflake Integration

Amazon S3 stores vast volumes of data. You can use this data to gain valuable insights by uploading it to a purpose-built cloud data warehouse such as Snowflake. To integrate Snowflake and Amazon S3, you can use native methods or custom third-party solutions. Let's check both options in detail, so you can decide what fits your needs best.

Manual Connection of Amazon S3 with Snowflake

You can set up a manual connection between Snowflake and Amazon S3 by using built-in tools and then access S3 data with the `LIST @<stage_name>;` command. The process consists of 6 steps.

Note!
Completing the instructions in this paragraph requires permissions in AWS to create and manage IAM policies and roles.

Step 1. Creating the IAM Policy

Snowflake needs the following permissions on an S3 bucket to access files in the folder (and sub-folders):

```
s3:GetBucketLocation
s3:GetObject
s3:GetObjectVersion
s3:ListBucket
```

As a best practice, Snowflake suggests creating an IAM policy. Afterward, you can attach the policy to a certain role and use credentials generated by AWS.

1. Log into the AWS Management Console.
2. Go to Dashboard > Identity & Access Management.
3. Open Account settings on the left.
4. Activate your AWS region by expanding the Security Token Service Regions list and choosing Activate next to your region.
5. Open Policies on the left.
6. Click Create Policy.
7. Click the JSON tab.
8. Add a policy document that allows Snowflake to access the S3 bucket and folder. Copy and paste the text into the policy editor. Note! Make sure to replace `<bucket>` and `<prefix>` with actual names.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": "arn:aws:s3:::<bucket>/<prefix>/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::<bucket>",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "<prefix>/*"
                    ]
                }
            }
        }
    ]
}
```

9. Click Review policy.
10. Enter the policy name and click Create Policy.

Step 2. Create the IAM Role in AWS

While in the AWS Management Console, choose Identity & Access Management (IAM):

1. Choose Roles on the left.
2. Click Create Role.
3. Select Another AWS Account as the trusted entity type.
4. In the Account ID field, enter your own AWS account ID. You will modify this value later.
5. Select Require External ID. Enter any value as your ID. You will modify it later.
6. Click Next.
7. Select the policy you created in Step 1.
8. Click Next.
9. Enter a name and description for the role, and click Create Role.
10. Save the Role ARN value from the role summary page. In the next step, you will create a Snowflake integration that references this role.

Step 4. Create a Cloud Storage Integration in Snowflake

Create a storage integration using the [CREATE STORAGE INTEGRATION](https://docs.snowflake.com/en/sql-reference/sql/create-storage-integration.html) command:

```sql
CREATE STORAGE INTEGRATION <integration_name>
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '<iam_role>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/')
  [ STORAGE_BLOCKED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/') ];
```

Replace the following parameters with your values:

- integration_name
- iam_role
- bucket path

Step 4. Retrieve the AWS IAM User for Your Snowflake Account

Execute the [DESCRIBE INTEGRATION](https://docs.snowflake.com/en/sql-reference/sql/desc-integration.html) command to retrieve the ARN for the AWS IAM user that was created automatically for your Snowflake account:

```sql
DESC INTEGRATION <integration_name>;
```

Replace `<integration_name>` with the name of the integration you created in Step 3. Save the following values for the next step:

```
STORAGE_AWS_IAM_USER_ARN
STORAGE_AWS_EXTERNAL_ID
```

Step 5. Grant the IAM User Permissions to Access Bucket Objects

1. Log into the AWS Management Console.
2. Choose Identity & Access Management (IAM).
3. Choose Roles from the left-hand navigation pane.
4. Click on the role you created in Step 2.
5. Click on the Trust Relationships tab.
6. Click the Edit Trust Relationship button.
7. Modify the policy document with the DESC STORAGE INTEGRATION output values you recorded in Step 4, where `<snowflake_user_arn>` is STORAGE_AWS_IAM_USER_ARN and `<snowflake_external_id>` is STORAGE_AWS_EXTERNAL_ID.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "<snowflake_user_arn>"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<snowflake_external_id>"
                }
            }
        }
    ]
}
```

8. Click Update Trust Policy.

Step 6. Create an External Stage

Create the stage using the CREATE STAGE command:

```sql
create stage my_s3_stage
  storage_integration = s3_int
  url = 's3://bucket1/path1'
  file_format = my_csv_format;
```

Finally, you can access your S3 data by using the `LIST @<stage_name>;` command, where `<stage_name>` is the name of the created stage.

Integration of Amazon S3 with Snowflake Using Skyvia

Suppose you don't want to spend much time creating and manually configuring a connection between Amazon S3 and Snowflake. In that case, you can use Skyvia to create packages and run them manually or on a schedule based on your needs. To do so, you'll need to follow three simple steps:

1. Create an Amazon S3 connection.
2. Create a Snowflake connection.
3. Create and run a package of choice.

We create an Import package as an example of importing a CSV file from Amazon S3 to Snowflake. The instructions below assume that you have already created a Skyvia account. If you haven't, you can do that for free by visiting the [Skyvia](https://app.skyvia.com/) app page.

Create Amazon S3 Connection

To create an Amazon S3 connection, go to +New > Connection, select Amazon S3, and do the following:

1. Enter the Access Key ID, the first part of your Amazon Web Services access key.
2. Enter the Secret Key, the second part of your Amazon Web Services access key.
3. Optionally, enter a Security Token, a session token used with temporary security credentials.
4. Set the Region, the AWS region where your S3 storage is hosted.
5. Enter the Bucket Name, the name of your S3 bucket to load CSV files from or to.
6. Click Create Connection.
Create Snowflake Connection

To create a Snowflake connection, go to +New > Connection, select Snowflake, and do the following:

1. Enter your Snowflake Domain name.
2. Enter your Snowflake User name.
3. Enter the user Password.
4. Enter the Database name.
5. Click Create Connection.

Create Integration Package

Once the connections to Snowflake and Amazon S3 are created, you can create a package of choice. For this example, we create a CSV import package:

1. Go to New > Import.
2. Choose the CSV from storage service option.
3. Choose Amazon S3 as Source and Snowflake as Target, and click Add New.
4. Select your CSV file and click Next Step.
5. Choose the table you want to import data to and the type of import action.
6. Map columns if needed and click Save.

The Import package is ready to use. You can run it manually or on a schedule.

Conclusion

Amazon S3 and Snowflake are great cloud tools for storing and working with data. S3 Snowflake integration can give you a deeper understanding of the data you store on S3. You can use built-in and third-party solutions to make Amazon S3 and Snowflake share data. The integration process with Skyvia requires no more than knowing your credentials, and you can additionally combine data imported from S3 with data from other sources. However, manual integration is still an option if you prefer the classical approach.
By [Amanda Claymore](https://skyvia.com/blog/author/amandac/), Content Marketer

**Top 10 Snowflake ETL Tools for 2025: Choosing the Right Solution**

By [Aveek Das](https://skyvia.com/blog/author/aveekd/), April 16, 2025

Snowflake is a cloud-based data warehouse that uses innovative approaches to managing big data. It's often used for data consolidation, BI and analytics, data sharing, and machine learning.
To maximize the effect of these features, adding Snowflake [ETL tools](https://skyvia.com/blog/etl-tools/) to your data stack would be a plus. Such solutions bring data from disparate sources into a data warehouse and establish connections between the elements of the data ecosystem, like neural connections in the brain. This article explores several reputable Snowflake ETL solutions and discusses their business benefits. It also provides tips on selecting the best solution for your particular needs.

**Table of Contents**

- Understanding ETL for Snowflake
- Benefits of Using Third-Party Snowflake ETL Tools
- 4 Things to Consider When Selecting ETL Tools for Snowflake
- Top 10 Snowflake ETL Tools: Skyvia, Integrate.io, Apache Airflow, Matillion, Stitch, Fivetran, Hevo Data, Airbyte, StreamSets, Astera
- How to Import Data into Snowflake Using Skyvia in Minutes
- Conclusion
- FAQ

**Understanding ETL for Snowflake**

**What Is ETL?**

ETL is the abbreviation for the 'Extract, Transform, Load' approach to dealing with data. The Extract stage involves the collection of data from multiple sources, from [legacy systems](https://skyvia.com/learn/legacy-system) to modern SaaS applications. The Transform step applies [data transformation techniques](https://skyvia.com/learn/what-is-data-transformation), basic or advanced, depending on the current requirements. The Load step transfers data to the indicated destination, usually a database or a [data warehouse](https://skyvia.com/learn/what-is-data-data-warehouse). You must create Snowflake [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning) to move data from dispersed sources (flat files, databases, SaaS apps, etc.) into this data warehouse.

**What Is ELT?**

In certain configurations, data loading takes place before the transformation stage. This approach is known as ELT (Extract, Load, Transform), and it's also used with Snowflake.
With ELT, data is transformed and processed on the data warehouse side, using [dbt](https://www.getdbt.com/), for instance, which makes this approach faster and better adapted to current data volumes. Here are the main differences between ETL and ELT.

| | ETL | ELT |
| --- | --- | --- |
| Destinations | ETL doesn't impose limitations on the destination – data can be loaded into any supported target. | ELT is mainly used to load data into a cloud data warehouse or a data lake. |
| Data types | ETL supports structured data. | ELT works with structured, semi-structured, and unstructured data. |
| Transformations | Transformations are made after data extraction and before its loading to the destination. | Transformations on the destination side usually require programming skills. |

Note: Apart from the [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) approaches, [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) is also often used. It allows you to move data from Snowflake to other business apps for data activation.

**Why Use ETL for Snowflake?**

It's difficult to form a general idea of a company's performance when all the organizational data is scattered across different systems. So, it's essential to consolidate it in Snowflake to obtain a unified view of business health. Both the ELT and ETL approaches can help with this, since they automate data collection, modify data along its transfer, and optimize its loading into Snowflake. There are other good reasons to use ETL for Snowflake:

- Apply multistage transformations on data intended for analytical purposes.
- Employ elasticity, scalability, and flexibility for changing data loads.
- Configure ETL pipelines and automated workflows in a visual interface.
- Enjoy optimized cost and performance for big data cases.
- Take advantage of real-time or near-real-time data processing.
- Obtain a centralized repository with no-code data movement.
- Extract insights with data blending possibilities.
- Ensure data security and privacy in transfer and storage.
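The ETL/ELT distinction above can be shown in a few lines of code. A toy sketch with SQLite standing in for the warehouse and made-up records, illustrating where the transformation happens in each approach:

```python
import sqlite3

rows = [("Ann", "2024-01-05"), ("bob", "2024-02-11")]  # extracted records

# ETL: transform in the pipeline, then load the clean result.
etl_rows = [(name.title(), signup) for name, signup in rows]

# ELT: load raw rows first, then transform inside the warehouse
# (SQLite stands in for Snowflake here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_users (name TEXT, signup TEXT)")
db.executemany("INSERT INTO raw_users VALUES (?, ?)", rows)
db.execute(
    "CREATE TABLE users AS "
    "SELECT upper(substr(name, 1, 1)) || substr(name, 2) AS name, signup "
    "FROM raw_users"
)
elt_rows = db.execute("SELECT name, signup FROM users ORDER BY name").fetchall()
```

Both paths end with the same clean table; the difference is only whether the capitalization fix runs in the pipeline code (ETL) or as SQL inside the warehouse (ELT).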
**Native ETL Capabilities of Snowflake**

The creators of Snowflake also acknowledge the significance of ETL processes, so they offer built-in data integration options directly inside the data warehouse. You can use them to build and set up data pipelines:

- **Snowpipe.** Designed for real-time data ingestion and loading with low latency.
- **Streams and Tasks.** Implements Change Data Capture (CDC) and automated SQL execution, allowing you to monitor and react to data modifications effectively.
- **Zero-Copy Cloning.** Creates data clones without duplicates. It can be used for testing and data manipulation without the extra overhead of data replication.
- **Stored Procedures.** Execute logic for data manipulations directly in Snowflake.

Note that constructing data pipelines with these features is a complex procedure. It involves a modular approach, where creating integration dataflows resembles assembling a LEGO puzzle. This takes considerable time and effort and requires regular maintenance. Meanwhile, third-party [data integration tools](https://skyvia.com/blog/data-integration-tools/) spare you such complications, allowing you to set everything up in several clicks. Some of these tools are available on the [Snowflake Marketplace](https://app.snowflake.com/marketplace).

**Benefits of Using Third-Party Snowflake ETL Tools**

Unlike the options mentioned above, solutions by external providers don't limit you to Snowflake alone. They support multiple data sources and destinations, allowing you to build data pipelines of different configurations. Such solutions also provide a range of additional features that simplify the data integration process.

**Pre-built Connectors**

Data integration services come with pre-built connectors that allow users to connect to hundreds of data sources (apps, databases, flat files, etc.). This streamlines the integration process since non-tech business users (marketers, salespeople, etc.)
can build data pipelines independently without IT experts' help.

**Transformation Options**

All ETL tools include transformation options, though their complexity varies from one solution to another. Some offer simple transformations (cleansing, standardization, etc.), while others provide [complex multistage operations](https://docs.skyvia.com/data-integration/data-flow/components/index.html) (duplicate removal, lookup, etc.).

**Manual Input Reduction**

Another benefit of modern ETL solutions is their advanced automation options. They simplify data management, eliminating manual input and minimizing the probability of human error.

**Scheduling**

Data integration tools usually contain scheduling options for regular data transfers. That way, you can select specific intervals at which data travels from selected sources to Snowflake. Scheduling ensures that your centralized repository always contains the freshest data, ready for reporting tasks.

**Scalability**

When businesses grow rapidly, they experience spikes in data loads as well. At the same time, companies may see drops in data volumes due to external economic factors, for instance. Luckily, modern data platforms can handle changing data requirements automatically, with the possibility of scaling up and down on request.

**Monitoring**

Snowflake ETL tools embed logging and error-handling features for a better overview of integration results. They also send notifications via email or SMS to help users address any issues quickly.

**4 Things to Consider When Selecting ETL Tools for Snowflake**

There are various ETL tools for Snowflake on the market these days, so it might be unclear which one would suit you best. Below is the CUPS criteria list to consider when evaluating data integration tools and selecting one for your organization.

**Connectivity.**
Verify whether the ETL solution of interest has pre-built connectors to Snowflake and the other data sources you regularly use in your business workflows.

**Usability.** Decide whether a chosen [solution](https://skyvia.com/blog/connect-salesforce-to-sql-server/) is easy to use. See whether it offers a zero-code interface or requires programming expertise, and compare that with your team's qualifications and needs for building data pipelines.

**Pricing.** Explore the ETL software's pricing models and see which one matches your budget.

**Scalability.** Investigate how a chosen tool handles changing data volumes without impacting performance.

**Top 10 Snowflake ETL Tools**

Let's see in detail what the most popular choices for Snowflake ETL solutions are, starting with a comparison table.

| Platform | G2 Rating | Pros | Cons |
| --- | --- | --- | --- |
| Skyvia | 4.8 out of 5 | no-code interface; 200+ data sources; data manipulation features; automatic schema detection and mapping | no phone support |
| Integrate.io | 4.3 out of 5 | drag-and-drop, no-code interface; 150+ pre-built connectors | error messages are unclear; debugging is time-consuming |
| Apache Airflow | 4.5 out of 5 | create ETL pipelines with Python | requires Python coding skills |
| Matillion | 4.4 out of 5 | data transformations can be performed with SQL queries or via GUI | lack of documentation |
| Stitch | 4.4 out of 5 | custom connectors to data sources | lacks data transformation options |
| Fivetran | 4.2 out of 5 | data transformation options with dbt Labs; data governance options | lack of data management options |
| Hevo Data | 4.4 out of 5 | 150+ built-in connectors | no way to organize or categorize data pipelines |
| Airbyte | 4.5 out of 5 | 400+ connectors with the possibility of customization | limited transformation capabilities |
| StreamSets | 4.0 out of 5 | 50+ data systems; custom data sources and processors with JavaScript, Groovy, Scala, etc. | library dependency issues |
| Astera | 4.3 out of 5 | pre-built transformation functions | resource-intensive |

**Skyvia**

[Skyvia](https://skyvia.com/) is a cloud-based data integration platform for a broad range of tasks. It provides a comprehensive solution for data integration along with tools for backup, query, and workflow automation. Let's explore the [Data Integration](https://skyvia.com/data-integration) product in detail and see how it can help you move data from and to Snowflake.

- [Import](https://skyvia.com/data-integration/import) is a wizard-based tool that allows you to construct ETL and Reverse ETL pipelines, apply data transformations, and map source and destination data structures without coding.
- [Export](https://skyvia.com/data-integration/export) is a zero-code visual tool that helps you [export](https://skyvia.com/blog/export-data-from-salesforce-to-excel/) data from Snowflake into a CSV file and save it either on a storage platform or on your computer.
- [Synchronization](https://skyvia.com/data-integration/synchronization) is a no-code solution that supports bi-directional data sync between SaaS apps and databases, ensuring that data remains consistent across platforms.
- [Replication](https://skyvia.com/data-integration/replication) is a wizard-based tool that allows you to build ELT pipelines to copy data from source apps to a database or a data warehouse without coding.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) is a visual pipeline builder for designing more complex data pipeline diagrams with compound transformations. With this tool, it's possible to integrate multiple data sources into a pipeline and implement multistage data transformations.
- [Control Flow](https://docs.skyvia.com/data-integration/control-flow/index.html) is a visual designer that allows you to orchestrate complex integration scenarios by managing the execution order of various tasks, including data import, export, replication, and synchronization.

**Reviews.** G2 rating: [4.8 out of 5](https://www.g2.com/products/skyvia/reviews) (based on 200+ reviews).

**Best for.** Businesses of any size and any industry.

**Key features**

- A no-code interface that lets users construct data pipelines visually.
- Simple and advanced data integration scenarios.
- Support for [over 200 data sources](https://skyvia.com/connectors) (CRM systems, e-commerce platforms, payment processors, databases, data warehouses, marketing automation platforms, etc.).
- Data manipulation features (sorting, filtering, searching, expressions, and others) to enhance data accuracy.
- Web-based access from any browser and platform.
- Automatic schema detection and mapping.

**Disadvantages**

- Data volume and update frequency limitations in the Freemium plan.
- No phone support yet.

**Pricing.** Skyvia offers a variety of [pricing plans](https://skyvia.com/pricing), from a free tier to complex enterprise solutions. The price for the Data Integration subscription starts at $79 per month.

**Integrate.io**

[Integrate.io](https://skyvia.com/etl-tools-comparison/integrateio-alternative-skyvia) is a no-code platform that allows users to connect to multiple data systems in the cloud. This tool lets you visually design data pipelines on a diagram by dragging and connecting components – sources, transformations, and destinations. It supports ETL, Reverse ETL, ELT, CDC, API generation, etc. Integrate.io offers advanced customization options for development. You can easily design pipelines for batch and real-time data processing, applying the needed transformations on the go.

**Reviews.** G2 rating: 4.3 out of 5 (based on 200+ reviews).
**Best for.** Organizations with complex data integration and transformation needs.

**Key features**

- The drag-and-drop, no-code interface simplifies data pipeline creation, letting users set up connections and define transformations visually.
- 150+ pre-built connectors, including Snowflake and SaaS services.
- 360-degree support for users via chat, email, phone, and Zoom.

**Disadvantages**

- Debugging can be time-consuming since it requires a detailed check of logs.
- Error messages are unclear.

**Pricing.** There are four pricing plans for this solution, each based on cost per credit, feature set, expected data volume, and other principal aspects.

**Apache Airflow**

[Apache Airflow](https://skyvia.com/etl-tools-comparison/apache-airflow-alternative-skyvia) is an open-source platform suitable for creating dataflows with [batch processing](https://skyvia.com/blog/batch-etl-processing/). It allows users to set up and manage data pipelines programmatically while the system automates and monitors integration processes. Airflow implements data pipelines as Directed Acyclic Graphs, also known as DAGs. Each DAG comprises multiple tasks that can be arranged as needed. It's also possible to establish cross-task dependencies by defining the execution logic.

**Reviews.** G2 rating: 4.5 out of 5 (based on nearly 100 reviews).

**Best for.** Companies with complex business logic orchestrated in Python.

**Key features**

- Allows users to create and manage ETL pipelines with Python.
- Supports cloud data warehouses like Snowflake as well as on-premises data sources.
- Easily integrates with version control systems like Git.

**Disadvantages**

- Requires Python coding skills and technical expertise.
- It's difficult to modify data pipelines once they are launched.

**Pricing.** Since Apache Airflow is an open-source tool, it can be installed and used for free.
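Airflow's core abstraction, the DAG, is just a dependency graph of tasks that determines execution order. The idea can be sketched with Python's standard library alone (no Airflow required; the task names are made up):

```python
# Sketch of DAG-style task ordering using the stdlib, not Airflow's API.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on:
# extract -> transform -> load, with an audit task after load.
dag = {
    "transform": {"extract"},
    "load": {"transform"},
    "audit": {"load"},
}

# A valid execution order respecting every dependency.
order = list(TopologicalSorter(dag).static_order())
```

In real Airflow, each name would be an operator and the scheduler would run tasks in exactly this kind of topologically sorted order, parallelizing independent branches.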
**Matillion**

[Matillion](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia) is a cloud-native data integration tool that provides an intuitive user interface for developing data pipelines. Matillion offers two products: Data Loader, for moving data from any service to the cloud, and Matillion ETL, for defining data transformations and building data pipelines in the cloud. Matillion ETL is a fully featured data integration solution for creating ETL and ELT pipelines within a drag-and-drop interface. This tool can be deployed on your preferred cloud provider.

**Reviews.** G2 rating: 4.4 out of 5 (based on 80 reviews).

**Best for.** Better suited for enterprises than SMBs.

**Key features**

- Provides connectors to both cloud and on-premises data systems.
- Contains features for data orchestration and management.
- Data transformations can be performed either with SQL queries or via the GUI by creating transformation components.

**Disadvantages**

- Lack of documentation describing features and instructions for their configuration.
- There is no option to restart tasks from the point of failure; a job needs to be restarted from the beginning.

**Pricing.** The cost of Matillion ETL is credit-based, meaning it depends on the data units processed.

**Stitch**

[Stitch](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia) offers an extensible data integration platform used to connect to many databases and SaaS applications. It also comprises advanced and enterprise features, including alerts on Slack, notifications on DataDog, etc. This solution provides an easy-to-use orchestration tool to create and monitor your data pipelines on the go. Once data is extracted from the source, Stitch saves and encrypts it in its internal pipelines. Then, it performs basic transformations compatible with the target system and loads data to the destination.

**Reviews.** G2 rating: 4.4 out of 5 (based on 60+ reviews).

**Best for.** Developers in small to medium-sized businesses.
**Key features**

- Allows you to create custom connectors to data sources.
- Provides a graphical UI to set up Stitch and configure ETL pipelines.
- Includes a dashboard for data pipeline tracking and monitoring.
- Schedules data loading at predefined times.

**Disadvantages**

- Lacks data transformation options.
- Supports only several data destinations, depending on the selected subscription.

**Pricing.** Stitch offers three pricing tiers. The cost starts at $100 per month.

**Fivetran**

[Fivetran](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia) is an automated data movement platform that allows users to extract data from 500+ sources and load it into a data warehouse. This service is a good option for data consolidation in Snowflake, making it a [single source of truth](https://skyvia.com/learn/single-source-of-true). Fivetran relies on automation to handle schema changes effectively, significantly minimizing manual input. This makes it a popular choice for streamlined data replication with the ELT approach.

**Reviews.** G2 rating: 4.2 out of 5 (based on 380+ reviews).

**Best for.** Mid-size and large enterprises in the financial industry.

**Key features**

- Implements data governance options for role-based access controls and metadata sharing.
- Provides data transformation options with dbt Labs.
- Ensures advanced security with private networks, column hashing, and other approaches.
- Offers automated pipeline setup and monitoring with minimal maintenance.

**Disadvantages**

- Limited support for Amazon Kinesis and Amazon Aurora data services.
- Lack of data management options.

**Pricing.** Fivetran provides four pricing plans, including a free tier. The price depends on data volumes, number of users, available features, and connectors.

**Hevo Data**

[Hevo Data](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia) is a [SaaS ETL tool](https://skyvia.com/blog/saas-etl-tools/) that allows users to create data pipelines without code.
It helps them quickly load data from different sources to Snowflake, applying transformations on the go. The platform comprises both visually configurable transformation blocks and Python-based transformations. Hevo Data also automatically detects the source schema and maps it to the destination. It can handle schema drift, signaling when the data structure changes on the source. All this reduces the manual input needed for schema management.

**Reviews.** G2 rating: 4.4 out of 5 (based on 200+ reviews).

**Best for.** Companies with limited technical resources can benefit from Hevo's no-code interface.

**Key features**

- Anonymizes data before loading it to the destination.
- Provides 150+ built-in connectors to databases, SaaS platforms, etc.
- Near-real-time data loading to certain destinations.

**Disadvantages**

- There is no way to organize or categorize data pipelines.
- Although Hevo Data supports real-time data streaming, latency can sometimes be an issue, especially for large datasets.

**Pricing.** Hevo Data offers four pricing plans, including a free tier with up to 1M events per month. For paid plans, the price starts at $239 per month.

**Airbyte**

[Airbyte](https://skyvia.com/etl-tools-comparison/airbyte-alternative-skyvia) is an open-source data movement platform that can be used for Snowflake ETL. The tool has a large community of contributors who create custom connectors. Airbyte also offers flexible deployment options suitable for businesses with different infrastructure configurations. However, this solution primarily focuses on Extract, Load, Transform (ELT), which makes it best suited for those who need to load raw data into Snowflake.

**Reviews.** G2 rating: 4.5 out of 5 (based on nearly 50 reviews).

**Best for.** An open-source solution suitable for both SMBs and enterprises.

**Key features**

- Offers over 400 connectors with the possibility of customization.
- Supports incremental data loading.
- Provides detailed logging and error detection options.
**Disadvantages**

- Limited transformation capabilities.
- Data-intensive pipelines can be resource-consuming and thus costly.

**Pricing.** Open-source deployment on your own host is free. The cost of the cloud-hosted deployment can be discussed with sales.

**StreamSets**

StreamSets is a cloud-based, fully managed ETL tool for Snowflake developed by IBM. It allows users to create and manage smart streaming data pipelines using a graphical user interface. StreamSets supports integration across hybrid and multi-cloud environments.

**Reviews.** G2 rating: 4.0 out of 5 (based on nearly 50 reviews).

**Best for.** Large enterprises.

**Key features**

- Provides the possibility to add custom data sources and processors with JavaScript, Groovy, Scala, etc.
- Has extensive documentation with a thorough description of product functionality.
- Supports 50+ data systems, including streaming sources such as Kafka and MapR.

**Disadvantages**

- Lacks extensive coverage of SaaS input sources.
- Copying the same pipelines to different servers might cause library dependency issues.

**Pricing.** The cost of StreamSets can be discussed with IBM's sales managers.

**Astera**

Astera is a comprehensive data management platform designed for various data-related operations. It allows users to build ETL scenarios involving 50+ connectors to cloud apps, databases, and data warehouses. The tool also provides a wide range of features to automate data pipelines and improve data accuracy. For instance, Astera offers a library of built-in transformations and data mapping features. It also provides scheduling options that enable jobs to run at intervals.

**Reviews.** G2 rating: 4.3 out of 5 (based on nearly 50 reviews).

**Best for.** SMBs and enterprises.

**Key features**

- Has built-in connectors to databases, cloud storage, flat files, and data warehouses, including Snowflake.
- Offers pre-built transformation functions for shaping data with no code.
- Applies AI-powered and role-based mapping.
- Provides automated data quality management to prepare data before loading it into Snowflake.

**Disadvantages**

- Steep learning curve for non-technical users.
- The tool can be resource-intensive, especially when handling large datasets or complex data transformations.

**Pricing.** Astera prices are discussed with their sales representatives.

**How to Import Data into Snowflake Using Skyvia in Minutes**

Let's explore an example of configuring an integration pipeline using one of the Snowflake ETL tools – the Skyvia platform. Note: Skyvia supports both ETL and ELT workflows, so you can select the scenario that currently meets your business requirements.

**Step 1. Configure Source Connector**

Skyvia supports over 200 data sources, including cloud apps, databases, data warehouses, storage systems, and flat files. To set up the required connectors, take the following steps:

1. Log in to your Skyvia account.
2. Navigate to **+ Create New** -> **Connectors**.
3. Select the tool that interests you and click on it.
4. Follow the step-by-step instructions provided on the setup screen of the selected connector.

**Step 2. Configure Snowflake Connector**

In your Skyvia account, go to **+ Create New** -> **Connectors** and select [Snowflake](https://docs.skyvia.com/connectors/databases/snowflake_connections.html) from the list. You need to specify the following required connection parameters:

- **Domain** is the Snowflake account domain.
- **User** is the username with which to log in.
- **Password** is the password with which to log in.
- **Database** is the database name.

Snowflake also allows the following optional connection parameters:

- **Schema** is the current schema name in the database. Although this parameter is optional, you need to specify it when using replication or import in the Bulk Load mode.
- **Warehouse** is the name of the warehouse used for the database.
- **Role** is the role name used to connect.

**Step 3. Set Up the Integration Scenario**

As mentioned above, Skyvia provides several tools for moving data to and from Snowflake.
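Before running the wizard, it can help to double-check that all required Step 2 parameters are present, and to remember that Schema becomes mandatory for Bulk Load replication or import. A small sketch mirroring the parameter names above (hypothetical values; this helper is not part of Skyvia itself):

```python
REQUIRED = {"Domain", "User", "Password", "Database"}

def missing_params(params: dict, bulk_load: bool = False) -> set:
    """Return required connection parameters that are absent or blank."""
    required = set(REQUIRED)
    if bulk_load:
        required.add("Schema")  # Schema is mandatory in Bulk Load mode
    return {key for key in required if not params.get(key)}

# Hypothetical connection settings with a blank password.
params = {
    "Domain": "myorg.snowflakecomputing.com",
    "User": "etl_user",
    "Password": "",
    "Database": "ANALYTICS",
    "Schema": "PUBLIC",
}
missing = missing_params(params)
```

Here `missing` flags the blank `Password`; calling `missing_params(params, bulk_load=True)` would additionally insist on `Schema`.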
In this example, we use the [Replication](https://skyvia.com/data-integration/replication) tool to create an ELT pipeline and transfer data from [Salesforce to Snowflake](https://skyvia.com/blog/snowflake-to-salesforce-integration/).

1. In the top menu, go to **+Create New** -> **Replication**.
2. Select Salesforce as the source connector and Snowflake as the destination.
3. Select the fields for replication.
4. Schedule the replication by specifying the exact time and date at which it should take place, or indicate the intervals. Make sure that the **Incremental Updates** checkbox is selected.
5. Save the replication and click **Run** to start it.
6. Check the status of the integration scenario in the **Monitor** tab.

**Conclusion**

Along with the native ETL capabilities, various third-party data integration tools allow users to extract, transform, and load data into Snowflake. This [article](https://skyvia.com/blog/salesforce-to-salesforce-integration/) has presented some popular ETL tools for Snowflake. We have also reviewed and compared the ETL and ELT concepts and explored their implementation for Snowflake data integration using Skyvia. Feel free to try Skyvia to extract data from your favorite apps, apply transformations, and load the results into Snowflake.

**FAQ for Snowflake ETL Tools**

**Is Snowflake an ETL tool?**

No, Snowflake is not an ETL tool. It is a SaaS data warehouse that stores and manages data. Even though Snowflake has built-in ETL capabilities, you will need a separate ETL tool like Skyvia for data integration.

**Does Snowflake use SQL or Python?**

Both SQL and Python can be used with Snowflake. SQL is often used to query and transform data, while Snowflake Snowpark allows you to use Python directly within Snowflake to process data, run complex data analytics scenarios, and execute machine learning tasks.

**What is an ETL pipeline?**

An [ETL pipeline](https://skyvia.com/learn/etl-pipeline-meaning) is a set of coordinated processes for data collection, transformation, and loading.
It aims to move data from sources into Snowflake or other data warehouses or databases.

**How to build an ETL pipeline in Snowflake?**

Feel free to use Skyvia's [Replication](https://skyvia.com/data-integration/replication) and [Import](https://skyvia.com/data-integration/import) to set up data pipelines in a GUI with no coding.

By [Aveek Das](https://skyvia.com/blog/author/aveekd/), Senior Data Engineer

**Snowflake to Salesforce Integration: Step-by-Step Solutions**

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), May 30, 2025
Imagine a growing company where sales teams rely on Salesforce to manage customer relationships, while analysts dig through Snowflake's data warehouse to generate reports and forecasts. Yet, without a smooth connection between these systems, teams often face duplicated effort: manual exports, inconsistent data, and delayed insights. This disconnect slows decision-making and frustrates everyone involved. In this guide, you'll discover practical, code-free methods and native options to connect Salesforce and Snowflake efficiently. Whether you want the speed and scale of BYOL Data Sharing, the simplicity of manual imports, or the flexibility of a no-code platform like Skyvia, we'll walk you through step-by-step approaches to help you simplify workflows, save time, and enhance your business intelligence.

**Table of Contents**

- What Is Snowflake?
- Why Connect Salesforce and Snowflake?
- How to Connect Salesforce and Snowflake?
- Method 1: Salesforce Data Import Wizard
- Method 2: Salesforce Data Cloud's Bring Your Own Lake (BYOL) Data Sharing
- Method 3: Code-Free Data Integration with Skyvia
- Summary

**What Is Snowflake?**

Snowflake is a cloud-native platform built to cut through the complexity of modern data management by seamlessly uniting data warehousing, data lakes, and analytics under one roof, across leading cloud providers like AWS, Azure, and Google Cloud.
With its fully managed, hands-off architecture, Snowflake empowers organizations to pull structured and semi-structured data together into a single, scalable environment. What sets Snowflake apart is its ability to decouple storage from compute, allowing businesses to dial performance up or down on demand while keeping costs in check. This flexibility, paired with high-speed analytics and minimal operational overhead, makes Snowflake a go-to solution for companies eager to break free from the constraints of traditional systems.

Key Features of Snowflake

- Separation of Compute and Storage: Independently scale compute power and storage capacity, so you only pay for what you actually use.
- Multi-Cloud Support: Run your workloads smoothly across AWS, Azure, and Google Cloud without being locked into a single vendor.
- Elastic Scalability: Handle virtually unlimited concurrent users and workloads with a multi-cluster shared architecture that keeps queries fast.
- Advanced Data Types: Natively support structured data alongside semi-structured formats like JSON, Avro, and Parquet, plus handle unstructured data flexibly.
- Secure Data Sharing: Share securely and in real time across departments and partner organizations, all without creating copies or compromising governance.
- Fully Managed Service: Forget about infrastructure headaches; Snowflake automates everything from performance tuning to data compression and micro-partitioning.
- Integration Friendly: Connect effortlessly with popular BI tools such as Tableau and Power BI, and leverage developer frameworks like Snowpark for custom analytics.
- Robust Security and Compliance: Built-in encryption, multi-factor authentication, and compliance with GDPR, HIPAA, and PCI DSS keep your information protected.

Why Connect Salesforce and Snowflake?
Imagine harnessing the full power of your customer data by bridging Salesforce’s rich CRM ecosystem with Snowflake’s cloud analytics platform. This integration pulls your fragmented data sources into a single, unified ecosystem, tearing down silos and unlocking a panoramic, 360-degree customer view that drives smarter sales, marketing, and service strategies.

By syncing information, businesses can zero in on personalized marketing, sharpen customer segmentation, and execute campaigns that hit the mark. For instance, retailers can blend Commerce Cloud data with Snowflake’s inventory and sales analytics to fine-tune product availability and promotions dynamically.

Key Benefits of Salesforce to Snowflake Integration:

- Unified Customer Profiles: Combine Salesforce records with data from point-of-sale systems, supply chain logistics, and third-party marketplaces to create rich, actionable customer insights.
- Advanced Analytics and AI: Leverage Snowflake’s scalable architecture to power predictive models, sales forecasting, and customer propensity scoring, fueling precision-targeted campaigns and sharper segmentation.
- Real-Time Data Sharing: Take advantage of Salesforce’s Bring Your Own Lake (BYOL) feature for zero-ETL, real-time syncing between Salesforce Data Cloud and Snowflake, eliminating latency and redundant data copies.
- Improved Decision-Making: Fuse historical Salesforce records with up-to-the-minute market trends and web analytics inside Snowflake to optimize inventory management, marketing budgets, and sales strategies.
- Cost Efficiency: Pay strictly for what you consume, while integration tools like Skyvia, Fivetran, and Estuary Flow take the manual toil out of pipeline management.
- Enhanced Customer Experiences: Real-time analytics empower teams to craft personalized customer journeys and speed up issue resolution, boosting satisfaction and loyalty.
- Flexible Integration Options: Choose from native methods or tap third-party solutions for scheduled or streaming data syncs tailored to your needs.

How to Connect Salesforce and Snowflake?

Choosing the right approach hinges on your business needs, technical resources, and budget. Whether you want to manually push small batches occasionally, leverage Salesforce’s zero-ETL BYOL Data Sharing for real-time analytics, or set up fully automated, no-code pipelines with Skyvia, each method comes with its own strengths and trade-offs. The table below breaks down the key options by who they’re best suited for, the technical expertise required, their automation capabilities, and pricing considerations.

| Method | [Salesforce Data Import](https://skyvia.com/blog/importing-data-into-salesforce/) Wizard | Salesforce Data Cloud BYOL Data Sharing | No-Code Integration: Skyvia |
|---|---|---|---|
| Best For | Small to medium data volumes (up to 50,000 records); one-time or infrequent batch imports; non-technical users | Enterprises needing real-time, zero-ETL, bidirectional sync; Salesforce-centric organizations with Data Cloud licenses; AI/ML use cases | Businesses of all sizes wanting scalable, automated, no-code Salesforce–Snowflake syncs; complex data mapping and Reverse ETL |
| Technical Skill Required | Low. No coding; browser-based wizard | Medium to High. Requires Salesforce admin and Snowflake setup knowledge | Low to Medium. Intuitive GUI; minimal coding needed |
| Automation & Real-Time Support | Manual process; no automation or real-time syncing | Real-time, zero-copy, bidirectional syncing via Salesforce Data Cloud | Supports scheduled and near real-time syncs with incremental updates |
| Pricing Considerations | Included in most Salesforce editions; no extra cost | Requires Salesforce Data Cloud subscription; consumption-based pricing | Freemium tier available; paid plans for higher volumes/features |

Method 1: Salesforce Data Import Wizard

The Salesforce Data Import Wizard is a built-in, browser-based tool designed to simplify importing information from external sources. It’s particularly useful for businesses that need to move files occasionally or in smaller batches, without setting up complex pipelines or writing code. Using the wizard, you can upload CSV files and map their fields to Salesforce objects in a guided, step-by-step manner.

While this method is accessible and straightforward, it’s best suited to one-off or infrequent imports rather than ongoing synchronization. It lacks automation and has limits on data volume and the object types it supports, but its ease of use makes it a valuable option for non-technical users and small datasets.

Best for

- Small to Medium Data Volumes: Ideal for importing up to 50,000 records; perfect for initial migrations or occasional updates.
- Non-Technical Users: Suitable for Salesforce admins or business users without coding experience; requires no additional software.
- One-Time or Infrequent Imports: Great for importing campaign leads, historical data, or batch uploads without automation.
- Standard and Custom Objects: Supports importing to common Salesforce objects like Accounts, Contacts, and Leads, as well as custom objects.

Pros

- User-Friendly Interface: A step-by-step wizard guides you through object selection, field mapping, and file upload, minimizing errors.
- No Extra Cost: Included in all Salesforce editions except Database.com; no additional licensing needed.
- Flexible Field Mapping: Automatically maps common CSV columns, with manual overrides for unmapped or custom fields.
- Data Integrity Features: Supports deduplication and can trigger Salesforce automation rules during import.
- Error Reporting: Detailed import status and error logs help troubleshoot failed records efficiently.

Cons

- Record Limitations: Only supports up to 50,000 records per import, limiting large-scale data transfers.
- No Automation: Requires manual CSV exports and imports; not suitable for frequent or real-time syncing.
- Object Restrictions: Does not support all standard Salesforce objects (e.g., Opportunities, Cases, Campaigns).
- Manual Mapping Required: Complex datasets require time-consuming manual field mapping.
- File Size Limits: A maximum 100 MB file size (32 MB zipped) and 90 fields per file may restrict large datasets.

Step-by-Step Guide

1. Prepare Snowflake Data: Export the desired information as a CSV file using a SQL client (e.g., SnowSQL). Clean it to ensure consistency, remove duplicates, and align column headers with Salesforce fields.
2. Access the Data Import Wizard: Log in to Salesforce, go to Setup, type “Data Import Wizard” in the Quick Find box, and launch the wizard.
3. Select Object and Action: Choose the Salesforce object for import (standard or custom), then select the import action: add new records, update existing records, or upsert (add and update).
4. Upload and Map CSV: Upload the cleaned CSV file. The wizard auto-maps fields with similar names, but you must manually map any unmapped fields; required fields must be mapped to proceed.
5. Start and Monitor Import: Begin the import and monitor progress. Review error logs if any records fail to import.
6. Verify Import Results: Upon completion, check the imported records. Run reports or queries to ensure data integrity and confirm no records were skipped.
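The CSV cleanup in step 1 (normalizing headers, trimming whitespace, removing duplicates) can be scripted instead of done by hand in a spreadsheet. Below is a minimal sketch using only Python's standard library; the `HEADER_MAP` entries and file names are hypothetical examples, so adjust them to match your own Snowflake export and target Salesforce fields:

```python
import csv

# Hypothetical mapping from Snowflake export column headers
# to the Salesforce field labels expected by the Import Wizard.
HEADER_MAP = {"COMPANY_NAME": "Name", "INDUSTRY_SEGMENT": "Industry"}

def clean_export(rows):
    """Rename headers per HEADER_MAP, trim values, and drop exact duplicates."""
    seen = set()
    cleaned = []
    for row in rows:
        renamed = {HEADER_MAP.get(k, k): (v or "").strip() for k, v in row.items()}
        key = tuple(sorted(renamed.items()))
        if key not in seen:          # de-duplicate before import
            seen.add(key)
            cleaned.append(renamed)
    return cleaned

def clean_csv(src_path, dst_path):
    """Read a raw Snowflake CSV export and write a Wizard-ready copy."""
    with open(src_path, newline="") as src:
        rows = clean_export(csv.DictReader(src))
    if rows:
        with open(dst_path, "w", newline="") as dst:
            writer = csv.DictWriter(dst, fieldnames=list(rows[0]))
            writer.writeheader()
            writer.writerows(rows)
    return len(rows)
```

Running `clean_csv("snowflake_export.csv", "salesforce_ready.csv")` then gives you a file you can upload directly in the wizard's mapping step.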
Method 2: Salesforce Data Cloud’s Bring Your Own Lake (BYOL) Data Sharing

The native integration between Salesforce and Snowflake, specifically through [Salesforce Data Cloud’s Bring Your Own Lake (BYOL) Data Sharing](https://trailhead.salesforce.com/content/learn/modules/byol-data-shares-in-data-cloud-quick-look/get-to-know-byol-data-sharing-in-data-cloud), eliminates the need for third-party tools or manual exports. Launched in 2023, this solution enables zero-ETL, bidirectional, real-time data sharing.

Best For

- Enterprises with Real-Time Data Needs: Ideal for companies looking to leverage real-time analytics and AI-driven insights without the complexity of traditional ETL pipelines.
- Salesforce-Centric Organizations: Best for businesses already using Salesforce Data Cloud that want to unify CRM data with external enterprise data.
- AI/ML Applications: Organizations aiming to use predictive modeling, machine learning, or advanced analytics.
- Large-Scale Cross-Cloud Analytics: Perfect for industries like retail, healthcare, and finance that need scalable, real-time data access for customer segmentation or operational reporting.

Pros

- Zero-ETL Data Sharing: Eliminates traditional ETL processes, reducing duplication and simplifying the integration workflow.
- Real-Time Data Access: Up-to-date information is available without delay, enhancing decision-making with live data.
- Bidirectional Data Sync: Data can flow in both directions, allowing comprehensive integration.
- Simplified Setup: Easy to enable with a point-and-click interface; no third-party middleware required.
- Scalable and Cost-Efficient: Performs at scale, with consumption-based pricing that can be more cost-effective than traditional methods.

Cons

- Cost Considerations: While it provides powerful capabilities, it requires a Salesforce Data Cloud subscription, which may be expensive for smaller businesses or startups.
- Latency in Bidirectional Sync (Sync Out Connector): Although BYOL allows real-time access, the Sync Out Connector runs on a schedule and may introduce delays of 10–15 minutes per sync.
- Setup Complexity for Smaller Teams: Requires some Salesforce admin expertise and Snowflake knowledge to set up databases, schemas, and permissions, which can be challenging for non-technical teams.
- Limited for Operational Use Cases (Sync Out Connector): The [Sync Out connector](https://help.salesforce.com/s/articleView?id=xcloud.sdp_connectors_sync_out_snowflake.htm&type=5) offers scheduled syncs only and lacks the real-time syncing needed for operational use cases or instant updates.

Step-by-step Guide

Step 1: Set Up Snowflake Environment

Create Database, Schema, and Warehouse: In Snowflake, create a database and schema that will hold your Salesforce data. Example commands:

```sql
CREATE DATABASE salesforce_data;
CREATE SCHEMA salesforce_data.public;
CREATE WAREHOUSE salesforce_wh WITH WAREHOUSE_SIZE = 'XSMALL';
```

Grant the necessary privileges to a role to ensure secure access. Note that in Snowflake, INSERT and UPDATE are table-level privileges, so they are granted on tables rather than directly on the schema:

```sql
GRANT USAGE ON WAREHOUSE salesforce_wh TO ROLE salesforce_role;
GRANT USAGE ON DATABASE salesforce_data TO ROLE salesforce_role;
GRANT USAGE, CREATE TABLE ON SCHEMA salesforce_data.public TO ROLE salesforce_role;
GRANT INSERT, UPDATE ON FUTURE TABLES IN SCHEMA salesforce_data.public TO ROLE salesforce_role;
```

Step 2: Configure Salesforce Data Cloud

Enable Data Sharing: In Salesforce Data Cloud, go to Data Shares > Create New, and choose Snowflake as your target.
Set Up Authentication: Ensure that you have Salesforce API access and configure authentication via OAuth or password for secure integration.

Step 3: Create Data Share in Salesforce

Set Up Data Share: Select the objects (e.g., Accounts, Opportunities) you want to share with Snowflake.
Choose the necessary fields and configure the data share settings, then use Data Shares to link your data to Snowflake. On the Snowflake side, a share is created and the shared objects are granted to it. Example:

```sql
-- Create a share and grant the shared objects to it
CREATE SHARE salesforce_share;
GRANT USAGE ON DATABASE salesforce_data TO SHARE salesforce_share;
GRANT USAGE ON SCHEMA salesforce_data.public TO SHARE salesforce_share;
GRANT SELECT ON TABLE salesforce_data.public.account TO SHARE salesforce_share;
GRANT SELECT ON TABLE salesforce_data.public.opportunity TO SHARE salesforce_share;
GRANT SELECT ON TABLE salesforce_data.public.contact TO SHARE salesforce_share;
```

Step 4: Configure Data Access

Connect Salesforce Data to Snowflake: Ensure Snowflake has permission to access the shared data, then create a database from the share on the consuming account (replace the provider account identifier with your own):

```sql
CREATE DATABASE salesforce_shared FROM SHARE provider_account.salesforce_share;
```

Step 5: Enable Real-Time Data Sharing

Activate Bidirectional Sync (Optional): Salesforce Data Cloud’s BYOL Data Sharing feature allows bidirectional sync, so you can push Salesforce data to Snowflake and bring Snowflake data back into Salesforce. Set up data flows that suit your real-time analytics or AI models.

Step 6: Verify the Integration

Test Data Sync: Run a test to ensure information is syncing correctly. Verify the shared data in Snowflake:

```sql
SELECT * FROM salesforce_data.public.account;
```

In Salesforce, ensure that the data from the warehouse is visible as expected.

Step 7: Monitor and Optimize

Keep an eye on the pipelines to ensure performance and accuracy. Use Salesforce Data Cloud’s governance tools to track and manage data access and ensure compliance. Snowflake provides detailed usage statistics and performance metrics that help optimize costs and performance.

Method 3: Code-Free Data Integration with Skyvia

[Skyvia](https://skyvia.com/) is a cloud-based, no-code data integration platform that makes connecting Salesforce and Snowflake simple and accessible, even without technical skills. Supporting ETL, ELT, and Reverse ETL workflows, Skyvia lets you replicate, import, and sync data bidirectionally with just a few clicks. With [over 200 native connectors](https://skyvia.com/connectors), an intuitive visual interface, and flexible scheduling, Skyvia streamlines complex data pipelines and eliminates the need for manual coding or maintenance.
This makes it an ideal choice for businesses looking to automate Salesforce-Snowflake integration while reducing reliance on IT resources. Skyvia offers automation features such as incremental syncing and error notifications, ensuring your data stays fresh and accurate. Whether you want continuous data replication for real-time analytics or batch imports for one-time migrations, Skyvia’s flexible tools adapt to your business needs.

Best for

- Business Users & Admins: Those who prefer a no-code solution to build and maintain pipelines without writing any code.
- Small to Large Organizations: Companies needing scalable, automated integration that grows with their data volumes.
- Real-Time and Scheduled Syncing: Teams requiring near-real-time data replication or scheduled batch imports.
- Complex Data Mapping: Users who need advanced field mapping, including lookups, expressions, and transformations.
- Reverse ETL Use Cases: Organizations that want to load enriched data from Snowflake back into Salesforce for actionable insights.

Pros

- No Coding Required: Easy to use, with a drag-and-drop interface and minimal technical expertise needed.
- Flexible Integration Options: Supports ETL, ELT, and Reverse ETL workflows in one platform.
- Incremental Syncing: Efficiently syncs only changed records, saving API calls and costs.
- Automated Schema Handling: Automatically creates and updates Snowflake tables based on Salesforce objects.
- Advanced Scheduling & Error Handling: Offers customizable schedules and sends notifications for sync issues.
- Wide Connector Support: Over 200 pre-built connectors, including the most popular services.
- Free Tier Available: Lets you test all features before committing to paid plans.

Cons

- Subscription Cost: Full-featured capabilities require paid plans, which might be costly for very small teams or low-volume use.
- Initial Setup Time: Complex mappings or large schemas may require some setup and testing to perfect.
- Limitations on Extremely Large Data Volumes: Massive enterprise data lakes may call for more specialized tools.

Step-by-Step Guide: Data Replication

Skyvia’s Data Replication feature automates continuous syncing of Salesforce data to Snowflake (and vice versa), creating a reliable, up-to-date single source of truth for analytics. Unlike one-time data imports, replication maintains exact copies of selected objects, such as Contacts, Leads, Opportunities, and custom objects, automatically generating the necessary tables and schemas. By leveraging incremental syncing, it only transfers new or changed records, reducing API usage and processing costs, which is ideal for organizations needing real-time or near-real-time insights.

How It Works:

- Automated Schema Creation: Skyvia automatically creates and updates Snowflake tables that correspond to Salesforce objects, adapting to schema changes when new fields are added.
- Incremental Updates: Using Change Data Capture (CDC), only new or modified records are synced. For example, if a Contact’s email changes, only that specific record is updated.
- Flexible Scheduling: Set your sync frequency from every minute to monthly, depending on your business requirements, enabling either near-real-time or batch updates.
- Filtering & Customization: Configure filters (e.g., sync only “Closed Won” Opportunities) or exclude certain fields to optimize data flow and minimize unnecessary processing.
- Reverse ETL: Sync enriched data back to Salesforce, giving sales teams actionable insights such as updated customer segments directly within their CRM interface.

Benefits of Real-Time Syncing

- Ensures your analytics dashboards always reflect the latest data (e.g., real-time sales performance reports).
- Minimizes manual tasks, freeing teams from repetitive syncing processes.
- Supports advanced analytics and predictive modeling by keeping your warehouse current.
- Reduces errors with detailed logging and automated email notifications on sync failures.

Step-by-Step Guide: Data Import

Skyvia’s Data Import feature is designed for one-time or batch data transfers, making it ideal for scenarios such as migrating historical data, refreshing specific datasets, or applying targeted updates without maintaining continuous syncing. Unlike replication, which focuses on ongoing, incremental synchronization, Data Import supports flexible data manipulation operations (Insert, Update, and Delete) for more granular control. It also enables advanced mapping, transforming and aligning mismatched schemas so your information fits seamlessly into the target system.

How Data Import Differs from Replication:

- One-Time or Ad-Hoc Transfers: Best suited for specific, non-recurring data loads rather than ongoing syncs.
- Complex Mappings: Supports lookups, expressions, and constants to handle schema differences, such as filling Salesforce fields with static values when no direct Snowflake equivalent exists.
- DML Flexibility: Allows selective Insert, Update, or Delete operations on records, unlike replication, which copies entire objects.
- No Schema Creation Needed: Imports data into existing tables, giving you control over table structure.

Setup Steps for Data Import:

- Create Import Integration: In Skyvia, click Create New+ > Integration > Import, name your integration (e.g., “Salesforce_to_Snowflake_Import”), and select Salesforce as Source and Snowflake as Target (or vice versa for Reverse ETL). Choose the objects or fields to import.
- Configure Mappings: Use Skyvia’s GUI to map Salesforce fields to Snowflake columns. Apply lookups for matching records, expressions for data transformation (e.g., concatenation), and constants for missing values. Optionally set filters (e.g., import only Contacts with status “Active”).
- Set Scheduling: Choose manual runs or automate by scheduling imports (from once per minute to monthly). This allows regular data refreshes without manual intervention.
- Run and Monitor: Validate the mappings and execute the import. Monitor progress and troubleshoot using the detailed logs in Skyvia’s Run History.
- Verify Data: Confirm imported data accuracy by querying Snowflake, or by checking Salesforce objects for Reverse ETL results.

Use Cases:

- Historical Data Migration: Transfer legacy Salesforce data (e.g., Opportunities) into Snowflake for archival and long-term analysis without ongoing sync requirements.
- Targeted Data Refresh: Import enriched customer data from Snowflake back to Salesforce to enhance sales outreach and segmentation.
- Batch Updates: Apply specific updates to Snowflake tables with Salesforce data for ad-hoc reporting or compliance needs.

Benefits of Data Import

- Simplifies complex one-time data migrations with a no-code interface, reducing reliance on developers.
- Supports advanced data transformation and mapping for seamless integration across different schemas.
- Complements replication workflows by handling non-continuous, ad-hoc data loads.
- Boosts efficiency with automated scheduling and error notifications to keep imports running smoothly.

Summary

Integrating Salesforce with Snowflake opens the door to sharper analytics and smarter business decisions. We covered three main ways to connect them:

- The Salesforce Data Import Wizard offers a simple, manual option for small or occasional imports but lacks automation.
- Salesforce’s native Bring Your Own Lake (BYOL) Data Sharing provides real-time, zero-ETL syncing ideal for enterprises invested in Salesforce and Tableau, though it requires licensing and setup.
- Skyvia’s no-code integration stands out for its ease, flexibility, and automated real-time syncing, making it a fit for businesses seeking hassle-free, scalable data flows without coding.
Choosing the right method depends on your needs and resources, but for non-technical users looking for powerful automation, Skyvia is the best fit. Connecting these platforms breaks down silos, unifies customer data, and powers better insights. Ready to simplify your Salesforce-Snowflake integration? Try Skyvia’s free plan and start unlocking your data’s full potential today.

F.A.Q. for Snowflake and Salesforce

What data types can I sync between Salesforce and Snowflake?
You can sync a wide variety of data, including standard Salesforce objects (Accounts, Contacts, Leads), custom objects, and field types like dates, numbers, text, and picklists. Snowflake supports both structured and semi-structured data, so JSON and XML fields from Salesforce can also be integrated with the right tools.

How often can data be synced automatically?
The frequency of automatic syncing depends on the integration method. Manual methods require manual uploads, while no-code platforms like Skyvia allow flexible scheduling ranging from near real-time (every minute) to daily or weekly intervals.

Is Skyvia compatible with Salesforce custom objects?
Yes. Skyvia fully supports syncing both standard and custom Salesforce objects. It allows flexible field mapping and can handle schema changes, making it a robust choice for complex Salesforce environments with custom data models.

Can I secure data transfers between Salesforce and Snowflake?
Absolutely. Most integration tools, including Skyvia, use OAuth 2.0 for authentication and SSL/TLS encryption for data transfers. Additionally, Snowflake offers advanced security features like role-based access control, data masking, and compliance with GDPR, HIPAA, and other standards to keep your data protected.

What are the main challenges in integrating Salesforce with Snowflake?
Common challenges include handling large data volumes, managing schema changes, ensuring data consistency, and scheduling syncs without impacting system performance. Choosing an integration tool that supports automation and incremental updates can significantly reduce these challenges.

Can I sync data bidirectionally between Salesforce and Snowflake?
Yes. Some platforms, like Skyvia, support bidirectional syncing, allowing you to [replicate data](https://skyvia.com/blog/top-data-replication-tools/) from Salesforce to Snowflake and also import enriched Snowflake data back into Salesforce for actionable insights.

Do I need technical expertise to integrate Salesforce and Snowflake?
It depends on the method. Manual imports require basic knowledge of CSV exports and imports, while native connectors may require some admin skills. No-code platforms like Skyvia simplify the process and require minimal technical expertise, making integration accessible for business users.

[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)
With years of experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers.
[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

SOQL vs SQL: Best Practices to Query Salesforce Database

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), February 13, 2025

When it comes to working with Salesforce, we need SOQL. It’s similar to SQL but has its own features. Both are about fetching data, but SOQL is tailored specifically to Salesforce’s database structure, while SQL is more universal. By sticking to SOQL best practices, like using selective filters and avoiding over-fetching, businesses can avoid performance bottlenecks and stay within Salesforce’s governor limits.
On the other hand, neglecting these best practices can lead to slower queries, hitting data limits, and frustrated users. Whether you’re trying to streamline reporting, enhance app performance, or just keep the system running smoothly, mastering SOQL techniques can save time and headaches. Let’s dive deeper into some best practices for querying the Salesforce database and how they tackle common business pain points.

Table of contents
- What is SQL?
- What is Salesforce SQL (SOQL)?
- Types of SOQL Queries
- SOQL vs. SQL: Key Differences
- Syntax Comparison: SOQL vs. SQL
- Best Practices to Query Salesforce Database
- Conclusion: When to Use SOQL vs. SQL?

What is SQL?

[Structured Query Language](https://en.wikipedia.org/wiki/SQL), or SQL, is the universal way to chat with databases. It’s what we use to ask databases questions, update info, delete unnecessary records, or even redesign the structure of the database itself. With SQL, businesses can:

- Grab specific data (SELECT).
- Add new info (INSERT).
- Make updates (UPDATE).
- Clean house (DELETE).

It’s the backbone of popular database systems like MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.

Now, here’s the catch: Salesforce has its own twist on SQL. Instead of standard SQL, Salesforce uses Salesforce Object Query Language (SOQL), which is tailored to work with Salesforce’s unique database structure. Let’s jump into what makes SOQL special.

What is Salesforce SQL (SOQL)?

As noted above, [Salesforce Object Query Language](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm) (SOQL) is a variant of SQL. While SQL queries traditional relational databases, SOQL works with Salesforce’s unique database structure, which is centered around objects instead of tables. Think of it as SQL’s cousin who knows all the ins and outs of Salesforce.
With it, users can pull data from Salesforce objects, like Accounts, Contacts, or Opportunities, efficiently and accurately. This focus makes it efficient for retrieving exactly what businesses need without breaking anything. Salesforce leverages it to make data retrieval easy and scalable, whether running reports, building dashboards, or integrating with other systems. Companies can use SOQL in tools like Apex, Salesforce’s developer console, or through APIs when connecting Salesforce to external apps.

Types of SOQL Queries

When working with Salesforce data using SOQL, there’s more than one way to get what you need. SOQL offers a few different types of queries, each designed for specific use cases. Let’s dive into the main ones and how they can make your life easier.

Basic Queries

These are the bread-and-butter SOQL queries: simple and straightforward. You use them to fetch data from a single object. Examples:

SELECT: Retrieving Data from an Object.

```sql
SELECT Name, Industry FROM Account
```

This pulls the names and industries of all accounts.

INSERT: Adding a New Record (Handled in Apex, Not SOQL). Note: unlike SQL, SOQL is read-only, so inserting data requires Apex DML statements.

```apex
Account newAccount = new Account(Name = 'TechCorp', Industry = 'Technology');
insert newAccount;
```

This creates a new account in Salesforce.

UPDATE: Modifying Existing Records. Note: updating data also requires Apex DML.

```apex
Account acc = [SELECT Id, Name FROM Account WHERE Name = 'TechCorp' LIMIT 1];
acc.Industry = 'Software';
update acc;
```

Used to update the Industry field for the account named TechCorp.

DELETE: Removing a Record (again via Apex DML).

```apex
Account acc = [SELECT Id FROM Account WHERE Name = 'TechCorp' LIMIT 1];
delete acc;
```

This deletes the TechCorp account from Salesforce.

COUNT: Finding the Number of Records.

```sql
SELECT COUNT(Id) FROM Contact
```

This counts the total number of contacts in Salesforce.

LIMIT: Restricting Query Results.
SELECT Name FROM Account LIMIT 10 It retrieves only the first 10 accounts. Filtered Queries Sometimes, businesses don\u2019t need all the data, just the stuff that matches specific conditions to narrow things down using the WHERE clause. Here, you\u2019re getting all opportunities where the deal has already been won. Example : SELECT Name, CloseDate FROM Opportunity WHERE StageName = 'Closed Won' Perfect for focusing on success stories. Parent-to-Child Queries (a.k.a. Relationship ones) They let companies fetch data from a parent object and its related child objects in one go. We\u2019ll use subqueries for this magic. This grabs account names along with the last names of all their related contacts. It\u2019s a two-for-one deal. Example : SELECT Name, (SELECT LastName FROM Contacts) FROM Account Child-to-Parent Queries These ones pull data from a child object and reference fields from its parent one, such as fetching contact names along with the name of the related account. Example : SELECT FirstName, LastName, Account.Name FROM Contact It\u2019s like seeing who works where. Aggregate Queries Need some quick math? Aggregate functions like COUNT(), SUM(), and AVG() let users calculate data right in the query. This counts the number of accounts grouped by their industry. Example : SELECT COUNT(Id), Industry FROM Account GROUP BY Industry Perfect for spotting trends or tracking growth. Geolocation Queries Got location-based data? SOQL can query it using geolocation fields with functions like DISTANCE or GEOLOCATION. This one grabs all accounts within 50 miles of San Francisco. Example : SELECT Name FROM Account WHERE DISTANCE(BillingAddress, GEOLOCATION(37.7749, -122.4194), 'mi') < 50 Great for planning local campaigns. Querying with Sorting Want your results in a specific order? Add ORDER BY to sort the data. Example : SELECT Name, AnnualRevenue FROM Account ORDER BY AnnualRevenue DESC This sorts accounts by revenue, showing the big fish first. SOQL vs. 
SQL: Key Differences Users who have worked with SQL before might be happy to see how SOQL fits into the picture. Both fetch data, but SOQL is purpose-built for Salesforce, while SQL works with traditional relational databases. They share some similarities but have key differences, especially in handling relationships, performance, and scalability. Let\u2019s break it all down, including their strengths and best use cases. Now, let\u2019s compare both query languages in a clear and organized way. Criteria SOQL (Salesforce Object Query Language) SQL (Structured Query Language) Purpose Designed for querying Salesforce objects and records. Used for querying relational databases (MySQL, PostgreSQL, etc.). Syntax Limited to SELECT statements (no INSERT, UPDATE, or DELETE). Supports full CRUD operations (SELECT, INSERT, UPDATE, DELETE). Querying Relationships Uses relationship queries (Parent-to-Child & Child-to-Parent) with subqueries. Uses JOIN operations for table relationships. Filtering Data Uses WHERE clause but lacks advanced operators like HAVING. Supports WHERE, HAVING, GROUP BY, and advanced conditions. Aggregation Supports COUNT(), SUM(), AVG(), MAX(), MIN() but requires GROUP BY for more complex queries. More robust aggregation with advanced functions and HAVING. Performance Works well for structured, object-based data but isn\u2019t designed for large-scale, complex joins. Can handle massive datasets with advanced indexing and optimization techniques. Data Storage Structure Data is stored as objects (e.g., Account, Contact) with predefined relationships. Data is stored in tables with rows and columns. Indexing and Optimization Queries are optimized using governor limits, indexed fields, and selective filtering. Uses advanced indexing, partitioning, and query optimization techniques. Security and Access Control Follows [Salesforce security](https://skyvia.com/blog/salesforce-security-best-practices/) settings (Profiles, Permission Sets, Sharing Rules). 
Uses database access control mechanisms (Roles, Privileges, Views). Scalability Limited by Salesforce Governor Limits (e.g., 50,000 record retrieval limit). Scales efficiently with proper database optimization. Full-Text Search Uses Salesforce Object Search Language (SOSL) for full-text search across multiple objects. Supports LIKE, FULLTEXT INDEX, and search engines (e.g., Elasticsearch). Integration with External Systems Requires APIs, middleware, or [ETL tools](https://skyvia.com/blog/etl-tools/) to connect with external databases. Natively supports integrations via ODBC, JDBC, and APIs. Use Cases Best for querying Salesforce data for reports, dashboards, and Apex. Ideal for handling complex, large-scale relational database queries. So, at first glance, both solutions might seem similar, but they are built for different environments. SQL is a universal database language used to manage and manipulate relational databases, while SOQL is specifically designed for querying Salesforce data. Key Differences between SOQL and SQL Read-Only vs. Data Manipulation. SOQL is strictly used for retrieving records; it doesn\u2019t support direct data modifications. In contrast, traditional query languages allow operations like INSERT, UPDATE, and DELETE.\u00a0 To modify records in Salesforce, changes must be made through the user interface or Apex DML statements. Data Storage Structure. Standard databases store information in tables with rows and columns representing records. Salesforce, however, organizes data as objects, which function like structured records with predefined relationships. Relationship Queries vs. Traditional Joins. Unlike relational databases that support complex JOIN operations across multiple tables, Salesforce queries only work with predefined relationships. Instead of arbitrary joins, it uses relationship queries to fetch connected data. data from related objects. No SELECT * for Fetching All Fields. 
In many databases, a simple SELECT * retrieves all fields from a table. Salesforce requires users to explicitly specify the fields they need (e.g., SELECT Name, Industry FROM Account). This limitation helps maintain performance and efficiency in a multi-tenant environment, preventing excessive data retrieval.

Querying Related Records. Since standard SQL supports flexible joins between tables, data can be retrieved from unrelated sources. In Salesforce, queries are limited to objects with an existing relationship (parent-child or lookup).

Example of a parent-to-child query (fetching related child records):

```sql
SELECT Name, (SELECT LastName FROM Contacts) FROM Account
```

This one retrieves Accounts along with their related Contacts.

Example of a child-to-parent query (referencing parent object fields):

```sql
SELECT FirstName, LastName, Account.Name FROM Contact
```

It fetches Contacts while pulling the associated Account Name.

Syntax Comparison: SOQL vs. SQL

Even though both query languages share some similarities, their syntax has key differences due to the object-based structure of Salesforce data versus the table-based structure of relational databases:

- SOQL requires explicit field selection (SELECT * is not allowed).
- There is no arbitrary JOIN in SOQL; relationship queries replace joins.
- SOQL is subject to governor limits, requiring performance optimization.
- Full-text searches require SOSL, while SQL relies on LIKE.
- Both SQL and SOQL use ORDER BY, LIMIT, and GROUP BY, but SOQL has fewer aggregate functions.

Below is a breakdown of some common queries to highlight how they differ in terms of syntax and functionality.

1. Selecting All Records

SQL (can use SELECT * to fetch all fields from a table):

```sql
SELECT * FROM Customers;
```

SOQL (must explicitly specify fields, no SELECT *):

```sql
SELECT Name, Phone, Email FROM Contact;
```

Why? SOQL requires specifying fields to prevent excessive data retrieval in Salesforce's multi-tenant environment.

2. Filtering Data Using the WHERE Clause

SQL (filtering with conditions):

```sql
SELECT Name, Industry FROM Customers WHERE Industry = 'Technology';
```

SOQL (same WHERE syntax, but objects instead of tables):

```sql
SELECT Name, Industry FROM Account WHERE Industry = 'Technology';
```

Why? The structure is similar, but SQL queries tables while SOQL queries Salesforce objects.

3. Sorting Data with ORDER BY

SQL:

```sql
SELECT Name, Age FROM Customers ORDER BY Age DESC;
```

SOQL:

```sql
SELECT Name, Age FROM Contact ORDER BY Age DESC;
```

Why? The syntax is identical, but Salesforce has query performance limits, so sorting large datasets requires optimization.

4. Limiting Query Results

SQL (LIMIT clause to restrict results):

```sql
SELECT Name FROM Customers LIMIT 10;
```

SOQL (same LIMIT functionality):

```sql
SELECT Name FROM Contact LIMIT 10;
```

Why? Both SQL and SOQL use LIMIT, but in Salesforce it is crucial to stay within governor limits.

5. Using Aggregate Functions

SQL (aggregating data with GROUP BY):

```sql
SELECT Industry, COUNT(*) FROM Customers GROUP BY Industry;
```

SOQL (supports limited aggregate functions, requires GROUP BY):

```sql
SELECT Industry, COUNT(Id) FROM Account GROUP BY Industry;
```

Why? SOQL has fewer aggregate functions than SQL, but COUNT(), SUM(), AVG(), MIN(), and MAX() are supported.

6. Querying Related Records (JOIN vs. Relationship Queries)

SQL (JOIN to combine multiple tables):

```sql
SELECT Customers.Name, Orders.OrderNumber
FROM Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID;
```

SOQL uses relationship queries instead of JOIN.

Parent-to-child query (subquery for related child records):

```sql
SELECT Name, (SELECT OrderNumber FROM Orders) FROM Account;
```

Child-to-parent query (direct reference to parent fields):

```sql
SELECT FirstName, LastName, Account.Name FROM Contact;
```

Why? Unlike SQL, SOQL does not support arbitrary joins. Instead, it uses relationship queries based on predefined object relationships.

7. Searching for Text (SOQL Has SOSL for Full-Text Search)

SQL (using LIKE for partial text search):

```sql
SELECT Name FROM Customers WHERE Name LIKE '%John%';
```

SOQL (LIKE works the same way but is limited to specific fields):

```sql
SELECT Name FROM Contact WHERE Name LIKE '%John%';
```

Salesforce-specific alternative, SOSL (full-text search across multiple objects):

```sql
FIND 'John' IN ALL FIELDS RETURNING Contact(Name), Account(Name);
```

Why? SOSL is Salesforce's full-text search tool, allowing queries across multiple objects.

Best Practices to Query Salesforce Database

Writing efficient SOQL queries is crucial for maintaining performance, staying within Salesforce's governor limits, and ensuring smooth data retrieval. Unlike traditional SQL, SOQL operates in a multi-tenant environment, so query discipline helps prevent system slowdowns and unnecessary resource consumption. Let's consider what to follow when querying the Salesforce database.

1. Use Selective Queries to Stay Within Limits

Salesforce enforces strict governor limits, restricting the number of records a query can return. Always use filters (WHERE clauses) to narrow down results instead of querying entire objects.

Good example (using a filtered query):

```sql
SELECT Name, Industry FROM Account WHERE Industry = 'Technology'
```

Bad example (querying all records without a filter):

```sql
SELECT Name, Industry FROM Account
```

Why? The second query might hit the 50,000-record limit, causing errors or performance issues.

2. Avoid SELECT *: Always Specify Fields

Unlike SQL, SOQL does not allow SELECT *, but even if it did, retrieving unnecessary fields would waste resources. Always specify only the fields you need.

Good example (fetching specific fields):

```sql
SELECT Name, Email FROM Contact
```

Why? This improves query efficiency and reduces data load.

3. Use Relationship Queries Instead of Multiple Queries

SOQL supports parent-to-child and child-to-parent relationship queries, reducing the need for multiple queries.
Good example (fetching related child records in one query):

```sql
SELECT Name, (SELECT LastName FROM Contacts) FROM Account
```

Why? Instead of querying Accounts first and then separately querying Contacts, this fetches both in one go.

4. Limit the Number of Records Returned

Use the LIMIT clause to avoid exceeding row limits and to optimize query performance.

Good example (limiting query results):

```sql
SELECT Name FROM Account LIMIT 100
```

Why? This ensures you only fetch what you need, preventing excessive data retrieval.

5. Optimize Query Performance with Indexing

Indexed fields significantly improve query speed, especially for large datasets. Use selective filters on indexed fields like Id, Name, or external IDs.

Good example (filtering on an indexed field):

```sql
SELECT Name FROM Account WHERE Id = '0015g00000V9zXxAAJ'
```

Why? Queries on indexed fields run much faster than those on non-indexed fields.

6. Use ORDER BY and GROUP BY Wisely

Sorting and grouping data can slow down performance, so use them only when necessary.

Good example (sorting efficiently):

```sql
SELECT Name FROM Account ORDER BY CreatedDate DESC LIMIT 50
```

Why? The LIMIT keeps performance in check.

Good example (using aggregation properly):

```sql
SELECT Industry, COUNT(Id) FROM Account GROUP BY Industry
```

Why? Aggregating a manageable number of records prevents query slowdowns.

7. Use Special Tools for More Advanced Querying

While SOQL is built specifically for Salesforce, some [platforms](https://skyvia.com/blog/salesforce-query-tools/), like [Skyvia](https://skyvia.com/data-integration), [Salesforce SOQL Builder](https://marketplace.visualstudio.com/items?itemName=salesforce.salesforcedx-vscode-expanded), [Salesforce Inspector](https://chromewebstore.google.com/detail/salesforce-inspector/aodjmnfhjibkcdimpodiifdjnnncaafh), [Workbench](https://workbench.developerforce.com/login.php), etc., offer SQL-like querying to make data retrieval easier and more flexible. They help users:

- Run queries.
- Explore relationships.
- Bypass some SOQL limitations by providing a more user-friendly interface.

For instance, [Skyvia](https://skyvia.com/query) allows running SQL queries against Salesforce in its console, making it ideal for users familiar with traditional SQL syntax.

Example query using Skyvia:

```sql
SELECT Contact.FirstName, Contact.LastName, Account.Name
FROM Contact
INNER JOIN Account ON Contact.AccountId = Account.Id
```

Why? Skyvia lets you use standard SQL syntax, making queries more familiar and flexible.

How to Run SQL Query in Salesforce using Skyvia?

[Skyvia](https://skyvia.com/) allows users to run SQL queries directly on Salesforce data, making reporting and analysis much easier. Whether you need to join Salesforce data with other cloud solutions, perform advanced aggregations, or automate queries, Skyvia provides a no-code, scalable solution. In this guide, we'll walk through the step-by-step process of setting up Skyvia, connecting it to Salesforce, and executing SQL queries to fetch data.

The first step is to visit the [Skyvia website](https://skyvia.com/) and sign up for a free account if you haven't already. After logging in, click +Create NEW and navigate to the Connections tab to create a new connection. Click [here](https://docs.skyvia.com/connections/index.html) for more details. Select Salesforce from the list of available connectors and authorize Skyvia to access the Salesforce account by providing the necessary credentials. Once your Salesforce connection is established, go to the [Query](https://docs.skyvia.com/query/#connections) tab and click +Create NEW to create a new query. In the query editor, write SQL statements to interact with Salesforce data.
For example, to retrieve all contacts from the Salesforce database, use:

```sql
SELECT * FROM Contact;
```

Note: Skyvia supports standard SQL syntax, allowing for complex [queries](https://docs.skyvia.com/query/#supported-sql-statements) involving joins, aggregations, and more.

Click the Execute button to run your query. The results will be displayed below the query editor, where you can review and analyze the data. If you need to run this query regularly, save it by clicking the Save button.

Note: Skyvia also offers scheduling options to automate query execution and export results to various formats or destinations. For detailed instructions and visual aids, refer to Skyvia's official [documentation](https://docs.skyvia.com/query/configuring-queries-with-query-builder.html).

Conclusion: When to Use SOQL vs. SQL?

Both query languages are powerful, but they serve different purposes. Select SOQL when working within Salesforce to fetch data from objects like Accounts, Leads, or Opportunities, especially when you need to stay within Salesforce governor limits. It's ideal for generating reports and dashboards and automating CRM tasks. On the other hand, choose SQL when querying external databases, performing complex joins across multiple tables, or handling large-scale aggregations and data analysis. If a business needs SQL-like functionality within Salesforce, tools like Skyvia bridge the gap. Understanding when to use each language ensures optimized queries, better performance, and smoother data operations.

FAQ for SOQL vs SQL

How to Run SOQL Query in Salesforce?

You can run SOQL queries in Salesforce using native tools like:

- Developer Console: go to Setup > Developer Console > Query Editor and enter your SOQL query.
- Workbench: visit Workbench, log in, and use the SOQL Query tool.
- Apex Code: use SOQL within Apex classes to fetch and process Salesforce data.
Example SOQL query:

```sql
SELECT Name, Industry FROM Account WHERE Industry = 'Technology'
```

How to Run SQL Query in Salesforce?

Salesforce does not support traditional SQL, but you can use third-party tools:

- Skyvia allows running SQL queries on Salesforce data.
- Salesforce Inspector is a Chrome extension for quick data querying.
- SOQL Builder in VS Code offers interactive query building for developers.

Example SQL query in Skyvia:

```sql
SELECT Contact.FirstName, Contact.LastName, Account.Name
FROM Contact
INNER JOIN Account ON Contact.AccountId = Account.Id
```

Can I Use SELECT * in SOQL Like in SQL?

No, SOQL does not support SELECT *. You must explicitly specify the fields you want to retrieve.

Not allowed in SOQL (this will cause an error):

```sql
SELECT * FROM Contact;
```

Correct SOQL query:

```sql
SELECT FirstName, LastName, Email FROM Contact;
```

This restriction helps improve performance in Salesforce's multi-tenant environment.

Does SOQL Support Joins Like SQL?

No, SOQL does not support arbitrary joins like SQL. Instead, it uses relationship queries to fetch related records.

Parent-to-child query (subquery in SOQL):

```sql
SELECT Name, (SELECT LastName FROM Contacts) FROM Account;
```

Child-to-parent query (direct reference in SOQL):

```sql
SELECT FirstName, LastName, Account.Name FROM Contact;
```

In SQL, you would normally use JOIN, but SOQL relies on predefined object relationships instead.

What Are the Governor Limits for SOQL Queries?

Salesforce enforces governor limits to ensure system performance:

- Maximum 50,000 records retrieved per query.
- Maximum 100 SOQL queries per synchronous Apex transaction.
- Query execution time limits to prevent excessive resource usage.
- Complex queries may require indexing for optimization.

To avoid hitting limits, use filters (WHERE), limits (LIMIT), and indexed fields.

When Should I Use SOQL Instead of SQL?

Use SOQL when:

- Querying Salesforce standard or custom objects (e.g., Accounts, Contacts, Opportunities).
- Building reports and dashboards within Salesforce.
- Writing Apex code that interacts with Salesforce data.

Use SQL when:

- Querying external databases (MySQL, PostgreSQL, SQL Server, etc.).
- Performing complex joins across multiple tables.
- Running advanced aggregations and data analysis outside Salesforce.

Can I Automate Salesforce Queries?

Yes! You can automate SOQL queries using:

- Apex triggers and batch jobs: run queries automatically based on events.
- Scheduled reports: automate report generation in Salesforce.
- Skyvia or Data Loader: schedule and export query results to external databases or CSV files.

For advanced automation, consider using Salesforce Flow or Apex [Batch Processing](https://skyvia.com/blog/batch-etl-processing/).

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/). Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)

SQL Server Data Warehouse: Easy and Practical Guide

By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/), February 6, 2025

SQL Server data warehouse? Sounds boring. Who would want to learn this? But what if I tell you that [data analysis is one of the most sought-after skills today](https://www.thetimes.com/business-money/technology/article/times-recruitment-pays-to-work-in-data-zpfqt7mwn?)? If "data warehouse SQL" is a question you frequently type into Google, you're not alone. As businesses generate and rely on vast amounts of information, understanding how to store, manage, and query it efficiently becomes crucial.
Data warehouses are the backbone of modern analytics, and SQL is the universal language for interacting with them. Whether you're looking to optimize your queries, integrate information from multiple sources, or simply understand the basics, this article will [guide](https://skyvia.com/blog/salesforce-quickbooks-integration/) you through the essentials of using SQL in a data warehouse environment. This guide should be easy for everyone, no matter how much experience you have in SQL. You can follow the example described below on your PC. Simply download and install a [free edition of SQL Server](https://www.microsoft.com/en-us/sql-server/sql-server-downloads), such as Developer or Express.

Table of contents

- What is a Data Warehouse?
- Dimensional Model
- Types of Data Warehouse Schema
- Advantages of DWH SQL Server
- How to Build SQL Server Data Warehouse
  - Step 1: Get Business Requirements
  - Step 2: Build the SQL Server Data Warehouse
  - Step 3: Extract Data from the Transactional Database into the SQL Server Data Warehouse
  - Step 4: Build the Sample Report
- ETL Tools for Data Integration
- Conclusion

What is a Data Warehouse?

A data warehouse is the central repository of information for analysis, artificial intelligence, and machine learning. To keep the warehouse useful, data is loaded from different sources, like transactional databases, flat files, or cloud storage, and should be regularly updated. An illustration of a typical data warehouse environment is shown below.

The first part of the diagram is the sources of data. These are databases from transactional systems, in SQL Server or another relational database. They can also be flat files like CSV, Excel, XML, and text. Afterward, you consolidate all the necessary records from the sources into a single format in the staging area. For simplicity, you can also implement the staging area in SQL Server. Then comes the SQL Server data warehouse with a dimensional model. We will discuss how to make one with an example later.
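The "flat files into a staging area" step described above can be sketched in a few lines. The example below uses Python's built-in csv and sqlite3 modules, with SQLite standing in for the SQL Server staging database; the file contents and the stg_policy_sales column names are invented for illustration.

```python
import csv
import io
import sqlite3

# A made-up CSV export from a transactional system (in practice, a file on disk).
csv_data = io.StringIO(
    "policy_id,client,premium\n"
    "P-1,Acme,100.0\n"
    "P-2,Globex,250.5\n"
)

# Staging database; SQLite stands in for the SQL Server staging area here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE stg_policy_sales (policy_id TEXT, client TEXT, premium REAL)"
)

# Consolidate the flat-file rows into a single staging format.
reader = csv.DictReader(csv_data)
cur.executemany(
    "INSERT INTO stg_policy_sales VALUES (:policy_id, :client, :premium)",
    reader,
)

print(cur.execute(
    "SELECT COUNT(*), SUM(premium) FROM stg_policy_sales"
).fetchone())  # (2, 350.5)
```

In a production pipeline, the same consolidation would typically be done with SSIS or an ETL tool rather than hand-written scripts, but the shape of the work (read source, normalize, load staging) is identical.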
The final part of the diagram is the data marts. A data mart focuses on one aspect of the business, like sales or purchasing. In this article, we will make a data warehouse with one data mart for insurance policy sales. A SQL Server data warehouse needs to be modeled for efficient processing. Next, we'll review its components, focusing on the dimensional model and its role.

Dimensional Model

Here are some key terms used in the dimensional model.

FACT TABLE

A fact table contains all the facts about a business entity or process. It's at the center of the schema, surrounded by dimensions. A fact table may be about sales, ticket support, projects, etc. You can implement it as a SQL database table. Its columns include the ID keys of dimensions and the measures. The level of detail of each record determines how detailed the fact table is. There can be several fact tables in a data warehouse, each describing a different business process, and they can share dimensions such as location, date, and more.

DIMENSIONS

A dimension categorizes the facts and measures in a fact table. For example, the city or region dimension describes a customer's location in a sales transaction. Other examples of dimensions are customer and product in a sales business. Dimensions also enable users to answer business questions, such as "How much did we earn from Product X this month?" In this question, the product is a dimension of the sales fact. A dimension is implemented as a table referenced by the fact table. It includes a primary key and a description or name, such as a product ID and name. More attributes can be defined within a dimension to categorize it further and build a hierarchy; for example, product category and subcategory describe a product. The primary key of a dimension can differ from the primary key of the source table, for instance when a table of customers from one database is combined with a table of customers from another. Such a key is called a surrogate key.

MEASURES

A measure is a property of the fact table that allows calculation: sum, average, count, minimum, or maximum. For example, you can sum sales amounts to form total sales. Measures can be additive, non-additive, semi-additive, or calculated. The sales amount is an additive measure: you can sum or average it. But the unit price is non-additive; it may not make sense to sum it up. Meanwhile, a calculated (computed) measure is derived from other values, as its name suggests. The total sales amount, for example, can be calculated as the product unit price plus tax.

Types of Data Warehouse Schema

The schema defines how information is structured in a data warehouse. Different schema types are designed to balance performance, storage efficiency, and query complexity, and the choice depends on the business requirements and data usage. Let's review the most common types.

STAR SCHEMA

The simplest and most widely used dimensional model is the star schema. It has the fact table at the center and the dimensions surrounding it. It can also be described as a parent-child table design: the fact table is the parent, while the dimensions are the children. But since it's so simple, there are no grandchildren.

Common characteristics of a star schema:

- The fact table is at the center, containing dimension keys (foreign keys) and measures.
- Primary keys in dimension tables are foreign keys in the fact table.
- No dimension table references another dimension table; they are denormalized.

Advantages of a star schema:

- Simpler queries because of the simple design.
- Easy maintenance.
- Faster access to records because of the denormalized dimension table design.

SNOWFLAKE SCHEMA

In a snowflake schema, dimension tables are normalized, and the physical structure resembles a snowflake shape. Compared to a parent-child design, snowflake schemas can have grandchildren.

Common characteristics of a snowflake schema:

- A fact table is also at the center, as in the star schema.
- The fact table references first-level dimension tables.
- A dimension table can reference another dimension table.
- The design is normalized.

Advantages of a snowflake schema:

- More flexible to changes in structure.
- Less disk space because of normalized dimension tables.

Advantages of DWH SQL Server

Choosing the platform is an essential step in building a data warehouse. SQL Server is an excellent option for many reasons:

Integration with Existing Ecosystems. SQL Server is part of the Microsoft ecosystem and integrates with popular services such as Power BI and Excel, simplifying report creation and data visualization for organizations.

Scalability and Performance. Designed to support systems of various sizes, from small setups to enterprise-level data warehouses, it offers features such as partitioning and columnstore indexes for faster data processing and improved query performance.

Cost-Effectiveness. Multiple editions, including [free ones](https://www.microsoft.com/en-us/sql-server/sql-server-downloads), make the platform accessible to organizations with different budgets without losing core features.

Security and Compliance. Data protection is ensured through encryption, access controls, and auditing capabilities. The platform also helps meet industry regulations like GDPR and HIPAA.

Advanced Analytics Integration. Built-in support for data science and machine learning makes it a good choice for implementing predictive models and gaining insights directly within the database.

Rich ETL Features. SQL Server Integration Services ([SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/)) makes it easier to handle data extraction, transformation, and loading tasks across diverse sources.

User-Friendly Management. Tools like SQL Server Management Studio (SSMS) offer a straightforward way to set up and maintain data warehouses, catering to users with varying levels of expertise.
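The star-versus-snowflake trade-off described above comes down to one extra join. Here is a small sketch contrasting a denormalized city dimension (star) with a normalized city/state pair (snowflake); it uses SQLite via Python for portability, and the table and column names are invented to echo the article's example, not taken from it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Star schema style: one denormalized dimension, state repeated per city.
cur.execute("""
CREATE TABLE dimCity_star (
    city_id    INTEGER PRIMARY KEY,
    city_name  TEXT,
    state_name TEXT   -- denormalized: state lives in the same table
)""")

# Snowflake schema style: the city dimension references a state dimension.
cur.executescript("""
CREATE TABLE dimState (state_id INTEGER PRIMARY KEY, state_name TEXT);
CREATE TABLE dimCity_snow (
    city_id   INTEGER PRIMARY KEY,
    city_name TEXT,
    state_id  INTEGER REFERENCES dimState(state_id)
);
""")

cur.execute("INSERT INTO dimCity_star VALUES (1, 'Austin', 'Texas')")
cur.execute("INSERT INTO dimState VALUES (1, 'Texas')")
cur.execute("INSERT INTO dimCity_snow VALUES (1, 'Austin', 1)")

# The snowflake needs one more join to recover the row the star stores flat.
star = cur.execute("SELECT city_name, state_name FROM dimCity_star").fetchone()
snow = cur.execute("""
    SELECT c.city_name, s.state_name
    FROM dimCity_snow c JOIN dimState s ON c.state_id = s.state_id
""").fetchone()
print(star == snow)  # True
```

The star version answers queries with fewer joins at the cost of repeating the state name on every city row; the snowflake version stores each state once but pays a join per query, which is exactly the trade-off listed in the advantages above.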
How to Build SQL Server Data Warehouse Time to put the concepts above to practical use. In this example, we will use a fictitious company called ABC Insurance Co. The company sells fire insurance policies for residential houses, apartments, and business structures. Our data warehouse example will have these simple characteristics: One (1) transactional database. The staging area will have a copy of the transactional database for the tables and columns needed. The data warehouse will use a star schema focusing on insurance policy sales. Step 1: Get Business Requirements To make informed decisions, analysts must be prepared to answer questions from stakeholders, such as business leaders or managers. RECEIVE BUSINESS QUESTIONS Output for this step: Business questions and their objectives. Answers to business questions in the form of reports and their formats. Your stakeholders have questions in mind. Your role is to answer those questions so they can make informed decisions. In our example, we only need to answer how many sales were made in a particular period. Of course, there are more. However, we will only answer this question to demonstrate the concepts we have learned. To get the answers, pay attention to the current state of the system and the desired outcome. Ask for the report formats they need. Then, proceed to the next step, which is discussed next. INSPECT THE SOURCE TRANSACTIONAL DATABASE AND CREATE THE STAGING AREA Output for this step: Staging area database. Plan for extracting data from the source to the staging area. The transactional database contains all the currently available information. For this example, we assume all the information we need can be found in the source database. If there is missing information, you must go back to your stakeholders. Then, resolve the matter separately. Then, go back to this step. After seeing the source database, identify what tables and columns you need. You don\u2019t need everything. 
If you need to clean the data, identify the steps required to do it. You may also need to clarify some parts of the stakeholders' data. Now, let's assume we already have what we need; below is a diagram of the staging area database. At this point, plan how to get the data into the staging area. After that, you're ready for the next step.

But first, one question deserves an answer: why create a separate database for the staging area? What's wrong with getting the data straight from the transactional database?

- Our example uses only one database source. In the real world, you don't deal just with sales: you can have separate systems for purchasing, petty cash, payroll, and more. If these have separate databases and you want to analyze them together, a shared staging area helps. How would you know? Ask whether there is information these [information systems](https://skyvia.com/blog/best-data-pipeline-tools/) can share (an employee list, for example). If yes, consolidating them into one staging area is an advantage.
- Another point is data cleansing. You don't want to touch a working transactional system, so you clean the data in the staging area.
- One more point is the precalculation of aggregates. If you need complex calculations or summarization before the data reaches the warehouse, you can also do that in the staging area.

Step 2: Build the SQL Server Data Warehouse

Finally, we have reached the focal point of this article. Start by setting up a new SQL Server database specifically for the data warehouse.

Output for this step:

- SQL Server database for the data warehouse.
- Plan to populate the data warehouse from the staging area.

To create a new database for the data warehouse, launch SQL Server Management Studio.
Then, right-click the Databases folder in the Object Explorer and select New Database. Name your database and set the database options. We named ours fire_insurance_DW.

CREATE THE FACT TABLE

Now, let's create the new tables in SQL Server, starting with the fact table. For our fire insurance sales example, the structure is shown below. The fact table includes three additive measures: premium, other_charges, and total_amount_paid. Meanwhile, total_charges is a computed measure based on premium + other_charges. Also note the foreign keys client_id, building_city_id, product_id, and statement_date; they will reference the dimension tables later.

CREATE THE DIMENSIONS

Next, create the dimension tables. We have product, client, city, and date dimensions; each serves a purpose in reporting. The table below shows all the dimensions in our data warehouse example. See each dimension in more detail:

- dimFireInsuranceProducts includes all fire insurance products. This dimension categorizes product-related figures, like total premium sales by product.
- dimClient includes the list of clients who bought fire insurance policies.
- dimCity includes the list of cities within states and defines the location of the insured property. Because the state information is included directly, this table is denormalized; if we built the data warehouse with a snowflake schema, a separate state dimension would be created instead.
- dimDate is a date dimension used to filter sales by period, from yearly down to daily summaries.

Check the final [database diagram](https://www.devart.com/dbforge/sql/studio/database-diagram.html) of our data warehouse below. The work doesn't end with creating the database for the data warehouse. So, what are the next steps?
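The tables described above can be sketched in T-SQL as follows. This is a hedged sketch, not the article's exact diagram: column lists are trimmed to what the text mentions, and for simplicity the dimension keys are assumed to be carried over from the transactional source rather than generated as surrogates:

```sql
CREATE TABLE dimFireInsuranceProducts (
    product_id   INT PRIMARY KEY,            -- key carried over from the source
    product_name NVARCHAR(100) NOT NULL
);

CREATE TABLE dimClient (
    client_id   INT PRIMARY KEY,
    client_name NVARCHAR(100) NOT NULL
);

-- Denormalized city dimension: state lives inside it (star schema).
CREATE TABLE dimCity (
    building_city_id INT PRIMARY KEY,
    city_name        NVARCHAR(50) NOT NULL,
    state_name       NVARCHAR(50) NOT NULL
);

CREATE TABLE dimDate (
    transaction_date DATE PRIMARY KEY,
    [year]           INT NOT NULL
    -- plus the other calendar columns generated by the script in Step 3
);

CREATE TABLE factFireInsuranceSales (
    sales_id          INT IDENTITY PRIMARY KEY,
    client_id         INT  NOT NULL REFERENCES dimClient(client_id),
    building_city_id  INT  NOT NULL REFERENCES dimCity(building_city_id),
    product_id        INT  NOT NULL REFERENCES dimFireInsuranceProducts(product_id),
    statement_date    DATE NOT NULL REFERENCES dimDate(transaction_date),
    premium           DECIMAL(12, 2) NOT NULL,   -- additive measure
    other_charges     DECIMAL(12, 2) NOT NULL,   -- additive measure
    total_amount_paid DECIMAL(12, 2) NOT NULL,   -- additive measure
    total_charges AS (premium + other_charges)   -- computed measure
);
```

Note how total_charges is defined as a computed column, so SQL Server derives it automatically instead of storing a value that could drift out of sync.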
Step 3: Extract Data from the Transactional Database into the SQL Server Data Warehouse

Here, we extract information from the source database, load it into the staging area, and then load it into the data warehouse. Before you extract data, do not forget to create the field mappings from source to target. You can find an example of fact table mappings below.

For the date dimension, you also need a script to generate data. The sample SQL code below builds a date table from 2020 to 2021, populating the dimDate dimension table in our data warehouse.

```sql
DECLARE @StartDate date = '01/01/2020';
DECLARE @EndDate date = '12/31/2021';
;WITH seq(n) AS
(
    SELECT 0 UNION ALL SELECT n + 1 FROM seq
    WHERE n < DATEDIFF(DAY, @StartDate, @EndDate)
),
d(d) AS
(
    SELECT DATEADD(DAY, n, @StartDate) FROM seq
),
src AS
(
    SELECT
        [transaction_date]   = CONVERT(date, d),
        [year]               = DATEPART(YEAR, d),
        [month_number]       = FORMAT(d, 'MM'),
        [year_month_number]  = FORMAT(d, 'yyyy-MM'),
        [year_month_short]   = FORMAT(d, 'yyyy-MMM'),
        [month_name_short]   = FORMAT(d, 'MMM'),
        [month_name_long]    = FORMAT(d, 'MMMM'),
        [day_of_week_number] = DATEPART(WEEKDAY, d),
        [day_of_week]        = DATENAME(WEEKDAY, d),
        [day_of_week_short]  = FORMAT(d, 'ddd'),
        [quarter]            = 'Q' + CAST(DATEPART(QUARTER, d) AS NCHAR(1)),
        [year_quarter]       = CAST(YEAR(d) AS NCHAR(4)) + '-Q' + CAST(DATEPART(QUARTER, d) AS NCHAR(1)),
        [week_number]        = DATEPART(WEEK, d)
    FROM d
)
INSERT INTO dimDate
SELECT * FROM src
ORDER BY transaction_date
OPTION (MAXRECURSION 0);
```

If you need more years, simply change the start and end dates in the script.

Step 4: Build the Sample Report

Finally, you can build the reports and dashboards your stakeholders asked for. You may use Excel, because they are probably familiar with it, or Power BI or [SQL Server Reporting Services](https://learn.microsoft.com/en-us/sql/reporting-services/create-deploy-and-manage-mobile-and-paginated-reports?view=sql-server-ver16) (SSRS).
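Whichever reporting tool you pick, a "sales per period" report ultimately reduces to an aggregate query over the star schema. A sketch, assuming the fact and dimension table names used in this article's example:

```sql
-- Total premium and total amount paid per product per quarter.
SELECT
    p.product_name,
    d.year_quarter,
    SUM(f.premium)           AS total_premium,
    SUM(f.total_amount_paid) AS total_paid
FROM factFireInsuranceSales AS f
JOIN dimFireInsuranceProducts AS p
    ON p.product_id = f.product_id
JOIN dimDate AS d
    ON d.transaction_date = f.statement_date
GROUP BY p.product_name, d.year_quarter
ORDER BY d.year_quarter, p.product_name;
```

This is the payoff of the dimensional model: swapping dimFireInsuranceProducts for dimClient or dimCity in the join answers "sales by client" or "sales by location" with the same query shape.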
OUTPUT: SAMPLE REPORT

A possible report output for the data warehouse we've built is shown below. It uses Power BI to show product sales per period. A few more reports are possible with this data warehouse, such as client sales or sales by location.

ETL Tools for Data Integration

By itself, a SQL Server data warehouse is just a place for storing large volumes of information; it doesn't automatically solve business problems or generate insights. For it to have value, it needs to become part of your data ecosystem, and that ecosystem can't work without integrating your services and sources. That's where ETL (Extract, Transform, Load) processes step in.

How ETL Tools Solve Business Problems

- Data synchronization: [ETL tools](https://skyvia.com/blog/etl-tools/) bring information from different sources into a consistent environment, giving businesses a 360-degree view of their operations.
- Consistency and data quality: these tools apply rules to cleanse, standardize, and validate data, which is essential for reliable reporting and decision-making.
- Automated data workflows: scheduling and automation keep records up to date between your sources and the data warehouse, reducing errors and manual effort.

[SQL Server Integration Services](https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) (SSIS) is a popular on-premises ETL tool tightly integrated with SQL Server, and you can pair it with SQL Server Agent to schedule and automate workflows. It is powerful but has drawbacks: as an on-premises solution, it can be costly to set up and run, and it's not beginner-friendly, so creating and managing workflows is often easier in alternative solutions.
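To show what "cleansing and keeping records up to date" looks like at the SQL level, here is a minimal sketch of a staging-to-warehouse upsert. The staging database and table names (staging.dbo.clients) are hypothetical, and the dimClient layout is assumed to match the example in this article:

```sql
-- Keep the client dimension in sync with the staging copy.
-- Cleansing (trimming stray whitespace) happens in the staging query,
-- so the working transactional system is never touched.
MERGE dimClient AS tgt
USING (
    SELECT client_id,
           LTRIM(RTRIM(client_name)) AS client_name
    FROM staging.dbo.clients
) AS src
ON tgt.client_id = src.client_id
WHEN MATCHED AND tgt.client_name <> src.client_name THEN
    UPDATE SET client_name = src.client_name        -- refresh changed names
WHEN NOT MATCHED BY TARGET THEN
    INSERT (client_id, client_name)
    VALUES (src.client_id, src.client_name);        -- add new clients
```

A dedicated ETL tool automates exactly this kind of step, plus scheduling, logging, and error handling, which is why hand-written scripts rarely scale past a few sources.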
For those exploring alternatives, [Skyvia](https://skyvia.com/) provides a no-code cloud solution: an intuitive tool that users love, backed by a 4.8/5 rating on [G2 Crowd](https://www.g2.com/products/skyvia/reviews#survey-response-9097470), and ranked among the easiest ETL tools. Skyvia's Data Integration product offers tools to help move data, combining ETL, ELT, and reverse ETL functionality:

- [Import](https://skyvia.com/data-integration/import) gathers data from different sources (cloud applications, flat files such as CSV, or other databases) into one target system. It features advanced mapping capabilities that let you configure field-to-field mappings, use expressions for transformations, and perform lookups to enrich your records. This is useful when you need one-time or scheduled migrations without writing code.
- [Replication](https://skyvia.com/data-integration/replication) keeps the latest records from cloud applications at your fingertips. Once the initial replication is completed, the database is updated by pulling only the latest changes from the source. Set up a replication schedule and keep data current without manual effort.
- [Data Flow](https://www.youtube.com/watch?v=Opc1W9lq-Es&list=PLE66g1kd4iq3OOjHBEEbhEaddZXfX2JB1&pp=iAQB) offers a powerful data pipeline designer for complex scenarios. With Data Flow, you can join data from multiple sources using lookups and conditional splits, chain multiple transformation stages within a single workflow, and load the processed records into one or multiple targets.

See Skyvia's capabilities for yourself and try it out for free! For a broader look at the ETL landscape and tips on selecting the right tool for your business, check out our [blog post on ETL tools](https://skyvia.com/blog/sql-server-etl-tools/).
It covers everything from on-premises solutions like SSIS to fully managed, cloud-based services like Skyvia, so you can choose the best fit for your organization's needs.

Conclusion

Creating a SQL Server data warehouse is a straightforward way to transform your raw information into useful insights that help you uncover trends, improve decision-making, and support business growth. In this article, you learned how to build a SQL Server data warehouse from scratch. The example is simple; however, it covers most of the basic needs of a data warehouse. You can create a robust data warehouse in SQL Server by following the steps: gathering business requirements, designing and implementing the schema, extracting and transforming data, and creating insightful reports. Let's recap this topic with some key takeaways!

Key Takeaways

- Data warehouses are central information repositories for analysis, artificial intelligence, and machine learning. Data flows in from different sources, such as transactional databases, flat files, or cloud applications, and is updated regularly so informed decisions can be made on time.
- A staging area is a separate database used to consolidate records from different sources, clean data, and precalculate aggregates before loading them into the data warehouse.
- A dimensional model is needed for efficient processing in a SQL Server data warehouse.
- A fact table contains facts (measures) about a business process and sits at the center of the schema. There can be multiple fact tables in a SQL Server data warehouse.
- Dimensions categorize the facts and measures in a fact table and allow you to answer business questions.
- A measure is a property of a fact table that allows calculations.
- The star schema is the simplest dimensional model: a fact table at the center with dimensions surrounding it. In a star schema, dimension tables do not reference other dimension tables.
- The snowflake schema is another dimensional model, in which dimension tables can reference other dimension tables.
- SQL Server is a good option for establishing a data warehouse because it integrates with existing ecosystems, offers scalability and performance, and provides security and compliance.
- Building a SQL Server data warehouse involves gathering business requirements, creating the data warehouse (database, fact/dimension tables), extracting/loading data via ETL (e.g., SSIS, [Skyvia](https://skyvia.com/)), and building sample reports (e.g., in Excel or Power BI).

FAQ for Data Warehouse from SQL Server

Can a SQL Server data warehouse transform raw data into actionable insights for my business?
Yes, with a nuance. SQL Server works as a centralized repository that stores records from various sources. You get insights from it by connecting business intelligence tools like Power BI, Excel, or Tableau.

Do ETL tools simplify the process of loading data into a data warehouse?
Yes, ETL (Extract, Transform, Load) tools do simplify data integration. They help clean, map, and load everything into the data warehouse while maintaining data quality and consistency. You can try [Skyvia](https://skyvia.com/) as your ETL tool for free.

Are there any options to try SQL Server for free?
Yes, you can experiment with the free Express or Developer editions of SQL Server. Visit the [SQL Server Downloads](https://www.microsoft.com/en-us/sql-server/sql-server-downloads) page to get an installer for your computer.

Can I scale my warehouse if needed?
Yes, SQL Server is designed to scale. It can handle data from small setups to large enterprise environments without compromising performance.
[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

Stripe to Salesforce Integration: Easy to Follow Guide

By [Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/), April 23, 2025
In today's fast-paced, customer-oriented digital world, users expect a seamless payment experience. A secure and convenient payment solution integrated into a vendor's platform is a sure way for businesses to earn customer trust and build brand loyalty. That's where payment integrations come in. Salesforce, a flagship business platform, and Stripe, a top payment provider, make a perfect duo when it comes to visibility into transactions and customer behavior. Is your payment process aligned with the customer journey? If not, this might be the right time to rethink your approach for an enhanced user experience. Read on to discover how to integrate Stripe with Salesforce and the benefits it can bring.

Table of contents

- What is Stripe?
- What is Salesforce?
- Why Integrate Salesforce with Stripe?
- Salesforce Stripe Integration Methods
  - Method 1: Using the Stripe Connector for Salesforce
  - Method 2: Custom Integration using Apex and JavaScript
  - Method 3: Third-Party Integration Tools
- Best Practices for Salesforce Stripe Integration
- Common Challenges & Troubleshooting Tips
- Conclusion

What is Stripe?

[Stripe](https://skyvia.com/connectors/stripe) is a leading payment processing platform designed to help businesses accept payments online quickly and securely. Its feature-rich functionality, fraud prevention mechanisms, and native integration options make it the #1 choice for many businesses, particularly in the [e-commerce domain](https://skyvia.com/blog/best-ecommerce-integration/). Stripe is distinguished by high transaction processing speed and security thanks to its easy-to-use suite of payment APIs.
From a business perspective, it's a very convenient tool for embedding payments into websites and online stores.

Key Features

- Pre-built integrations and libraries for every tech stack, from React and PHP to .NET and iOS.
- Support for 135+ currencies and a wide range of payment methods, working with various banks globally.
- Comprehensive security features, including tokenization, encryption, and built-in compliance with the Payment Card Industry (PCI) data security standards.
- ML-based fraud protection with Stripe Radar, which detects and blocks fraudulent transactions.

Common Use Cases for Stripe

- Marketplaces and [e-commerce websites](https://skyvia.com/blog/how-to-connect-stripe-with-shopify/)
- Subscription-based services
- [CRM platforms](https://skyvia.com/blog/hubspot-and-stripe-integration/#Introduction-to-HubSpot)

What is Salesforce?

[Salesforce](https://skyvia.com/connectors/salesforce), also known as SFDC, is the world's leading CRM platform, trusted by thousands of brands. For five consecutive years, it has been [ranked among the top 3 in the Forbes list](https://www.salesforce.com/ca/blog/forbes-salesforce-most-innovative-companies/) of most innovative companies. Salesforce provides companies with a SaaS product that covers all stages and moments of the customer lifecycle. One of its many remarkable features is flexibility: Salesforce allows customization of virtually everything on the platform, from front-end interfaces to database models. Like Stripe, it supports a wide range of integrations via AppExchange, APIs, and custom development.

Key Features

- Unified view of every customer by consolidating all related data (contact details, communication history, purchase behavior, service issues, and preferences) within a single pane of glass.
- Automation of all steps of the sales process, from lead capture to deal closure.
- Automation of marketing tasks, including email nurturing, segmentation, A/B testing, behavior tracking, and campaign performance analytics.
- Synchronization of calendars, contacts, and schedules with popular email platforms like Microsoft Outlook and Gmail.
- Built-in analytics and reporting tools for visualizing key metrics, tracking KPIs, and monitoring team or campaign performance in real time.

Common Use Cases for Salesforce

- Sales process management
- Customer relationship management
- Marketing automation
- Customer service and support
- Analytics and reporting
- Partner relationship management
- Subscription and billing management (when integrated with billing platforms like Stripe)
- Field service management

Why Integrate Salesforce with Stripe?

Data visibility is always an advantage, especially when it comes to finance and sales. Integrating CRM with accounting applications allows you to continuously collect sales data and track statistics in real time, enabling better decision-making. Moreover, it can help enrich a customer profile for more personalized future interactions and cross-selling opportunities. But that's not all. Here are some of the key benefits of integrating Salesforce with Stripe:

- Customer data synchronization. Create a 360° customer overview: a single, integrated environment for sales, marketing, and finance teams to work together, with unified customer data on both platforms. Business users receive client information faster, meaning they can resolve issues more quickly, bypassing the accounting link.
- Streamlined payment processing. Free your sales and support teams from manual payment handling and data re-entry. Integrating the Stripe payment gateway with Salesforce makes it possible to initiate and monitor payments directly from within the SFDC platform.
- Improved sales and financial data accuracy. Stripe-Salesforce real-time sync of payment data makes spreadsheets and manual reconciliations a thing of the past.
Build your financial reports, forecasts, and sales performance metrics on up-to-date, accurate data only, minimizing the risk of errors and ensuring complete, timely data.

- Automated billing and Salesforce subscription management. Handle recurring payments and subscriptions more efficiently. The Salesforce Stripe integration lets you embed Stripe's advanced billing features into Salesforce workflows, automating the entire billing lifecycle.
- Enhanced customer experience. Convenient and frictionless payments are a major driver of customer satisfaction. By integrating Stripe with Salesforce, businesses make their customers' checkout experience faster and available in their preferred payment methods.

Salesforce Stripe Integration Methods

The three methods explored below are the most practical and widely used ways to connect Stripe with Salesforce: they are well documented and cover the full spectrum of technical skill levels. Although there can be a few variations, most approaches fall under these main categories:

- Native Stripe connector for Salesforce
- Custom API integration using Apex + JavaScript
- Third-party tools

Here's a quick comparison of the methods, with full details in the sections that follow.
| Criteria | Native Stripe Connector | API Integration | Third-Party Tools |
| --- | --- | --- | --- |
| Key differences | Built and maintained by Stripe; tightly integrated with the Salesforce Platform | Full-code, developer-driven integration; maximum flexibility | Low-code, drag-and-drop automation; connects multiple apps |
| Ease of implementation | Admin-friendly; guided setup | Requires Salesforce + API dev skills | No-code, but setup varies by tool |
| Cost | Free, but requires Salesforce licenses | No license cost, but higher dev effort; depends on internal dev resources | Varies: freemium to enterprise pricing |
| Best for | Businesses needing fast, secure access to Stripe data inside Salesforce | Companies with complex workflows or custom logic requirements | Small/medium teams needing quick automation without dev effort |
| Customizability | Moderate | High | Low to moderate, depending on the tool |

Method 1: Using the Stripe Connector for Salesforce

This method is ideal for standard use cases. It enables native integration with Salesforce through Stripe's in-house developed connectors, available on [AppExchange](https://appexchange.salesforce.com/), the Salesforce store of pre-built and customizable apps.

- [Stripe Connector for Salesforce Billing](https://docs.stripe.com/connectors/salesforce-billing) (deprecated): originally designed to integrate with Salesforce's Billing and CPQ modules, focused on supporting subscription-based billing workflows.
- [Stripe Connector for Salesforce Platform](https://docs.stripe.com/connectors/stripe-connector-for-salesforce/overview) (recommended): a more modern, general-purpose connector that works with Salesforce CRM, core, and platform products. Built to support broader use cases, including e-commerce, customer management, and custom apps.

Best for

- Customer payment visibility for sales teams.
- Automated workflows based on payment events.
- Non-subscription-based businesses.

Pros

- Native integration: built by Stripe specifically for Salesforce.
- Secure & compliant: OAuth-based authentication and secure API key usage maintain data integrity and protect sensitive customer/payment info.
- Webhook support: enables real-time updates from Stripe.
- Quick setup with a no-code wizard: friendly for admins and business users.

Cons

- One-way sync by default: the connector primarily pushes Stripe events into Salesforce but doesn't sync Salesforce data back to Stripe without custom development or middleware.
- Lack of out-of-the-box automation: while the webhook sync is real-time, there's limited logic to act on the data unless you build Flows, Apex triggers, or custom automation.
- Minimal reporting/analytics: Stripe data in Salesforce is not deeply visualized; reporting may require linking to Salesforce Analytics/Tableau CRM.

Step-by-step Guide

Step 1. Install the Stripe connector

1. Navigate to the [Stripe connector installation guide](https://docs.stripe.com/plugins/stripe-connector-for-salesforce/installation-guide) to get the installation link.
2. Enter the Salesforce organization where you plan to install the connector. After logging in, you'll be directed to the installation page.
3. Choose Stripe for Salesforce Platform and click Get It Now.
4. Follow the prompts to install the app. Choose Install for Admins Only (recommended by Stripe for better control of user access and permissions).
5. Approve third-party access in the dedicated modal to enable data exchange between Salesforce and Stripe.
6. Click Continue to complete the package installation.

Step 2. Configure the permission sets

The package includes two permission sets that enable access to specific application features:

- Stripe Connector Integration User: for users managing setup configurations and event subscriptions.
- Stripe Connector Data User: grants access to Stripe_Event__c object records.

Navigate to Setup → Users → Permission Sets.
Assign the permission set based on user activity:

- Stripe Connector Integration User: for setup/admin tasks.
- Stripe Connector Data User: for general data access.

Note: Permission set assignments may vary.

- Integration User: requires cloning the permission set and manually enabling certain system-level privileges, as Stripe system permissions cannot be packaged due to limitations of Salesforce AppExchange apps. Follow the [guidance](https://docs.stripe.com/plugins/stripe-connector-for-salesforce/installation-guide#stripe-connector-integration-user) in the official documentation for detailed steps.
- Data User: choose Stripe Connector Data User on the Permission Sets page, click Manage Assignments → Add Assignments, and assign the permission set to the required users.

Step 3. Launch the initial setup wizard

1. In App Launcher, search for and open Stripe for Salesforce Platform.
2. Click Get Started and complete the steps outlined in the setup prompt:
   - Authorize webhook events. In the new window, click Allow to grant your org the necessary permissions.
   - Enter your Stripe API key and click Add Account.
   - Choose a Stripe API version (usually the latest) from the drop-down list.
   - Click Install Package and finish the setup wizard.

Step 4. Use metadata to link Salesforce and Stripe records

Stripe's application offers to add the necessary client data to the metadata field. If you add a Salesforce account ID as metadata for each Stripe customer, you can see payments related to a specific Salesforce customer. However, this step is not automated: you must manually enter the correct payment data into the CRM.

Step 5. Configure webhooks

1. In App Launcher, open Stripe for Salesforce Platform.
2. Select your connected Stripe account.
3. Click All Webhook Events.
4. Choose a Stripe Object and toggle on the events you want Salesforce to track.
5. Save your settings.

Step 6. Test the integration

Simulate Stripe events in test mode:

1. In the Stripe dashboard, switch to test mode.
2. Use Stripe's test card numbers to simulate transactions.
3. Trigger events such as payment_intent.succeeded, invoice.paid, and charge.refunded.

Verify in Salesforce:

1. Go to the Stripe Event Logs or the Stripe object records in Salesforce.
2. Check that the appropriate records have been created and linked to the correct Salesforce Accounts or Opportunities.

Method 2: Custom Integration using Apex and JavaScript

This method involves using Apex, Salesforce's proprietary programming language, and JavaScript, often within Lightning Web Components (LWC), a UI development framework on the SFDC platform. It offers the highest level of customization and is often preferred over low-code tools for complex, non-trivial tasks. Using code allows you to programmatically define the logic behind multi-layered scenarios, such as dynamic checkout flows based on user roles or regions, payment-triggered workflow automation, multi-step transactions with validation, partial payments, and more.

Best for

- Specialized workflows that cannot be addressed by standard integration tools.
- Complex data handling scenarios.

Pros

- Unique implementation: supports custom solutions that precisely fit business requirements.
- Full control: enables precise control over the integration, including security and performance.
- Scalability: adjusts to accommodate growing business needs.

Cons

- Development effort: requires technical expertise and resources to build and maintain custom Salesforce integrations.
- Longer implementation timeline: the method involves building the logic from scratch, including defining business requirements; writing, testing, and deploying custom code; authentication and security setup; etc.
- Maintenance overhead: requires constant upkeep to accommodate API changes.

Step-by-step Guide

Step 1. Set up Stripe API keys

1. Log in to Stripe.
2. In your Stripe account, navigate to Developers → API keys.
3. Retrieve the publishable key and secret key.

Step 2.
Set up Salesforce named credentials

In Salesforce, [named credentials](https://help.salesforce.com/s/articleView?id=xcloud.nc_basics.htm&type=5) provide a secure way to call out to external APIs. A named credential specifies the endpoint URL and the required authentication parameters (username, password, OAuth, etc.) in one definition.

1. In Salesforce Setup, enter Named Credentials in the search box and click it.
2. Select New to create a new credential.
3. Specify the required parameters:
   - Label: StripeAPI
   - Name: StripeAPI
   - URL: https://api.stripe.com

Note: Before creating a named credential, you must first create and configure an [external credential](https://help.salesforce.com/s/articleView?id=xcloud.nc_create_edit_external_credential.htm&type=5) to link it to. The external credential defines how Salesforce authenticates to an external system. You'll also need to create a principal record within the external credential; this acts as the "login identity" Salesforce uses to communicate with Stripe, and it is where you enter the API keys retrieved from your Stripe account. Once your principal is saved and tied to the external credential, your named credential (StripeAPI) will inherit that authentication setup.

Step 3. Develop Apex classes for making API requests

This step involves writing custom server-side logic in Apex.

1. In Salesforce Setup, navigate to the Developer Console.
2. Click File → New → Apex Class.
3. Name the Apex class.
4. Add the Apex class code.

Note: For more details and code examples, visit the [official documentation](https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_qs_HelloWorld.htm).

Step 4. Configure frontend interactions using JavaScript or LWC

This step is about integrating Stripe's payment elements within Salesforce. It involves initializing Stripe inside your LWC's JavaScript file.
- Use the loadScript function to include [Stripe's JavaScript](https://docs.stripe.com/js) library in your LWC.
- Configure the Stripe component with your publishable API key.
- Inside your component's HTML and JS, set up Stripe's [payment elements](https://docs.stripe.com/js/element).
- Implement functions to manage payment submissions and handle responses from Stripe.

**Step 5. Configure webhooks and event handling**

Webhooks allow Stripe to send real-time event notifications to Salesforce:

1. Create a public site in Salesforce: set up a Salesforce site to expose an endpoint for receiving Stripe webhooks.
2. Develop an Apex REST class: create an Apex class annotated with @RestResource to handle incoming webhook events.
3. Register the webhook in Stripe: in your Stripe dashboard, go to Developers → Webhooks.
4. Add endpoint: enter the Salesforce Site URL and select the events you want to listen for (e.g., payment_intent.succeeded).

**Step 6. Test and deploy**

Thoroughly test the integration before deployment:

- Using Stripe's test mode, simulate transactions without processing real payments.
- Validate success and failure cases, ensuring proper error handling and data synchronization.
- Review Salesforce debug logs and Stripe's logs to identify and resolve issues.

## Method 3: Third-Party Integration Tools

This method enables fast, no-code integrations by using cloud platforms such as [Skyvia](https://skyvia.com/), Zapier, Workato, Tray.io, MuleSoft, and others. While their capabilities vary in complexity and in how they handle data volumes, these tools share more similarities than differences:

- Most offer drag-and-drop or visual workflows to connect applications.
- All provide pre-built connectors or templates for common systems, including Stripe and Salesforce.
- They automate repetitive tasks using conditional logic, triggers, and actions.
- They support real-time data flows or batch syncing, depending on the platform and your plan.

**Best for**

- Teams without technical resources looking for fast and easy integrations.
- Marketing and sales teams automating lead-to-payment workflows.
- Businesses needing multi-app workflows, e.g., Salesforce → Stripe → Slack or Gmail.

**Pros**

- No/low-code setup: build integrations through visual interfaces without writing code.
- Quick deployment: pre-built connectors and templates make it fast to get up and running.
- Workflow automation: customize workflows to trigger actions like invoice creation, status updates, or syncing customer data.
- Multi-app connectivity: connects not just Stripe and Salesforce, but also other platforms in the same flow.

**Cons**

- Limited customization for complex use cases: low-code tools may hit limits when you need deeply customized logic or heavy API manipulation.
- Cost can scale quickly: since pricing often depends on task volume or workflow complexity, high-usage or enterprise-grade setups can get expensive.

In the sections below, we'll demonstrate the ease and efficiency of this integration method using Skyvia. We'll walk through two practical scenarios for connecting Stripe and Salesforce, both common customer use cases:

1. Automate the customer data upload from the payment gateway to the CRM system (as a periodic update).
2. Upload pending payment information from Stripe PaymentIntents to Salesforce Opportunities.

Since Skyvia is a highly flexible platform with a wide range of capabilities, there are several ways to set up periodic data updates, such as data import or synchronization. In this guide, we'll focus on the most advanced option: the [Data Flow](https://docs.skyvia.com/data-integration/data-flow/).

## Scenario 1: Automate Customer Data Upload from Payment Gateway to CRM

This integration facilitates data visibility between different departments, such as sales and accounting.
Regular syncs between a financial application and a CRM keep data current, allowing businesses to maintain a database of customers who have interacted with them in the past.

**Things to Consider Before You Start**

Before proceeding with the implementation, it's important to understand how data is structured and stored in Stripe and Salesforce, as this affects the integration setup. Stripe stores customer-related information in the Customers object, covering both those who have made a payment and those still in the payment process. The Customers table also lacks a field recording the last modification date, a limitation that complicates incremental data syncing.

Salesforce, in turn, offers several candidate objects that could receive the incoming Stripe data:

- Leads: the best fit in terms of available fields and data adequacy.
- Contacts: the most populated and commonly used object for business representatives; suitable if your Stripe records represent real individuals you plan to engage.
- Customers: the least suitable, for two reasons. In Stripe, the Customers table includes users who haven't yet made a purchase, so they aren't actually customers yet; also, the Salesforce Customers object may lack the fields needed to store Stripe data effectively.

So, for this integration, we'll synchronize Stripe's Customers object with Salesforce's Lead object.

**Step-by-step Guide**

**Step 1. Sign up and configure your Skyvia account**

[Sign in](https://app.skyvia.com/) to the platform with your Skyvia account. If you don't have one yet, you can create it for free. After signing in, you get a default workspace, a one-stop shop for all operations on the Skyvia platform.

In the next step, we'll establish connections to both Stripe and Salesforce; we'll need them to create an integration that moves Stripe's data into Salesforce.

**Step 2.**
**Connect Skyvia to Stripe and Salesforce**

First, let's create a connection to Stripe:

1. In your Skyvia account, click + Create New and choose Connection.
2. Select Stripe from the list of connectors.
3. Specify the API key to create the connection. Note: you can generate API keys in your Stripe account. If you need to retrieve data from a connected account, specify the optional Connected Account Id.
4. Click Create Connection.

To create a connection to Salesforce, follow these steps:

1. Click + Create New and choose Connection.
2. Select Salesforce from the list of connectors.
3. On the connection configuration page, specify the environment in which to perform operations.
4. Click Sign in with Salesforce and log in with your Salesforce credentials. Note: the connection uses OAuth authentication, so you don't have to share your credentials with Skyvia. Instead, Skyvia generates a secure authentication token bound to the connection.
5. Click Create Connection.

**Step 3. Specify variables for the data flow**

As mentioned before, for this use case we'll be using [Data Flow](https://docs.skyvia.com/data-integration/data-flow/), a visual pipeline designer for advanced synchronization scenarios. To start, let's create the parameters and variables that will drive the integration.

The first parameter carried between runs is the date the data was created. Since each execution should retrieve only new data, we specify the NewData parameter:

1. In Skyvia, click + Create New and choose Data Flow.
2. In the Data Flow editor, click Parameters in the top-left toolbar.
3. In the modal that opens, click + Parameter.

After a completed run, this parameter must be updated so that only new data is retrieved during the next execution. To do this, we'll create the MaxCreatedDate parameter using the Value component:

1. In the Data Flow editor, choose Value from the list of components on the left.
2. Drag the component onto the canvas.
3. Configure the parameters:
   - under Value, specify the input field (in this case, CreatedDate);
   - select max for the Value type;
   - choose NewData for Variable.
4. Check the Update Variable on Data Flow Completion box to enable automatic updates after each run.

We'll also define the numbers of successful and failed rows as variables:

1. Choose Variables in the top-left toolbar.
2. Click + to add a variable.

**Step 4. Define data flow from Stripe to Salesforce**

In this step, we'll connect the pre-created connections into a data flow, starting with the Source component:

1. Choose Source from the list of components on the left and drag it onto the canvas.
2. Configure the connection parameters: choose Stripe for Connection and Execute Command for Actions.
3. Select the fields whose values should be transferred to Salesforce. In this example, the fields are: Name, City, Country, Address (line 1, line 2), ZIP code, State, email, phone, date of account creation, and Metadata (the Last name and Company name lines are mandatory for Salesforce).

We also select MAXDATE, an internal variable that stores the most recent timestamp from the previous update, and map it to the NewData parameter.

Next, configure the Target component:

1. Choose Target from the list of components on the left.
2. Drag the component onto the canvas.
3. Configure the connection parameters: choose Salesforce for Connection, Insert for Actions, and Lead for Table.

**Step 5. Map Stripe to Salesforce fields**

In the Mapping Editor, map the Source fields to the Target columns.

Note: Salesforce requires the Last Name field, but Stripe doesn't have one; it only provides a single Name field. To address this, we use the Token function to extract the necessary part from Stripe's Name field. We also need to transform the Company value, as the platforms use a different number of fields: in Salesforce it's a single field, while Stripe splits it into two fields that need to be merged.

**Step 6.**
**Configure the Result settings**

The last step is to configure the Result settings. Here we define what counts as success and error rows using the preset variables. Once done, save and run the integration.

**Step 7. Schedule and automate data synchronization**

You can schedule your data flow for automatic execution. This option is useful for setting up periodic runs on specific days and times, or for delaying execution to a later point. To enable automated data synchronization, click Schedule on the Overview page and configure the timing.

**Step 8. Monitor integration workflow**

When the run is completed, check the integration results on the Monitor tab. If errors occur, navigate to the Logs tab and click the run results to review the failed records.

## Scenario 2: Upload Pending Payment Information from Stripe PaymentIntents to Salesforce Opportunities

It's common to have clients who've been billed but, for some reason, haven't completed their payment. These prospects are potential customers with a strong likelihood of converting. Since Stripe stores data on these unfinished transactions, it's a fair opportunity to turn that information into potential deals, as well as to understand why the client left without completing the purchase.

**Things to Consider Before You Start**

As in Scenario 1, it's important to consider the differences in how Stripe and Salesforce store payment data. Stripe uses two key objects to track payments:

- PaymentIntents: stores data about in-progress or attempted payments.
- Charges: records finalized payments, including successful transactions and any associated refunds.

Since this integration scenario focuses on analyzing incomplete transactions, we'll work with the PaymentIntents object, selecting clients by payment status, such as requires_confirmation, processing, and others. In Salesforce, the most appropriate object for representing payment transactions is the Opportunity.
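To make the selection rule concrete, the filter described above (keep every PaymentIntent whose status is not a successful one) can be sketched in a few lines of JavaScript. The status values come from Stripe's PaymentIntent lifecycle; the records themselves are made up.

```javascript
// Sketch: select "pending" PaymentIntents, i.e., everything except
// completed payments. Depending on your definition of pending, you
// might also exclude 'canceled'.
function pendingPayments(paymentIntents) {
  return paymentIntents.filter(pi => pi.status !== 'succeeded');
}
```

In the Skyvia Data Flow, this same rule is expressed declaratively as a status filter on the Source command rather than in code.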
It uses the StageName field to display the current status of a deal within the sales pipeline, such as Prospecting, Qualification, Needs Analysis, Proposal/Price Quote, Closed Won, etc. Since the payments we're syncing haven't yet been completed, we'll map them to Opportunities in the Needs Analysis stage.

**Step-by-step Guide**

**Step 1. Connect Skyvia to Stripe and Salesforce**

Establish connections to Stripe and Salesforce, as outlined in Step 2 of Scenario 1.

**Step 2. Specify variables for the data flow**

As in the previous case, first create a parameter for regular data updates; it is updated every time the Data Flow runs. Enable automatic updates after each run using the Value component (see Step 3 in Scenario 1). Next, create two variables to count successful and unsuccessful uploads.

**Step 3. Define data flow from Stripe to Salesforce**

In this step, we'll connect the pre-created connections into a data flow. To configure the Source component:

1. Choose Source from the list of components and drag it onto the canvas.
2. Specify Stripe for Connection and Execute Command for Actions.
3. Select PaymentIntents to choose clients with any status except the successful ones.

As in the previous case, we'll use the MAXDATE variable to retrieve only new records during the next execution.

Next, configure the Target connector, which is Salesforce:

1. Specify Insert for Actions.
2. Choose Opportunity for Table.

**Step 4. Map Stripe to Salesforce fields**

In the Mapping Editor, use auto-mapping to match the Source and Target columns. Most fields are mapped automatically, with a few exceptions:

- Name: Salesforce requires a unique name when creating a new Opportunity. In this example, the name is generated as Invoice + InvoiceID.
- StageName: a required field in Salesforce, set here to Needs Analysis.
- Amount: must be adjusted as Amount*0.01, since Stripe stores values in cents, while Salesforce expects amounts in dollars.

**Step 5. Configure the Result settings**

Configure the Result settings by defining success and error rows with the preset variables. Once done, save and run the integration. If necessary, set a schedule to run the integration automatically.

## Best Practices for Salesforce Stripe Integration

Synchronizing Salesforce and Stripe is a common business case. Following the tried-and-tested practices below will help you implement it in the most efficient way:

- Prioritize security by using secure API keys, setting appropriate permission levels, and regularly auditing access.
- Evaluate the available methods through the lens of your business needs. Consider criteria such as data volume, the necessity of real-time processing, and the required customization level, and choose the one that best fits.
- Regularly monitor integrations for errors or discrepancies. Set automated alerts for failed runs.
- Optimize performance by using efficient synchronization methods and avoiding unnecessary API calls.

## Common Challenges & Troubleshooting Tips

**Challenge: Data synchronization issues**
- Why it matters: inaccurate or delayed synchronization can lead to discrepancies in customer data, payment statuses, and financial records.
- Troubleshooting:
  - Implement real-time sync: use webhooks and APIs to ensure real-time data updates between both platforms.
  - Error logging: maintain logs for synchronization processes to monitor failures and successes.
  - Regular data audits: periodically verify data consistency to identify and rectify mismatches.

**Challenge: Currency and amount discrepancies**
- Why it matters: Stripe often records amounts in the smallest currency unit (e.g., cents), while Salesforce may use standard units (e.g., dollars), leading to potential mismatches.
- Troubleshooting:
  - Data transformation: apply appropriate conversions during data transfer (e.g., divide Stripe amounts by 100 to match Salesforce's format).
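The divide-by-100 conversion mentioned above is trivial, but note that Stripe treats some currencies (such as JPY) as zero-decimal: their amounts are already in major units and must pass through unchanged. A minimal sketch, where the currency list is partial and purely illustrative:

```javascript
// Convert a Stripe amount (smallest currency unit) to major units.
// Zero-decimal currencies have no smaller unit, so no division.
// This set is a partial, illustrative list, not an exhaustive one.
const ZERO_DECIMAL = new Set(['jpy', 'krw', 'vnd']);

function toMajorUnits(amount, currency) {
  return ZERO_DECIMAL.has(currency.toLowerCase()) ? amount : amount / 100;
}
```

If your integration handles a single currency such as USD, the plain Amount*0.01 expression in the mapping is sufficient.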
- Field mapping: ensure that currency fields are correctly mapped and formatted in both systems.

**Challenge: API limitations and rate limits**
- Why it matters: exceeding API call limits can result in failed transactions or delayed data updates.
- Troubleshooting:
  - Optimize API calls: batch requests where possible and avoid unnecessary calls.
  - Monitor usage: keep track of API usage to stay within limits and plan for scaling needs.
  - Implement retry logic: incorporate mechanisms to retry failed API calls after a delay.

**Challenge: Webhook configuration and management**
- Why it matters: improperly configured webhooks can lead to missed or duplicate events, affecting data integrity.
- Troubleshooting:
  - Secure webhooks: validate webhook signatures to ensure authenticity.
  - Idempotency: design webhook handlers to be idempotent, preventing duplicate processing.
  - Logging and monitoring: keep detailed logs of webhook events and monitor for anomalies.

## Conclusion

Stripe-Salesforce integration is an important step toward optimizing your payment processes and enhancing user experience. Its key benefit lies in leveraging Stripe's robust payment infrastructure within your Salesforce CRM, allowing for seamless management of all payment processes directly from the platform. In this article, we explored three integration methods that cater to different business needs and technical skill levels. Choose the one that fits your goals best, and start leveraging its benefits today.

Ready to connect Stripe with Salesforce? [Contact us](https://skyvia.com/schedule-demo) for a free consultation or [sign up](https://app.skyvia.com/) for a demo to explore how Skyvia can simplify your integrations.

## F.A.Q. for Stripe Salesforce Integration

**Is Salesforce Stripe integration secure?**
Yes, properly implemented Salesforce Stripe integrations are secure.

**Do I need technical skills to integrate Salesforce and Stripe?**
It varies by method. Custom integrations require technical skills; third-party tools and connectors require minimal technical expertise.
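The retry advice from the troubleshooting table above can be sketched as a small helper with exponential backoff. `withRetry` and its parameters are illustrative, not part of any SDK; the `sleep` function is injectable so the logic can be exercised without real delays.

```javascript
// Sketch: retry a failing async API call with exponential backoff.
async function withRetry(fn, { retries = 3, baseDelayMs = 500, sleep } = {}) {
  const wait = sleep || (ms => new Promise(resolve => setTimeout(resolve, ms)));
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Double the delay after each failure: 500 ms, 1000 ms, 2000 ms, ...
      await wait(baseDelayMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```

In practice you would retry only transient failures (e.g., HTTP 429 rate-limit responses), not permanent errors such as invalid credentials.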
**Can I automate recurring billing using Salesforce and Stripe?**
Absolutely, recurring billing automation is a common benefit of Salesforce Stripe integration.

[Anastasiia Kulyk](https://skyvia.com/blog/author/anastasiia-kulyk/): With years of experience in technical writing, Anastasiia specializes in data integration, DevOps, and cloud technologies. She has a knack for making complex concepts accessible, blending a keen interest in technology with a passion for writing.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

# Structured vs Unstructured Data: What Is the Difference?
By [Rajendra Gupta](https://skyvia.com/blog/author/rajendrag/), April 21, 2022

Data is the lifeblood of any digital company. The bigger your data repository, the more opportunities you have to drive growth and innovation. Data plays an integral part in any business, be it marketing, sales, or operations. Structured and unstructured data are two different types of data stored in database systems, and they are stored, retrieved, and used in different ways. In this article, we're going to cover:

Table of contents
- What Is Structured Data?
- Examples of Structured Data
- What Is Unstructured Data?
- Examples of Unstructured Data
- Differences Between Structured and Unstructured Data
- Structured vs. Unstructured Data: Comparison Table
- Semi-Structured Data
- How to Convert Unstructured Data into Structured Data
- Tools for Processing and Analysis of Structured/Unstructured Data
- Conclusion: The Future of Data

## What Is Structured Data?

Structured data is formatted in a certain way and adheres to predefined rules for formatting and labeling information. Usually, structured data is stored in [relational database (RDBMS)](https://www.devart.com/what-is-rdbms/) table columns with a fixed structure. The following is an example of structured data: you define a customer table with fields like First Name, Last Name, phone number, and social security number.
The columns have predefined data types and lengths: you cannot store a string in a numeric column. Once you define a table schema, you cannot change it while inserting or updating data. If you require additional fields or information, you must modify the schema and then work with the modified data structure.

CHARACTERISTICS OF STRUCTURED DATA

- Structured data conforms to a data model with a predefined structure.
- Data is organized into entities such as tables, whose columns are linked together using relationships.
- All data stored in a table column has the same attributes. For example, if a table contains the [FirstName] column as string data, it will always store string data for all records in that column. Dynamic structure changes for a specific record are not allowed.

ADVANTAGES OF STRUCTURED DATA

- The fixed, well-defined schema enables easy management, lower storage requirements, and straightforward data access.
- Data can be indexed based on its attributes; indexing helps read data from a database quickly.
- Data security can be implemented at the granular level, i.e., row, column, or table.
- Structured data is easily consumed by machine learning algorithms, so data manipulation and calculations are quick.
- You can perform business intelligence operations with access to a wide range of tools.
- Structured data enables users to understand and analyze data relationships quickly.

DISADVANTAGES OF STRUCTURED DATA

- You need to define the schema well in advance to cover all data requirements. Adding a column requires a structure modification affecting all records in the table, so structured data is less flexible.
- It can only be used for its intended purpose, which limits business use cases.
- Structured data usually lives in relational databases or data warehouses with rigid structures.
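The rigidity described above can be illustrated in a few lines of JavaScript: a table with a declared schema rejects any record whose values don't match the declared column types, just as an RDBMS rejects a string in a numeric column. The schema and field names here are made up for illustration.

```javascript
// A fixed, predefined schema in miniature: each column declares a type,
// and a record that violates it is rejected before "insertion".
const customerSchema = { FirstName: 'string', LastName: 'string', Age: 'number' };

function insertRecord(table, schema, record) {
  for (const [column, type] of Object.entries(schema)) {
    if (typeof record[column] !== type) {
      throw new Error(`Column ${column} expects a ${type}`);
    }
  }
  table.push(record);
  return table;
}
```

The same check is what makes structured data easy to index and query: every record in the table is guaranteed to have the same shape.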
## Examples of Structured Data

- Spreadsheets.
- Relational databases such as Microsoft SQL Server and Oracle.
- Online Transaction Processing (OLTP) systems.
- Reservation systems, inventory control, and sales transactions.

The following image is an example of structured data stored in rows and columns in a table.

## What Is Unstructured Data?

Unstructured data does not have a predefined schema and does not belong to a data model, so it cannot be stored in relational databases. Non-relational databases such as MongoDB, Couchbase, Apache Cassandra, Redis, and DocumentDB can store unstructured data instead. Unstructured data might have internal structural elements, but the information is not stored in a predefined schema table format, which allows dynamic data generation and storage.

CHARACTERISTICS OF UNSTRUCTURED DATA

- It covers data that does not have a specific format or sequence.
- No specific schema or structure is defined for data storage.
- It allows dynamic data storage for individual records.
- Data is portable and scalable.

ADVANTAGES OF UNSTRUCTURED DATA

- As unstructured data does not have predefined rules, it can serve more than one intended purpose.
- Unstructured data is quick to adapt because it uses a dynamic schema: you do not need to edit all records to update a single one.
- It works efficiently with heterogeneous sources.

DISADVANTAGES OF UNSTRUCTURED DATA

- You need more experienced specialists, such as data analysts and data scientists, to work with unstructured data and draw value from it.
- You need specific [data management tools](https://skyvia.com/blog/best-data-management-tools/) for data analysis.
- Indexing unstructured data is complex and error-prone due to the flexible structure and lack of predefined attributes.
- Its storage cost is high compared to structured data.

## Examples of Unstructured Data

As per a recent [report](https://indicodata.ai/blog/gartner-report-highlights-the-power-of-unstructured-data-analytics-indico-blog/), 80% to 90% of enterprise data is unstructured, which emphasizes the importance of working with it effectively. Let's look at a few examples of unstructured data:

- Emails: the email body is popular unstructured data we produce daily in email communication.
- Documents: Word files, spreadsheets, PDFs, PowerPoint presentations.
- Websites: YouTube, Facebook, Instagram, and LinkedIn content, such as social media messages.
- Media files: all sorts of images, audio, and video.
- Communication: mobile communication data, SMS messages, location data, live chat, IM, collaboration software.
- Books, magazines, articles, blogs, press releases, and medical records (X-rays, ECG, or imaging data).
- Scientific research data.
- Satellite imagery and sensor data.

## Differences Between Structured and Unstructured Data

Structured data is highly specific in comparison to unstructured data: it is stored in a predefined schema or format, whereas unstructured data is a conglomeration of many different types of information. Structured data has a fixed schema and is referred to as organized data; the information can usually be easily searched for and processed in a database. However, any information that does not comply with the schema requirements cannot be stored in the database. Unstructured data offers flexibility and scalability without requiring a fixed schema before working with any document, and it allows storing data in various formats. However, it is somewhat more challenging to work with than structured data.

## Structured vs. Unstructured Data: Comparison Table

The following table summarizes the difference between structured and unstructured data.
## Semi-Structured Data

There is one more data type: semi-structured data. Semi-structured data does not conform to a specific data model, but it has structural properties that enable quick data analysis. It can be considered a combination of structured and unstructured data.

EXAMPLES OF SEMI-STRUCTURED DATA

- Emails: an excellent example of semi-structured data. An email has tags for sender, recipients, date, subject, and importance, and can easily be categorized into folders such as Inbox, Sent, Spam, and Promotions.
- The markup language XML has a set of document encoding rules for defining human- and machine-readable formats.
- JavaScript Object Notation (JSON) offers a semi-structured data interchange format. It can be used for transmitting data between web servers and applications, is widely popular for data exchange, and is supported by various relational and non-relational databases.
- NoSQL databases (MongoDB, DocumentDB, Couchbase) use a flexible data model that suits semi-structured data for storing, importing, and exporting.

The following image shows semi-structured data that contains student records in JSON format.

## How to Convert Unstructured Data into Structured Data

The data conversion process is time-consuming and requires experienced resources. It might involve the following phases:

1. Define your structured data requirements.
2. Data cleansing: removing duplicates, cleaning up columns.
3. Refine the data.

The conversion might use machine learning models with Python or R services, or third-party tools such as Azure Data Factory, log parser tools, Cogito Semantic Technology, Zoho Analytics, SAS Viya, TextMiner, or RapidMiner.

## Tools for Processing and Analysis of Structured/Unstructured Data

Among the tools that deal with structured data, we can highlight Skyvia. It is a cloud-based platform and an excellent ETL tool with advanced transformation functionality, unlike the usual ELT approaches that offer only data copying.
Skyvia is a single solution for both ETL and Reverse ETL tasks, which can significantly reduce developers' efforts. With Skyvia, you can [replicate data](https://skyvia.com/blog/top-data-replication-tools/) into a DWH to analyze it further through Power BI (analytics reports, visualization, etc.). In addition, you can use the Reverse ETL functionality, which returns the required actionable data back to the operational system. [Skyvia Replication](https://skyvia.com/data-integration/replication) and [Skyvia Import](https://skyvia.com/data-integration/import) can solve many [cloud data integration](https://skyvia.com/data-integration/) tasks involving structured data.

You may have also heard about data pipelines. The difference between data pipelines and ETL is well described in the article by Edwin Sanchez: [What is a data pipeline?](https://skyvia.com/blog/what-is-data-pipeline) Besides that, Skyvia offers an advanced data transformation tool, [Data Flow](https://docs.skyvia.com/data-integration/data-flow/), which can be extremely helpful when complex, multistage data transformation and integration scenarios are foreseen. Among the tools that deal with unstructured data, we can highlight Apache Camel, Integrate.io, and others.

## Conclusion: The Future of Data

Data is at the heart of business in today's digital world, whether you are a business professional or a consumer. Data is collected at every moment, and it forms the basis of many of our decisions. In the future, data may take on an even more significant role in our lives, and it will likely be used in new ways. Every organization holds structured, unstructured, and semi-structured data, and you might interchange data formats for import and export or consume them in a standard format. I hope this blog is helpful and exciting!
[Skyvia](https://skyvia.com/blog/category/uncategorized/)

# Support Ukraine in the Fight for Freedom!

By [Anna Tereshchenko](https://skyvia.com/blog/author/annat/), March 30, 2022

On February 24, the world changed forever. The beautiful and peaceful Ukraine was attacked by Russian invaders who wanted to break the Ukrainian spirit and trample on Ukrainian values. But the invaders failed. Ukraine is strong and united like never before.
Here at Skyvia, we condemn and despise the aggression of the Russian Federation against the territorial integrity and independence of Ukraine and call on everyone to contribute to the heroic struggle of Ukrainians. You can help the Armed Forces of Ukraine by making donations. You can find the details by following these links: [ArmySOS](https://armysos.com.ua/en/) [SaveLife](https://savelife.in.ua/en/donate/) [Ministry of Defence of Ukraine](https://www.mil.gov.ua/en/) [Data Integration](https://skyvia.com/blog/category/data-integration/) Top Alternatives to Talend Open Studio for 2025 By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) June 11, 2024 Talend Open Studio has been a go-to tool for data integration for years, beloved for its open-source nature and powerful features. However, as technology evolves, so do users’ needs.
[Talend discontinued Open Studio at the end of January 2024](https://www.linkedin.com/pulse/sustain-industrialize-your-data-projects-talend-company-monteil-hruje/) to focus on more advanced and scalable solutions that better meet today’s data integration demands. Companies that relied on this open-source solution were caught off guard by the decision and must now decide whether to stay with it or look at other tools. There are two paths: keep running Talend Open Studio, accepting that it no longer receives updates and carries the corresponding risks (it has been unsupported since January 2024), or choose another tool to solve your business pains. In this article, we’ll review the top alternatives to Talend Open Studio, consider the benefits of exploring other options, and help you select the best one for your company’s data lifecycle. Table of Contents Benefits of Exploring Other Options Top Talend Open Studio Alternatives Skyvia Informatica SAS Data Management Apache Nifi Boomi IBM InfoSphere How to Choose the Right Data Integration Tool Conclusion Benefits of Exploring Other Options Finding the Perfect Fit. Every business is unique. By exploring other tools, companies might find one that fits their needs better, whether it’s ease of use, performance, or specialized features. Improved Performance. Some alternatives might handle large datasets and complex transformations more efficiently, saving users time and reducing frustration. Enhanced Real-Time Integration. If real-time data processing is crucial for your operations, there are tools specifically designed for real-time analytics that might be a better fit. Access to Advanced Features. Other tools might offer advanced features unavailable or limited in Talend Open Studio, including advanced machine learning integration, better data governance, and more robust data security. Cost-Effective Solutions.
While Talend Open Studio is free, some paid alternatives might offer better value for money, especially if they can save you time and resources in the long run. Community and Support. Different tools come with varying levels of community support and documentation. Users might find another tool with a more active community or better customer support, which can be invaluable when encountering issues. Top Talend Open Studio Alternatives Skyvia [Skyvia](https://skyvia.com/data-integration/) is a cloud-based [data integration](https://skyvia.com/data-integration/) platform that makes it easy to migrate, back up, and synchronize data without writing a single line of code. The latest [G2 Crowd](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) rating places Skyvia second in the Top 20 Easiest To Use [ETL Tools](https://skyvia.com/blog/etl-tools/) list. The solution covers data integration scenarios of any complexity. Whether you’re a tech newbie or a seasoned pro, Skyvia’s intuitive interface makes it a breeze to connect [180+](https://skyvia.com/connectors) popular data sources and get things flowing smoothly. Plus, it’s affordable, so it’s perfect for businesses of all sizes looking to streamline their data management. Features Data integration, ETL, ELT, and reverse ETL. Data backup and restore. Data synchronization. Data replication. Pricing Paid [plans](https://skyvia.com/pricing) start at $7/month and scale according to your business needs. Free plan available. Pros User-friendly interface. Affordable pricing. Comprehensive integration options. Wide range of data sources and destinations supported. Cons Customer support is good; however, more video tutorials would be helpful. User Reviews Positive : Users appreciate its ease of use and affordability, regardless of business size.
Negative : Some users report performance issues with large datasets. Informatica [Informatica](https://www.informatica.com/) is a powerhouse known for its robust ETL (extract, transform, load) capabilities. The platform supports [90+](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia) connectors, easily handles big data, and provides advanced features like AI-driven insights and top-notch data governance. It’s perfect for large businesses needing a reliable and scalable solution to manage vast amounts of data. With Informatica, businesses get a comprehensive suite of tools to tackle the many data challenges thrown at them. Features Advanced ETL processes. Data quality and governance. AI-driven insights. Extensive connectivity options. Cloud and on-premises deployment. Pricing Custom [pricing](https://www.informatica.com/products/cloud-integration/pricing.html) based on business needs. Generally, the higher cost reflects enterprise capabilities. Pros Highly scalable and reliable. Extensive feature set for data management and governance. Strong customer support and professional services. Cons High cost, potentially prohibitive for small businesses. A steeper learning curve for new users. User Reviews Positive : Praised for its robustness, scalability, and comprehensive feature set. Negative : Criticized for being expensive and complex to learn for beginners. SAS Data Management SAS has been a leader in analytics for years, and its [SAS Data Management](https://www.sas.com/en_us/solutions/data-management.html) solution lives up to the hype. It’s designed to help users clean, enrich, and transform data, ensuring the highest quality data for analytics needs. SAS Data Management seamlessly integrates with other SAS tools, making it a good choice for those already in the SAS ecosystem. It’s like having a personal trainer for data, getting it into the best shape possible.
Features [Data integration and ETL](https://skyvia.com/blog/data-integration-and-etl/). Data quality and cleansing. Master data management. Integration with SAS analytics. Data governance tools. Pricing Custom pricing based on organizational needs. Typically higher, enterprise-focused pricing. Pros Excellent data quality and cleansing capabilities. Seamless integration with SAS analytics tools. Comprehensive data governance features. Cons It’s expensive and best suited for larger organizations. The platform can be complex and requires significant training. User Reviews Positive : Users love the data quality features and integration with SAS analytics. Negative : The high cost and complexity can be a barrier for smaller businesses. Apache Nifi If you’re into open-source tools with a ton of flexibility, you’ll love [Apache Nifi](https://nifi.apache.org/), which provides automated data flows and real-time data processing. Imagine being able to design complex data workflows with a simple drag-and-drop interface. Nifi makes it possible. It’s perfect for those who need to handle data streams and want a high degree of customization. Being open-source means it’s free, which is always a nice bonus. With a strong community behind it, Nifi is a smart choice for tech-savvy users looking for solid data flow management. Features Real-time data processing. Drag-and-drop interface for data flow design. Extensive support for various data formats. Highly customizable. Secure data transmission. Pricing Free and open-source. Pros Highly flexible and customizable. Supported by a strong community. Cons Steeper learning curve. Documentation can be lacking for some advanced features. User Reviews Positive : Praised for its flexibility and real-time capabilities. Negative : Some users find the learning curve steep and note that the documentation could be improved.
Boomi [Boomi](https://boomi.com/), formerly a Dell Technologies company, is designed to automate workflows effortlessly. With Boomi’s user-friendly drag-and-drop interface, even non-techies can create complex integrations quickly. It’s scalable, flexible, and packed with [240+](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia) pre-built connectors, making it easy to integrate different systems regardless of company size. Boomi helps businesses streamline operations and keep data in sync across the board. Features Cloud-native integration platform. Extensive library of pre-built connectors. Real-time data integration. Workflow automation. Pricing Custom pricing based on usage and needs. Typically in the mid to high range, depending on the scale. Free 30-day trial available. Pros User-friendly with a drag-and-drop interface. Extensive pre-built connectors. Scalable and flexible cloud-based platform. Cons It can be expensive for small businesses. Some advanced features may require additional costs. User Reviews Positive : Users appreciate its ease of use and extensive integration options. Negative : Some users mention the higher cost and occasional performance issues. IBM InfoSphere If you’re running a large enterprise with complex data needs, [IBM InfoSphere](https://www.ibm.com/products/information-server-for-data-integration) is your go-to solution. It offers advanced ETL abilities, strong data governance, and real-time integration, all wrapped in a scalable architecture. It handles the most demanding data environments, ensuring data is secure, accurate, and always available. It’s backed by IBM’s renowned support, making it a trusted choice for enterprise data management. Features Advanced ETL abilities. Data governance and quality management. Real-time data integration. Pricing Custom pricing based on enterprise needs. Generally higher cost, reflecting comprehensive capabilities.
Pros Strong data governance and security features. Highly scalable for large enterprises. Cons It’s expensive and best suited for large organizations. A steeper learning curve for new users. User Reviews Positive : It’s praised for its robustness and comprehensive feature set. Negative : High cost and complexity can be challenging for smaller teams. How to Choose the Right Data Integration Tool Choosing the right data integration tool doesn’t mean picking the first one you see. You have to understand the specific needs of the business and select a tool that matches those requirements. Here are some key factors to keep in mind. Usability Look for an intuitive and easy-to-navigate tool. A simple drag-and-drop interface can make a difference if you’re not a tech wizard. Consider how quickly your team can get up to speed with the tool. The shorter the learning curve, the better. Integration Capabilities Ensure the tool supports all the platforms and applications you use. The more connectors it has, the more versatile it is. Check if the tool can easily integrate with your existing systems and workflows. Scalability Choose a tool that can grow with your business. As your data needs increase, you’ll want a tool that handles the load without breaking a sweat. Performance Consider how quickly the tool processes data. You don’t want to be stuck waiting for ages for data to sync. Ensure the tool runs smoothly on current hardware without hogging all the resources. Cost Determine the budget and find a tool that offers the best bang for your buck. Some tools provide great free versions or affordable pricing tiers. Look at the features offered and decide if they justify the cost. Sometimes, it’s worth paying more for a tool that delivers. Support and Community Check if the tool offers good customer support. An active user community and comprehensive documentation can be a great resource for troubleshooting and tips.
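These selection criteria can be made concrete with a simple weighted scoring matrix. The sketch below is purely illustrative: the criteria weights, tool names, and 1-5 scores are hypothetical placeholders for the numbers your own team would assign after hands-on trials, not real product ratings.

```python
# Toy weighted-scoring sketch for shortlisting data integration tools.
# All weights and scores are illustrative placeholders.

WEIGHTS = {"usability": 0.25, "connectors": 0.25, "scalability": 0.15,
           "performance": 0.15, "cost": 0.10, "support": 0.10}

# 1-5 scores a team might assign after evaluating each tool (hypothetical).
scores = {
    "Tool A": {"usability": 5, "connectors": 4, "scalability": 3,
               "performance": 4, "cost": 5, "support": 4},
    "Tool B": {"usability": 3, "connectors": 5, "scalability": 5,
               "performance": 4, "cost": 2, "support": 5},
}

def weighted_score(tool_scores):
    """Sum of criterion score times criterion weight."""
    return sum(WEIGHTS[c] * tool_scores[c] for c in WEIGHTS)

# Rank candidates from best to worst overall fit.
ranking = sorted(scores, key=lambda t: weighted_score(scores[t]), reverse=True)
for tool in ranking:
    print(f"{tool}: {weighted_score(scores[tool]):.2f}")
```

Adjusting the weights to match your priorities (e.g., raising `cost` for a small team) is the whole point of the exercise: the ranking should reflect your business, not a generic checklist.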
Conclusion We\u2019ve taken a whirlwind tour through some of the top alternatives to Talend Open Studio for 2024, and it\u2019s been quite the journey. Each tool brings its own unique flavor to the table, so let\u2019s wrap things up with a quick recap. There\u2019s no magic tool for doing everything, so we still have to select between the criteria, like the solution\u2019s usability,\u00a0 amount of connectors, performance, support capabilities , etc. The last important question is how much this orchestra costs and how long your team has to learn to use the tool perfectly. The excellent selection is a universal cloud-based data integration platform that can perform almost any complex user scenario for a reasonable price or even for free, like Skyvia. Share [Facebook](https://www.facebook.com/sharer.php?u=https%3A%2F%2Fblog.skyvia.com%2Ftalend-alternatives%2F) [Twitter](https://twitter.com/intent/tweet?text=Top+Alternatives+to+Talend+Open+Studio+for+2025&url=https%3A%2F%2Fblog.skyvia.com%2Ftalend-alternatives%2F&via=Skyvia+Blog) [Linkedin](https://www.linkedin.com/shareArticle?mini=true&url=https://skyvia.com/blog/talend-alternatives/&title=Top+Alternatives+to+Talend+Open+Studio+for+2025) [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends. 
[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Best Change Data Capture Tools (CDC) for 2025 By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) April 19, 2024 Businesses succeed by selecting the right solutions to work with their data. The modern market offers many options, and it’s challenging to make the right choice, especially for complicated tools like [Change Data Capture (CDC)](https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server?view=sql-server-ver16) solutions. CDC solutions are complex and have many pitfalls, so let’s define what businesses can gain from this approach, which features are essential, which are unnecessary, and so on. This article presents the top ten Change Data Capture tools for 2025 to help companies find the tool of their dreams.
Table of Contents What Are Change Data Capture (CDC) Tools? Why Do You Need CDC Tools? Benefits of Change Data Capture Tools Challenges and Considerations of CDC Tools Best CDC Tools for 2025 Conclusion What Are Change Data Capture (CDC) Tools? The [CDC method](https://en.wikipedia.org/wiki/Change_data_capture) automatically identifies and captures changes made to data in a database or source system in real-time or near real-time, such as new entries, updates, and deletions. It facilitates replicating these changes to other systems, databases, or applications for data integration, synchronization, and analytics, avoiding bulk data load operations. In other words, this approach helps to move high-volume data across multiple data systems and keep cloud database architectures up to date. Let’s see how it works in business reality. Imagine an e-commerce company using an operational database (e.g., [PostgreSQL](https://skyvia.com/connectors/)) to manage its online transactions, including orders, customer details, inventory levels, and pricing information (they also have a data warehouse with a copy of this data). They must ensure that the data in their data warehouse is always up to date with the latest transactions and changes happening in the operational database. This synchronization must occur in real-time or near-real-time to support timely decision-making and reporting. What they did: Implemented a CDC solution to monitor the operational database for any changes, including new transactions, customer record updates, inventory level changes, and product pricing modifications. As transactions occur throughout the day (e.g., a customer places a new order, updates their shipping address, or a product’s inventory level changes), the CDC tool detects these changes in the operational database. This detection includes capturing new rows inserted, updates made to existing rows, and rows that have been deleted.
The CDC tool captures these changes and temporarily stages the changed data, often transforming it into a format suitable for the data warehouse (e.g., converting data types or restructuring data to match the warehouse schema). The staged changes are streamed to the data warehouse in real-time or near-real-time. The CDC tool ensures that these data changes are applied to the corresponding tables in the data warehouse, maintaining consistency and integrity. With the data warehouse continually updated through CDC, the e-commerce company can run real-time analytics and generate up-to-the-minute reports. For example, they can immediately analyze the impact of a promotional campaign on sales, monitor inventory levels in real-time to prevent stockouts, and update financial forecasts based on the latest transaction data. Why Do You Need CDC Tools? To answer why you need CDC tools in your company, let’s review how they may help your business.

| Need | Why It’s Needed | CDC’s Role |
|---|---|---|
| Real-Time Data Synchronization | Businesses require up-to-date data across systems for accurate analytics and timely decision-making. | CDC tools ensure continuous and immediate synchronization of data changes, keeping all systems current. |
| Efficient Data Integration | Traditional [batch processing](https://skyvia.com/blog/batch-etl-processing/) methods can be slow and consume excessive resources, leading to data latency. | By capturing only the changed data, CDC reduces the volume of data transfer, leading to more efficient [integration](https://skyvia.com/data-integration/). |
| Operational Efficiency | Manual data replication processes are resource-intensive and prone to errors. | Automation of data capture and replication with CDC minimizes manual efforts and system load, enhancing efficiency. |
| Support for Event-Driven Architecture | Modern applications benefit from reacting in real-time to data changes for more dynamic interactions. | CDC detects data changes and triggers events in real-time, facilitating complex, responsive architectures. |
| Data Consistency and Quality | Accurate analytics and operations require consistent data across all platforms and systems. | CDC helps maintain consistency by replicating every change from source to target, improving data quality. |
| Regulatory Compliance and Auditing | Regulations require businesses to manage data strictly, including maintaining change logs and historical records. | CDC solutions track and log data changes, aiding in compliance with regulations and facilitating audits. |

Benefits of Change Data Capture Tools CDC tools may be either beneficial or challenging for businesses. Let’s start with the benefits. Minimized Impact on Source Data Systems. Compared to traditional extraction methods, CDC tools focus on data changes, which reduces the load on source databases and minimizes performance degradation and operational disruption, ensuring that critical systems remain efficient and responsive. Optimized Network and Storage Usage. Transferring only changed data instead of entire datasets reduces network capacity requirements and storage consumption. This optimization helps improve the performance of data transfer and processing tasks. Enhanced Analytical and Reporting Abilities. With access to up-to-date data, businesses can perform more accurate and timely analytics and reporting, providing better insights into customer behavior, market trends, and operational performance to make informed strategic decisions. Support for Event-Driven Architecture. CDC tools can trigger workflows or processes based on specific data changes, supporting the implementation of event-driven architectures. This approach means more responsive and dynamic application behaviors, improving customer experiences. Reduced Costs and Resource Requirements.
CDC tools automate the data capture and replication process, reducing the need for manual data handling and batch processing jobs. This automation improves operational efficiency and cost savings, especially regarding labor and computing resources. Enables Data Modernization and Cloud Migration. CDC is an effective strategy for migrating data to modern databases or cloud platforms with minimal downtime. It supports incremental data migration, allowing companies to modernize their data infrastructure without interrupting operations. Scalability. Modern CDC tools scale with your data infrastructure, handling large volumes of data and high throughput requirements. This scalability ensures that CDC processes can grow alongside your business needs. Challenges and Considerations of CDC Tools Aside from the benefits, there are a few challenges companies have to navigate with CDC tools. Let’s step closer and consider what’s going on.

| Challenge | Solution |
|---|---|
| Complexity of Implementation | Setting up CDC can be complex, especially in heterogeneous environments with multiple source systems and databases. Careful planning and expertise from CDC solution providers or consultants can help evaluate compatibility with existing systems and the scalability of the CDC solution. |
| Performance Impact | Incorrectly configured CDC tools might still impose a load on the source systems, potentially affecting their performance. Monitor the performance impact during the pilot phase and adjust configurations. Consider CDC tools that offer minimal impact on source systems. |
| Fault Tolerance | Network issues, system failures, or downtime can disrupt CDC processes, leading to data loss or replication delays. Opt for CDC solutions that offer built-in fault tolerance, automatic recovery, and checkpointing mechanisms to handle disruptions gracefully. |
| Data Privacy and Security | Replicating sensitive data across systems and networks often raises data privacy and security concerns. Implement strong encryption for data in transit and at rest. Ensure the CDC tool complies with relevant data protection regulations and check access controls. |
| Data Consistency | Ensuring data consistency across source and target systems can be challenging, especially in complex transactions. Select CDC tools that support transactional consistency and handle multi-row transactions as a single unit. Test thoroughly for scenarios that could lead to data inconsistencies. |
| Scalability | As data volumes grow, ensuring the CDC system can scale to handle increased load is important. Select solutions that can dynamically allocate resources or distribute load efficiently. |
| Keeping Up with Schema Changes | CDC tools must adapt to changes in database schemas while maintaining data consistency. Select CDC tools that automatically detect and adapt to schema changes or provide mechanisms to manage and propagate these changes quickly. |
| Regulatory Compliance | Ensuring CDC processes comply with industry regulations and data governance policies can be complex. Implementing governance practices that include CDC data flows is a good choice. |
| Technical Expertise | Implementing and managing CDC solutions requires specialized knowledge and skills. Invest in IT staff training or partner with vendors that offer strong technical support and professional services. |

Best CDC Tools for 2025 Choosing the right Change Data Capture tool is pivotal for achieving efficient, real-time data integration. Here’s an overview of 10 CDC tools, highlighting their key features, pros, and cons to help find the best fit for your unique business needs.
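Before diving into the tools, the capture-and-apply flow from the e-commerce scenario above can be boiled down to a few lines of code. The sketch below is a minimal, self-contained illustration, not any vendor's implementation: it uses SQLite triggers as the capture mechanism and a local "warehouse" table as the target, whereas real CDC tools typically read the database transaction log instead; all table and column names are invented for the example.

```python
# Minimal CDC sketch: triggers capture every insert/update/delete on the
# source table into a change log; apply_changes() replays only those
# changes into the warehouse copy, instead of reloading the whole table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE orders_log (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                         op TEXT, id INTEGER, status TEXT);
CREATE TABLE warehouse_orders (id INTEGER PRIMARY KEY, status TEXT);

CREATE TRIGGER t_ins AFTER INSERT ON orders BEGIN
  INSERT INTO orders_log (op, id, status) VALUES ('I', NEW.id, NEW.status);
END;
CREATE TRIGGER t_upd AFTER UPDATE ON orders BEGIN
  INSERT INTO orders_log (op, id, status) VALUES ('U', NEW.id, NEW.status);
END;
CREATE TRIGGER t_del AFTER DELETE ON orders BEGIN
  INSERT INTO orders_log (op, id, status) VALUES ('D', OLD.id, NULL);
END;
""")

def apply_changes(conn):
    """Replay captured changes into the warehouse table, then clear the log."""
    for op, row_id, status in conn.execute(
            "SELECT op, id, status FROM orders_log ORDER BY seq"):
        if op in ("I", "U"):   # upsert new or changed rows
            conn.execute("INSERT OR REPLACE INTO warehouse_orders (id, status) "
                         "VALUES (?, ?)", (row_id, status))
        else:                  # 'D': propagate deletions
            conn.execute("DELETE FROM warehouse_orders WHERE id = ?", (row_id,))
    conn.execute("DELETE FROM orders_log")

# A day of transactions on the operational table...
conn.execute("INSERT INTO orders VALUES (1, 'new')")
conn.execute("INSERT INTO orders VALUES (2, 'new')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
conn.execute("DELETE FROM orders WHERE id = 2")

apply_changes(conn)   # only the changes travel, never the full table
print(conn.execute("SELECT id, status FROM warehouse_orders").fetchall())
# [(1, 'shipped')]
```

The point of the sketch is the shape of the flow (capture, stage, apply), which is the same whether the capture side is a trigger, a transaction-log reader, or a managed service.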
Skyvia [Skyvia](https://skyvia.com/data-integration/) is a cloud-based, no-code data integration platform with a user-friendly interface, rated among the Top 20 [Easiest To Use ETL Tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) by G2 Crowd. It offers various [data integration](https://skyvia.com/data-integration/) scenarios and supports [180+](https://skyvia.com/connectors/) data sources, including cloud applications, databases, and data warehouses. It’s ideal for users looking for a no-code interface to connect their SaaS applications, databases, and cloud data warehouses without deep technical expertise. Key Features Offers ETL, ELT, and Reverse ETL integration scenarios as well as flexible scheduling in addition to CDC. Provides secure data transfer with encryption and two-factor authentication. Pros Easy to use. No installation is required. Versatile data management solutions. Cons Depends on internet connectivity. More video tutorials would be helpful. Pricing The pay-as-you-go [pricing](https://skyvia.com/pricing/) offers a free tier; paid plans start from $15/month, depending on features and data volumes. Paid plans add a higher maximum execution frequency, scheduled integrations, source and expression lookup mapping, synchronization, data import with splitting, and more. Debezium [Debezium](https://debezium.io/) is an open-source CDC platform that streams database changes into Apache Kafka. It specializes in capturing real-time row-level database changes, turning your database into an event stream. It’s beneficial in microservices architectures where individual services must react to shared data store changes. Key Features Native integration with Apache Kafka for real-time data streaming. Extensive database support, including PostgreSQL, MySQL, MongoDB, and SQL Server. Transparent handling of database schema changes. Pros Wide database support. You can start and stop data apps as needed.
It integrates well with Kafka ecosystems. Cons It requires Kafka expertise. Setup can be complex. Pricing The solution is open-source, so it’s free to use. However, be mindful of hidden costs, such as running and maintaining the Kafka infrastructure it depends on. Oracle GoldenGate If you’re searching for a comprehensive software package for real-time data integration and replication in heterogeneous IT environments, look at [Oracle GoldenGate](https://www.oracle.com/integration/goldengate/). The capability to support various platforms makes it a strong choice for complex, heterogeneous environments. This solution is often used in scenarios requiring minimal latency between source and target systems, like financial trading platforms. Key Features Advanced conflict resolution mechanisms for complex replication scenarios. Detailed transaction data capture and delivery without impacting source systems. Pros High-performance data replication and real-time data integration across heterogeneous databases. The solution supports various platforms beyond Oracle, including cloud services. It supports a broad array of databases. Cons High cost. Complexity in setup and maintenance. Pricing Licensing costs start high, typically for enterprise use; specific [pricing](https://www.oracle.com/integration/pricing/) requires a quote from Oracle. Apache NiFi Originally developed by the NSA, [Apache NiFi](https://nifi.apache.org/) is an open-source software project that provides a web-based UI to automate data flow between systems. It includes data routing, transformation, and system mediation features, which is especially beneficial for data engineers looking to manage data movements visually. Key Features The tool supports a broad range of data sources and destinations with over 300 processors. It prioritizes data flow control with backpressure and prioritization mechanisms. Pros Flexible and scalable graphical interface for designing, running, and monitoring data flows. The solution facilitates secure data routing, transformation, and system mediation.
Cons The data flow approach has a learning curve and may be overkill for small projects. Pricing The platform is open-source, so you may use it for free. Qlik Replicate [Qlik Replicate](https://www.qlik.com/us/products/qlik-replicate) is a data integration and ingestion solution offering a straightforward, graphical approach to replicating, ingesting, and streaming data across various databases, data warehouses, and Hadoop. It minimizes the impact on source systems, which is vital for operational systems that cannot afford downtime. Key Features Automated schema conversion and validation. Broad support for data sources and targets, including major databases and data warehouses. Minimal performance impact on source systems. Pros User-friendly graphical interface. Efficient data replication. Cons The solution can be expensive for small to medium-sized businesses and might require additional training. Pricing [Pricing](https://www.qlik.com/us/pricing/data-integration-products-pricing) is flexible and depends on your business needs. Contact Qlik for a quote, and take into consideration a $350 per-user license fee and an additional $77 for software updates and support. IBM InfoSphere Data Replication [IBM InfoSphere Data Replication](https://www.ibm.com/docs/en/db2/11.5?topic=infosphere-data-replication) provides real-time data replication and CDC capabilities that support a wide range of data sources and targets. Its integration with IBM’s analytics suite makes it a strong contender for businesses already invested in IBM’s ecosystem and looking to enhance their analytics and data governance strategies. Key Features The tool supports data transformations and enrichments during replication. It provides robust conflict detection and resolution for bidirectional replication. Pros Reliable performance. Extensive data source support. Cons It can be complex to manage. The total cost of ownership is high.
**Pricing**
Pricing is custom and requires contacting IBM for details.

### Striim

[Striim](https://www.striim.com/) is a real-time [data integration solution](https://skyvia.com/blog/data-integration-tools/) providing streaming data ingestion, CDC, and analytics. It allows continuous query processing and streaming analytics, making it suitable for businesses that need to analyze data in motion for immediate insights, such as IoT, e-commerce, and online services.

**Key Features**
- In-memory stream processing for fast data analysis and transformation.
- 100+ pre-built connectors for databases, cloud services, and data warehouses.
- Visualizations and dashboards for monitoring data flows and analytics.

**Pros**
- Real-time performance.
- Comprehensive integration features.

**Cons**
- Lack of pricing transparency.
- A learning curve for streaming concepts.

**Pricing**
Pricing is not publicly listed and requires engagement with Striim sales for a quote.

### StreamSets

[StreamSets](https://streamsets.com/) is a platform for building continuous data ingestion and processing pipelines that handle data drift: changes to data structure, semantics, or infrastructure that can break pipelines. It's a good choice for data engineers who need to manage evolving data sources without constant manual intervention.

**Key Features**
- Visual pipeline designer for easy creation and maintenance of data flows.
- Support for 100+ sources and destinations, including cloud-native services.
- Built-in performance monitoring and operational metrics.

**Pros**
- The ability to handle data drift automatically, adapting pipelines to schema changes.
- Extensive connector support.

**Cons**
- Overkill for simple needs.
- The tool requires time to learn.

**Pricing**
StreamSets offers Professional and Enterprise editions. The Professional edition costs $1,000 monthly and allows running five active jobs on 50 published pipelines. The Enterprise edition has no limits on the number of jobs run or pipelines published.
However, Enterprise edition pricing requires contacting StreamSets.

### Talend

[Talend](https://ua.talend.com/) offers a comprehensive suite of apps for data integration, quality, management, and big data. Its open-source foundation makes it accessible, and it provides robust enterprise solutions with advanced features for large-scale data projects. Its CDC works with Oracle, MS SQL Server, DB2, MySQL, and other databases. The solution suits businesses with complex data landscapes looking to consolidate their data integration efforts.

**Key Features**
- [1000+](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia) connectors for various data sources and applications.
- Support for both batch and real-time data processing.
- Open-source foundation with a strong community and enterprise support.

**Pros**
- Powerful ETL and data management capabilities.
- An open-source version is available.

**Cons**
- The enterprise version can be costly.
- Steep learning curve.

**Pricing**
The Open Source version is free; Talend Data Fabric (enterprise version) [pricing](https://talend.com/pricing/) is available upon request.

### Hevo Data

If you're looking for a no-code data pipeline platform offering real-time data integration, transformation, and automation, [Hevo](https://hevodata.com/) is a good choice. It allows businesses to consolidate their data for analytics easily. It's ideal for companies that prioritize ease of use and want to integrate disparate data sources quickly without a heavy engineering lift.

**Key Features**
- Hevo supports [150+](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia) sources, including databases, SaaS applications, and cloud storage.
- It offers transformations with Python code for custom data processing needs.
- Real-time data loading with schema detection and automatic mapping.

**Pros**
- User-friendly, no-code interface.
- Automatic schema detection and mapping.

**Cons**
- Pricing can be steep for high data volumes.
- Limited custom transformation control.
**Pricing**
Hevo offers a 14-day free trial; [paid](https://hevodata.com/pricing/pipeline/) plans start at $239/month, based on event volume and features.

## Choosing the Best Change Data Capture Tool

To find the best CDC tool, you must clearly understand your business needs, including technical requirements, strategic objectives, operational constraints, etc. Define the use cases, the connectors you need, data transfer and encryption capabilities, and real-time delivery requirements. The price of the solution may also be a painful aspect of the selection. Let's review the most essential criteria to help you choose.

| Criterion | Description |
| --- | --- |
| Compatibility with Source and Target Systems | Ensure the CDC tool supports your source databases (e.g., MySQL, PostgreSQL, Oracle) and target destinations (e.g., data warehouses, cloud platforms). Compatibility is crucial for seamless integration and data transfer. |
| Performance and Scalability | Evaluate the tool's ability to handle your data volume and velocity without performance degradation. Keep future growth in mind and ensure the tool can scale to meet increasing data demands. |
| Real-time Processing Capabilities | If your use case requires immediate data availability for real-time analytics or operational reporting, assess the tool's latency from change capture in the source system to availability in the target system. |
| Data Transformation and Enrichment | Some CDC tools offer built-in data transformation and enrichment features. If you need to preprocess data during replication, ensure the tool supports these functionalities. |
| Ease of Use and Management | The tool should be easy to configure and maintain. Tools with a steep learning curve or complex management can increase the total cost of ownership. |
| Reliability and Fault Tolerance | The ability to recover from failures and ensure data integrity is critical. To maintain data accuracy, look for features like checkpointing, automatic retries, and transaction consistency. |
| Security Features | Review the tool's security mechanisms, including encryption, access controls, and compliance with standards like GDPR or HIPAA, especially if handling sensitive data. |
| Costs | Understand the pricing model (subscription, volume-based, perpetual license) and evaluate it against your budget. Consider both initial costs and long-term expenses, including maintenance and support. |
| Support and Community | Assess the level of support provided by the vendor or, for open-source tools, by the community. Access to expert assistance and a vibrant community can be invaluable for troubleshooting and best practices. |
| Integration with Existing Ecosystem | The CDC tool should integrate well with your existing data management ecosystem, including ETL processes, analytics platforms, and data governance tools, to enhance rather than complicate your data landscape. |

## Conclusion

Supporting CDC infrastructure with manual coding is complicated and tedious. It consumes developer time that could instead be spent on tasks that improve your business and save money, so investing in a third-party CDC tool is a win-win. Of course, the choice of tool depends on your company's business scenarios; however, always weigh the functionality provided against the solution's usability and cost savings.
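To get a feel for the hand-rolled CDC plumbing described above, here is a minimal, purely illustrative Python sketch of snapshot-diff change capture. All names and data are made up, and real CDC tools typically read the database transaction log instead of diffing full table snapshots, which is far cheaper at scale:

```python
# Toy snapshot-diff CDC: compare the previous snapshot of a source table with
# the current one and emit insert/update/delete change events.
# Illustrative only -- production CDC is usually log-based, not snapshot-based.

def capture_changes(previous, current, key="id"):
    """Diff two table snapshots (lists of row dicts) into change events."""
    old = {row[key]: row for row in previous}
    new = {row[key]: row for row in current}
    changes = []
    for k, row in new.items():
        if k not in old:
            changes.append({"op": "insert", "row": row})   # new row appeared
        elif row != old[k]:
            changes.append({"op": "update", "row": row})   # existing row changed
    for k, row in old.items():
        if k not in new:
            changes.append({"op": "delete", "row": row})   # row disappeared
    return changes


prev = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}]
curr = [{"id": 1, "name": "Anna"}, {"id": 3, "name": "Cid"}]
for event in capture_changes(prev, curr):
    print(event["op"], event["row"])
```

Even this toy version hints at the pain points the criteria table covers: no checkpointing, no retries, and a full scan of both snapshots on every run.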
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/). Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

# Top 8 Apache Airflow Alternatives in 2025

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), July 11, 2023
Manual tasks make up the lion's share of processes in any business. While some are creative, most are repetitive and demand a lot of time and attention. Luckily, software solutions come as lifesavers for automating such routine procedures. One-fifth of all businesses have already experienced the power of workflow automation software, and Gartner predicts that [70% of organizations](https://www.gartner.com/en/newsroom/press-releases/2022-10-03-gartner-survey-finds-85-percent-of-infrastructure-and-operations-leaders-without-full-automation-expect-to-increase-automation-within-three-years) will have implemented this kind of software by 2025.

Apache Airflow is a workflow streamlining solution aimed at accelerating routine procedures. This article provides a detailed description of Apache Airflow as one of the most popular automation solutions. It also presents and compares alternatives to Airflow, their characteristic features, and recommended application areas. Based on that, each business can decide which workflow automation tool would benefit them.

Table of Contents
- About Apache Airflow
- Advantages of Apache Airflow
- Most Common Apache Airflow Use Cases
- Skyvia: Best Alternative to Airflow
- Key Features of Skyvia
- Other Apache Airflow Competitors: Luigi, Prefect, Dagster, Apache NiFi, Azure Data Factory, Google Cloud Dataflow, AWS Step Functions
- Summary

## About Apache Airflow

Some years ago, the well-known company Airbnb needed to orchestrate its multiple complex workflows, so it created Airflow. This open-source tool later entered the Apache Software Foundation's incubation program and is now maintained under the ASF. [Airflow](https://airflow.apache.org/) aims to ensure effective workflow automation through the construction of data pipelines.
All this is done in Python by creating tasks and establishing dependencies between them. Such constructions of tasks and dependencies are known as [directed acyclic graphs (DAGs)](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html). Airflow implements scheduler and executor components that respectively plan and carry out DAGs.

## Advantages of Apache Airflow

This tool is used in over 200 companies worldwide, including famous cross-industry players like Yahoo, PayPal, Intel, and Stripe. If such well-known companies use Airflow, there should be reasons for that. So, let's look at the advantageous features this service offers businesses.

- **Easy to deploy.** Building simple data transfer or complex machine learning workflows is done with Python. Moreover, Airflow makes it easy to modify and adjust data pipelines at any time.
- **Real-time monitoring.** The service implements an advanced alert system that notifies about any error or interruption in a running task. Log files are easily accessed via the GUI, providing a detailed error description.
- **Versatility.** Airflow integrates with [tools](https://airflow.apache.org/docs/apache-airflow-providers/operators-and-hooks-ref/software.html) such as Zendesk, PostgreSQL, Docker, Kubernetes, GitHub, and others. It ensures flexibility of choice for setting up data workflows in the environment of one's choice.
- **Community support.** As an open-source platform, Airflow has a constantly expanding [community](https://airflow.apache.org/community/). This contributes to the fast addition of new features and bug fixes, as well as fruitful collaboration among community members.

## Most Common Apache Airflow Use Cases

Apache Airflow is used within a variety of scenarios.

**Use Case 1.** Scheduling and pre-scheduling workflows based on specific parameters. For instance, when a sales deal is made and registered in Dynamics CRM, this data should be transferred to HubSpot.
Thanks to the data sensors Airflow relies on, the process is triggered when data arrives in Dynamics CRM, and the data is then sent to HubSpot.

**Use Case 2.** Airflow is also suitable for designing ETL pipelines for working with batch data. The data is extracted from one or several sources, the needed transformations are performed, and the data is then loaded to the destination platform.

**Use Case 3.** Airflow is an excellent tool for orchestrating the training of machine learning models, which require large amounts of training data for their algorithms to make accurate predictions.

Being an open-source tool, Apache Airflow can't boast powerful support options despite relying on a solid community. That might be an issue when learning how to use the tool and figuring out how its features work. Moreover, the Python coding requirement might not suit all users. Therefore, we have prepared a list of Airflow alternatives that provide similar functionality but are easier to use.

## Skyvia: Best Alternative to Airflow

Skyvia was initially designed as a [cloud data integration platform](https://skyvia.com/) offering options such as ETL data pipeline construction and ELT scenarios. However, Skyvia keeps developing and now provides other tools for working with data. One of those is the [Automation](https://docs.skyvia.com/automation/) solution, which allows building and managing workflows similar to Airflow. See the detailed comparison of [Skyvia vs. Apache Airflow](https://skyvia.com/etl-tools-comparison/apache-airflow-alternative-skyvia).

Building workflow automation with Skyvia is easy because everything can be done with a visual wizard, without any knowledge of Python. First, set up the trigger (of the Manual, Connection, Run on Schedule, or Webhook type) to specify the event invoking the automation workflow.
Then, add components to the diagram to define which input data should be considered and how it should be processed. The Automation tool by Skyvia offers high flexibility in planning and managing data automation processes.

Skyvia offers ETL, ELT, and Reverse ETL functionality for any data integration process. Set up the source and destination data platforms or apps to indicate the data path, then determine how data needs to be transformed and mapped. Skyvia also allows scheduling data integration processes so that new or updated data is transferred regularly. While simple scenarios are delivered by [Skyvia Import](https://docs.skyvia.com/data-integration/import/), more complex data-related tasks use the [Data Flow and Control Flow](https://www.youtube.com/watch?v=U8Zbk03E58Q&t=57s) tools. They make it possible to involve multiple [data connectors](https://skyvia.com/connectors/) and complex data transformation scenarios.

## Key Features of Skyvia

We have shown how Skyvia handles data workflows with Automation, Import, and Data Flow. However, apart from being a [data integration service](https://skyvia.com/data-integration/), Skyvia offers its users many other functions and features.

- **User-friendly.** The no-code approach in Skyvia makes it possible to construct workflow automation and data integration scenarios in minutes.
- **Accessed via browser.** Being a cloud-based platform, Skyvia is OS-independent and easily accessed from any browser. It's incredibly convenient for remote teams, as any collaborator can access or adjust data pipelines.
- **Multifunctional.** Apart from simple data import, Skyvia offers solutions for bi-directional data synchronization, data replication, data migration, etc.
- **Affordable and scalable.** Start using Skyvia for free and pay only when an extended set of features is needed. With the Enterprise pricing plan, scheduling an unlimited number of integration and data workflow scenarios is possible.
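The ETL pattern such tools automate (extract rows from a source, map and transform fields, load them to a destination) can be sketched minimally in plain Python. This is not Skyvia code; all function and field names here are hypothetical, purely for illustration:

```python
# Toy ETL step: apply a source-to-destination field mapping and an optional
# row transformation. Real integration tools add connectors, scheduling,
# error handling, and incremental loading on top of this core idea.

def run_etl(source_rows, mapping, transform=None):
    """mapping: {source_field: destination_field}. Returns destination rows."""
    destination = []
    for row in source_rows:
        out = {dest: row[src] for src, dest in mapping.items()}  # field mapping
        if transform:
            out = transform(out)                                 # row transform
        destination.append(out)
    return destination


crm_rows = [{"FirstName": "Ann", "Company": "Acme"}]
mapping = {"FirstName": "first_name", "Company": "account"}
loaded = run_etl(crm_rows, mapping,
                 transform=lambda r: {**r, "first_name": r["first_name"].upper()})
print(loaded)
```

The mapping dict plays the role of the visual field-mapping step in a no-code wizard; the optional `transform` callback stands in for transformation expressions.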
## Other Apache Airflow Competitors

### Luigi

Spotify developed [Luigi](https://github.com/spotify/luigi) to manage operations in Hadoop, such as MapReduce, that require heavy computing across clusters. Similar to Airflow, Luigi has the task concept at its core: this is where the actual computational operations take place. Luigi also introduces the 'target' notion, which stands for an external file used for data output.

Even though Airflow and Luigi have much in common (open-source projects, Python, Apache license), they take slightly different approaches to data workflow management. First, Luigi prevents tasks from running individually, which limits scalability. Moreover, Luigi's API implements fewer features than Airflow's, which might be especially difficult for new users.

### Prefect

[Prefect](https://www.prefect.io/), similarly to Airflow, is designed to help users orchestrate their workflows using Python. The service is web-based, can be easily set up with containerization tools such as Docker, and can be deployed on Kubernetes clusters. As a result, Prefect offers a high grade of scalability, which might be of particular value to rapidly growing companies.

Prefect uses the 'flow' concept to describe tasks and the dependencies between them; the term corresponds to the DAG concept in Airflow. Moreover, the service can cope with several tasks executed concurrently within the same workflow and offers a user-friendly interface for monitoring them.

Unlike Airflow, which runs both on-premises and in the cloud, Prefect is more oriented towards the cloud environment. Thus, it suits companies that keep their infrastructure on private or public clouds. Prefect is used in a variety of industries: finance, healthcare, e-commerce, retail, logistics, etc. Due to its ease of use and flexibility, the software can be adapted to any company type.
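Airflow, Luigi, and Prefect all revolve around the same core abstraction: tasks plus dependencies, executed in dependency order. The toy pure-Python scheduler below sketches that idea; it is not code for any of these tools, and all names are illustrative:

```python
# Toy DAG runner: execute each task only after all of its upstream tasks.
# Real orchestrators add scheduling, retries, parallelism, and monitoring.

def run_dag(tasks, dependencies):
    """tasks: {name: callable}; dependencies: {name: [upstream names]}.
    Returns the order in which tasks were executed."""
    done, order = set(), []
    while len(done) < len(tasks):
        progressed = False
        for name in tasks:
            if name not in done and all(up in done
                                        for up in dependencies.get(name, [])):
                tasks[name]()        # run the task body
                done.add(name)
                order.append(name)
                progressed = True
        if not progressed:
            raise ValueError("cycle detected in DAG")  # no runnable task left
    return order


tasks = {
    "load": lambda: print("loading"),
    "transform": lambda: print("transforming"),
    "extract": lambda: print("extracting"),
}
dependencies = {"transform": ["extract"], "load": ["transform"]}
print(run_dag(tasks, dependencies))  # runs extract -> transform -> load
```

Note that the execution order is driven purely by the declared dependencies, not by the order in which tasks are defined, which is exactly the property DAG-based orchestrators rely on.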
### Dagster

Similar to the above-mentioned tools, [Dagster](https://dagster.io/) also runs with Python and has tasks and their dependencies in mind. However, it introduces a new concept of asset-centric development: companies focus on the assets to deliver rather than on executing individual tasks. This approach makes Dagster stand out from other [data pipeline management tools](https://skyvia.com/blog/best-data-pipeline-tools/) on the market.

Unlike Airflow, which supports any production environment, Dagster concentrates on cloud services and supports modern data stacks. Being cloud-native and container-native, the solution simplifies scheduling and execution. Dagster was created with specific goals in mind: designing ETL data pipelines, implementing machine learning pipelines, and managing data-driven systems.

### Apache NiFi

Another Apache product is [NiFi](https://nifi.apache.org/). Even though it's also dedicated to data workflow management, it differs from Apache Airflow in many aspects. First of all, Apache NiFi is a completely web-based tool with a drag-and-drop interface and no coding. It's easy to add and configure processors as graph nodes of a data workflow, set up routing directions as graph edges, and specify operations on data (filtering, joining, etc.).

Apache NiFi is often perceived as an ETL tool for managing data in batches as well as data streams. The latter concept is not supported in Airflow, so NiFi is often chosen by companies that need to deal with streaming data.

### Azure Data Factory

While Apache Airflow focuses on creating tasks and building dependencies between them for workflow automation, [Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory) is geared towards integration tasks. It is a good fit for constructing [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) pipelines for data migration and integration across platforms.
In fact, [Data Factory is more similar to Skyvia](https://skyvia.com/etl-tools-comparison/azure-data-factory-vs-apache-airflow) than to Apache Airflow. Both tools are hosted on the Azure cloud and thus offer excellent data security and protection to their customers. Each solution grants scalability and applies a pay-as-you-go pricing model. There are also some differences: Data Factory offers 90+ pre-built connectors (on-premises services as well as cloud apps), while Skyvia offers 160+.

### Google Cloud Dataflow

Another data orchestration solution is [Dataflow](https://cloud.google.com/dataflow), part of Google Cloud. Its concept is slightly different from Airflow's and Skyvia's, though the objective remains the same: to organize data and take advantage of it. Google Cloud Dataflow is highly focused on real-time streaming and batch processing of data from web resources, IoT devices, etc. Data gets cleansed and filtered as Dataflow implements Apache Beam to simplify large-scale data processing. The prepared data is then ready for analysis in Google BigQuery or other analytics tools for prediction, personalization, and other purposes.

### AWS Step Functions

[Step Functions](https://aws.amazon.com/step-functions/) was designed by Amazon for orchestrating workflows. The service introduces the concepts of 'step' and 'state' for workflow automation. Steps define the dataflow logic, with each action corresponding to a specific process stage, while states represent the condition of the dataflow process at each step and are recorded in a log. The service suits many use cases, such as building ETL pipelines, orchestrating microservices, and managing high workloads. AWS Step Functions is particularly efficient when combined with other AWS solutions: Lambda for computing, DynamoDB for storage, Athena for analytics, SageMaker for machine learning, etc.
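The 'step'/'state' idea behind Step Functions can be sketched in a few lines of pure Python. Real Step Functions state machines are defined in Amazon States Language (JSON), so this toy version is purely illustrative, with all names invented:

```python
# Toy state machine: named states do work on the data and name their
# successor; execution logs the condition of the process at each step.

def run_state_machine(states, start, data):
    """states: {name: handler(data) -> (new_data, next_state or None)}.
    Runs from `start` until a handler returns no successor."""
    log, state = [], start
    while state is not None:
        handler = states[state]
        data, next_state = handler(data)  # do this step's work
        log.append(state)                 # record which state ran
        state = next_state                # transition to the next state
    return data, log


states = {
    "Extract": lambda d: (d + ["extracted"], "Load"),
    "Load": lambda d: (d + ["loaded"], None),  # terminal state
}
result, log = run_state_machine(states, "Extract", [])
print(result, log)
```

The returned log plays the role of the execution history that Step Functions records for each state transition.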
## Summary

Businesses operating in various industries often need to automate regular daily processes. Marketing departments may want to analyze leads and improve campaign management with automation solutions. Sales teams would benefit from streamlining NDA requests and RFP approvals with software solutions. Engineering specialists would take advantage of automatic feature request reception with further sorting and analysis.

All this is possible with workflow automation tools such as Skyvia, Airflow, Luigi, Dagster, Prefect, etc. Check the detailed [tools comparison](https://skyvia.com/etl-tools-comparison/) charts with side-by-side feature and parameter comparisons. Consider Skyvia for workflow automation and other data-backed operations with no coding. Try Skyvia today!

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/). With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
# AWS Glue Alternatives: Top Competitors and ETL Solutions

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), June 30, 2023

In modern reality, smooth data exchange between different systems is like fresh air that lets any business breathe. Every successful company uses data integration for analysis, more effective decision-making, and qualified data management. Let's review in numbers why data integration is unavoidable, using [predictions](https://zigiwave.com/data-integration-market-predictions/) from popular market research sources:

- **Fortune Business**: the market will grow to USD 29.16 billion by 2029, up from USD 11.94 billion in 2022, at a CAGR of about 13.6%.
- **Markets & Markets**: the market is predicted to grow from USD 11.6 billion in 2021 to USD 19.6 billion by 2026.
- **Market Research**: the data integration market is predicted to reach about USD 24.3 billion by 2028, growing at a CAGR of 12.8% from 2021 to 2028, up from USD 9.26 billion in 2020.
- **Research and Markets**: expects the market to grow to USD 22.1 billion by 2027, rising at a 10.4% CAGR.

Impressive, isn't it? So, now let's see which data integration systems can help your business join these successful statistics. Why not start with AWS Glue?

Table of Contents
- Meet AWS Glue
- Advantages of AWS Glue
- What about AWS Glue imperfections?
- Discover AWS Glue's Successful Use Cases
- Skyvia – Leader Among AWS Glue Alternatives
- The Benefits of Skyvia
- More AWS Glue Alternatives: Matillion, Integrate.io, Stitch, Boomi, Azure Data Factory

## Meet AWS Glue

[AWS Glue](https://aws.amazon.com/glue/) is a serverless data integration service offered by Amazon Web Services that simplifies discovering, preparing, moving, and integrating data from multiple sources for analytics, ML, and other apps. AWS Glue makes ETL processes more efficient and cost-effective by automating many steps. With AWS Glue, you can easily create and manage ETL jobs as well as build and maintain data catalogs.

## Advantages of AWS Glue

The first step is to understand why you need this platform: what pains it solves and what you're ready to pay for.

- The solution provides an easy-to-run, serverless, cost-effective environment.
- Integration with AWS services like Amazon Aurora, Amazon RDS engines, Amazon Redshift, Amazon S3, and 70+ other data sources enables solid data management.
- AWS Glue's Data Catalog metadata storage lets you request and transfer data efficiently.
- Job run monitoring allows workload review and cost analysis.

## What about AWS Glue imperfections?

- **Built-in connector limitations.** The native built-in connectors may not support some data stores you need, but you can use AWS Glue Studio to access the ones not natively supported.
- **Semi-structured data handling.** Schema inference for this data type may sometimes cause data processing errors, because the crawler technology isn't always accurate for semi-structured data.
- **Job scheduling and dependency management pitfalls.** Scheduling isn't very flexible, especially for complex ETL workflows, and handling inter-job dependencies, retrying failed jobs, and filtering out incorrect data can be problematic. More scheduling and dependency trouble comes when using AWS Glue with the Amazon S3 and IAM services. So, be careful when planning the data integration and testing the workflow.

## AWS Glue widespread use cases

The system helps simplify ETL pipeline development so that you may discover, prepare, and integrate data from multiple sources.

- **Data warehousing and analytics.** AWS Glue is helpful for data extraction, transformation, and loading for further analytics, ML, or other apps.
- **Data lake creation and management.** The platform is well suited to this area, allowing data discovery, preparation, and integration from a set of sources. It's a popular solution for ETL and data streaming.

## Skyvia – Leader Among AWS Glue Alternatives

We usually look for simplicity and cost-effectiveness when weighing our business needs, so let's review [Skyvia](https://skyvia.com/) as a solution covering both. What can Skyvia offer compared to AWS Glue?

- More flexible data processing (data ingestion, data sync, workflow automation, etc.).
- A no-code, easy-to-use, and cost-effective tool.
- A vast set of sources and destinations to select from.
- Competitive [pricing](https://skyvia.com/pricing/), including a freemium model, compared to [AWS Glue](https://aws.amazon.com/glue/pricing/) (remember the hourly rate for crawlers and ETL jobs there).
- Effectiveness and reliability: according to the [FeaturedCustomers Spring 2022 Customer Success Report Ranking](https://www.featuredcustomers.com/press-releases/the-top-etl-software-vendors-according-to-the-featuredcustomers-spring-2022-customer-success-report) and [G2 customer satisfaction (4.7 out of 5)](https://www.g2.com/products/skyvia/reviews#reviews), Skyvia is one of the top performers in data integration among tools with similar functionality.

## Skyvia's benefits

The main advantage of the [Skyvia](https://skyvia.com/) platform is its simplicity. You can set up data transfer in a few minutes, with no need for additional tools or learning.

Security and compliance with industry standards:
- AES 256-bit encryption.
- SSL encryption for data transfer.
- GDPR, HIPAA, and PCI DSS standards.

Another benefit is scheduled data backup: you'll have access to the data even in case of loss or system failure. With Skyvia, you can search, view, export, and restore backed-up data in a few clicks without wasting time and resources.

## More AWS Glue Alternatives

Skyvia is simple to use, saving you time and money, and is as secure as possible. Still, let's compare it and AWS Glue with other data integration market players. We'd recommend paying attention to the following ones:

- Matillion
- Integrate.io
- Stitch
- Boomi
- Azure Data Factory

### Matillion

**G2 customer satisfaction:** [4.3 out of 5](https://www.g2.com/products/matillion-2023-06-26/reviews#reviews), based on 38 reviews.

**Key features**

[Matillion](https://www.matillion.com/) offers robust data integration capability with a user-friendly interface. It's a cloud-based ETL solution that simplifies data extraction, transformation, and BI. Along with 140+ connectors, it enables connector creation for any REST API source. Keep in mind that Skyvia focuses on cloud-to-cloud integration, while AWS Glue is limited to AWS.
The areas to work with

- Data warehousing and ETL.
- Cloud-based data integration.

| Parameter | AWS Glue | Skyvia | Matillion |
| --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, streaming. | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation. | Data ingestion, data transformation, and business intelligence. |
| Skill level | Low-code and no-code solutions, or coding in Python or Scala for complex scenarios. | No-code and easy-to-use wizard. | Requires technical background for ETL; no-code wizard for Data Loader. |
| Advanced ETL capabilities | Job bookmarking, parallel execution. | Visual ETL data pipeline designer with data orchestration capabilities. | No. |
| Pricing | Pay-as-you-go. No minimum contract term. | Volume-based and feature-based pricing. Freemium model allows starting with a free plan. | Consumption-based pricing. |

[ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/)

Integrate.io

G2 customer satisfaction: 4.3 out of 5, based on 187 reviews.

Key features

Compared to other platforms, [Integrate.io](https://www.integrate.io/) is simple and user-friendly enough to handle data from Amazon Vendor, Amazon Seller, and Instagram. It supports ETL for semi-structured and structured data but isn't as intuitive. Compared with [Skyvia](https://skyvia.com/etl-tools-comparison/integrateio-alternative-skyvia), there are fewer connectors (just 150+), and there's no free pricing plan.

The areas to work with

- E-commerce and online marketplaces.
- Supply chain and logistics.
- CRM and marketing automation.

| Parameter | AWS Glue | Skyvia | Integrate.io |
| --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, streaming. | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation. | ETL, ELT, and Reverse ETL. |
| Skill level | Low-code and no-code solutions, or coding in Python or Scala for complex scenarios. | No-code and easy-to-use wizard. | Low-code and no-code solutions. |
| Advanced ETL capabilities | Job bookmarking, parallel execution. | Visual ETL data pipeline designer with data orchestration capabilities. | Advanced database API features for Enterprise plans. |
| Pricing | Pay-as-you-go. No minimum contract term. | Volume-based and feature-based pricing. Freemium model allows starting with a free plan. | ETL/Reverse ETL: starts at $15,000/year. ELT/CDC: starts at $199/month. 14-day free trial. |

[ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/)

Stitch

G2 customer satisfaction: [4.5 out of 5](https://www.g2.com/products/talend-stitch/reviews#reviews), based on 67 reviews.

Key features

[Stitch](https://ua.stitchdata.com/) is a user-friendly data ingestion ELT platform supporting 130+ connectors and real-time data replication. With a wide range of connectors and a focus on efficiency, businesses can easily consolidate and analyze their data for valuable insights. However, [Skyvia](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia) is more flexible in data management (ELT, ETL, and Reverse ETL), and there's a free [pricing](https://skyvia.com/pricing/) plan.

The areas to work with

- The fashion industry.
- Home decor and design.
- The automotive and aeronautical industries.

| Parameter | AWS Glue | Skyvia | Stitch |
| --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, streaming. | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation. | Data ingestion, ELT. |
| Skill level | Low-code and no-code solutions, or coding in Python or Scala for complex scenarios. | No-code and easy-to-use wizard. | No coding required. |
| Advanced ETL capabilities | Job bookmarking, parallel execution. | Visual ETL data pipeline designer with data orchestration capabilities. | No. |
| Pricing | Pay-as-you-go. No minimum contract term. | Volume-based and feature-based pricing. Freemium model allows starting with a free plan. | Volume-based pricing on newly added or edited rows. |

[ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/)

Boomi

G2 customer satisfaction: [4.3 out of 5](https://www.g2.com/products/boomi/reviews#reviews), based on 249 reviews.

Key features

[Boomi](https://boomi.com/) is a low-code ETL solution supporting 90+ connectors.
It can map, transform, and validate data, as well as monitor and report on pipelines in real time. The service has an easy graphical user interface, and users give it positive feedback. It's good enough but, compared with [Skyvia](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia), not as robust, according to the sources. So, if you need more, keep that in mind.

The areas to work with

- Connecting AtomSphere apps.
- API design and management.
- Workflow automation.

| Parameter | AWS Glue | Skyvia | Boomi |
| --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, streaming. | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation. | Data integration, ETL, workflow automation. |
| Skill level | Low-code and no-code solutions, or coding in Python or Scala for complex scenarios. | No-code and easy-to-use wizard. | Low-code solution. |
| Advanced ETL capabilities | Job bookmarking, parallel execution. | Visual ETL data pipeline designer with data orchestration capabilities. | The Boomi Integration visual design interface for process creation. |
| Pricing | Pay-as-you-go. No minimum contract term. | Volume-based and feature-based pricing. Freemium model allows starting with a free plan. | Pricing depends on the number of used connectors, workflows, environments, etc. 30-day free trial. |

[ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/)

Azure Data Factory

G2 customer satisfaction: [4.5 out of 5](https://www.g2.com/products/azure-data-factory/reviews#reviews), based on 62 reviews.

Key features

[Azure Data Factory](https://azure.microsoft.com/en-us/products/data-factory) is a cloud-based ETL, ELT, and Reverse ETL solution with a user-friendly interface; its major difference from other tools is scalability and pricing. It offers 90+ connectors that cover on-premises and cloud-based data. We should also mention that, being a Microsoft product, Azure Data Factory takes customer data privacy and security very seriously.

The areas to work with

- Manufacturing (data integration and transformation).
- Retail (data orchestration and workflow automation).
- Banking and finance (analytics and reporting).

| Parameter | AWS Glue | Skyvia | Azure Data Factory |
| --- | --- | --- | --- |
| Focus | ETL, ELT, Reverse ETL, streaming. | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation. | ETL, ELT, Reverse ETL, streaming. |
| Skill level | Low-code and no-code solutions, or coding in Python or Scala for complex scenarios. | No-code and easy-to-use wizard. | Low-code and no-code solutions; coding in various languages for complex scenarios. |
| Advanced ETL capabilities | Job bookmarking, parallel execution. | Visual ETL data pipeline designer with data orchestration capabilities. | Importing [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) packages, calling external processes from the pipeline, using Hadoop Streaming. |
| Pricing | Pay-as-you-go. No minimum contract term. | Volume-based and feature-based pricing. Freemium model allows starting with a free plan. | Always Free for 5 low-frequency jobs. Included in the Azure Free Trial with $200 credit for 30 days. |

[ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/)

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Top BI Visualization Tools for 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | July 30, 2024

Table of Contents

- What Are BI Visualization Tools?
- Best BI Visualization Tools
- How to Choose the Best Tool for Your Business
- Skyvia's Integration with BI Visualization Tools
- Conclusion

What Are BI Visualization Tools?

Have you ever wondered how those super cool graphs and charts in business reports come to life? That's where Business Intelligence (BI) visualization tools come in. BI visualization tools transform raw data into easy-to-understand, eye-catching visual formats like bar charts, pie charts, heat maps, and more. They convert overwhelming amounts of data into visuals, simplifying the data analysis process. Such tools help businesses see trends, patterns, and insights they might miss by looking at just spreadsheets.
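As a toy illustration of what these tools automate, the aggregate-then-visualize step can be reduced to a few lines of Python; the sales figures below are invented, and real BI tools of course render interactive charts rather than ASCII bars.

```python
from collections import defaultdict

def text_bar_chart(rows, scale=1.0):
    """Aggregate (category, value) rows and render a crude ASCII bar chart.

    Real BI visualization tools automate exactly this aggregate-then-draw step,
    just with far richer output.
    """
    totals = defaultdict(float)
    for category, value in rows:
        totals[category] += value
    width = max(len(c) for c in totals)
    lines = []
    # Largest bar first, so trends jump out immediately.
    for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        bar = "#" * int(total * scale)
        lines.append(f"{category:<{width}} | {bar} {total:g}")
    return "\n".join(lines)

# Invented sales data with duplicate regions to aggregate.
sales = [("North", 12), ("South", 7), ("North", 5), ("West", 9)]
print(text_bar_chart(sales, scale=0.5))
```

Even this crude chart makes "North leads, South trails" obvious at a glance, which is the whole pitch of visualization over raw spreadsheets.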
Benefits for Businesses

Enhanced Decision-Making

It's impossible to make the right decision without seeing all the business data. BI visualization tools solve this problem by simplifying the process of spotting trends and comparing metrics.

Uncover Hidden Insights

Sometimes, the most valuable insights are hiding in plain sight. Such tools turn complex data into visual stories so businesses can discover patterns and correlations they might have missed. It's like having a data treasure map to find bigger revenue.

Improved Operational Efficiency

Time is money, and BI tools save plenty of both by streamlining the data analysis process, making it quicker and more efficient. No more sifting through endless spreadsheets. With automated reports and real-time dashboards, businesses can keep their finger on the pulse and respond to changes swiftly. For companies already working within a particular ecosystem (Microsoft, Google, etc.), selecting a tool built for that ecosystem is a good idea.

Better Collaboration

Such solutions simplify creating and sharing dashboards, ensuring everyone sees the same information while enabling data-driven decision-making across the company.

Best BI Visualization Tools

Power BI

[Power BI](https://www.microsoft.com/en-us/power-platform/products/power-bi) is created by Microsoft, so it seamlessly integrates with other Microsoft products like Excel, Azure, and SQL Server. If a company is already in the Microsoft ecosystem, Power BI is a natural extension of the existing tools. It helps turn data into interactive and visually appealing dashboards and reports. The solution is strong in data modeling and allows connecting to various data sources, cleaning and transforming data, and building complex models. It's perfect for creating a single source of truth, ensuring everyone in the organization is working with accurate and up-to-date information.
Benefits

- Seamless integration with other Microsoft products, like Excel and Azure, makes data import and export a breeze for companies already using these.
- Power BI offers various visualizations, including bar charts, line graphs, pie charts, maps, scatter plots, and more. Users can create interactive dashboards that allow viewers to drill down into specific data points, filter information dynamically, and explore data in more detail. These dashboards provide real-time insights and help teams make informed decisions quickly.
- Power BI scales to meet business needs regardless of size. It's designed to handle big data and can grow with each business.

Drawbacks

- Power BI can be challenging for beginners. There's much to learn, from data modeling to creating complex visualizations. It might take some time and training to get the hang of it. It's a good idea to use the [Power BI community](https://community.fabric.microsoft.com/t5/Power-BI-forums/ct-p/powerbi) for help.
- Initial setup and configuration can be tricky, especially for businesses without a dedicated IT team.

Tableau

[Tableau](https://www.tableau.com/) allows the creation of interactive visuals with just a few clicks. It's perfect for those who need to dive deep into data and want the freedom to explore it from different angles. Tableau can connect to various data sources, both on-premises and cloud. Whether users are pulling data from Excel, SQL databases, or cloud services like AWS and Google Cloud, Tableau easily handles it. Its drag-and-drop functionality makes it a favorite among data analysts and visualization experts.

Benefits

One of Tableau's biggest strengths is its vibrant and supportive [community](https://www.tableau.com/community). There are many resources, forums, and groups where users can find help, share ideas, and learn best practices. Tableau allows real-time data analysis.
So, users can connect to live data sources and see up-to-date information in their dashboards to make timely and informed decisions. Drawbacks The learning curve can be steep, and getting the most out of Tableau might require some training. Tableau excels in data visualization but is not the best choice for handling large-scale production data. Users might need to pair Tableau with other tools for massive datasets or highly complex data operations to do the job efficiently. Looker Studio [Looker Studio](https://lookerstudio.google.com/u/0/navigation/reporting) (formerly Google Data Studio) is a free, web-based tool for creating dynamic and interactive dashboards and reports. The tool is convenient for companies already involved in the Google ecosystem. Its user-friendly drag-and-drop interface is suitable even for non-techs. The solution is perfect for turning data into clear, compelling stories anyone can understand. Benefits The tool is accessible for beginners and powerful enough for more advanced users. It\u2019s free and accessible to businesses of all sizes. You don\u2019t need to worry about costly licenses or subscriptions, making it a great option for startups and small businesses. It seamlessly connects with Google Analytics, Google Sheets, Google Ads, and more. Drawbacks While Looker Studio is great for visualizations, it might not have all the advanced data manipulation and transformation features that other BI tools offer. If you need complex data modeling, it may be lacking. Users report challenges with formatting and customization options. Sometimes, getting reports to look exactly how users need them can be tricky, especially compared to more mature BI tools. Looker [Looker](https://cloud.google.com/looker?hl=en) is part of the Google Cloud Platform, helping businesses explore and analyze data like a pro. Its unique modeling language, LookML, allows users to define complex data relationships and create reusable data models. 
Looker smoothly integrates with various SQL databases and cloud storage solutions, allowing companies to work with data from multiple sources. It provides robust analytics and empowers users to build sophisticated data models that drive meaningful insights.

Benefits

- LookML allows the data team to define intricate data relationships and create reusable models, making it easier to maintain consistency and accuracy across all reports and dashboards.
- Looker offers pre-built templates known as Looker Blocks: ready-to-use pieces of code that users can plug into their projects, saving time and effort. They cover common analytics needs and can be customized to fit specific requirements.
- Looker handles large datasets and complex queries, making it a good fit for enterprises with extensive data needs. Its ability to connect to a wide range of data sources ensures that data can be pulled in from wherever it is needed.

Drawbacks

- While Looker is fantastic for data modeling and analytics, its visualization options are limited compared to other BI tools like Tableau or Power BI.
- Looker often requires additional tools or software before starting the analysis.

Note: While Looker and Looker Studio are valuable BI tools, they cater to different needs and user bases. Looker Studio is perfect for quick, user-friendly data visualization, especially for those already using Google services. Looker, in turn, offers advanced data modeling and analytics abilities, making it a better fit for larger enterprises and more complex data environments.

Chartio

Atlassian acquired [Chartio](https://chartio.com/blog/atlassian/) in February 2021, with all its data visualization abilities. So, if a company uses Atlassian products like Jira, Confluence, and Trello, it now has more powerful tools to understand and use its data. Chartio fits perfectly into Atlassian's ecosystem, enhancing its collaboration and productivity tools suite with robust data visualization and analytics features.
Benefits Chartio\u2019s user-friendly interface makes it easy to set up and use, even for non-technical users. Without extensive training, users can quickly connect data sources, build visualizations, and share insights with the team. As part of Atlassian, Chartio continues to offer an affordable option for small to medium-sized businesses or teams with limited budgets. They get top-notch data visualization abilities without a hefty price tag. With Chartio integrated into Atlassian\u2019s suite, teams can collaborate more effectively. They can create dashboards that pull data from Jira projects, visualize Confluence data, and effortlessly share insights across the organization. Drawbacks While Chartio is fantastic for basic to intermediate data visualization, advanced data analysts might need wider data modeling abilities and complex analytics features. Although Chartio offers a range of visualization options, it may not provide the same level of customization and intricate design features as other BI tools. Businesses needing customized and interactive dashboards may encounter some limitations. Sisense for Cloud Data Teams [Sisense for Cloud Data Teams](https://dtdocs.sisense.com/) , formerly known as Periscope Data, is for those who live and breathe SQL. It empowers data teams to analyze and visualize data efficiently using their favorite languages, like SQL, Python, and R. If you\u2019re an SQL expert looking for a robust platform to handle complex data analysis and create stunning visualizations, Sisense has got you covered. This tool integrates with various cloud data warehouses, making it perfect for teams working with big data. It allows businesses to explore data deeply and turn it into actionable insights. Benefits Sisense excels in handling complex queries and large datasets, making it efficient for deep data analysis. Users can write advanced SQL queries to pull and transform data, ensuring they get the necessary insights. 
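Sisense itself connects to cloud warehouses rather than SQLite, but the SQL-first pull-and-transform workflow it encourages can be shown with a self-contained sketch; the table and column names below are made up for the example, and an in-memory SQLite database stands in for the warehouse.

```python
import sqlite3

# Toy stand-in for a warehouse table; in Sisense the same query would run
# against Redshift, BigQuery, Snowflake, and the like.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("US", 200.0), ("US", 50.0), ("APAC", 70.0)],
)

# The kind of pull-and-transform query an analyst would then chart.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue, COUNT(*) AS order_count
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
    """
).fetchall()

for region, revenue, n in rows:
    print(f"{region}: {revenue:.2f} across {n} orders")
```

The insight-bearing work (grouping, aggregating, ordering) lives entirely in SQL, which is exactly the skill set these tools reward.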
Sisense integrates with various cloud data warehouses, such as Amazon Redshift, Google BigQuery, and Snowflake, ensuring easy access to the data sources and the ability to start analyzing without a hitch.

Drawbacks

Users unfamiliar with SQL might find it challenging to use this tool effectively. While Sisense's power and flexibility are great for advanced users, they can be overwhelming for those who are less experienced with data analysis or who prefer more straightforward, drag-and-drop interfaces.

How to Choose the Best Tool for Your Business

Picking the right BI visualization tool for your business can seem like a daunting task. The step-by-step guide below outlines the steps to choosing the perfect tool.

Identify Your Business Needs

- What kind of data is your company working with? (e.g., sales data, customer data, financial data).
- Who will be using the tool? (e.g., data analysts, marketing teams, executives).
- What are your key objectives? (e.g., real-time insights, detailed reporting, data integration).

Determine Your Budget

BI tools come in all price ranges, from free to premium options. Determine your budget early on to narrow down the choices.

Evaluate Ease of Use

Consider who will use the tool and their level of technical expertise. Choose a tool that matches your team's skill level to ensure smooth adoption.

Check Integration Capabilities

Ensure the BI tool can integrate with the existing data sources and systems.

Analyze Data Processing Needs

Determine whether you need real-time data processing or whether [batch processing](https://skyvia.com/blog/batch-etl-processing/) is sufficient.

Assess Visualization Features

Look at the types of visualizations each tool offers. What you need depends on whether you want simple charts and graphs or more complex, interactive dashboards.

Consider Scalability

Think about your future needs. Ensure the BI tool can handle increasing data volumes and more complex queries as your business expands.
Evaluate Support and Community

Check out the level of support and the user community available for the tool. A strong community and good customer support can be invaluable, especially when you're just starting.

Take Advantage of Trials and Demos

Many BI tools offer free trials or demos. Use these opportunities to test the tools in your environment and see how they handle data.

Gather Feedback

Gather feedback from the team. Make sure the tool you choose meets the team's needs and preferences.

Skyvia's Integration with BI Visualization Tools

Whatever tool you choose, you must understand that it requires consolidated, cleaned data. Usually, the data is scattered across different systems. However, Skyvia's integration abilities ensure that the data flows effortlessly between data sources and consolidates in DWHs for further analysis within BI tools. This flexibility means a company can choose the best tool for its needs without worrying about compatibility issues.

[Skyvia](https://skyvia.com/) simplifies [data integration](https://skyvia.com/data-integration) and backup processes. It's a powerful, no-code [ETL](https://skyvia.com/learn/what-is-etl), ELT, and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) data integration platform that provides [180+](https://skyvia.com/connectors) pre-made connectors, including cloud apps, databases, data warehouses, and CRMs. According to the latest [G2 Crowd rating](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1), it's among the top 20 easiest-to-use [ETL tools](https://skyvia.com/blog/etl-tools/).

Additional Benefits of Using Skyvia

Scalability and Flexibility

Skyvia can easily scale to accommodate more data sources, larger datasets, and more complex data workflows as the business expands.
Cost Efficiency

By using Skyvia's cloud infrastructure, organizations can reduce the [costs](https://skyvia.com/pricing) of data integration and management, freeing up resources for other strategic initiatives.

Enhanced Data Security

Skyvia uses encryption to keep data secure in transit and at rest, providing secure data handling and storage.

Scheduling

Skyvia's automation features allow scheduled data integration tasks to run at specific times, ensuring that data is always up-to-date without manual effort. Automation reduces the risk of human error and frees the team to focus on more strategic tasks.

Find out more! [Here](https://skyvia.com/case-studies/horizons) is a success story: how Horizons elevated its data insights and visualization abilities with Skyvia.

Conclusion

BI visualization tools like Power BI, Tableau, Looker, and Sisense for Cloud Data Teams stand out for their unique strengths in data integration, visualization, and analysis. Skyvia emerges as a robust platform, seamlessly integrating with these top BI tools to ensure data flows effortlessly and is always ready for analysis. The platform enables centralized data management and enhanced data security, making it an indispensable part of a business's data strategy, regardless of company size.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Best 15 Big Data Tools for 2025: A Detailed Review

By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | November 15, 2023

We're currently living in the epoch of Industrial Revolution 4.0. It has introduced cloud computing, IoT, robotics, artificial intelligence, and data analytics. These things relate to real-time data collection and processing, and all that comes in large amounts. The concept of [big data](https://www.oracle.com/big-data/what-is-big-data/) evolved during this industrial revolution, where 'big' stands not only for the amount but for the variety and velocity at which data streams flow.
Companies need lifebuoys to stay afloat in these streams and prevent themselves from sinking. Luckily, big data tools come as life vests for organizations. This article focuses on various types of big data software, ranging from storage and management solutions to data analytics tools. It also discusses the necessity of implementing big data analytics tools for organizational advancement.

Table of Contents

- Types of Big Data Software: What You Need for Your Business
- 15 Most Popular Big Data Tools (Skyvia, RapidMiner, Apache Hadoop, Cassandra, Power BI, Tableau, Zoho Analytics, Cloudera, Qubole, MongoDB, Apache Spark, datapine, Apache Storm, Matillion, Talend)
- Different Types of Big Data Analytics
- Uses and Examples of Big Data Analytics
- Conclusion

Types of Big Data Software: What You Need for Your Business

One of the 'V's characterizing big data stands for 'variety,' and it refers not only to the myriad of data types but also to the diversity of big data solutions. So, let's look at the most widely used software categories for working with massive amounts of data.

- Data Storage and Management. This category mainly involves big data cloud solutions and traditional relational databases. Modern cloud-based data storage and management tools include the well-known Google Cloud, AWS, IBM Cloud, and NoSQL databases such as Cassandra.
- Data Processing. This category of big data solutions describes platforms that perform calculations and other manipulations on data. A popular [data processing tool](https://skyvia.com/blog/best-data-processing-tools/), Apache Hadoop, takes data from distributed locations and applies the MapReduce paradigm to it.
- [Data Integration and ETL](https://skyvia.com/blog/data-integration-and-etl/) Tools. Skyvia, Matillion, and Talend are some of the major players in the [data integration](https://skyvia.com/data-integration/import) market. They are perfect for fast data exchange between tools or data consolidation on a single platform.
- Data Visualization and Analysis. One of the most famous big data visualization and analytics tools is Tableau. Other popular big data analytics solutions are Apache Spark, Power BI, [Looker](https://cloud.google.com/looker?hl=en), and RapidMiner. Those are great helpers for data analysts, BI specialists, and top managers. Apache Storm is another analytics tool suitable for working with real-time data flows in the financial and banking sectors.
- Workflow Management. Such tools are divided into two main subcategories: data orchestration with engineering pipelines (Apache Airflow) and task management ([Monday.com](https://skyvia.com/connectors/monday)).

Even though there's a rigid classification of big data tools, most of them still carry out several functions simultaneously. Thus, Tableau is not only a data visualization service but also a powerful analytics solution. [Skyvia](https://skyvia.com/) is used not only for building [ETL pipelines](https://skyvia.com/learn/etl-pipeline-meaning) but also for workflow automation. See what potential Skyvia has and which useful functions it offers for your business!

The list of multipurpose platforms can expand further and further. However, businesses don't have much time to decide which software instruments to use. That's why we have simplified things by presenting the most promising and effective big data tools for businesses.

15 Most Popular Big Data Tools

Given the classification above, there's a variety of big data solutions. Not all of them need to be present in every business. Large organizations might implement more tools, while small businesses might do fine with only a few applications. So here we present the 15 most popular big data tools, most of which provide broad functionality for dealing with massive amounts of data.
Skyvia One of the best big data [ETL tools](https://skyvia.com/blog/etl-tools/) is [Skyvia](https://skyvia.com/) \u2013 the most [user-friendly cloud-based platform](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) for building simple and complex data integration scenarios. Skyvia\u2019s [Integration](https://skyvia.com/data-integration/) components are Import, Export, Synchronization, Replication, Data Flow, and Control Flow. This tool also allows performing data management tasks with SQL and creating OData services. Functionality Automates data collection from disparate cloud sources and replicates it to a database or data warehouse. Sends consolidated data from a data warehouse to business applications (reverse ETL). Imports/exports data to/from cloud apps, databases, and CSV files. Simplifies data migration between applications. Backs up and protects business-critical data. Allows sharing data in real-time via REST API to connect with multiple OData consumers. Allows managing any data from the browser by querying it using SQL or intuitive visual Query Builder. Pricing Free plan Basic plan starting from $15/month Standard plan starting from $79/month Professional plan starting from $399/month Enterprise plan matching particular regulatory requirements at a custom price. To select a plan that suits you best, contact [Skyvia Sales Team](https://skyvia.com/schedule-demo) to schedule a demo and get further details. RapidMiner [RapidMiner](https://skyvia.com/data-integration/rapidminer) is a contemporary platform allowing users to build machine learning models primarily. It offers a broad range of data-related procedures, including loading, preprocessing, transformation, and visualization. Industries that heavily rely on insights provided by machine learning (healthcare, finance, automotive, etc) can consider RapidMiner as a perfect helper for their needs. Functionality Imports data from databases, SaaS platforms, cloud applications, etc. 
- Provides necessary data transformation and cleaning options.
- Splits data into training and test sets automatically when working with machine learning techniques.
- Supports fundamental algorithms for supervised (decision trees, SVM, kNN, linear regression) and unsupervised (k-means clustering, DBSCAN) machine learning.
- Allows advanced users to add code in the Python or R programming language.

**Pricing**

To get the cost of RapidMiner, it's necessary to fill in the form on the official website.

### Apache Hadoop

The widely known [Apache Hadoop](https://hadoop.apache.org/) consists of modules, each carrying out its own function on big data. The HDFS component stores data on distributed clusters, ensuring high reliability and fault tolerance. The MapReduce engine processes the distributed data. The YARN module schedules jobs and manages resources.

**Functionality**

- Distributed storage of data on commodity hardware.
- Parallel processing that dramatically speeds up the entire data pipeline.
- Text mining and semantic analysis using the MapReduce module.
- Log analysis based on detailed log files generated by apps.

**Pricing**

Hadoop is free.

### Cassandra

Named after the prophetess of Greek myth, the [Cassandra](https://cassandra.apache.org/_/index.html) NoSQL database also has its own superpower: it stores data of any format and structure. This solution is popular in industries producing objects beyond the standards of traditional relational databases.

**Functionality**

- Handles data of any structure and type, so it's possible to store text as well as images, sound, and video.
- Scales automatically on demand.
- Ensures tremendous fault tolerance due to its replication strategies.

**Pricing**

Cassandra itself is open source; managed Cassandra offerings implement dynamic pricing models corresponding to the preferred computing power and storage capacity.

### Power BI

Microsoft's tool, [Power BI](https://skyvia.com/data-integration/powerbi), has conquered the market due to its ability to create visually appealing dashboards.
This service is widely used by data scientists, business analysts, and business intelligence professionals across multiple industries. It's also popular owing to its compatibility with other Microsoft products.

**Functionality**

- Supports real-time stream analytics.
- Connects to various sources, including databases, online services, and files.
- Creates interactive reports with many visual options.
- Provides a wide range of data exploration functions: slicing, filtering, drill-through, etc.
- Allows users to share reports with specific user groups to enhance collaboration.
- Integrates machine learning models for predictive analysis.

**Pricing**

Microsoft offers a free version of Power BI with many features. There's also a Pro version for $9.99/month and a Premium version for $20/month with advanced functions.

### Tableau

[Tableau](https://skyvia.com/data-integration/tableau) is another analytical tool with a strong focus on visualization. It's widely used not only by professionals in data-driven industries but also by non-technical experts, students, teachers, and so on. Tableau helps people better understand complex things and encourages them to take advantage of data.

**Functionality**

- Ensures secure sharing of data in the form of sheets and dashboards with others.
- Provides powerful [data transformation tools](https://skyvia.com/blog/best-data-transformation-tools/) for cleaning, duplicate removal, filtering, and missing-value handling.
- Has pre-installed geographical maps, making it possible to add different layers to them.
- Offers desktop, online, and mobile versions of the product.

**Pricing**

- Tableau Viewer costs $15/month.
- Tableau Explorer costs $43/month.
- Tableau Creator costs $75/month.

### Zoho Analytics

The modern BI and data exploration service [Zoho Analytics](https://skyvia.com/data-integration/zoho-analytics) is part of the Zoho platform. It might be particularly suitable for those who use other Zoho tools in their daily workflow.
Zoho Analytics is used for data analysis and visualization across marketing, sales, and finance departments.

**Functionality**

- Prepares data for analysis by cleansing and enriching it.
- Augments data-driven insights with AI and ML technologies.
- Provides a unique language for performing complex calculations and aggregations.

**Pricing**

The monthly cost of the service depends on the number of users and data records involved.

### Cloudera

Since 2008, [Cloudera](https://www.cloudera.com/) has been helping enterprises manage, process, and analyze large amounts of data. In 2022, the Gartner Magic Quadrant named Cloudera a Leader in cloud database management systems.

**Functionality**

- Allows creating data warehouses and data lakes for storing raw data of any structure.
- Offers advanced data processing with either the embedded CDE module or Hadoop.
- Provides analytics functions either with the embedded CDV module or integrated tools such as Power BI or Tableau.
- Contains libraries for deploying machine learning models and artificial intelligence algorithms.
- Handles real-time data from IoT devices and sensors.

**Pricing**

Cloudera implements a progressive cost model, where the price is estimated based on the cloud capacities and available features.

### Qubole

[Qubole](https://skyvia.com/data-integration/qubole) is a simple and secure data lake platform based on popular cloud providers (AWS, Google Cloud, Oracle Cloud, Microsoft Azure). It's perfect for big data analytics, machine learning, ad-hoc analytics, and cloud engineering.

**Functionality**

- Big data processing in the cloud.
- Building and deployment of ML models.
- Workflow automation with automated scheduling.
- Ad-hoc querying.

**Pricing**

Qubole can be used for free during the 30-day trial period. After that, on-demand payment applies, based on the compute units used.

### MongoDB

Similar to Cassandra, [MongoDB](https://www.mongodb.com/) is also a NoSQL database management system.
It utilizes a document-oriented approach to managing semi-structured and unstructured data. That's why MongoDB can be a good choice in the gaming, entertainment, education, healthcare, logistics, and IoT industries.

**Functionality**

- Supports secondary indexes for query performance optimization.
- Supports ACID transactions, ensuring atomicity, consistency, isolation, and durability.
- Provides data sharding for distributing data across servers.
- Applies advanced data compression mechanisms.

**Pricing**

The cost model depends on the purpose of use:

- Shared – provides basic configuration options for exploring MongoDB at $0.00/month.
- Serverless – suitable for serverless applications with infrequent traffic, with costs starting from $0.10/million reads.
- Dedicated – provides advanced configuration options for deploying applications, starting at $57/month.

### Apache Spark

[Apache Spark](https://spark.apache.org/) is an open-source engine for processing data at scale, including in real time. It supports different programming languages (R, Python, Java, Scala), which makes the engine adaptable for various applications.

**Functionality**

- Batch data processing as well as real-time streaming.
- Querying data with SQL for creating dashboards and reports.
- Training machine learning models at scale.

**Pricing**

Apache Spark is an open-source service that's free to use.

### datapine

[datapine](https://skyvia.com/data-integration/datapine) is a modern big data analytics solution with BI and visualization capabilities. It's favored by many data scientists and BI specialists in multiple industries.

**Functionality**

- Integrates with various data sources (databases, applications, data warehouses).
- Allows users to explore their data.
- Uses predictive analytics to make forecasts.

**Pricing**

datapine offers packages for startups at $219/month as well as for enterprises at a negotiable price.
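As a side note on the processing model behind engines like Apache Spark, described above: it centers on chaining transformations (map, filter) over distributed collections and then aggregating by key. The sketch below imitates that style in plain Python on a toy log dataset; it is illustrative only and does not use the actual PySpark API.

```python
# A toy "dataset" of log lines; in Spark this would be a distributed RDD/DataFrame.
logs = [
    "INFO job started",
    "ERROR disk full",
    "INFO job finished",
    "ERROR network timeout",
]

# Transformation step: map each line to a (log_level, 1) key-value pair.
pairs = [(line.split()[0], 1) for line in logs]

def reduce_by_key(pairs):
    """Aggregate values per key, in the spirit of Spark's reduceByKey."""
    counts = {}
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value
    return counts

print(reduce_by_key(pairs))  # {'INFO': 2, 'ERROR': 2}
```

The real engine applies the same logic in parallel across a cluster, which is what lets the pattern scale to big data volumes.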
### Apache Storm

Originally developed by Twitter, [Apache Storm](https://storm.apache.org/) has always had real-time data streams in mind. It aims to handle petabytes of constantly generated data. Storm is usually used together with other Apache solutions for comprehensive big data processing and analytics.

**Functionality**

- Processes data in parallel threads, the number of which depends on the current load.
- Provides a developer-friendly API.
- Supports multiple programming languages (Java, Ruby).

**Pricing**

Free to use.

### Matillion

[Matillion](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia) is another [data integration software](https://skyvia.com/blog/data-integration-tools/) tool for organizing data engineering pipelines. It's often used to collect data from business apps, transform it if needed, and send it to data warehouses.

**Functionality**

- Automates data workflows.
- Schedules jobs to run at a specific time.
- Ensures batch and real-time data ingestion.
- Provides monitoring and logging of integration processes.

**Pricing**

The service cost depends on the number of credit units (virtual core hours) consumed.

### Talend

[Talend](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia) is an open-source data integration tool for handling various data-related tasks. It contains a set of data collection, processing, and management components. Talend works with cloud data sources as well as on-premises and hybrid platforms.

**Functionality**

- Data preparation
- Pipeline design
- Data inventory
- Data integration
- Data replication

**Pricing**

All prices are available upon contacting the Sales team.

## Different Types of Big Data Analytics

All the tools mentioned above relate to [big data analytics](https://www.ibm.com/analytics/big-data-analytics) in some way – directly or indirectly. Tableau and Power BI are used first-hand by data scientists and analysts.
Meanwhile, [Skyvia](https://app.skyvia.com/#/404) helps to cleanse and transform data before loading it into a DWH for storage and analysis. Data solutions act like an orchestra, where each instrument carries out its specific function for the sake of an ideal sound. In the data world, the result is visible through data analytics that helps businesses reduce operational costs and make better decisions.

To take full advantage of big data analytics, it's worth noting its various types and purposes.

- **Descriptive Analytics.** Shows results as metrics (turnover rate) or statistical indicators (average salary) based on past data from various departments.
- **Diagnostic Analytics.** Its primary purpose is to answer why a specific event has taken place. Diagnostic analytics explores the correlation of the factors describing the event.
- **Predictive Analytics.** Uses large amounts of available data to derive regularities that can be used to predict the future. It often relies on machine learning models and simulations.
- **Prescriptive Analytics.** The advanced form of predictive analytics, where a concrete action is prescribed for each prediction.

## Uses and Examples of Big Data Analytics

It's easy to tell that ice cream sales increase on hot summer days and drop in autumn when the air gets chilly. Analytical tools, however, can sharpen the picture by predicting product volumes based on patterns derived from historical data. While seasonal and climatic correlations seem rather intuitive, they apply mainly to food, tourism, and a few other industries. How do you know when people start buying more watches or subscribing to video streaming platforms? Big data analytics can answer all these questions. So, let's look at some use cases and real-life examples of how different types of analytics can be applied in business environments.
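First, though, the descriptive and predictive types are simple enough to sketch in a few lines of Python. The monthly ice-cream sales figures below are hypothetical, invented purely for the seasonal example above; real analytics tools fit far richer models than a moving average.

```python
# Hypothetical monthly ice-cream sales (January..December).
sales = [120, 135, 180, 260, 310, 400, 450, 430, 300, 210, 150, 130]

# Descriptive analytics: summarize what already happened.
average = sum(sales) / len(sales)
peak_month = sales.index(max(sales)) + 1  # 1-based month number

# Naive predictive analytics: extrapolate the recent trend
# with a 3-month moving average.
forecast = sum(sales[-3:]) / 3

print(average, peak_month, forecast)
```

Diagnostic and prescriptive analytics build on top of this: explaining *why* July peaks (correlation with temperature) and recommending *what to do* about it (stock levels, staffing).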
### Risk Management Strategy Design

Predictive and prescriptive analysis can shed light on potential risks that could impact a business and offer preventive actions to mitigate them. Existing data about the company and its competitors can be used to design a risk management strategy.

### Product Development Approach

Diagnostic analysis allows the discovery of patterns and trends based on the correlation between consumer behavior and sales rates. This gives a clearer view of the approximate production volume and workforce needed.

### Operational Process Optimization

Analytics helps to understand why turnover happens and what kind of workers are prone to change jobs frequently. This helps organizations select the appropriate candidates at the recruitment stage and improve their corporate culture.

### Customer Experience Improvement

Survey and interview analysis aims to explore customer experiences and expectations. Such information is valuable for improving customer experience and brand loyalty.

## Conclusion

Data travels a long way in the digital world, from collection to analysis, and it's crucial to manage it properly at every stage of that journey. Here comes Skyvia for data consolidation, management, transformation, and integration. It easily ingests data from apps, SaaS platforms, and data storage platforms to be sent to a DWH for storage, transformation, and further analysis in the corresponding tools. Experience this data journey together with Skyvia; you can start it for free today!
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

# Best 7 Dell Boomi Alternatives

By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/) | May 5, 2023
The times of costly on-premises server infrastructures and gigabytes of local data have passed. Digital transformation has made data processing and storage more straightforward and cheaper. Even app and data integration doesn't seem like mission impossible anymore: smooth, automated integration of otherwise incompatible apps and data has become a reality.

Cloud integration platforms perform the main work today. [Integration platforms (iPaaS)](https://skyvia.com/blog/what-is-ipaas/) integrate applications, data, and operations and build flows between systems. One of the iPaaS market leaders is Dell Boomi. Boomi is a robust, scalable, secure enterprise iPaaS offering cloud-based integration services. It integrates cloud and on-premises data and apps in a visual environment using powerful mapping and transformation features.

However, Dell Boomi is far from the only iPaaS option. In this article, we describe seven powerful Boomi competitors offering various integration solutions for businesses of different sizes:

- Skyvia
- MuleSoft
- Workato
- Celigo
- Informatica
- TIBCO
- Oracle Integration Cloud

## Alternatives Comparison

Before we dive into the details of the comparative analysis, here's a chart that gives you a snapshot view of the Boomi alternatives (for a similar overview of another vendor's competitors, see [Fivetran alternatives](https://skyvia.com/blog/top-fivetran-alternatives/)). This chart will help you get an overall idea of what each of the seven ELT and [ETL tools](https://skyvia.com/blog/etl-tools/) offers.

| Dell Boomi Alternative | Supported data sources | Focus | Skill level | Pricing |
| --- | --- | --- | --- | --- |
| Skyvia ([Boomi vs. Skyvia](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia)) | 180+ | Data ingestion, ELT, ETL, reverse ETL, data sync, workflow automation | No-code | From $15 monthly. Free subscription and 14-day trial. |
| MuleSoft ([Boomi vs. Mulesoft](https://skyvia.com/etl-tools-comparison/boomi-vs-mulesoft)) | 250+ | ETL, API management | Low-code, platform learning required | Custom pricing model. 30-day trial. |
| Workato (comparison coming soon) | 1000+ | App/workflow automation | No-code | Pricing depends on features. Free limited subscription. |
| Celigo ([Boomi vs. Celigo](https://skyvia.com/etl-tools-comparison/boomi-vs-celigo)) | 200+ | ELT, B2B integrations | Low-code | From $600 monthly. |
| Informatica ([Boomi vs. Informatica](https://skyvia.com/etl-tools-comparison/boomi-vs-informatica)) | 400+ | Data ingestion, ETL | Varies from low-code to high-code, depending on the product | Custom pricing model, 30-day trial. |
| TIBCO (comparison coming soon) | 200+ | ETL, API management | Low-code | From $400 monthly, 30-day trial. |
| Oracle Integration Cloud (comparison coming soon) | 100+ | ETL/ELT, API management | Low-code | Custom per-hour pricing model. Free plan and 30-day trial. |

## Skyvia

[Skyvia](https://skyvia.com/) is a no-code cloud service providing solutions for data sync (one-way and bi-directional), data integration (ETL, ELT, reverse ETL), workflow automation, cloud-to-cloud backup, data management with SQL, CSV import/export, and connectivity. A user-friendly interface and flexible pricing make Skyvia convenient for both small and enterprise businesses.

**Pros**

- **Cloud.** No resources are required; you don't need to install anything.
- **Easy to use.** You can build integrations in Skyvia without coding, using easy wizards and intuitive editors for different integration types.
- **Supported data sources.** Skyvia supports connections to over 180 data sources, including cloud apps, on-premises and cloud databases, and storage services. The number of supported connectors increases regularly.
- **Mapping and transformation features.** With Skyvia, you can integrate apps with different data structures and build complex flows with multiple sources and targets by applying complex transformations.

**Cons**

- **Supported data sources.**
Though Skyvia supports over 180 connectors, there are still many apps that are not yet supported.
- **Limited free subscription.** You can only discover some of the existing integration and automation features on the free subscription.

**Pricing**

Paid subscriptions start at $15 monthly and vary depending on the features and the number of processed records. For preliminary testing, you can use a free 14-day trial and a free plan.

Start syncing and managing your cloud data today with Skyvia. Get a free trial now and discover hassle-free data integration opportunities.

## MuleSoft

[MuleSoft](https://www.mulesoft.com/) is a global integration platform combining iPaaS and RPA (robotic process automation) capabilities. MuleSoft offers robust integration (ETL, ELT), automation, workflow services, and API management for enterprises from all industries.

**Pros**

- **Comprehensive and flexible.** The MuleSoft integration architecture allows designing, building, testing, and managing APIs and integrations for cloud or on-premises applications and services.
- **Supported data sources.** MuleSoft supports over 140 connectors and offers pre-built integration templates, saving time on learning and coding.
- **Strong community.** MuleSoft users have formed a vast community where they share experiences and best practices.

**Cons**

- **Pricing.** MuleSoft is quite expensive, which makes it unaffordable for many small and middle-sized businesses.
- **Complexity.** The platform's flexibility and architecture make it complicated to manage and configure. Developers new to MuleSoft have to invest in learning before using the platform productively.

**Pricing**

MuleSoft has a complicated custom pricing system. Costs depend on various factors, such as integration project size, tools and features used, data volumes, and support levels. There's a 30-day trial to discover the platform and its features.

## Workato

[Workato](https://www.workato.com/) is a no-code business workflow automation and data integration platform.
Workato offers a single designer for tech-savvy and citizen users alike and has flexible pricing, making it convenient for mid-sized companies and enterprises.

**Pros**

- **Easy to use.** The drag-and-drop interface makes Workato easy to use for anyone.
- **Supported data sources.** Workato supports more than 1000 connectors, making almost any kind of integration possible.
- **Visual builder and templates.** Workato offers a visual automation builder for custom flows and a vast number of pre-built integration recipes, which allow you to implement various integration scenarios.

**Cons**

- **Performance.** Difficulties with handling complex enterprise integrations and processing massive data volumes can result in operation timeouts, making Workato not the best fit for big companies.
- **Steep learning curve.** Pre-built integrations are limited and may only partially cover user demands. To build integrations for narrow business specifics, you must learn to use more complex features.
- **Cost.** Costly for small businesses.

**Pricing**

Workato offers several subscription plans depending on the number of actions, team size, and type of support. There is a free subscription with minimal features. Paid subscriptions vary depending on the list of available features and connectors.

## Celigo

[Celigo](https://www.celigo.com/) is a low-code iPaaS designed to connect applications and automate processes using robust mapping and transformation (ETL, reverse ETL) tools, eliminating manual involvement. With user-friendly guides and flexible user management, anyone can use and maintain Celigo Integration Apps.

**Pros**

- **Easy to use.** Non-technical users can create and manage integrations from scratch with a friendly UI.
- **Visual builder and templates.** Celigo offers preset integration templates for popular applications such as Salesforce, NetSuite, and Shopify. You can create custom integrations with a drag-and-drop interface.
- **Supported data sources.** Celigo supports connections to nearly 200 data sources.
**Cons**

- **Cloud connectors only.** Celigo supports connections to cloud apps only; you cannot integrate on-premises databases with Celigo.
- **Complex customization.** You will need to learn how to configure more complex settings, parameters, and formatting in flows and integrations.

**Pricing**

Paid plans start from $600 monthly and vary according to the features used and the number of connectors, flows, support needs, and integration complexity. Celigo offers a minimal free plan and a 30-day free trial.

## Informatica

[Informatica](https://www.informatica.com/) is a robust, comprehensive data management platform for enterprise businesses. It provides end-to-end data integration services with a broad set of ETL capabilities.

**Pros**

- **Scalability.** Informatica can handle complex data integration tasks and process large volumes of data accurately and consistently.
- **Compatibility.** Informatica supports connections to several hundred data sources of any type (databases, cloud data lakes, on-premises and SaaS applications, data warehouses) and architecture (flat file, relational, or hierarchical, e.g., XML, JSON, Avro, or Parquet).

**Cons**

- **Complexity.** Informatica is complex to learn. You may need specialized training to fully understand its features and functionality.
- **Cost.** Informatica is expensive, especially for small startups and businesses.
- **Resource demands.** Informatica requires significant computing resources, including memory, storage, and processing power. Due to its complexity, it needs to be managed by IT professionals, which can limit the ability of business users to use it effectively.

**Pricing**

Informatica has a complex, customized pricing model. The solution, industry, available features, and data volume define the specific price. Informatica offers a free 30-day trial.

## TIBCO

[TIBCO](https://www.tibco.com/) is an extensive software platform providing various enterprise solutions for data integration, automation, analytics, event processing, API management, and application development.
TIBCO offers several data integration solutions for different integration objectives.

**Pros**

- **Scalability.** With TIBCO, you can operate on huge volumes of data and perform complex data integration tasks.
- **Supported data sources.** TIBCO supports connections to 250+ cloud applications, platforms, and databases.
- **Compatibility.** The variety of TIBCO [data integration products](https://skyvia.com/blog/data-integration-tools/), tools, and features makes it almighty for connecting incompatible apps and systems.

**Cons**

- **Resource demands.** TIBCO integration services require significant computing resources, including memory, storage, and processing power.
- **Complexity.** Although TIBCO claims that special skills are not required to leverage the platform, it is still far from straightforward. You need to invest in learning to get acquainted with all of its data integration capabilities.

**Pricing**

TIBCO offers a flexible pricing model depending on the product and available features. TIBCO Cloud Integration prices vary from $400 to $1500 and higher. Although TIBCO does not provide a free subscription, you can use a 30-day trial.

## Oracle Integration Cloud

[Oracle Cloud Infrastructure Integration Service](https://www.oracle.com/cloud/) is a low-code enterprise iPaaS ETL/ELT solution designed to connect any data source to simplify business processes and centralize management.

**Pros**

- **Easy to use.** Visual no-code tools enable smooth integration building.
- **Compatibility.** Oracle supports over 100 connectors (adapters) to data sources of different types. Although the list is not as comprehensive as those of other Boomi competitors, it includes specific data sources unavailable in other [iPaaS solutions](https://skyvia.com/blog/best-ipaas-solutions/).
- **Templates.** Oracle Integration Service provides various pre-built integration scenarios.

**Cons**

- **Complexity.** When it comes to customization, you have to learn how to utilize the platform's features and conduct mapping and transformation.
- **Supported data sources.**
The list of supported data sources is relatively short compared to other Boomi alternatives.

**Pricing**

Oracle Integration Service has a unique per-hour billing model. Prices may vary depending on the tools, features, and infrastructure used.

## Conclusion

There's a variety of iPaaS vendors on the digital market. Each solution has its own focus and specifics, and all of them may meet different customer needs. The number of supported connectors and features, the required skills, and the learning curve also vary. The decision of which platform to choose for your specific business needs is entirely yours. But if you are looking for a no-code solution that offers maximum capabilities and requires minimum resources for a reasonable price, you should [try Skyvia](https://app.skyvia.com/register). Take the chance and register now.

[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)

Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication skills. Drawing on a diverse professional background, Olena excels at breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
# Best Data Analysis Tools: Free and Paid Options for 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | August 28, 2023

Reality is flexible and changes every moment, so a modern business has to track those changes to grow and develop. That would be almost impossible without analytical tools that help visualize where we are and what we really need. Here are a few reasons why:

- Decision-making based on a correct picture of your business realities.
- Clear identification of business trends and patterns, including hidden ones.
- Business strategy adoption.
- Improved customer targeting.
- Enhanced operational efficiency.
- Cost savings.

Let's review the top 10 data analysis tools to help you choose the best solution for your business.
Table of Contents:

- Skyvia
- Microsoft Power BI
- Tableau
- Qlik Sense
- Looker
- Klipfolio
- Zoho Analytics
- Domo
- RapidMiner
- Apache Spark
- Solutions comparison

## Skyvia

[Skyvia](https://skyvia.com/) is one of the best tools for data analytics on the market, and the first reason is its usability. The solution is a cloud-based [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) platform. Skyvia supports 180+ connectors, including the most popular data warehouses, such as [Amazon Redshift](https://skyvia.com/connectors/redshift), [Google BigQuery](https://skyvia.com/connectors/google-bigquery), and [Snowflake](https://skyvia.com/connectors/snowflake). The platform offers a no-code, intuitive, user-friendly GUI, allowing the transfer of data between different sources for further analysis. The data import, migration, and loading processes are easy to set up and don't require additional technical knowledge from the staff.

**Key Features**

- [Skyvia](https://skyvia.com/data-integration/import) allows database/data warehouse replication with incremental updates.
- If you need to configure analytics, for example, in a CRM, Skyvia can load data from the data warehouse back into third-party cloud applications, i.e., the reverse of the ETL process.
- The solution allows you to expose real-time data via an [OData endpoint](https://skyvia.com/connectors/odata) (the Skyvia Connect product). Almost any BI tool can consume this endpoint and receive real-time data from the system through it.
- [Direct connector to Looker Studio](https://skyvia.com/connect/google-data-studio).
- You may also use [Query](https://skyvia.com/query/), a tool that lets you run SQL queries against your data in a web browser and upload the results to CSV or through an [add-on for Google Sheets](https://skyvia.com/google-sheets-addon/) / [Excel](https://skyvia.com/excel-add-in/).
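An OData endpoint like the one mentioned above is consumed through standard OData URL conventions: filtering, projection, and paging are expressed as `$filter`, `$select`, and `$top` query options. A minimal sketch of building such a query URL in Python; the base URL and entity name here are hypothetical placeholders, not a real Skyvia Connect endpoint.

```python
from urllib.parse import urlencode

def build_odata_query(base_url, entity, select=None, filter_expr=None, top=None):
    """Build an OData query URL from the standard $select/$filter/$top options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_expr:
        params["$filter"] = filter_expr
    if top is not None:
        params["$top"] = str(top)
    # Keep $, commas, and quotes literal so the query options stay readable.
    query = urlencode(params, safe="$',")
    return f"{base_url}/{entity}" + (f"?{query}" if query else "")

# Hypothetical endpoint; a real endpoint URL would come from your Skyvia workspace.
url = build_odata_query(
    "https://connect.example.com/odata/v4",
    "Contacts",
    select=["Name", "Email"],
    filter_expr="Country eq 'US'",
    top=10,
)
print(url)
```

Any OData-aware BI tool issues requests of exactly this shape under the hood, which is why such an endpoint plugs into so many consumers.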
The [pricing](https://skyvia.com/pricing/) is quite flexible and includes a freemium plan plus a set of paid options starting from $15 per month and going up to a custom Enterprise plan. You can try Skyvia in action right away.

Microsoft Power BI

[Microsoft Power BI](https://powerbi.microsoft.com/en-us/) is a popular platform for data analytics and visualization. It is a set of applications, services, and connectors that transforms your disparate data zoo into one unified, clear view ready for analysis. But despite solid data visualization, connectivity, and regular updates, there are a few usability drawbacks: the UI is rather bulky and requires knowledge of rigid formulas and Data Analysis Expressions (DAX). A free version is available but comes with roughly a 2 GB limitation.

Key Features

- Strong data connectivity (Excel, flat files, SQL Server, and cloud sources like Google and Facebook analytics).
- The ability to create visual reports and dashboards for further analytics.

The [pricing](https://powerbi.microsoft.com/en-us/pricing/) starts at $10 per user per month and can reach $4,995 per month, depending on capacity. A free trial is also available.

Tableau

[Tableau](https://www.tableau.com/) is a good choice if you're seeking a data visualization tool for building dashboards and worksheets for your analytics. Here's why:

- It's fast and interactive.
- Visualizations are clear.
- The choice of data sources is broad (databases, spreadsheets, cloud services, etc.).

As for usability, you have to know Python or R to manipulate data before importing it into Tableau, which is inconvenient. The solution also doesn't support data pre-processing or some more complex custom calculations.

Key Features

- Data transformation into a visual format for clear user understanding.
- Real-time interactive dashboards that show what's really going on.
The [price](https://www.tableau.com/product-and-pricing-selector) is tiered and starts at $15 for the Viewer role, depending on your chosen plan: the more you need, the more you pay.

Qlik Sense

If your organization needs a solution that is intuitive to use and robust in data analysis, look at [Qlik Sense](https://www.qlik.com/us/products/qlik-sense). The platform uses an associative analytics engine, an approach that differs somewhat from other analytics tools on the market. With Qlik, you can analyze data pulled from a range of sources, such as:

- Cloud-based apps.
- Databases.
- Spreadsheets.

Key Features

- An intuitive user interface: you don't need coding or extra technical knowledge to create visualizations and dashboards. Keep in mind, though, that advanced features require strong technical skills.
- The ability to find hidden insights and relationships with the help of the associative analysis engine.
- AI features for creating diagrams, charts, associations, etc.

As for [pricing](https://www.qlik.com/us/pricing), it's fairly expensive, but you can start with the Business plan ($30/mo) or select the flexible Enterprise model, depending on your company's needs. A free trial is also available.

Looker

[Looker](https://cloud.google.com/looker) is a good choice for those seeking a flexible BI analytics solution for different business scenarios that can process, analyze, and visualize data in real time. It is effectively a set of products that raise your decision-making abilities depending on your purposes. You can build your own app with the metrics you need or use the solution within your existing BI setup, working with it either as governed BI or as self-service.
Keep in mind, though, that certain customization options may be quite limited.

Key Features

- With LookML and its data modeling and exploration functionality, you can describe various data parameters and relationships in your SQL database. This is helpful when you need to analyze a large dataset quickly.
- BI collaboration, especially across departments of the same company: multiple employees can work simultaneously with the same data models, dashboards, etc.
- Dashboards, reports, graphs, etc., are flexible and can be presented however needed.
- You can automate report distribution and set custom alerts.

A free trial is available, so you can start with it or choose the pay-as-you-go [pricing](https://cloud.google.com/pricing) model.

Klipfolio

Another competitive data analytics tool is [Klipfolio](https://www.klipfolio.com/). Its standout capability is shared data access: all team members see the same view simultaneously and can share and analyze data collected from a pool of different sources and metrics. Beyond that, the solution is convenient and doesn't require any coding to work with.

Key Features

- The ability to collect, monitor, and analyze data in real time to track KPIs.
- Thanks to the intuitive UI, you can easily select the data sources and structure you want and create any metrics.
- The solution is no-code/low-code, and pre-built templates come in handy during dashboard creation, simplifying the work and saving time.

Still, despite being no-code and easy to set up, the solution doesn't suit beginners and users with non-technical backgrounds, and customization options are limited.

The solution's [price](https://www.klipfolio.com/pricing) is flexible, ranging from free to Business ($800/mo).

Zoho Analytics

There are hidden data insights in every business.
You may even know about some of them, but you need to visualize the complete picture to understand which decision to make and which business strategy to follow. [Zoho Analytics](https://www.zoho.com/analytics/) is a substantial market competitor that helps quickly find that hidden information, then visualize, prepare, and analyze it. With Zoho Analytics, users can automatically combine data from multiple applications, define metrics that span data from multiple departments, and create reports and dashboards with those metrics. At the advanced level, however, the visualization and customization features are somewhat restricted.

Key Features

- AI-based data cleansing and preparation for analysis.
- 100+ pre-built reports and dashboards for Zoho Sprints activity data and Zoho Bigin sales data.
- The solution easily integrates with other Zoho tools, like Zoho Creator and Zoho Projects, or with external ones (Teamwork, Smartsheet, and social media analytics tools) to share data for visualization and analysis.

The [pricing](https://www.zoho.com/analytics/pricing.html) depends on whether you choose the cloud or on-premise edition. The cloud option offers a 15-day free trial and ranges from Basic to Enterprise plans. The on-premise edition has a free Personal version and a paid Professional one with a 30-day free trial.

Domo

[Domo](https://www.domo.com/) is a self-service, cloud-based BI and data analytics platform where you can discover and analyze your data quickly and smoothly, even if you're not a technical expert.

Key Features

- With Domo, you can visualize your data and create reports to better understand where your business is going and growing.
- Pull data from the sources you need, automate real-time reports, and create predictive models to help your organization make the right decision about its next steps.
Customization options, unfortunately, are limited here, and the price may be somewhat high relative to the solution's capabilities. Contact the Domo Sales team to check the [pricing](https://www.domo.com/pricing?nm) factors, or try the free trial to get a better feel for the product.

RapidMiner

[RapidMiner](https://rapidminer.com/) offers full automation for non-coding domain experts, an integrated JupyterLab environment for seasoned data scientists, and a visual drag-and-drop designer. RapidMiner's project-based framework helps ensure that others can build on existing work using visual workflows or automated data science. Possible drawbacks are the solution's complexity and limited customization options.

Key Features

- The amount of data it can interpret.
- Visual workflow designer.
- ML and predictive analytics capabilities.
- Model deployment and monitoring.

There are no fixed pricing plans; contact the team [here](https://rapidminer.com/pricing/) to create your own.

Apache Spark

The [Apache Spark](https://spark.apache.org/) team describes their platform in four words: simple, fast, scalable, and unified. It is the only open-source solution on this list. It's easy to install and set up and offers user-friendly APIs and libraries. On the other hand, it's complex to implement and maintain and consumes a lot of memory when processing big datasets.

Key Features

- Supported languages: Java, Scala, Python, and R.
- In-memory data processing.
- Fast processing of large data volumes.

Solutions comparison

The table below compares the most popular data analytics solutions by the most important criteria.

| Solution | Data Visualization | Ease of Use | Scalability | Data Integration | Pricing |
| --- | --- | --- | --- | --- | --- |
| Skyvia | No | Yes | Yes | Yes | Volume- and feature-based pricing; the freemium model allows you to start with a free plan. |
| Microsoft Power BI | Yes | Needs some technical expertise. | Depends on many factors (data complexity and size, number of users, etc.). | Yes | Volume-based; a free trial is available. |
| Tableau | Yes | Python or R knowledge is required. | Yes | Yes | Depends on your needs; includes a 14-day free trial. |
| Qlik Sense | Yes | The drag-and-drop interface is user-friendly; advanced features need strong technical skills. | Yes | Yes | Depends on the user's request; a free trial is available. |
| Looker | Yes | Doesn't fit beginners and non-technical users. | Yes | Yes | Pay-as-you-go; a free trial is available. |
| Klipfolio | Yes | Doesn't fit beginners and non-technical users. | Yes | Yes | Depends on the set of metrics; includes a free model. |
| Zoho Analytics | Yes | Advanced features need strong technical skills. | Yes | Yes | Flexible; differs for the cloud and on-premise versions. The cloud one offers a free trial, the on-premise one a free tier. |
| Domo | Yes | Yes | Yes | Yes | Source system type, license number, data volume, and organization size affect pricing; a free trial is available. |
| RapidMiner | Yes | Complex; needs advanced users. | Yes | Yes | Customized to your business needs; no free version. |
| Apache Spark | No | Requires familiarity with Java, Scala, Python, or R. | Yes | Yes | Open source. |
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Top 10 Data Extraction Tools for 2025: Detailed Comparison

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) February 25, 2025
Modern businesses deal with tons of information spread across databases, websites, cloud applications, and various documents. Gathering it manually is slow and inefficient and increases the risk of errors, making it harder for companies to keep up with the competition. Extraction software simplifies this process by automatically collecting structured and unstructured information from multiple sources, helping businesses decide quicker, improve analytics, and seamlessly integrate information into their existing platforms. In this guide, we explore the top 10 extraction tools for 2025, making it easier to find the right solution for optimizing workflows and gaining valuable insights.

Table of Contents

- What is Data Extraction?
- Comparison and Tool Selection Criteria
- List of Top 10 Data Extraction Tools: Skyvia, Import.io, Hevo Data, Octoparse, ParseHub, Mailparser, Docparser, Nanonets, Mozenda, Rossum
- Data Extraction and ETL
- Types of Data Extraction
- Benefits of Using Data Extraction Tools
- How to Select the Best Data Extraction Software?
- Summary

What is Data Extraction?

Businesses worldwide leverage their operational insights to increase efficiency and optimize their processes. To do that, there's a constant need to analyze the information being generated in a way that management can interpret well. All this might sound familiar, but things get more challenging in practice. Extraction is the process of automatically retrieving [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/) from multiple sources and converting it into a usable format.
It can involve [scraping web pages](https://blog.apify.com/what-is-web-scraping/), pulling information from APIs, extracting details from scanned documents using Optical Character Recognition (OCR), or integrating it from different software systems. Advanced extraction tools use AI, machine learning, and automation to process and transform data in real time, eliminating manual effort and reducing errors.

Comparison and Tool Selection Criteria

No single system fits everyone, so it's always a good idea to research the market for solutions that suit most of your needs. To give a better overview of the top 10 data extraction tools, the following comparison and selection criteria were chosen.

| Tool | Free Plan | Pricing (USD) | Deployment |
| --- | --- | --- | --- |
| Skyvia | Free plan available | Basic: 15/mo | Cloud |
| Import.io | 14-day free trial | N/A | Cloud |
| Hevo Data | Free plan available | Starter: 239/mo | Cloud |
| Octoparse | Free plan available | Standard: 89/mo | Cloud & macOS |
| ParseHub | Free plan available | Standard: 189/mo | Cloud & Desktop |
| Mailparser | Free plan available | Professional: 33.95/mo | Cloud |
| Docparser | 21-day free trial | Starter: 32.50/mo | Cloud |
| Nanonets | Free plan available | Starter: pay-as-you-go at $0.30/page | Cloud, On-premise Windows, On-premise Linux |
| Mozenda | Free trial available | N/A | Cloud |
| Rossum | Free trial available | Custom pricing based on business needs | Cloud |

List of Top 10 Data Extraction Tools

In this section, let's take a look at each of the above-mentioned platforms in some detail.

Skyvia

[Skyvia](https://skyvia.com/) is one of today's leading data integration providers, covering all common integration scenarios: [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/), [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/), data [sync](https://skyvia.com/data-integration/synchronization), [CSV data loading automation](https://www.youtube.com/watch?v=ld6JdIk8HA4&t=6s), and building advanced data pipelines with complex business logic.
With its universal no-code cloud data platform, Skyvia provides a powerful data extraction web interface that is very easy to set up and use. There are [200+](https://skyvia.com/connectors) connectors, including SaaS applications, from which data can be extracted. Skyvia is a one-stop shop for all business analytics needs, from extracting and transforming data to loading it into one of the popular data warehouses like Redshift or BigQuery.

Rating

[G2 Crowd](https://www.g2.com/products/skyvia/reviews#survey-response-9097470): 4.8/5 (based on 242 reviews)

Key Features

- Integrated data extraction platform.
- Web-based interface for setting up connections.
- Rich additional capabilities such as backup and data management.

Pricing

Skyvia provides a [free](https://skyvia.com/pricing) plan to get started, with some limitations on the number of records that can be processed. Paid plans such as Basic and Standard open up many more options, and Business and Enterprise plans cover heavy usage.

Import.io

Import.io is a web-based extraction tool that lets users pull or scrape data from various websites into a CSV or Excel file. The interface is intuitive, making it a great candidate for non-technical users. Information on websites can be structured or unstructured, so Import.io has dedicated algorithms that convert unstructured data into a structured schema and make it ready for use.

Rating

G2 Crowd: 4.5/5 (based on 1 review)

Key Features

- Data extraction from online retailers and brands.
- Product sentiment analysis.
- Scheduled data collection service.

Pricing

The platform offers a free product trial, so anyone can use it for a limited time before purchasing.

Hevo Data

Hevo Data is an end-to-end cloud-based data pipeline platform allowing users to extract insights from the most popular sources, including SaaS applications and databases.
It comes with two popular products: Hevo Pipeline for data extraction and Hevo Activate for reverse ETL. Hevo Pipeline provides an easy-to-use interface to connect to source systems and start extracting data. It has connections to the most common sources, including Google BigQuery, AWS Redshift, Databricks, etc. Additionally, the app allows users to monitor their data pipelines, so root causes can be easily detected and mitigated in case of any issues.

Rating

G2 Crowd: 4.4/5 (based on 254 reviews)

Key Features

- No-code platform.
- In-depth documentation.
- 150+ data source connectors.
- Observability and monitoring.

Pricing

A free tier is available that allows users to process up to 1 million records. Later, you can migrate to any of the usage-based plans if required.

Octoparse

Octoparse is a modern SaaS application that allows users to extract and scrape web data. Its rich algorithms allow easy navigation and extraction of structured and unstructured information from multiple online platforms. With a well-designed user interface, Octoparse suits technical as well as non-technical folks.

Rating

G2 Crowd: 4.3/5 (based on 15 reviews)

Key Features

- Easy-to-use web interface.
- Scraping data on a schedule.
- Automatic IP rotation to prevent being blocked.

Pricing

Octoparse provides a free plan that allows users to run at most 10 tasks with an upper cap of 10,000 data rows per export. Other plans include Standard, Professional, and Enterprise for heavy usage.

ParseHub

ParseHub is a free web scraper that offers powerful data extraction with just a few clicks. In addition, it allows users to turn online content into a spreadsheet or an API for future extractions. The tool supports dynamic pages, handling JavaScript- and AJAX-based elements with ease. With its user-friendly interface and cloud-based automation, ParseHub simplifies web scraping for both beginners and advanced users.
Rating

G2 Crowd: 4.3/5 (based on 10 reviews)

Key Features

- Easy one-click web data extraction.
- Extraction on a schedule.
- Data delivery to Dropbox or Amazon S3.

Pricing

ParseHub has several pricing models to choose from. On the Free plan, users can extract around 200 pages in less than an hour.

Mailparser

As the name suggests, Mailparser is a robust parsing tool that helps extract meaningful insights from email messages. It allows users to automatically extract order details, invoices, leads, and attachments from incoming emails. With its custom parsing rules and easy integration with other applications, Mailparser streamlines email data processing and workflow automation.

Rating

G2 Crowd: 4.7/5 (based on 10 reviews)

Key Features

- Extracting and parsing email messages.
- Data delivery to Google Sheets, Excel, Slack, etc.
- Over 1,500 integrations available via Zapier.

Pricing

A Free plan is available that allows users to extract 30 emails per month from 10 different inboxes. The Professional and Business plans offer many more options.

Docparser

Docparser is a popular document extraction tool that parses text from documents and delivers it in Excel or JSON format. It integrates seamlessly with cloud services from which documents can be pulled. You can also train Docparser to extract only the valuable data, leaving out the rest. Integrations with other platforms are available via Zapier, Workato, etc.

Rating

G2 Crowd: 4.6/5 (based on 51 reviews)

Key Features

- Data extraction from documents.
- Data delivery to cloud services as Excel or JSON.
- Integration with multiple platforms via Zapier.

Pricing

You can try Docparser for free for 14 days and then switch to one of the paid plans. For beginners, there is a Starter plan, followed by Professional, Business, and Enterprise plans.
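Under the hood, custom parsing rules like the ones email and document parsers let you configure boil down to pattern matching over semi-structured text. The Python sketch below is a rough, hypothetical illustration of that idea, not either product's actual rule engine; the rule names and sample document are made up.

```python
import re

# Hypothetical parsing rules: each rule is a named regular expression
# applied to the raw text of an email or document.
RULES = {
    "invoice_number": re.compile(r"Invoice\s*#\s*([\w-]+)"),
    "total": re.compile(r"Total:\s*\$([\d,]+\.\d{2})"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def parse_document(text):
    """Apply every rule and collect the first match for each field."""
    out = {}
    for field, pattern in RULES.items():
        m = pattern.search(text)
        if m:
            # Use the capture group if the rule defines one, else the whole match.
            out[field] = m.group(1) if m.groups() else m.group(0)
    return out

sample = "Invoice # INV-1042\nBilled to: jane@example.com\nTotal: $1,250.00"
print(parse_document(sample))
```

Commercial parsers add OCR, layout awareness, and trainable models on top, but the core task is the same: turn free-form documents into named fields.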
Nanonets

Nanonets is an AI-powered extraction tool that automates the processing of unstructured documents such as invoices, receipts, purchase orders, contracts, claims, and forms. It integrates with Google Drive, Dropbox, SharePoint, Gmail, and other cloud services, facilitating effortless document imports. The solution also provides API access for custom integrations, allowing businesses to tailor the tool to their specific workflows.

Rating

G2 Crowd: 4.8/5 (based on 96 reviews)

Key Features

- Data extraction from unstructured documents using AI-driven Optical Character Recognition (OCR) and deep learning models.
- Customizable machine learning models that extract only pertinent information.
- Flexible, usage-based pricing with volume discounts for higher usage.

Pricing

The price starts with a free plan that includes $200 worth of credits upon sign-up. Subsequent usage follows a pay-as-you-go structure, with costs based on the number of workflow blocks executed. Volume-based discounts are available for higher usage tiers.

Mozenda

Mozenda is an online web extraction tool that allows users to scrape data from multiple websites and PDF files. It offers a SaaS platform to retrieve and integrate data with a full-scale BI ecosystem. The user-friendly interface provides useful business intelligence functionality, such as market sentiment analysis, competitive price monitoring, and other kinds of data analysis.

Rating

G2 Crowd: 4.1/5 (based on 4 reviews)

Key Features

- Web data extraction from multiple websites and PDF documents.
- Price extraction from online e-commerce stores.
- Various kinds of business analysis for better market growth.

Pricing

Mozenda offers a Trial package that allows unlimited robots to extract data with 1 concurrent process for up to 1.5 hours, which can be a good starting point for evaluating the tool. For more extensive usage, there are Standard, Corporate, and Enterprise packages.
Rossum

Rossum is an AI-driven extraction tool that automatically processes invoices, purchase orders, bills of lading, and customs documents. It efficiently captures and interprets data from diverse document formats and layouts. The platform integrates with numerous enterprise resource planning (ERP) systems, such as SAP, QuickBooks, and Microsoft Dynamics AX, as well as with other applications via API access. It also provides customizable machine learning models that can be trained to pull only pertinent information from documents, ensuring accuracy and relevance in data capture.

Rating

G2 Crowd: 4.4/5 (based on 89 reviews)

Key Features

- AI-powered Optical Character Recognition (OCR) for accurate data extraction from various document types and layouts.
- Automated data validation and real-time processing.
- User-friendly interface for reviewing and correcting extracted data.

Pricing

Pricing is tailored to individual business needs based on the estimated annual document volume and the complexity of the data fields to extract. The platform offers a free 14-day trial for new users to evaluate its capabilities before committing to a subscription.

Data Extraction and ETL

Given the vast amount of information organizations produce today, analyzing it by hand in real time is simply not feasible. A traditional process known as [Extract, Transform, and Load](https://skyvia.com/learn/what-is-etl) (ETL) needs to be put in place so that data can be taken from source systems, transformed per the business definitions, and then loaded into a centralized database or warehouse separate from the operational DB. Extraction is the first step of the ETL process, where information from various sources, like files, databases, web APIs, CRM tools, etc., is pulled out. The format and structure of this data vary based on the source.
It can be structured, semi-structured, or unstructured and is often inconsistent and unusable in raw form, requiring collection, cleaning, and transformation before processing. At this stage, the [ETL](https://skyvia.com/blog/etl-tools/) tool connects to source systems, extracts the raw data, and stores it in a staging area, ensuring it is appropriately structured and ready for further processing.

Types of Data Extraction

Not all data extraction techniques work the same way. The right approach depends on the company's requirements: whether you need real-time insights, periodic updates, or a mix of both. Some organizations extract entire datasets, while others prefer only the latest changes to save time and resources. Understanding these types helps you choose the method that best boosts efficiency and optimizes workflows. Let's explore the main types of extraction and how they work.

Full Extraction

This process pulls all the data from a source in one go. It's ideal when you need a complete snapshot of the dataset at a particular point in time. While it's thorough and ensures you capture everything, it can be time-consuming and resource-heavy for large datasets.

Real-Time Extraction

As the name suggests, this approach pulls data as it's generated. It's perfect for businesses needing immediate access to the latest information: stock updates, real-time analytics, or live customer info. It keeps all systems fresh without delays.

Delta or Incremental Extraction

Instead of pulling everything from scratch, this method focuses only on the changes (new or updated records) since the last extraction. It's faster and more efficient because you're not reprocessing the entire dataset every time. Perfect for ongoing syncs without overloading the system.

Batch Extraction

This one involves collecting data at set intervals, such as daily, weekly, or monthly.
It\u2019s a great option when real-time extraction isn\u2019t necessary and when businesses prefer to handle large amounts of information in manageable chunks. Such an approach can save resources and simplify processes. Hybrid Extraction This method combines elements of multiple ones, like real-time and batch extraction, to balance efficiency with timeliness. For example, a business might use real-time pulling for critical data and batch gathering for less time-sensitive information. It offers the best of both worlds. Benefits of Using Data Extraction Tools While it\u2019s easy to start retrieving information from source databases, building and maintaining such a system requires too much effort and technical expertise. Also, when the number of sources and different data structures increases, it\u2019s advisable to use an extraction tool that is already available rather than reinventing the wheel again. Such platforms ensure businesses can collect, process, and analyze information efficiently. Whether you\u2019re working with customer records, financial details, or industry insights, they speed things up, improve accuracy, and make information instantly accessible. Let\u2019s consider the key benefits of using data extraction software. Faster Retrieval for Improved Productivity No more digging through multiple sources or manually entering information. These tools automate retrieval in seconds. So, employees can focus on analyzing and using the insights rather than wasting time collecting them. Faster data access leads to better decision-making and increased efficiency across teams. Improved Accuracy and Reduced Errors Manual data entry may cause mistakes, duplicates, and inconsistencies. Extraction tools eliminate human error, ensuring the data is clean, consistent, and reliable. This automation is especially crucial for the finance, healthcare, and logistics industries, where accuracy is everything. 
Real-Time Data Access for Immediate Insights

With businesses moving at lightning speed, waiting for data updates is no longer an option. Real-time extraction tools provide instant access to the latest info, helping companies monitor trends, track performance, and make quick, informed decisions. Whether it's live stock updates, real-time customer interactions, or dynamic reports, staying updated means staying ahead.

Seamless Integration for Better Data Synchronization

Most businesses use multiple software platforms, like CRMs, ERPs, databases, and cloud storage systems. Data extraction software integrates smoothly across these platforms, ensuring all systems stay in sync. This eliminates data silos, improves collaboration, and ensures that everyone works with the most up-to-date information.

Scalable Solutions for Growing Data Needs

As businesses expand, so does the amount of data they generate. Solid extraction software grows with your organization, handling increasing data volumes without slowing operations. Whether you're a small startup or a large enterprise, scalable extraction tools ensure you never outgrow your data processes.

How to Select the Best Data Extraction Software?

With so many powerful data extraction tools available, choosing the right one might feel overwhelming. The perfect solution depends on the company's business needs, budget, and technical expertise. Let's break it down so you can make the best choice.

Define Your Data Extraction Needs

What kind of data are you dealing with? Different teams and industries work with various data sources, each requiring a tailored extraction approach. Are you scraping competitor websites for pricing insights, extracting lead details from emails, processing invoices from PDFs, or syncing structured CRM and finance data between platforms?
Example 1. Marketing teams work with social media platforms, ad networks, and website analytics to track campaign performance and audience engagement. They might need to scrape data from Facebook Ads, Google Analytics, or LinkedIn campaigns for deeper insights.
Example 2. Sales teams extract and sync structured data from CRM platforms (Salesforce, HubSpot), lead generation tools, and financial systems to streamline sales processes and revenue tracking. They may need to parse emails for new lead details or contracts.
Example 3. E-commerce businesses scrape competitor pricing, product descriptions, and customer reviews from marketplaces like Amazon, eBay, and Shopify stores to optimize pricing strategies and improve customer experience.
Example 4. Logistics and supply chain teams extract shipment details, supplier invoices, and tracking data from ERP systems, transport management platforms, and warehouse reports to ensure smooth operations.

Tools like Import.io, Octoparse, ParseHub, and Mozenda are great options if you need web scraping. For document processing (invoices, PDFs, emails, etc.), look at MailParser, DocParser, Nanonets, and Rossum, which specialize in OCR and intelligent document extraction. Need to sync databases and cloud apps? Skyvia and Hevo Data provide automated integration and extraction between platforms.

Consider Automation & Ease of Use

Do you need a no-code data extraction tool, or are you comfortable with some technical setup? Different teams and industries require varying levels of automation and customization, depending on their data complexity and technical expertise.

Example 1. Marketing, sales, and operations teams often require plug-and-play solutions that eliminate manual data extraction.
Example 2. Organizations that process invoices, contracts, receipts, and financial reports need AI-powered document processing.
Example 3. Businesses that track competitor pricing, monitor product listings, or analyze customer reviews rely on web scraping tools. However, complex websites with dynamic content, JavaScript rendering, or CAPTCHA protection may require manual adjustments.
Example 4. Technical teams handling data pipelines, API integrations, and enterprise-wide data management may require more flexibility and control over extraction and transformation.

Skyvia, Hevo Data, and MailParser are great for business users who want a no-code, plug-and-play solution. Rossum and Nanonets use AI-powered OCR but may require some customization and training to fine-tune data extraction. Octoparse, ParseHub, and Mozenda are great for point-and-click web scraping but may need some tweaking for complex sites.

Look at Scalability & Performance

Is your business small and just getting started, or do you need an enterprise-level solution that can handle high volumes of data? The scalability and performance of the extraction tool should match both current needs and future growth.

Example 1. Smaller companies with limited data needs often prefer cost-effective, easy-to-use solutions that allow them to start small and scale later.
Example 2. Growing companies that manage multiple data sources, such as CRM, finance tools, and marketing platforms, need tools that support larger datasets and automated workflows.
Example 3. Large-scale businesses handling millions of data points daily require enterprise-grade performance, high-speed processing, and scalability.
Example 4. E-commerce, finance, and logistics companies often require real-time data updates to make quick decisions.

Hevo Data and Skyvia are excellent for scaling data integration and replication in large businesses. Rossum and Nanonets are great if you need AI-powered document processing that improves over time. Import.io and Octoparse work well for businesses that rely on frequent web data extraction.
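To make the "parse emails for new lead details" use case mentioned above concrete, here is a minimal sketch of what email-parsing tools such as MailParser automate, using only Python's standard library. The message format and the field names (`Name`, `Email`, `Company`) are invented for illustration and are not any vendor's actual schema.

```python
import re
from email import message_from_string

# A toy inbound lead-notification email (the format is made up for illustration).
RAW = """\
From: notifications@example.com
Subject: New lead: Jane Doe

Name: Jane Doe
Email: jane.doe@example.com
Company: Acme Corp
"""


def extract_lead(raw_email: str) -> dict[str, str]:
    """Pull simple 'Key: value' fields out of a plain-text email body."""
    body = message_from_string(raw_email).get_payload()
    fields = {}
    for key in ("Name", "Email", "Company"):
        m = re.search(rf"^{key}:\s*(.+)$", body, flags=re.MULTILINE)
        if m:
            fields[key.lower()] = m.group(1).strip()
    return fields


lead = extract_lead(RAW)
```

A real parsing service adds the parts this sketch omits: MIME/HTML handling, attachments, OCR for scanned documents, and delivery of the extracted fields into a CRM.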
Integration Capabilities

Where is the extracted data going? If your goal is seamless integration across multiple platforms, it's essential to choose a tool that easily connects to your CRM, ERP, analytics tools, or a centralized DWH to maintain a single source of truth.

Example 1. Marketing and sales departments typically want data to flow directly into CRM platforms (Salesforce, HubSpot) or [marketing analytics tools](https://skyvia.com/blog/top-marketing-analytics-tools/). They use tools that automate syncing, ensuring insights are always current.
Example 2. Finance and operations teams need data integrated into ERP systems (SAP, Oracle ERP) or a centralized data warehouse (BigQuery, Snowflake). This helps them manage financial reporting, compliance, and budget forecasting with a single source of truth.
Example 3. Tech companies often consolidate data from multiple cloud applications into a centralized data warehouse or analytics platform, establishing a trusted source of truth for all internal reporting and strategic decisions.
Example 4. Product and analytics teams prefer integrations with BI tools like Tableau, Power BI, or Looker, where data from multiple sources is centralized in a DWH. This supports complex modeling, analysis, and decision-making.

If you need to connect multiple databases and cloud services, go with Skyvia or Hevo Data. They facilitate easy integration into DWHs, helping establish a centralized source of truth for reliable, company-wide analytics. Need API access for custom workflows? Nanonets, Rossum, and Import.io offer strong API integrations. For Zapier-style automation, tools like MailParser and DocParser work well with third-party apps.

Pricing and Budget

Not all data extraction tools are priced the same, so balancing cost with scalability is essential. A tool might be affordable now, but as data needs grow, organizations will want a solution that stays cost-effective without sacrificing performance.
Example 1. Smaller companies with limited budgets often need affordable, pay-as-you-go solutions that allow them to scale as needed.
Example 2. Companies that expand their data usage need solutions that handle larger volumes without exponentially increasing costs.
Example 3. Enterprises processing millions of records daily require enterprise-grade plans that support high-frequency data extraction and large-scale integrations.
Example 4. Businesses anticipating growing data needs should invest in a tool that offers flexibility without locking them into expensive contracts. A cost-effective approach is to start with a freemium or mid-tier plan and upgrade as data complexity increases.

Skyvia, Hevo Data, MailParser, DocParser, Octoparse, ParseHub, and Nanonets offer generous free plans and trials. Octoparse, ParseHub, and MailParser provide affordable plans for startups and small teams. Rossum, Hevo Data, and Skyvia provide scalable, usage-based pricing, which suits enterprise-level deployments.

Summary

At the end of the day, the best tool is the one that fits your business needs, budget, and workflow. Take advantage of free trials and demos, test the features, and choose a solution that makes data extraction effortless. Since Skyvia provides more advanced integration features, we highly recommend trying it to connect business data from anywhere.

FAQ for Data Extraction Tools

Which ETL tools are best for data extraction?

The best [ETL tools](https://skyvia.com/blog/etl-tools/) for data extraction depend on your needs. Skyvia and Hevo Data offer no-code automation for databases and SaaS apps. Rossum, MailParser, and DocParser specialize in OCR and document parsing. Octoparse, ParseHub, and Mozenda excel in web scraping, while IBM InfoSphere and Oracle GoldenGate support enterprise-scale extraction.

What are the main data extraction techniques?
Data extraction techniques fall into several main categories:
– Full Extraction. Extracting the entire dataset without tracking changes.
– Incremental Extraction. Extracting only new or updated data since the last extraction.
– API-based Extraction. Using APIs to pull structured data from applications and platforms.
– Web Scraping. Extracting data from web pages using automated scripts or tools like BeautifulSoup or Scrapy.
– Database Query Extraction. Using SQL queries to fetch data from relational databases.

What are the different types of data extraction?

There are three primary types of data extraction:
– Structured Extraction. Extracting data from databases, APIs, and structured files (e.g., CSV, JSON).
– Semi-Structured Extraction. Extracting data from XML, JSON, or other loosely structured sources.
– Unstructured Extraction. Extracting text, images, and other unstructured data from documents, emails, and PDFs using AI or NLP techniques.

What challenges can arise in data extraction?

Some common challenges in data extraction include:
– Data Format Variability. Different formats (JSON, XML, CSV) across sources.
– Access Restrictions. API limitations or blocked scraping attempts.
– Large Data Volumes. Processing and storing massive amounts of data efficiently.
– Real-Time Needs. Ensuring data is extracted and processed quickly for real-time analytics.

How does data extraction work in ETL?

In ETL, data extraction is the first step. It involves:
– Identifying Data Sources. Databases, applications, cloud platforms, or web sources.
– Extracting Data. Using APIs, SQL queries, or web scraping tools to collect raw data.
– Cleaning and Preprocessing. Handling missing values and formatting inconsistencies.
– Storing or Loading. Sending extracted data to a data warehouse, data lake, or another destination for analysis.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Selecting the Best Data Ingestion Tool, by [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), September 15, 2023
The whole modern business world revolves around data, as each step of a company's strategy involves analyzing information from various sources. In a way, data ingestion resembles the work of bees: collecting various data types (structured and unstructured) from a set of sources like databases, data lakes, IoT devices, and cloud apps, and storing them in one centralized destination for analysis. You may ingest data in real time, batch it on a schedule, or combine both using design patterns like Lambda Architecture. Real-time ingestion is the fastest; the hybrid approach is fast on its real-time path but not on its batch path. The data type defines the data ingestion tool selection, and here are the 10 best ones.

Table of Contents: Skyvia, Apache Kafka, Apache NiFi, Talend, Apache Flume, Amazon Kinesis, Wavefront, Precisely Connect, Airbyte, Dropbase, Tools comparison.

Skyvia

If you are looking for a compatible, scalable, and fast solution for data ingestion, [Skyvia](https://skyvia.com/) is first on this list. The tool is popular for its usability, strong list of supported data sources and destinations, and great [pricing](https://skyvia.com/pricing/) with a Freemium model available. In other words, the tool is well balanced between the services provided and the costs saved. So, what can it offer?

Wide range of data integration scenarios: ETL, ELT, reverse ETL, data sync, automation, etc. Easy-to-use UI without coding.
180+ supported data sources and destinations, like [HubSpot](https://skyvia.com/connectors/hubspot), [Salesforce](https://skyvia.com/connectors/salesforce), [Dynamics CRM](https://skyvia.com/connectors/dynamics-crm), [Shopify](https://skyvia.com/connectors/shopify), and other cloud apps and relational databases, including [MySQL](https://skyvia.com/connectors/mysql), [PostgreSQL](https://skyvia.com/connectors/postgresql), [Oracle](https://skyvia.com/connectors/oracle), etc. The ability to solve both simple integration scenarios and build more complex data pipelines with multiple sources involved. Robust security options (HIPAA, GDPR, PCI DSS, ISO 27001, and SOC 2 by Azure). Try Skyvia in action yourself, or schedule a demo.

Apache Kafka

Another competitive player in the market is [Apache Kafka](https://kafka.apache.org/). It's open source, so you can use it for free. Its main advantages are the ability to stream large volumes of data quickly in real time, easy connections to a set of external sources and destinations for importing and exporting data, and a wide range of tools and APIs for developers creating custom data pipelines.

Although Apache Kafka is scalable and fast, it has a few drawbacks. The solution is unfriendly to non-technical users: you need a technical background to configure it and create brokers and topics. (A broker is a server in a cluster, identified by a unique ID, that receives and processes data. Brokers store data in a directory on the disk of the server they run on, and each topic partition gets its own subdirectory named after the partition.) There is also a lack of built-in management and monitoring capabilities. And although the platform itself is free, you must pay for connectors with a Confluent subscription.

Apache NiFi

Similarly to Apache Kafka, [Apache NiFi](https://nifi.apache.org/) is an open-source data ingestion platform in the Apache product line. Let's see how it works.
Imagine a delivery system like FedEx that automatically transfers your order from the store to your home and monitors it from source to destination in real time. Apache NiFi does the same, but with your data. The solution is not as complicated as Apache Kafka and provides 280+ different processors, each responsible for its part of the work (Get, Put, Fetch, etc.). The key advantages of this tool are a flexible scaling model, flow management, and an extensible architecture. However, when working with large data volumes, the solution consumes a lot of processing power and memory, which makes it not so cost-effective, especially for small businesses.

Talend

Talend Data Fabric, presented by [Talend](https://ua.talend.com/), is a mature solution that fits large enterprises best. The number of supported connectors is impressive: 1,000+ various sources. It also supports a solid set of destinations like Google Cloud Platform, Amazon Web Services, Snowflake, Microsoft Azure, Databricks, etc. Compared to other data ingestion platforms, the main advantages here are drag-and-drop pipeline creation, data quality, error checking, and fixing functionality, and the ability to manage large data volumes (in the cloud or on-premises). But the solution is not very user-friendly and needs additional learning time, which might be a blocker for unfamiliar users. Another possible drawback is unclear [pricing](https://ua.talend.com/pricing/): you may select a subscription plan, but the details are not public, so you'll have to contact the sales team.

Apache Flume

[Apache Flume](https://flume.apache.org/) is one more Apache open-source solution, mainly used to ingest data from different sources into the Hadoop Distributed File System (HDFS). Its job is to extract, aggregate, and move large volumes of log data. Besides Hadoop, you may set it up to work with HBase and Solr.
The main advantages of this tool are fault tolerance, usability, and a tunable reliability setup. The product is free, but it's complex to configure, especially for newcomers, and there is a lack of technical documentation to rely on in case of any issues. Moreover, the solution is not very flexible, and it would not be easy to customize it for specific needs.

Amazon Kinesis

If your business cases require extracting, processing, and analyzing data as soon as it arrives, and making decisions now rather than waiting for the whole data collection, [Amazon Kinesis](https://aws.amazon.com/kinesis/) is the perfect choice. This cloud solution is about fast, qualified processing and analysis of real-time data and video streams. Its key advantages are impressive data processing speed (terabytes per hour) and the ability to extract high volumes of data from a wide range of sources (financial documents, IoT, social media, location tracking, telemetry, etc.). Despite the pay-as-you-go [pricing](https://calculator.aws/#/addService/KinesisDataStreams) model, the service becomes costly as ingested data and throughput grow: if you need more, you'll pay more. Configuration and setup can also be a challenge for first-time users. And, finally, integration with non-AWS services is quite limited.

Wavefront

The most powerful side of [Wavefront](https://www.vmware.com/products/aria-operations-for-applications.html) is its observability. If your business cases are about finding what's wrong and fixing it immediately, it might be a good idea to try it. This cloud-based tool is created for all types of work with metric data: ingesting, storing, visualizing, monitoring, etc. The Google-invented stream processing approach used here makes the platform scalable and ready for rising query loads. Let's review its advantages.
There are 200+ sources and services (DevOps tools, cloud service providers, data services, etc.) available here. Easy real-time data manipulation: the solution can ingest millions of data points per second. Alerting based on data collected on custom dashboards, and even trouble forecasting. The downside of these advantages is complexity: you need a solid technical background to configure and set it up. And remember that working with unique metrics, sources, and tags complicates data point ingestion and slows the environment down. The price depends on data points per second (PPS), starting from $1.50 per PPS. For instance, 100 data points per host every 10 seconds equals 10 PPS, which comes to $15 per month.

Precisely Connect

If your data lives in a wide range of sources like databases, clouds, and even legacy systems, [Precisely Connect](https://www.precisely.com/product/precisely-connect/connect) is a helpful tool for ingestion and integration. Here, you may build streaming data pipelines and, importantly, share the data with company members so everyone sees the whole picture. The platform's main advantages are strong data integration capabilities, including sources like [RDBMS](https://www.devart.com/what-is-rdbms/), mainframe, NoSQL, and cloud, real-time data replication, and regular cleanup of data silos. The flip side of these benefits is the solution's complexity: it needs a strong technical background to configure and set up. Moreover, it's not cheap. Similar to Talend, pricing is not public and depends on a number of factors, but be ready to pay from $99 to $2,000+ monthly, as is typical for the most popular [ETL solutions](https://skyvia.com/blog/etl-tools/).
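Replication features like the ones above usually rely on change tracking: each sync remembers a high-water mark (for example, the largest id or timestamp seen so far) and pulls only rows past it. A minimal sketch with Python's built-in sqlite3, assuming a source table with a monotonically increasing `id` column (the table and column names are illustrative, not any tool's actual schema):

```python
import sqlite3

# In-memory source database with a couple of illustrative rows.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 25.0)])

last_seen_id = 0  # the high-water mark; a real pipeline persists this between runs


def extract_new_rows(conn, last_id):
    """Incremental extraction: fetch only rows beyond the high-water mark."""
    rows = conn.execute(
        "SELECT id, amount FROM orders WHERE id > ? ORDER BY id", (last_id,)
    ).fetchall()
    new_last = rows[-1][0] if rows else last_id
    return rows, new_last


# The first run picks up everything; later runs only see new inserts.
batch1, last_seen_id = extract_new_rows(src, last_seen_id)
src.execute("INSERT INTO orders VALUES (3, 7.5)")
batch2, last_seen_id = extract_new_rows(src, last_seen_id)
```

Production tools extend this idea with update/delete capture (CDC from transaction logs), but the high-water-mark loop is the core of incremental replication.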
Airbyte

[Airbyte](https://airbyte.com/) is a secure open-source data ingestion tool focused on fast data extraction from various sources and loading it into different destinations like data warehouses, data lakes, and databases. The solution is intuitive and provides pre-built and custom connectors that let you set up replication in minutes. Its main advantages are scalability, an easy-to-understand UI, and the number of connectors (300+ SaaS apps, cloud apps, databases, etc.). But despite the intuitive interface, setup and configuration can be a hurdle for non-technical users. Another drawback is the lack of technical documentation and support. You may use the open-source version for free or choose pay-as-you-go [pricing](https://airbyte.com/pricing). There is also a 14-day free trial to help you decide which version fits your company.

Dropbase

If you work with offline data and need to turn it into real-time online databases, [Dropbase](https://www.dropbase.io/) is a good service for such operations and workflow automation. With its unified spreadsheet, you can quickly and safely edit, clean, and validate customer data across databases, web apps, and APIs. In other words, Dropbase is a data editor where you may import and export data from CSV, Excel, XML, and JSON and load it into a Postgres database for sharing. The main advantage of this service is the ability to make offline data live and accessible in real time for all team members at different project levels. The service isn't compatible with a wide range of similar tools, which might be a bottleneck, but if your company uses Apache Kafka or Apache NiFi, there's no problem. The [pricing](https://www.dropbase.io/pricing) ranges from standard to custom plans, starting from $150 per workspace per month, and doesn't offer a free plan.

Tools comparison

The table below compares the tools listed to help you select the best choice for comfortable work.
And, in case you haven't yet found the right one for your business needs, come and check the big [ETL/ELT Tools Comparison](https://skyvia.com/etl-tools-comparison/#Skyvia) made by Skyvia, with 25+ services and platforms compared side by side.

| Tool | Usability | Scalability | Security | Pricing |
| --- | --- | --- | --- | --- |
| Skyvia | Easy-to-use visual wizard | Yes | HIPAA, GDPR, PCI DSS, ISO 27001 and SOC 2 (by Azure) | Volume-based and feature-based pricing; Freemium model with a free plan |
| Apache Kafka | Tech background required | Yes | SOC 1/2/3, ISO 27001, GDPR/CCPA | Open source |
| Apache NiFi | Depends on expertise level; may be difficult for beginners | Yes | SOC 2, GDPR, HIPAA, CCPA, TLS | Open source |
| Talend | Tech background required | Yes | GDPR, HIPAA, SOC 2, CSA STAR | Subscription-based; 14-day cloud trial; free Talend Open Studio |
| Apache Flume | Partly; needs configuration experience | Yes | HITRUST, SOC 2, ISO 27000 | Open source |
| Amazon Kinesis | Easy to use | Yes | SOC 1/ISAE 3402, SOC 2, SOC 3, FISMA, DIACAP, FedRAMP | Pay-as-you-go |
| Wavefront | Easy to use | Yes | HIPAA, PCI DSS, FedRAMP, ISO 27001/27017/27018, SOC 2 Type 1, CSA | PPS pricing |
| Precisely Connect | Tech background required | Yes | CDP, CDC | Flexible, depends on needs |
| Airbyte | Tech background required | Yes | SOC 2 Type 1/2, ISO 27001 | Pay-as-you-go; 14-day free trial |
| Dropbase | Easy to use | Yes | SOC 2 Type 2, ISO 27001, HIPAA | Flexible; no free plan |
[Data Integration](https://skyvia.com/blog/category/data-integration/) Comparing the Best Data Lake Tools, by [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/), November 22, 2024

Data lakes are common for centralized
storage of large amounts of data in its native format. According to [Fortune Business Insights](https://www.fortunebusinessinsights.com/data-lake-market-108761), large enterprises hold a dominant share of the global data lake market, since they typically manage big data. However, small businesses have also started discovering the benefits of data lakes, which help them identify trends in data and enhance customer service. Contrary to the popular belief that all data lakes are hosted in the cloud, many companies also prefer on-premises data lakes. Another myth concerns the supposedly high cost of data lakes, which is not true either. In this article, you will discover other curious facts about data lakes and see which solutions will help you create one for your company.

Table of Contents: What Is a Data Lake? What Value Does a Data Lake Bring? Data Lake Architecture. Popular Data Lake Tools. Data Integration with Skyvia. How to Choose a Data Lake Tool? Conclusion.

What Is a Data Lake?

"A data lake is a low-cost storage environment, which typically houses petabytes of raw data." – IBM

"A data lake is a place to store [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/), as well as a method for organizing large volumes of heterogeneous data from diverse sources." – Oracle

We'd rather stick to the second definition, as it emphasizes the dual purpose of a data lake. It's not only a centralized repository for storing structured, semi-structured, and unstructured data but also an [approach to data management](https://skyvia.com/blog/data-mesh-vs-data-lake/). According to [Fortune Business Insights](https://www.fortunebusinessinsights.com/data-lake-market-108761), data lakes are widely used in banking and finance (21%), IT and telecom (24%), retail and e-commerce (20%), healthcare and life sciences (18%), and other industries.
Data lakes make up a solid basis for data-intensive tasks, such as real-time analytics and machine learning. It's possible to implement a data lake either in the cloud or on-premises. The latter choice provides better control over data, so it's widespread among financial and healthcare institutions. However, note that data lake tables aren't transactional: an operation either succeeds or fails as a whole, and there is no way to roll it back. In this article, we will also discuss data lakehouses, a solution that combines the best of two worlds: data warehouses and data lakes. It also overcomes the limitations of both systems, since it supports unstructured data, which is unavailable in data warehouses, and transaction management, which is restricted in a data lake.

What Value Does a Data Lake Bring?

Many organizations have already shifted to data lakes, while others are on their way to implementing them. Such popularity isn't accidental, since data lakes bring real value to companies. Here are some of their key benefits:

Data consolidation. Data is collected from multiple sources in various formats and amounts with [data aggregation tools](https://skyvia.com/blog/data-aggregation-tool/). That way, data storage is centralized, which makes it easier to manage and govern.
Regulatory compliance. Centralized management enables admins to set up role-based access, which strengthens overall data security.
Advanced analytics. With data lakes, you can perform complex analytical operations without the need to [move data](https://skyvia.com/blog/top-data-movement-tools/) to a dedicated analytics system.
Machine learning. It's easy to build machine learning models using large volumes of data in its original format.
IoT data support. Data lakes can also accept data from high-speed data streams.
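The consolidate-then-analyze flow behind these benefits can be illustrated end to end in a few lines: heterogeneous records land in their raw form, are then distilled into a structured table, and finally answer analytical queries. This is a toy sketch only; the record shapes, the `spend` table, and the in-memory "raw zone" are invented stand-ins for real storage layers.

```python
import json
import sqlite3

# 1. Ingestion: raw records land as-is, whatever their shape (here, JSON lines).
raw_zone = [
    json.dumps({"source": "crm", "customer": "Acme", "spend": 120.0}),
    json.dumps({"source": "shop", "customer": "Acme", "spend": 80.0}),
    json.dumps({"source": "crm", "customer": "Beta", "spend": 40.0}),
]

# 2. Distillation: convert raw JSON into a structured table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE spend (source TEXT, customer TEXT, amount REAL)")
for line in raw_zone:
    rec = json.loads(line)
    db.execute(
        "INSERT INTO spend VALUES (?, ?, ?)",
        (rec["source"], rec["customer"], rec["spend"]),
    )

# 3. Insights: SQL queries over the structured data (e.g., spend per customer).
totals = dict(
    db.execute("SELECT customer, SUM(amount) FROM spend GROUP BY customer").fetchall()
)
```

In a real data lake the raw zone is object storage such as S3 or GCS, distillation is handled by tools like AWS Glue or Dataflow, and queries run in an engine like Athena or BigQuery, but the layering is the same.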
Data Lake Architecture

As mentioned above, a data lake is about storage, but it's not just a repository: it's a set of connected tools empowering centralized data management. The architecture of each data lake may vary slightly from one company to another, but a typical scheme looks something like this:

| Layer | Description |
| --- | --- |
| Data ingestion layer | Data is gathered from various sources and sent to the central repository in its raw format. [Data ingestion tools](https://skyvia.com/blog/top-data-ingestion-tools/) can help with collecting data at this stage. |
| Distillation layer | Unstructured data is converted to a structured format so it can be used for analytical purposes. |
| Data processing layer | With the help of query tools, users can execute queries on structured data. Analytical applications also run at this layer and consume the structured data. |
| Insights layer | This tier acts as the output interface of a data lake. SQL and NoSQL queries are used to fetch data, and the results are usually provided as dashboards and reports. |
| Unified operations layer | This layer is designed for system monitoring. |

Popular Data Lake Tools

Amazon S3

There are 1,000+ data lakes already running on AWS. [Amazon S3](https://aws.amazon.com/s3/) is a decent storage solution for your data lake foundation, thanks to its availability, scalability, durability, security, and compliance. With AWS, you can create a data lake rather quickly and then use AWS Glue to move data to analytics services such as Amazon Athena. This allows you to perform advanced analytics tasks, streaming analytics, BI operations, machine learning, and other data-demanding operations. There is a popular question of whether Redshift can serve as a basis for data lakes as well: get to know the [differences between AWS Redshift and S3](https://skyvia.com/blog/redshift-vs-s3/) and discover why the latter is a better choice.

Pros:
– Encryption and security.
\u2013 High scalability. \u2013 Ease of use. \u2013 A diverse set of tools. \u2013 Confusing billing. \u2013 AWS limits resources based on location. \u2013 The interface is a bit laggy. [Learn more about Amazon S3 Integration](https://skyvia.com/connectors/amazon-s3) Google Cloud Storage [Google Cloud Storage](https://cloud.google.com/storage?hl=en) is a service designed to store mostly unstructured data, including video, audio, media files, etc. Obviously, it makes the first layer of a data lake. So, other Google Cloud services are needed to build a data lake and extract value out of data. Depending on your analytics needs, use Google Cloud Dataprep or Dataflow for the distillation layer. Dataflow is a service for batch and stream data processing, while Dataprep is a service for data exploration, cleansing, and preparation. Then, preprocessed data can be loaded into BigQuery for analysis or SQL queries can be used to fetch data from large datasets. Pros Cons \u2013 Excellent documentation and detailed API reference guides. \u2013 Affordable prices for different storage classes. \u2013 Easy to integrate with other Google Cloud services. \u2013 Costly support service. \u2013 Complex pricing scheme. [Explore Google Cloud Storage](https://cloud.google.com/storage?hl=en) Snowflake [Snowflake](https://www.snowflake.com/) allows users to bring together structured, semi-structured, and unstructured data in their Data Cloud. With its Elastic Performance Engine, different users can run independent jobs on the same data. This feature powers up data science, analytics, and application creation without the need to manage the underlying infrastructure. Snowflake relies on state-of-the-art compression algorithms to reduce the amount of data stored, which directly leads to cost savings. It also simplifies data governance by managing privileges with the RBAC (role-based access control) mechanism. Pros Cons \u2013 Decoupled architecture of Storage and Compute. 
\u2013 Centralized data security. \u2013 Integration with many apps and services. \u2013 Vague pricing model. \u2013 Limited support of unstructured data. \u2013 Graphical interface for app development. [Best Practices of Snowflake Data Integration](https://skyvia.com/blog/loading-data-into-snowflake/) Azure Data Lake [Azure Data Lake](https://azure.microsoft.com/en-us/solutions/data-lake) is a solution for organizations that want to take advantage of big data. It helps data scientists and analytics professionals to extract value from large amounts of data. Azure Data Lake supports multiple popular programming languages and integrates with other data lakes and [data warehouses](https://skyvia.com/learn/what-is-data-data-warehouse) . Azure Data Lake consists of three main components: Storage makes a base for a secure data lake platform. Analytics allows users to run massively parallel processing (MPP) and transformation tasks on petabytes of data. HDInsight cluster deploys Apache Hadoop on the Microsoft platform so that users can optimize analytics clusters for Apache Spark, Kafka, Storm, and other tools working with data streams. Pros Cons \u2013 Cost-efficient \u2013 Enhanced data security \u2013 Integration with other Microsoft and open-source products \u2013 Setup complexity that requires strong technical proficiency \u2013 Slower performance compared to other data lakes [Explore Azure Data Lake](https://azure.microsoft.com/en-us/solutions/data-lake) Databricks [Databricks](https://www.databricks.com/) is a data intelligence platform suitable for building lakehouses. It allows companies to craft lakehouses containing the following key components: Delta Lake is a storage layer that also brings ACID guarantees to transaction data. Apache Spark is the module for data transformation and preparation for complex analytical tasks. Photon is an engine for data query acceleration, especially when addressing large datasets. 
Databricks SQL is an analytics engine for interactive query and reporting. Integrated data science tools are integrated for building, training, and deploying machine learning models. In particular, TensorFlow, PyTorch, and scikit-learn libraries are available within Databricks. All together, these tools create a comprehensive ecosystem for data ingestion, storage, processing, and analysis. Pros Cons \u2013 Support of ACID transactions \u2013 Fast data processing \u2013 End-to-end ML support \u2013 High cost \u2013 Vendor lock-in [Explore Databricks Integration](https://skyvia.com/data-integration/databricks) Dremio [Dremio](https://www.dremio.com/) is a lakehouse platform designed for self-service analytics and AI. Data analysts can use it to visualize and explore data, enjoying low response times. Data engineering can ingest and transform data directly in a data lake, enjoying the full support of DML operations. A huge advantage of Dremio is that it works both with cloud and on-premises data. Dremio uses open-source Apache Arrow, proprietary Columnar Cloud Cache (C3), and Reflection Query Acceleration technologies, which makes it a fast query engine. Pros Cons \u2013 Simplified data management \u2013 50% lower TCO compared to other lakehouse solutions \u2013 High-performing SQL query engine \u2013 Doesn\u2019t connect to legacy data sources \u2013 Limited documentation and support [Explore Dremio](https://www.dremio.com/) Data Integration with Skyvia [Skyvia](https://skyvia.com/) is not a data lake or a data lakehouse, but it can assist companies with populating data lake storage with data. It\u2019s a universal cloud platform designed for multiple data-related tasks, such as data integration, data query, SaaS backup, workflow automation, etc. 
Skyvia\u2019s [Data Integration](https://skyvia.com/data-integration/) product has everything needed to populate data lakes with analysis-ready data with the following tools: [Import](https://skyvia.com/data-integration/import) is the ETL-based tool for ingesting data from cloud apps, data warehouses, and databases. It also allows users to perform filtering and transformation operations on data. [Replication](https://skyvia.com/data-integration/replication) is the ELT-based tool for copying data from cloud apps into data warehouses and databases. [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) allows users to build more complex data pipelines, including several sources, and apply multistage data transformations. [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) creates logic for task execution, performs preparatory and post-integration activities, and configures automatic error processing logic. Pros Cons \u2013 Intuitive user interface \u2013 Integration with [200+ data sources](https://skyvia.com/connectors) \u2013 Powerful data transformations \u2013 Flexible [pricing model](https://skyvia.com/pricing) \u2013 Unclear error messages \u2013 No unstructured data support Overall, Skyvia could be a good solution for capturing data from different sources and moving it into supported data lakes. [Explore Skyvia Platform](https://app.skyvia.com/) How to Choose a Data Lake Tool? Choosing the right tools for building your data lake is not that difficult. Just pay attention to several factors: Integration capabilities. Perform an audit of your existing infrastructure and see whether it integrates well with a data lake tool. For instance, if you have many processes running over the Google Cloud infrastructure, then Google products could be a good choice. Features. Explore the functionality of data lake tools and decide whether they can help to accomplish your business objectives. Explore querying and analytical capabilities. 
Each data lake has a processing layer where queries and analytics take place. Some tools offer fast queries, while others can't boast high performance. Decide which option to prefer based on the analytics speed you require.
- **Price.** Estimating the price of a data lake tool is challenging, since it depends on storage volumes and processing times. However, each cloud service provider offers a price calculator, so you can estimate the approximate cost of a data lake and compare it to your allocated budget.
- **Metadata management.** Since metadata is crucial for understanding and organizing data within a data lake, check whether a specific tool ensures effective metadata management.

Here is a comparison table for data lake tools, which may help you make the right choice.

| Data lake tool | Rating | Pros | Cons |
| --- | --- | --- | --- |
| Amazon S3 | 4.6 / 5 | Encryption and security; high scalability; ease of use; diverse set of tools | Confusing billing; AWS limits resources based on location; interface is a bit laggy |
| Google Cloud Storage | 4.6 / 5 | Excellent documentation and detailed API reference guides; affordable prices for different storage classes; easy to integrate with other Google Cloud services | Costly support service; complex pricing scheme |
| Snowflake | 4.5 / 5 | Decoupled architecture of storage and compute; centralized data security; integration with many apps and services | Vague pricing model; limited support of unstructured data; graphical interface for app development |
| Azure Data Lake | 4.5 / 5 | Cost-efficient; enhanced data security; integration with other Microsoft and open-source products | Setup complexity that requires strong technical proficiency; slower performance compared to other data lakes |
| Databricks | 4.6 / 5 | Support of ACID transactions; fast data processing; end-to-end ML support | High cost; vendor lock-in |
| Dremio | 4.6 / 5 | Simplified data management; 50% lower TCO compared to other lakehouse solutions; high-performing SQL query engine | Doesn't connect to legacy data sources; limited documentation and support |

## Conclusion

The compound architecture of a data lake contains not only storage but also querying and analytical modules. Its advanced version, the data lakehouse, even supports ACID transactions and embeds smart analytics modules. Modern cloud service providers can help build a well-performing data lake: Azure, Amazon, and Google each offer a combination of tools for constructing one, helping you extract value from your data and perform advanced analytics. Databricks and Dremio extend data lake functionality even further. Regardless of the chosen data lake, Skyvia can help you fill it with data from multiple sources.

[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)

With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
---

# Top Data Lineage Tools for Businesses in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), December 20, 2024

Let's dive into the world of data lineage, a lifesaver for businesses that need to manage dataflows effectively. In this guide, we look at the top data lineage tools for businesses in 2025 and explore how they can help organizations stay on top of their data, improve transparency, and ensure compliance.

Table of Contents

- What Are Data Lineage Tools?
- Key Benefits of Using Data Lineage Tools for Businesses
- Types of Data Lineage Tools
- Top Data Lineage Tools in 2025
  - OpenMetadata
  - OpenLineage + Marquez
  - Apache Atlas
  - Egeria
  - Spline
- Best Practices for Choosing a Data Lineage Tool
- Enhancing Data Lineage with Skyvia
- Conclusion

## What Are Data Lineage Tools?

Data lineage tools are specialized software that automatically track the movement and transformation of data across different systems, databases, and applications. They provide a visual map that helps teams see where data originates, where it travels, and how it's altered along the way. These tools:

- Optimize workflows.
- Help businesses deal with complex ecosystems.
- Fit companies that need to improve transparency.
- Resolve issues quickly.

### Why Is Data Lineage Essential?

Imagine trying to figure out why your business reports suddenly show odd results. Without knowing where the data came from, how it was processed, and whether it was transformed along the way, it's like trying to solve a mystery without any clues. That's where data lineage tools come in: they track the entire data lifecycle, so you know exactly what happens at every step.

Overall, data lineage is essential for:

- **Data accuracy.** Understanding what has changed over time helps maintain consistency and avoid errors.
- **Compliance and auditing.** With regulations like GDPR and CCPA, companies must prove where information comes from and how it's handled.
- **Data governance.** Effective data management means knowing who is responsible for what and ensuring that data is used properly.

### How Do Data Lineage Tools Help?

According to a survey by [IBM](https://www.ibm.com/data-management), 75% of businesses struggle to maintain data quality across their systems, and many of these challenges stem from a lack of visibility into workflows.
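At its core, the visibility a lineage tool provides is a dependency graph over datasets that can be walked upstream toward the sources. A minimal Python sketch (the dataset names and edges are hypothetical) shows how such a graph supports root-cause tracing:

```python
from collections import deque

# Hypothetical lineage graph: each dataset maps to its direct upstream inputs.
upstream = {
    "revenue_report": ["orders_clean"],
    "orders_clean": ["orders_raw", "fx_rates"],
    "orders_raw": [],
    "fx_rates": [],
}

def trace_upstream(dataset: str) -> list:
    """Breadth-first walk from a dataset back to its sources.

    Returns every dataset the given one depends on, nearest first,
    which is exactly the search order for root-cause analysis.
    """
    seen, queue, order = set(), deque([dataset]), []
    while queue:
        node = queue.popleft()
        for dep in upstream.get(node, []):
            if dep not in seen:
                seen.add(dep)
                order.append(dep)
                queue.append(dep)
    return order

deps = trace_upstream("revenue_report")  # ["orders_clean", "orders_raw", "fx_rates"]
```

When a report shows odd numbers, checking its dependencies in this order quickly narrows the problem down to a specific upstream table or feed.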
Lineage solutions solve this by clearly showing how information moves and transforms, ensuring higher accuracy and better decision-making. A report from [Gartner](https://www.gartner.com/en/data-analytics/topics/data-management) shows that companies using data lineage tools see a 30% reduction in time spent on root cause analysis when issues arise. Instead of spending hours figuring out where the problem lies, teams can quickly pinpoint where errors occurred and resolve them faster.

## Key Benefits of Using Data Lineage Tools for Businesses

In a world where businesses rely heavily on data for decision-making, visibility into its movement and transformation is crucial. Data lineage tools are game-changers for companies looking to enhance data governance, ensure regulatory compliance, and improve data quality. They provide the transparency needed to track dataflows across systems, fix problems before they escalate, and meet regulatory standards.

| Key Benefit | Description | Impact Area |
| --- | --- | --- |
| Tracking data ownership | Identifies data owners, ensuring quality management. | Governance |
| Improving data transparency | Visualizes data flow, maintaining consistency. | Governance |
| Boosting collaboration | Facilitates teamwork on data projects. | Governance |
| Root cause analysis | Pinpoints data issues, fixing errors at the source. | Quality |
| Detecting inconsistencies | Highlights unexpected data changes for standardization. | Quality |
| Data tracking for audits | Documents data movement for regulatory audits. | Compliance |

### Enhancing Data Governance

- **Tracking ownership.** Companies can identify who owns specific datasets, making it clear who is responsible for managing them.
- **Improving transparency.** By visualizing how data flows through the ecosystem, everyone involved can see where data comes from, how it's transformed, and how it's used, which helps maintain consistency across the organization.
- **Boosting collaboration.**
Collaborating on data-related projects becomes more manageable for teams across departments. With a clear understanding of how information is shared and processed, users can enforce data governance policies more effectively.

### Improving Data Quality

- **Root cause analysis.** When a quality issue arises, it's easy to pinpoint where the problem occurred. Users can trace data to its origin and fix errors at the source, leading to more reliable insights.
- **Detecting inconsistencies.** If data is transformed or manipulated unexpectedly, lineage tools highlight those changes, allowing teams to standardize processes and keep systems consistent.
- **Preventing future issues.** By regularly tracking flows and transformations, such tools help identify patterns that may lead to data quality problems, allowing businesses to address them before they affect operations.

### Ensuring Regulatory Compliance

- **Data tracking for audits.** Clear documentation of data movements is vital for audits and demonstrating compliance. It shows regulators where personal information comes from, where it's stored, and who has access to it.
- **Managing consent and access.** These tools let businesses track where sensitive or personal data is used, helping ensure that data is handled according to consent rules and that no unauthorized access occurs.

## Types of Data Lineage Tools

As usual, there's no one-size-fits-all solution. Businesses have different needs, and there are two types of tools to suit various requirements:

- Standalone.
- Embedded.

Standalone tools focus solely on data lineage tracking and are independent of other platforms. They provide features like:

- Visualization of dataflow across multiple platforms.
- Automated tracking of data movement and transformations.
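To automate this kind of tracking, tools typically have each pipeline job emit a small metadata event describing what it read and wrote. The sketch below is a simplified illustration whose field names are loosely modeled on OpenLineage-style run events (not the full spec); the namespaces and job names are hypothetical.

```python
from datetime import datetime, timezone
from uuid import uuid4

def run_event(job: str, inputs: list, outputs: list) -> dict:
    """Build a simplified lineage event for one job run.

    Shape loosely modeled on OpenLineage run events; a real event
    carries many more facets (schema, data quality, job source, etc.).
    """
    return {
        "eventType": "COMPLETE",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": str(uuid4())},
        "job": {"namespace": "warehouse", "name": job},
        "inputs": [{"namespace": "postgres", "name": n} for n in inputs],
        "outputs": [{"namespace": "postgres", "name": n} for n in outputs],
    }

# A hypothetical nightly load reads one staging table, writes one analytics table.
event = run_event("nightly_orders_load", ["staging.orders"], ["analytics.orders"])
```

Collected centrally, events like this are what a metadata service stitches into the cross-platform lineage graph users browse.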
They\u2019re ideal for businesses needing detailed, cross-platform visibility of their data\u2019s journey and can be: Open-source, like [OpenLineage](https://openlineage.io/) or [Spline](https://spline.design/) , offers users the flexibility to customize and build their solutions. Commercial, like [MANTA](https://manta.network/) , [Collibra](https://www.collibra.com/us/en) , or [Octopai](https://www.octopai.com/) , with full support and advanced features and often used by enterprises. Embedded tools are part of a larger data management platform, such as a database, ETL tool, or DWH. These services provide lineage functionality as an additional feature but are not dedicated solely to lineage tracking and include: [ETL tools](https://skyvia.com/blog/etl-tools/) , like [Informatica PowerCenter](https://www.informatica.com/) and [Talend](https://www.talend.com/) , offer built-in lineage tracking as part of their integration capabilities. Databases, like [Snowflake](https://www.snowflake.com/en/) or [SQL Server](https://www.microsoft.com/en-us/sql-server/) , offer lineage features to track transformations within the database. Embedded tools are typically more convenient if you already use the platform for other purposes, but they may offer limited functionality compared to standalone ones. These can also be: Open-source, like [Apache Atlas](https://atlas.apache.org/) in the [Hadoop](https://hadoop.apache.org/) ecosystem. Commercial, like [Informatica](https://www.informatica.com/) or [Talend](https://www.talend.com/) , with lineage as part of their premium offerings. Comparison: Open-Source vs. Commercial Lineage Tools Feature Open-Source Lineage Tools Commercial Lineage Tools Cost Free to use, which is great for small businesses or startups. Paid, often with high licensing fees, but worth it for enterprise-level support and functionality. Customization Highly customizable, which is perfect if you have the technical team to modify it to your specific needs. 
Limited customization, but comes with pre-built, easy-to-use features and professional support. Ease of Setup Can be tricky to set up, usually requiring technical expertise and ongoing maintenance. Easy to set up with full support, and no need for technical know-how. Support Community-based support (which can be slower or inconsistent). Professional support and regular updates are provided, ensuring quick resolution of issues. Scalability Can be scalable if configured correctly but might require significant effort. Designed to scale easily, especially for large enterprises with massive amounts of data. Advanced Features Often lacks advanced features such as real-time tracking, automated reports, and AI-driven analytics. Includes advanced features such as automated lineage tracking, real-time updates, and enterprise-level analytics. Integration with Other Tools Integrating with other systems can be difficult and require custom coding. Typically integrates easily with existing data management systems, databases, and cloud platforms. Top Data Lineage Tools in 2025 OpenMetadata [OpenMetadata](https://open-metadata.org/) is an open-source tool that simplifies metadata management and data governance across an organization. It offers an intuitive interface for managing data lineage and integrates easily with popular data sources like Apache Kafka , Snowflake , and MySQL . Its extensibility makes it suitable for businesses with diverse data architectures, enabling effortlessly mapping and visualization of data lineage. Benefits Enhanced Data Governance. OpenMetadata\u2019s flexible framework helps businesses organize and control their data, improving team governance efforts. The tool promotes transparency and enables stakeholders to track how data is used and transformed. Ease of Use. Its user-friendly interface makes it accessible to a range of users, even those without technical expertise, ensuring broader adoption across teams. Integration-Friendly. 
OpenMetadata\u2019s wide range of connectors makes it easy to integrate into existing infrastructures, ensuring seamless data flow and lineage tracking across multiple platforms. Drawbacks Limited Advanced Features. OpenMetadata offers great features for basic data lineage. However, it might lack the advanced automation and real-time tracking features that larger enterprises might need. OpenLineage + Marquez The combination of [OpenLineage and Marquez](https://openlineage.io/docs/guides/dbt/) provides an open-source solution for capturing and visualizing data lineage. [OpenLineage](https://openlineage.io/) defines a standard for tracking data lineage, while Marquez acts as a metadata service that collects, stores, and visualizes data flow in real-time. They enable organizations to track data movement across complex data ecosystems with precision and accuracy. Benefits Standardized Lineage Tracking. OpenLineage sets a consistent standard for capturing metadata, ensuring traceability across various data systems, and reducing discrepancies in data flow. Real-Time Visibility. When paired with Marquez, users gain real-time insights into data pipelines, making it easier to spot issues or bottlenecks and address them quickly. Customizable. OpenLineage and Marquez provide flexibility, allowing businesses to tailor lineage tracking to fit their specific workflows and architectures. Drawbacks Complex Integration. Integrating OpenLineage with Marquez and configuring the ecosystem can be challenging, requiring strong technical expertise to leverage the toolset fully. Steep Learning Curve. Due to the technical nature of the tool, teams without experienced data engineers may find it difficult to deploy and manage effectively. Apache Atlas [Apache Atlas](https://atlas.apache.org/#/) is a powerful, open-source tool for metadata management and data governance within Hadoop and other distributed data environments. 
Its robust framework allows businesses to define, track, and query lineage information across various systems. Apache Atlas excels at helping organizations map out complex data flows cohesively while integrating well with platforms like Hive, HBase, and Spark.

Benefits:

- **Strong metadata management.** Apache Atlas handles complex metadata and lineage management across large-scale, distributed environments, making it a fit for organizations with diverse data architectures.
- **Flexible integration.** Seamless integration with the Hadoop ecosystem makes Apache Atlas a top choice for companies already using Hadoop-based tools, enabling complete visibility into data flows across systems.
- **Advanced governance.** Its detailed governance features let businesses control data usage, access, and compliance, providing a comprehensive view of how data is handled across the organization.

Drawbacks:

- **Learning curve.** Apache Atlas can be difficult to implement and master, especially for organizations unfamiliar with Hadoop and its tools.
- **Scalability issues.** For enterprises that need to scale beyond Hadoop environments, Apache Atlas may be less effective at tracking lineage across other systems or platforms.

### Egeria

[Egeria](https://egeria-project.org/) is another open-source solution, designed for enterprise-level data governance and lineage tracking. Built for collaboration, Egeria enables organizations to share and manage metadata across various platforms, ensuring that all teams are aligned on data usage and governance standards. Its governance framework allows multiple teams to collaborate on data lineage projects, promoting transparency and compliance.

Benefits:

- **Collaboration.** Its open governance approach ensures all stakeholders can see how data is used and transformed.
- **Open governance.** The platform's open-source framework allows extensive customization, making it suitable for businesses with complex data governance needs.
- **Enterprise-level scalability.**
Egeria handles large-scale environments, allowing enterprises to track and manage data across multiple platforms.

Drawbacks:

- **Complex deployment.** Setting up and deploying the platform can be complicated, particularly for organizations without dedicated data engineering resources.
- **Maintenance requirements.** Ongoing maintenance and customization require technical expertise, and smaller teams may find it resource-intensive.

### Spline

[Spline](https://absaoss.github.io/spline/) is a simple yet effective open-source tool for tracking and visualizing real-time data lineage in data processing frameworks like Apache Spark. It captures and displays how data is transformed through various stages, offering clear visibility into its lifecycle. Spline is lightweight and focuses on providing a streamlined way to visualize data flows without unnecessary complexity.

Benefits:

- **Real-time lineage tracking.** Spline excels at real-time data lineage tracking, giving businesses immediate insight into how pipelines transform data.
- **Simplicity.** Its lightweight design and easy setup make Spline an excellent choice for teams that need quick, simple lineage tracking without the overhead of more complex tools.
- **Integration with Spark.** Spline integrates seamlessly with Apache Spark, making it an ideal choice for organizations already using Spark for data processing.

Drawbacks:

- **Limited customization.** While Spline is great for basic lineage tracking, it lacks the advanced customization options some enterprises might need.
- **Scalability.** Spline works well for small to medium-sized data environments but may not scale as effectively for larger organizations with more complex data ecosystems.

## Best Practices for Choosing a Data Lineage Tool

Choosing the right data lineage tool can feel overwhelming, especially with so many options. A few best practices can help you find the tool that fits your business needs.

### 1. Assess Your Current Data Infrastructure

Before diving into any tool, take a step back and assess your current setup. It's like surveying everything before organizing a big closet: you need to know what you have to make the right decisions.

- **Map out your systems.** What data platforms, databases, and tools are you already using? Are you working with cloud-based systems, on-premises databases, or a hybrid? Understanding your infrastructure is critical because your chosen tool must integrate smoothly with these systems.
- **Determine your data complexity.** How complex is your data environment? If you're dealing with multi-platform, cross-functional data systems, you'll need a tool with advanced integration and tracking capabilities. For simpler setups, a more straightforward tool might do the job just fine.
- **Consider existing gaps.** Where are the pain points in your dataflows? If you're struggling with siloed data or poor visibility into how it moves across systems, you'll need a tool that provides end-to-end insights.

### 2. Choose a Tool That Scales

Data needs rarely stay the same. As your business evolves, so do your requirements, so it's crucial to pick a scalable tool for your stack; you don't want to invest in a service only to outgrow it a year later.

- **Think long-term.** How do you expect your data needs to change over the next few years? If your business plans to scale, the service should handle larger data volumes, more complex transformations, and additional integrations with new systems.
- **Future-proof your choice.** Look for a tool that meets your current requirements and also offers advanced features you might need later, like real-time lineage tracking, AI-powered insights, or more robust governance capabilities. You want a platform that's ready to grow alongside your data strategy.
- **Flexible integrations.** Ensure that your chosen service can easily integrate with new platforms or data sources.
A scalable tool should be flexible enough to handle whatever changes come its way, whether you're adding new cloud platforms, databases, or other systems.

### 3. Go for User-Friendly Tools and Training

The best tool is the one your team will actually use. No matter how powerful it is, people won't take full advantage of it if it's too complex. That's why ease of use and adequate training are so important.

- **User-friendly interface.** Look for services with intuitive dashboards and visualization features. If your team can easily track, monitor, and visualize data lineage without digging through endless configurations, they're more likely to adopt it into their daily workflows.
- **Training and support.** Even the most user-friendly platform requires some training. Ensure the vendor provides good documentation, tutorials, and customer support to help your team get up to speed. Hands-on training sessions or an active user community are a huge plus.
- **Involve your teams early.** Don't wait until the service is fully implemented to get your team involved. Engage them early in the decision-making process: let them test the platforms during free trials and ask for their feedback.

### 4. Consider How Data Lineage Tools Gather Data

Typically, lineage tools connect to databases, data warehouses, data lakes, ETL processes, and business intelligence platforms to track how data flows and transforms throughout its lifecycle. Here's what these services usually do:

- **Direct database connection.** To extract metadata, they connect directly to databases and data warehouses (e.g., [Amazon Redshift](https://aws.amazon.com/redshift/), [Snowflake](https://www.snowflake.com/en/), [MySQL](https://www.mysql.com/)). They capture details like table structures, relationships, stored procedures, and dataflows between tables, which helps map out dependencies.
- **Integration with ETL tools.**
Services can integrate with ETL platforms like [Skyvia](https://skyvia.com/), [Informatica](https://www.informatica.com/), [Talend](https://www.talend.com/products/data-fabric/), [Apache NiFi](https://nifi.apache.org/), or [AWS Glue](https://aws.amazon.com/glue/) to understand how data is transformed. They monitor ETL jobs, mapping out the transformations and dataflow logic, which provides visibility into each step from data ingestion to the final load.
- Logs and Metadata Extraction. Data lineage solutions often ingest logs and metadata from [data processing platforms](https://skyvia.com/blog/best-data-processing-tools/), BI tools ([Tableau](https://www.tableau.com) and [Looker](https://lookerstudio.google.com/u/0/navigation/reporting)), or data lakes ([AWS S3](https://aws.amazon.com/s3/)). By analyzing this metadata, lineage tools can track dataflows across various platforms, identifying transformations and aggregations.
- APIs and Custom Connectors. For environments where direct connections aren't possible, services may rely on APIs or custom connectors to access the required metadata. This approach is common in modern microservices architectures, where data may be spread across numerous systems.

Enhancing Data Lineage with Skyvia

[Skyvia](https://skyvia.com/data-integration) is a powerful data integration platform that takes data management to the next level. It doesn't just move data from one place to another; it ensures the process is smooth, reliable, and traceable. By connecting a wide range of sources and syncing them with your data lineage tool, Skyvia helps ensure your dataflows are always visible and organized. Here's how Skyvia integrates with other platforms to enhance overall data management:

- Seamless Connectivity.
Skyvia offers pre-built connectors to [200+](https://skyvia.com/connectors) databases, cloud platforms (like Salesforce, Google BigQuery, and Amazon Redshift), CRMs, marketing and e-commerce platforms, etc., ensuring that your data lineage tool can track dataflows from start to finish, no matter how many systems are involved.
- Consistent Data Sync. Skyvia automates data synchronization between systems, ensuring all changes and updates are captured. This way, you get accurate, up-to-date information about data's journey across systems.
- Easy Data Migration. Skyvia can also help extract and move data across SaaS applications, databases, or cloud services where native connectors are unavailable.

Additional Benefits Skyvia Brings to the Table

Skyvia doesn't stop at helping with data lineage. It offers plenty of extra features that make any data-related task even more straightforward.

- Simplified Data Integration. Its standout feature is the no-code interface, which makes integrating different systems a breeze. You don't need a team of developers to get your systems talking: Skyvia's user-friendly platform makes it easy to set up integrations and start syncing your data across platforms.
- Automation and Scheduling. The platform lets you schedule automatic data syncs so that information is always up-to-date.
- Data Transformation. Skyvia doesn't just move data; it can transform it on the fly, ensuring that the data entering your systems is formatted correctly and ready for analysis. This is key for maintaining clean, organized data across multiple platforms, which directly impacts the quality of data lineage tracking.
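The direct-database-connection approach described earlier can be sketched in a few lines. This is a minimal illustration, assuming a SQLite database for portability (the `customers`/`orders` tables are invented for the demo); real lineage tools run equivalent metadata queries against the information schemas of warehouses like Redshift or Snowflake.

```python
import sqlite3

def extract_dependencies(conn):
    """Map each table to the tables it references via foreign keys."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    deps = {}
    for t in tables:
        # PRAGMA foreign_key_list yields one row per FK column;
        # index 2 of each row is the referenced (parent) table.
        fks = conn.execute(f"PRAGMA foreign_key_list('{t}')").fetchall()
        deps[t] = sorted({row[2] for row in fks})
    return deps

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
""")
print(extract_dependencies(conn))
# {'customers': [], 'orders': ['customers']}
```

From a dependency map like this, a lineage tool can draw the upstream/downstream graph that the visual dashboards described above render.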
Rating

- G2 Crowd: [4.8/5](https://www.g2.com/products/skyvia/reviews#survey-response-9097470)
- TrustRadius: [9.8/10](https://www.trustradius.com/products/skyvia/reviews#overview)
- Capterra: [4.8/5](https://www.capterra.com/p/146167/Skyvia/reviews/)

Conclusion

Modern businesses need data lineage tools to track, visualize, and manage dataflows through various ecosystems. Such tools provide the visibility and control businesses need to ensure data compliance and improve quality and governance. Choosing the best service may sometimes be a headache and depends on the company's current infrastructure, scalability needs, and team requirements. Whether you opt for open-source, commercial, standalone, or embedded tools, each type can offer unique benefits. [Data integration platforms](https://skyvia.com/blog/data-integration-tools/) like Skyvia help companies simplify integration, automate updates, and ensure seamless connections across platforms. This makes it easy for businesses to maximize the value of their data, unlocking its full potential.

FAQ

What is a data lineage tool, and why is it important for businesses?

A data lineage tool helps businesses track the flow of data from its source to its final destination, showing how data transforms along the way. It's crucial for understanding data dependencies, ensuring compliance with regulations like GDPR, and improving data accuracy and trustworthiness in analytics and reporting.

What is data lineage in SQL?

Data lineage in SQL refers to tracking and documenting the flow of data as it moves through SQL databases, queries, and transformations. It shows where data originates (its source), how it is processed or transformed (through SQL operations like joins, aggregations, or filters), and where it ends up (its destination, such as tables, reports, or analytics dashboards).

What key features should I look for in a data lineage tool?
Look for tools with robust visualization capabilities, real-time tracking of data flows, integration with your existing tech stack (e.g., databases, ETL tools, and BI platforms), and strong compliance features for regulatory requirements. Automation and scalability are also crucial for growing businesses.

Which industries benefit most from data lineage tools?

Industries that rely heavily on data, such as finance, healthcare, retail, and technology, benefit the most. These tools help with regulatory compliance (e.g., GDPR, HIPAA), auditing, ensuring data quality, and optimizing data-driven decision-making.

Are data lineage tools only for large enterprises?

While large enterprises often use data lineage tools due to their complex data ecosystems, many tools cater to small and medium-sized businesses. Scalable solutions like open-source or SaaS-based tools allow businesses of any size to manage data lineage effectively.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Top Data Modeling Tools for 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), September 10, 2024

If you've ever worked with databases or heard about database design, you have come across the term "data modeling tools": software applications that allow users to create a visual representation of a database's structure. They help map out how different pieces of data relate to each other and define the overall structure of the database.

Why are they Important in Database Design

- Blueprint for Databases. Like architects use blueprints to design buildings, data engineers use these tools to provide a clear plan of what the database will look like and how it will function.
- Visual Representation. Visualizing data lets users see relationships and dependencies between different entities to design effective databases easily.
- Improved Data Management.
A solid data model gives users a clear structure showing how data is stored, accessed, and manipulated.
- Facilitates Communication. As it provides a common language and visualization that everyone can understand, it makes communication between technical and non-technical users easier. Such an approach is crucial during the design phase: it enhances collaboration and keeps documentation clear.

In this article, we'll review the best data modeling tools for 2025, their key features, and their benefits, so you can choose the right tool for your organization.

Table of contents

- Key Features of Data Modeling Tools
- Top Data Modeling Tools for 2025
- ER/Studio
- Oracle SQL Developer Data Modeler
- DbSchema
- MySQL Workbench
- Comparison of Data Modeling Tools
- Benefits of Using Data Modeling Tools
- How to Choose the Right Data Modeling Tool
- Work Together with Data Modeling Tools
- Conclusion

Key Features of Data Modeling Tools

Good data modeling tools streamline the process, improve collaboration, and ensure that data structures are well-documented and ready for implementation. Let's review the key features of such tools.

1. Diagramming Capabilities

This is the heart of data modeling tools, allowing the creation of visual representations of database structure, usually in the form of Entity-Relationship Diagrams (ERDs) that help map out tables, relationships, and data flows.

2. Reverse Engineering

This feature is handy for understanding legacy databases or systems users didn't initially design. The tool analyzes the database and generates a model to visualize and document the current structure, making it easier to plan changes or integrations.

3. Collaboration Support

Collaboration allows multiple users to work simultaneously on the same data model, often with real-time updates and change tracking. Moreover, data modeling tools often include automatic documentation features, generating clear, detailed descriptions of the database structure as it's being designed.
All team members can easily share and reference this documentation, providing a reliable source of truth that facilitates ongoing communication and project continuity.

4. Integration with Databases

Integration capabilities allow the tool to connect directly to various database management systems (DBMS), such as MySQL, PostgreSQL, Oracle, etc. Users can easily push their model to the database or pull existing structures into the tool, saving time and reducing errors when moving from design to implementation.

5. Forward Engineering

It takes the data model and generates the SQL code to create the database structure in the chosen DBMS. This feature bridges the gap between design and implementation, allowing the model to be deployed quickly without manually writing all the SQL scripts and ensuring that the database is built exactly as designed.

6. Data Type and Constraint Specification

This feature defines data types, primary keys, foreign keys, and other constraints directly in the model. Specifying data types and constraints ensures data integrity and consistency, which are crucial for maintaining a reliable database.

7. Version Control

Version control features track changes to data models over time, often integrating with existing version control systems like Git. This is essential for managing changes and collaborating on complex projects: it keeps track of different iterations of the model and can revert to previous versions if needed.

8. Reporting and Documentation

Many tools offer features to automatically generate documentation and reports based on data models to ensure everyone understands the data model and its purpose. This is handy for onboarding new team members or sharing models with users.

Top Data Modeling Tools for 2025

If you're working in the database design area or searching for a tool to improve data management processes, here's a list of the top data modeling tools for 2025.
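Reverse and forward engineering (features 2 and 5 above) can be illustrated with a small sketch. This is a simplified, generic illustration rather than any particular tool's implementation: it uses SQLite's introspection PRAGMAs to pull a model out of a live database (reverse engineering) and then regenerates DDL from that model (forward engineering).

```python
import sqlite3

def reverse_engineer(conn):
    """Reverse engineering: read table and column metadata from a live DB."""
    model = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        model[t] = [(c[1], c[2], bool(c[5]))
                    for c in conn.execute(f"PRAGMA table_info('{t}')")]
    return model

def forward_engineer(model):
    """Forward engineering: generate CREATE TABLE DDL from the model."""
    statements = []
    for table, columns in model.items():
        cols = ", ".join(
            f"{name} {ctype}" + (" PRIMARY KEY" if pk else "")
            for name, ctype, pk in columns)
        statements.append(f"CREATE TABLE {table} ({cols});")
    return "\n".join(statements)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
model = reverse_engineer(conn)
print(forward_engineer(model))
# CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
```

Real tools do the same round trip with far richer models (foreign keys, indexes, vendor-specific types), but the design-to-DDL pipeline is the same idea.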
Each of them provides unique features catering to different needs and preferences.

ER/Studio

[ER/Studio](https://erstudio.com/) is a powerful data modeling tool designed to help businesses and organizations create, manage, and optimize complex database architectures. Developed by IDERA, it's particularly popular in enterprise environments where managing large-scale data structures is essential. It's widely used by large enterprises that need to manage extensive, complex databases and by organizations that require detailed data integrity.

Key Features

- Data Modeling. Supports logical, physical, and conceptual data modeling, allowing users to design complex database structures.
- Collaboration. Offers team collaboration features, enabling multiple users to work on models simultaneously.
- Data Lineage. Tracks data flow across systems to ensure data integrity and transparency.
- Integration. Seamlessly integrates with various DBMS and business intelligence tools.

Benefits

- The tool enhances productivity with automation features and model-driven development.
- It provides robust reporting and documentation abilities to streamline communication across teams.
- The solution facilitates compliance with data governance standards.

Use Cases

- It's perfect for businesses looking to manage extensive data architectures.
- The solution is helpful for companies requiring detailed data lineage and impact analysis.

Oracle SQL Developer Data Modeler

[Oracle SQL Developer Data Modeler](https://www.oracle.com/database/sqldeveloper/technologies/sql-data-modeler/) is a free, comprehensive data modeling and design tool that supports Oracle databases and other DBMS and provides a full suite of design tools for creating and modifying database models. It allows users to create models from existing databases and generates DDL scripts to implement the design on a database. The tool also offers detailed diagrams and visualizations to aid in understanding complex structures.

Key Features

- Support for multi-level designs, including logical and relational models.
- The ability to integrate with Oracle SQL Developer for seamless database management.
- Data type and constraint management for precise data definition.

Benefits

- The tool streamlines database design and implementation processes.
- It enhances collaboration with version control and model comparison features.
- The solution simplifies complex data structures with intuitive visualizations.

Use Cases

- This platform is best suited for organizations using Oracle databases.
- It's useful for database administrators and developers focusing on Oracle database design and management.

DbSchema

[DbSchema](https://dbschema.com/) is a robust data modeling tool with a user-friendly interface for designing and managing database schemas across various DBMS. It supports a wide range of databases, including MySQL, PostgreSQL, MongoDB, and more, making it an excellent choice for multi-database projects and effortless collaboration among team members.

Key Features

- It allows users to create and edit database schemas visually with drag-and-drop functionality.
- The solution supports schema synchronization and versioning to ensure consistency across environments.
- The tool works with a wide range of databases, including MySQL, PostgreSQL, MongoDB, and more.

Benefits

- The solution simplifies the design process with intuitive visual tools and interactive layouts.
- It enables easy collaboration and sharing with built-in documentation features.
- The system supports offline schema design, allowing changes to be deployed later.

Use Cases

- DbSchema is perfect for small to medium-sized businesses seeking a cost-effective modeling solution.
- It's suitable for teams needing to collaborate on multi-database projects.

MySQL Workbench

[MySQL Workbench](https://www.mysql.com/products/workbench/) is a comprehensive and free visual tool designed for database architects, developers, and DBAs.
It provides a full-featured modeling environment to create and manage complex data models and offers a range of functionalities, including data modeling, SQL development, and database administration.

Key Features

- The tool supports forward and reverse engineering of MySQL databases.
- It includes visual performance dashboards and reporting tools.
- The solution offers utilities to migrate databases from Microsoft SQL Server, Microsoft Access, Sybase ASE, and more to MySQL.

Benefits

- It facilitates seamless integration with MySQL databases for streamlined operations.
- Users can enhance productivity with automated design and management features.
- MySQL Workbench provides a comprehensive set of tools for all stages of database development.

Use Cases

- It's ideal for MySQL database developers and administrators.
- The tool is helpful for teams looking to manage MySQL databases effectively.

Comparison of Data Modeling Tools

Choosing the right data modeling tool depends on the company's specific needs, the complexity of projects, and the environment in which users work. Here's a comparison table based on the tools' features, pricing, ease of use, and suitability for different projects and organizations.

| Feature | ER/Studio | Oracle SQL Developer Data Modeler | DbSchema | MySQL Workbench |
| --- | --- | --- | --- | --- |
| Key Features | Comprehensive data modeling (logical, physical, conceptual); team collaboration; data lineage and impact analysis; integration with various DBMS and BI tools | Multi-level designs (logical, relational); reverse and forward engineering; data type and constraint management; integration with Oracle SQL Developer | Visual design tools; schema synchronization; cross-platform support (MySQL, PostgreSQL, MongoDB, etc.); interactive layouts | Unified visual tool; forward and reverse engineering; SQL development; server configuration and user administration; visual performance dashboards |
| Pricing | Starts at approximately $1,500 per user per year | Free | $127 (perpetual license) | Free |
| Ease of Use | Advanced features require a learning curve | User-friendly for Oracle environments but may need some learning | User-friendly interface, easy to use | User-friendly, especially for MySQL users |
| Suitability for Projects | Large enterprises; complex database architectures; teams requiring detailed data lineage and impact analysis | Projects using Oracle databases; database administrators and developers focusing on Oracle environments | Teams working on multi-database projects; users needing visual design and schema synchronization | MySQL database developers and administrators; small to medium-sized projects; users looking for comprehensive MySQL management tools |
| Suitability for Organizations | Large organizations with extensive data management needs; enterprises requiring robust collaboration and governance features | Oracle-centric organizations; companies with extensive Oracle database infrastructure | Small to medium-sized companies; organizations needing multi-database support and visual design tools | Organizations heavily using MySQL; businesses looking for a free and comprehensive tool for MySQL management |

Benefits of Using Data Modeling Tools

Data modeling tools can transform how users design, manage, and interact with databases. They can improve data quality, facilitate collaboration, enhance efficiency, and ensure compliance. Let's examine these benefits more closely to see how they may help your business.

Improved Data Quality

Data modeling tools help ensure that data structures are designed accurately from the outset. By defining clear data types, constraints, and relationships, these tools help maintain data consistency and integrity.
Automated validation features in data modeling tools can catch errors early in the design phase, reducing the risk of data inconsistencies and inaccuracies when the database is implemented.

Better Collaboration Among Team Members

Many data modeling tools support collaboration features that allow multiple team members to work on the same model simultaneously, including real-time updates, version control, and change tracking. Visual models and diagrams make it easier for technical and non-technical users to understand the database structure, fostering better communication.

More Efficient Database Design Processes

Data modeling tools automate many aspects of database design, including generating SQL scripts, reverse engineering existing databases, and synchronizing schemas. This speeds up the design process and reduces manual work. Tools that offer visual design capabilities allow users to create and modify database schemas easily; drag-and-drop interfaces and visual representations simplify the design of complex structures.

Enhanced Data Integration

Data modeling tools help create a unified view of data across different systems by defining how data is structured and related. Such an approach is beneficial in environments with multiple databases or data sources. These tools facilitate smoother integration with other systems and applications by providing clear models and schemas, ensuring that data flows seamlessly across the organization.

Clearer Documentation and Compliance

Many data modeling tools automatically generate documentation based on the data models, including tables, columns, relationships, and constraint descriptions. Clear documentation and defined data structures help ensure databases comply with industry regulations and standards, such as GDPR or HIPAA.
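The point that constraints defined in the model catch errors early is easy to demonstrate. Below is a minimal illustration using SQLite (the table names are invented for the example): once NOT NULL and FOREIGN KEY constraints from the model are applied, the database itself rejects inconsistent records at write time instead of letting bad data accumulate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id))""")

conn.execute("INSERT INTO customers VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # customer 99 does not exist
except sqlite3.IntegrityError as e:
    print("Rejected:", e)  # the declared constraint catches the bad row immediately
```

The same declarations, generated by a modeling tool's forward engineering, give every downstream application this protection for free.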
How to Choose the Right Data Modeling Tool

Selecting the perfect data modeling tool involves evaluating project complexity, team size, and specific feature needs while balancing these against the company's budget. Here's a guide to help businesses make the best choice.

Evaluate Your Specific Requirements

A basic tool might suffice if you're working on smaller projects with straightforward data structures. Look for tools that offer easy-to-use interfaces and essential features without overwhelming you with options. You'll need a more robust tool for more complex projects involving large-scale databases, intricate relationships, or multiple data sources. Look for features like advanced diagramming, data lineage, and impact analysis.

Consider the Size of Your Team

Simplicity is essential if you're in a small team. Tools with intuitive interfaces and minimal learning curves are ideal. For larger teams, collaboration features become crucial. Look for tools that support real-time collaboration, version control, and role-based access.

Assess the Features of Different Tools

Smooth integration with existing DBMS (e.g., MySQL, PostgreSQL, Oracle), cloud platforms, and other tools the organization already uses reduces manual work and ensures consistency across data environments. Also make sure that all team members can collaborate seamlessly and keep track of changes without conflicts.

Think about Ease of Use

Look for tools with drag-and-drop functionality, comprehensive help resources, and community support. A tool that's easy to use will save time and reduce the learning curve for teams, leading to quicker implementation and fewer errors.

Balance Features with Budget

Open-source tools can be a great option if you're on a tight budget. They often provide much functionality for free or at a lower cost. However, they might require more technical expertise to set up and use.
For example, MySQL Workbench is free, while DbSchema offers a free option alongside its paid version. Commercial tools generally offer more advanced features, better support, and easier integration. They can be more user-friendly but come at a higher cost.

Evaluate Cost vs. Benefits

Look at the total cost of ownership, including licensing fees, training costs, and any additional costs for support or upgrades. Weigh these costs against the benefits of the tool's features to see if it saves you time, improves data quality, or enables better collaboration and productivity.

Trial and Feedback

Take advantage of free trials offered by many commercial tools. They allow businesses to test the tool in their environment and see how it fits their workflow. Get input from the team members who will be using the tool; their feedback is invaluable in making sure the tool meets everyone's needs.

Work Together with Data Modeling Tools

Modern businesses usually run an orchestra of various services that are excellent in themselves. Still, companies can only evolve when decision-makers can see the whole market picture, including top trends and client behavior. That's why they need all the information in one place (a DWH, data lake, etc.) and [data integration tools](https://skyvia.com/blog/data-integration-tools/) to provide this kind of connectivity. Data modeling tools are essential for designing and visualizing database structures, but they become even more powerful when integrated with tools like [Skyvia](https://skyvia.com/). This universal cloud-based [data integration](https://skyvia.com/data-integration) platform works hand-in-hand with popular data modeling tools to manage data smoothly. Let's explore how such combinations change the game in the world of data management.

Building and Maintaining Data Structures

Data modeling tools help users design and visualize a database's structure.
They\u2019re like the blueprint for storing, organizing, and related data within a database. Users define all the foundational elements of data infrastructure, like tables, columns, relationships, and constraints. Once users model the data structure, the next step is to populate it with actual data using data integration tools like Skyvia. After designing the blueprint with a data modeling tool, we may use Skyvia to integrate data from [190+](https://skyvia.com/connectors) sources into a new or existing database, ensuring that the data fits the designed structure. Data Import and Synchronization Tools like Skyvia allow moving data between different systems, synchronizing data across platforms, and ensuring that data is consistent and up-to-date across all users\u2019 databases and applications. When data structure changes (e.g., users add new tables or fields based on their updated data model), Skyvia helps keep the data in sync with these changes. It can import data into the newly designed structures, ensuring that data is well-organized and accurate according to the latest model. Data Transformation and Consistency Data modeling tools ensure that data is stored in a consistent, logical format. They help users define rules for how data should be structured and how different data elements should relate to one another. Data integration tools like Skyvia move data and transform it as needed during the integration process. Suppose the data modeling tool defines a specific format for dates. In that case, Skyvia can ensure that all incoming data is transformed to match this format before loading it into the database. Conclusion Selecting the right data modeling tool isn\u2019t just about picking the one with the most features; it\u2019s about finding the one that fits your specific needs. 
Whether working on a simple project or managing a complex database architecture for a large enterprise, such a tool should make your life easier and offer the needed features, like collaboration, integration, and automation, while fitting within your budget.
[Data Integration](https://skyvia.com/blog/category/data-integration/)

Selecting the best Data Movement Tool in 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), February 13, 2024

What are data movement tools, and why do modern businesses use them daily? The purpose is simple: to make data accessible where and when it's needed, in the required format, facilitating its integration, analysis, and reporting. Data movement solutions transfer data between different locations, formats, or applications, saving users time and resources.

Table of Contents

- Skyvia
- AWS Data Migration
- Fivetran
- Integrate.io
- Matillion
- Stitch
- Hevo Data
- Talend
- IBM Informix
- IRI NextForm
- Methodology
- Summary

Key data movement tools and their functions

- ETL (Extract, Transform, Load) Tools: Extract data from source systems, transform it into a suitable format, and load it into a destination like a data warehouse or data lake.
- ELT (Extract, Load, Transform) Tools: Load data into the target system before transforming it, leveraging the processing power of modern data warehouses for efficient data analytics.
- [Data Replication Tools](https://skyvia.com/blog/top-data-replication-tools/): Copy data from one database to another, ensuring consistency and enabling backup and recovery options.
- Data Synchronization Tools: Keep data updated and consistent across different systems in real time.
- [Data Integration Platforms](https://skyvia.com/blog/data-integration-tools/): Combine different data movement functionalities, offering a more comprehensive approach to data integration.

Data movement tools offer companies many benefits, such as:

- Data availability, accessibility, and accuracy.
- Data loss and corruption prevention with data backup, validation, and error-handling functionality.
- An easy integration process with automated data collection and merging into one unified system.
- Data security.

Let's consider the top 2025 data movement tools, their key features, pros, and cons to find your unique solution.

Skyvia

[Skyvia](https://skyvia.com/data-integration/import) is a solid choice for companies searching for a data movement tool that handles a lot of business scenarios, including data warehousing, CRM, ERP integration, etc. The platform is cloud-based, no-code, and supports an impressive list of [data integration scenarios](https://skyvia.com/learn/what-is-data-integration), like ETL, ELT, reverse ETL, data migration, one-way and bidirectional data sync, workflow automation, data sharing via REST API, backups for cloud apps, mapping, recovery, and more.

Key Features

Skyvia is a universal data management tool, allowing data import, export, replication, and synchronization between different sources.
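The ETL pattern described above can be sketched in a few lines. This is a generic illustration with invented sample data, not any particular vendor's implementation; an in-memory SQLite database stands in for the warehouse, and in an ELT flow the transform step would instead run inside the warehouse after loading.

```python
import sqlite3
from datetime import datetime

def extract(source):
    """Pull raw records from the source system (here, a plain list)."""
    return list(source)

def transform(rows):
    """Clean and normalize before loading; in ELT this runs after the load."""
    return [
        (r["name"].strip().title(),                                   # tidy names
         datetime.strptime(r["date"], "%m/%d/%Y").strftime("%Y-%m-%d"),  # ISO dates
         float(r["amount"]))                                          # numeric amounts
        for r in rows
    ]

def load(rows, conn):
    """Write the transformed rows into the destination warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, day TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

source = [
    {"name": "  alice ", "date": "03/15/2024", "amount": "19.99"},
    {"name": "BOB", "date": "03/16/2024", "amount": "5"},
]
warehouse = sqlite3.connect(":memory:")
load(transform(extract(source)), warehouse)
print(warehouse.execute("SELECT * FROM sales").fetchall())
# [('Alice', '2024-03-15', 19.99), ('Bob', '2024-03-16', 5.0)]
```

Data movement platforms automate exactly this chain (plus scheduling, retries, and connector management) so teams don't maintain such scripts by hand.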
- Skyvia offers the ability to create complex data pipelines ([Data Flow visual designer](https://docs.skyvia.com/data-integration/data-flow/)) with logic control ([Control Flow](https://docs.skyvia.com/data-integration/control-flow/)).
- Automated and scheduled data movement reduces manual effort and ensures timely data availability.
- The platform's flexibility in mapping and transformation supports dynamic data flows that adapt to changing business needs.
- Skyvia maintains data integrity during movement by applying validations and transformations that preserve the quality and relevance of data.

Pros

- Skyvia handles data-related scenarios of any complexity, regardless of the flows and transformations involved.
- Skyvia supports [180+](https://skyvia.com/connectors/) sources (databases, cloud services, CRMs like Salesforce, etc.).
- Usability: according to G2 Crowd, Skyvia leads among the [easiest-to-use ETL tools](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1).
- Near real-time data processing ensures that data in different systems is always up to date.
- Skyvia's secure handling, movement, and storage of data comply with standard data protection regulations.

Cons

- The solution is highly user-friendly and covers a lot of FAQs with good-quality tech docs included directly in the app; still, the video tutorials are a work in progress.

Pricing

Skyvia's [pricing](https://skyvia.com/pricing/) is flexible and depends on your usage. You can start with the Freemium plan to see how it goes or select from the paid options, ranging from Basic ($15/mo) to Professional ($399/mo). And, of course, there's the Enterprise option, tailored to your unique needs.

AWS Data Migration

If your data movement concerns database migration to the cloud or between different AWS services, [AWS Database Migration Service](https://aws.amazon.com/dms/) is the ideal choice.
This cloud-based solution migrates data from the most widely used databases:

- Oracle
- MySQL
- PostgreSQL
- Microsoft SQL Server
- MongoDB

It supports migrations to and from AWS services like Amazon RDS, Amazon DynamoDB, Amazon Redshift, and Amazon S3.

Key Features

- The service is scalable to accommodate large-scale migrations and complex database environments.
- AWS DMS ensures data security during migration with features like encryption in transit.
- AWS DMS enables database migration with minimal downtime, allowing businesses to continue operations during the migration process.
- Its data replication abilities help consolidate databases into a centralized data warehouse or keep databases in sync during migration.

Pros

- A user-friendly UI, accessible to those without deep technical expertise in database migration.
- Support for various migration strategies, including homogeneous migrations (like-to-like databases) and heterogeneous migrations (different source and target databases).
- A Schema Conversion Tool for certain database types that automatically converts the source database schema and code to be compatible with the target database.

Cons

- Complexity of large-scale migrations.
- Limited support for specific data types and databases, especially when the source and target databases differ.
- Complexity when migrating from or to less common database platforms.

Pricing

The [pricing](https://aws.amazon.com/dms/pricing/) is pay-as-you-go and includes a free tier.

Fivetran

If your business seeks to leverage its data for insights and decision-making without the complexity of managing custom data pipelines, consider [Fivetran](https://www.fivetran.com/). It focuses on data ingestion and ELT and supports [300+](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia) sources.

Key Features

- Fully managed and automated data pipelines, reducing the need for manual setup and maintenance.
- The solution supports real-time data replication, ensuring up-to-date data availability for analytics and decision-making.
- The ELT approach leverages the power of modern data warehouses for efficient data processing.

Pros

- The app is accessible to users with varying levels of technical expertise.
- Real-time data synchronization ensures that the data in the warehouse is always current, enhancing data reliability for analytics.
- The app easily integrates with multiple cloud platforms and data warehouse technologies.

Cons

- Cost can be a concern for small businesses.
- Data transformation complexity and limitations for unusual scenarios.

Pricing

The volume-based [pricing](https://www.fivetran.com/pricing) is calculated on Monthly Active Rows (MAR), with a 14-day free trial.

Integrate.io

[Integrate.io](https://www.integrate.io/), formerly Xplenty, is a cloud-based data integration platform specializing in simplifying the process of preparing and transferring data. It focuses on data integration, ETL, ELT, reverse ETL processes, and API data management scenarios.

Key Features

- [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) functionalities enable users to extract data from [150+](https://skyvia.com/etl-tools-comparison/integrateio-alternative-skyvia) sources, transform it as needed, and load it into their desired destination, such as data warehouses or databases.
- [Data transformation tools](https://skyvia.com/blog/best-data-transformation-tools/) allow users to clean, format, and enrich data during integration.
- Integrate.io ensures high levels of data security and compliance with regulations like GDPR, making it suitable for businesses concerned with data privacy and security.

Pros

- A low-code approach and intuitive interface make setting up and managing data pipelines easy, broadening their accessibility.
- Being cloud-based, it scales well to handle increasing data loads and complex integration requirements.
Cons

- Cost may be a concern for smaller businesses or those with limited data integration needs.
- The platform is user-friendly for basic tasks, but complex data transformations require a deeper understanding of data processes.
- Not all integrations support real-time data processing, which could be a limitation for specific use cases.

Pricing

The [pricing](https://www.integrate.io/pricing/) depends on the model and ranges from $15,000/year for the Starter plan to $25,000/year for the Professional plan, both with a 14-day free trial. You may also use the Enterprise plan for custom needs.

Matillion

[Matillion](https://www.matillion.com/) is a cloud-native data integration and transformation tool for data movement that supports ETL, ELT, reverse ETL, and data replication scenarios. The solution mainly suits businesses heavily invested in cloud data warehousing and analytics.

Key Features

- ETL/ELT capabilities enable users to move and transform data from various sources to cloud data warehouses.
- The cloud-native design allows it to integrate seamlessly with cloud data warehouses like Amazon Redshift, Google BigQuery, Snowflake, and Microsoft Azure Synapse.
- Data transformation tools prepare data for analytical and reporting purposes.
- The platform allows for orchestrating complex data workflows and automating data integration tasks.

Pros

- Integration with major cloud data warehouses makes it a preferred choice for cloud-based data workflows.
- Optimization for high-performance data processing in the cloud fits large-scale data operations.
- The intuitive graphical interface simplifies the design and management of data transformation workflows.
- Matillion offers flexibility in handling [150+](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia) data sources and transformation requirements.
- The system processes data quickly and efficiently, which is crucial for time-sensitive analytics.
Cons

- Matillion's cost may be a significant consideration for smaller businesses or those with limited data integration needs.
- Advanced scenarios might require a deeper technical understanding.
- Dependency on a cloud environment may not suit businesses looking for on-premises solutions.
- The system can be resource-intensive for extensive data processing tasks, which could drive up cloud resource costs.

Pricing

The [pricing](https://www.matillion.com/pricing) is consumption-based, with a 30-day free trial plus 500 Matillion credits. The solution doesn't offer a free plan.

Stitch

[Stitch](https://stitchdata.com/) is a cloud-based data integration service offering an ELT data pipeline. It suits businesses seeking to consolidate data from various sources into a single cloud-based data warehouse or data lake. As a data movement tool, Stitch is ideal for companies looking for simplicity in their data integration efforts.

Key Features

- Stitch follows an ELT model, where data is first extracted and loaded into the target data warehouse, and transformations are performed afterward.
- It supports real-time data replication, allowing businesses to have up-to-date data in their warehouses for timely analytics and decision-making.
- The solution offers automated data sync, reducing manual effort and ensuring data consistency across systems.

Pros

- A user-friendly interface makes setting up and managing data integrations straightforward.
- Users can quickly set up and start data integrations, making it efficient for businesses looking to consolidate data without delay.
- It supports [140+](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia) data sources, providing flexibility in data integration strategies.

Cons

- Stitch may offer limited customization options for complex data transformations and specialized integration scenarios.
- Depending on the pricing plan, there may be limits on data volume, which could be a constraint for larger enterprises.
- Stitch's effectiveness may depend on the capabilities and compatibility of the selected target data warehouse.

Pricing

The usage-based [pricing](https://stitchdata.com/pricing/) ranges from $100/mo for Standard to $2500/mo for Premium, with a 14-day free trial. There is no free plan.

Hevo Data

One more data movement platform offering ETL, ELT, and reverse ETL is [Hevo](https://hevodata.com/). It's cloud-based and might be a good choice for businesses looking to simplify integrating, transforming, and loading data into a data warehouse or database for analysis.

Key Features

- Hevo offers a fully automated pipeline for data integration, eliminating the need for manual intervention and extensive coding.
- The solution provides real-time data replication, ensuring the latest data is always available for analysis in your warehouse.
- It offers ETL and ELT functionalities, providing flexibility in processing and moving data.
- Users can transform and enrich their data with Hevo's built-in transformation capabilities before loading it into the destination system.
- The solution automatically detects schema changes in the source data and replicates them in the destination system.

Pros

- It supports [150+](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia) data sources, including databases, SaaS applications, cloud storage, SDKs for custom sources, and streaming services.
- Hevo can efficiently manage large volumes of data, which suits businesses of all sizes.
- The user-friendly UI simplifies complex data integration tasks for users with varying technical expertise.

Cons

- Depending on the scale of use and the range of features required, the cost can be a deterrent for small businesses or startups.
- Highly complex transformation requirements might need additional processing or tools.

Pricing

The [pricing](https://hevodata.com/pricing/pipeline/) is usage-based on Hevo events and allows a 14-day free trial.
There is also a free plan with a limited set of connectors.

Talend

[Talend](https://talend.com/) offers both cloud and on-premises solutions for businesses with diverse and complex data integration needs. This low-code platform brings data integration and governance together in a single place for various data movement scenarios. There are two versions of its products:

- Talend Open Studio, accessible for free.
- The fully featured paid solution, Talend Data Fabric.

Key Features

- ETL and ELT capabilities enable the movement and transformation of data from various sources.
- Data quality management includes features for improving data quality, like data cleansing, deduplication, and validation, which are crucial for ensuring the accuracy and reliability of data.
- Talend offers specialized tools for big data integration, enabling businesses to effectively leverage data from big data platforms.

Pros

- The user-friendly graphical UI allows users to design and manage data integration processes visually.
- The platform includes data governance capabilities, ensuring that data handling complies with regulatory standards.
- [1,000](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia) connectors and components make it suitable for complex and diverse data environments.
- The solution is scalable to handle large data volumes, which suits enterprises.

Cons

- More complex implementations may require significant resources, both in terms of hardware and an expert workforce.
- The cost can be a bottleneck for small businesses or those with limited integration needs.

Pricing

The [pricing](https://talend.com/pricing/) is subscription-based, with a 14-day free trial for the Talend Data Fabric cloud solution. Talend Open Studio features are available for free.

IBM Informix

[IBM Informix](https://www.ibm.com/products/informix) is not primarily a data movement tool but rather a database management system for relational and non-relational data.
However, you may use its performance, reliability, and data replication features for specific data movement requirements, especially within an IBM-centric or hybrid technology environment.

Key Features

- Its high-performance database engine can handle large volumes of data efficiently.
- The system works with structured, unstructured, and time series data, offering flexibility in data management.
- You can use Enterprise Replication and HDR (High-Availability Data Replication) for data synchronization and movement across different systems or environments.

Pros

- Informix integrates with other IBM data management and analytics tools and various third-party solutions, facilitating data movement and analytics.
- The platform is fast, reliable, scalable, and fits transaction-heavy environments with growing data needs.
- It can manage various data types, which is beneficial for complex data landscapes.

Cons

- Since it is primarily a database management system, its data movement capabilities might not be as extensive or straightforward as those of dedicated data integration or [ETL tools](https://skyvia.com/blog/etl-tools/).
- The complexity of Informix's broader feature set might deter users focused solely on data movement.
- Depending on the deployment and selected feature set, cost and licensing can be significant considerations.

Pricing

The pricing is somewhat complicated and based on the following:

- Licensing options, such as perpetual licenses and subscription-based models.
- Edition options that depend on features and capabilities: some editions are tailored for small to medium businesses, with more advanced options for larger enterprises.
- The scale of deployment, such as the number of cores, processors, or the volume of managed data.

IRI NextForm

As a data movement tool, [IRI NextForm](https://www.iri.com/products/nextform) is a good choice for large businesses requiring data migration, conversion, and transformation across various databases, file formats, and systems.
Key Features

- NextForm converts data between flat files, XML, JSON, and traditional database formats.
- It handles data migration between databases, warehouses, and platforms, which suits system upgrades, consolidations, or cloud migrations.
- The solution offers data replication features, allowing data synchronization across different systems, which is essential for backup and disaster recovery strategies.
- NextForm provides data transformation during movement, such as cleansing, remapping, and aggregation.

Pros

- Compatibility with various data sources and targets, including popular databases like Oracle, SQL Server, MySQL, and cloud-based solutions.
- The user-friendly UI allows easy setup and management of data movement tasks, even for non-technical users.
- The ability to handle large data volumes, which suits enterprises and big data projects.

Cons

- Despite the user-friendly UI, the range of features and capabilities might take some time to learn, especially for complex data movement tasks.
- The cost may be challenging for small businesses or projects with limited data movement requirements.
- You may need additional customization and resources for integration with third-party platforms.

Pricing

The [pricing](https://www.iri.com/products/go) is indicative and may vary based on specific requirements, additional features, or changes in the pricing structure, but you can start with a free trial.

Methodology

The criteria below describe the methodology used to select the best data movement tools.

Frequency of Mentions in Industry Publications

- Recognition and Credibility: Tools frequently mentioned in reputable industry publications are often recognized as leading solutions in their field.
- Trends and Popularity: A high frequency of mentions can indicate a tool's popularity and relevance to current industry trends.
\u2013 Expert Opinions and Reviews: Articles and reports in these publications often include expert opinions and reviews, providing an in-depth analysis of the tool\u2019s strengths and weaknesses. Expert Interviews \u2013 Insider Insights: Interviews with industry experts, such as data architects, IT managers, and analysts, can provide insights based on practical experience and professional expertise. \u2013 Use Case Exploration: Experts may provide information on specific use cases and scenarios where certain data movement tools excel. \u2013 Future Outlook: Interviews can also shed light on emerging trends and developments in data movement technologies, highlighting tools well-positioned for future growth. Additional Factors \u2013 User Testimonials and Case Studies: Insights from users and detailed case studies can provide real-world evidence of a tool\u2019s effectiveness and reliability. \u2013 Awards and Recognitions: Awards or recognitions from industry bodies or technology events can also contribute to the tool\u2019s standing. \u2013 Community and Ecosystem: The size and activeness of the tool\u2019s user community and its ecosystem of partners and integrations can indicate its utility and adoption. Summary In today\u2019s reality, data movement tools are like public transport, transferring data between various storage systems, applications, and cloud environments in time. Such tools facilitate data integration, replication, and synchronization across different platforms, ensuring that data is accessible for analysis, reporting, and other operations. This \u2018transport\u2019 selection depends on our general needs, sources, destinations, and current scenarios. In this case, Skyvia offers a combination of ease of use, comprehensive integration capabilities, and a focus on security and reliability. 
Its adaptability to various data movement scenarios, from simple data transfers to complex integrations and transformations, makes it a good choice for businesses looking for simple solutions to complicated tasks.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Selecting the Best Data Replication Tool

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | February 24, 2025
Have you dealt with inconsistencies between databases? Struggled with slow performance when accessing critical information? Or maybe you're just looking for a way to ensure insights are always backed up and secure? That's where data replication tools step in to save the day. So, what does this term mean, and why is such a process so important for businesses in our modern, digital, competitive world? What benefits can it bring to various business areas? And, lastly, which data replication software should you select as the best fit for your company? Let's dig in.

Table of Contents

- What is Data Replication?
- Key Data Replication Techniques
- Business Success With Data Replication Software
- Skyvia
- Hevo Data
- Fivetran
- IBM InfoSphere
- Dell RecoverPoint
- Acronis Cyber Backup
- Zerto
- Qlik Replicate
- Oracle GoldenGate
- NAKIVO Backup & Replication
- Tools Comparison
- Choosing the Right Tool for Data Replication
- Conclusion

What is Data Replication?

In a few words, data replication is the process of copying data from one location to another, ensuring that the copy remains consistent with the source. The locations can be cloud-based hosts, on-premises hosts, hosts in different geographic locations, storage on the same host, and so on. The replication process is flexible enough that we can:

- Replicate data in real time.
- Copy on demand.
- Transfer in batches or in bulk on a schedule.

Key Data Replication Techniques

Replication is all about making copies of information and storing them in different locations for better performance, reliability, and disaster recovery. However, not all replication methods work the same way.
Here are the key techniques:

- Full replication copies everything from the source database to the target system. Great for backup, but can be resource-heavy.
- Snapshot replication takes a snapshot of data at a specific point in time. Perfect for periodic updates but not real-time.
- Transactional replication continuously updates the target as changes happen in the source. Ideal for real-time data sync.
- Log-based replication uses database logs to track changes and replicate them efficiently. Best for low-latency, high-traffic systems.

Each [method](https://skyvia.com/blog/connect-salesforce-to-sql-server/) has pros and cons, so choosing the right one depends on data volume, update frequency, and system complexity.

Business Success With Data Replication Software

Implementing replication technology can be a game-changer for any organization, improving efficiency, reliability, and resilience in handling critical information. Let's consider how it helps.

- Prevents loss and ensures security. Maintaining multiple copies across different locations guarantees high accuracy and consistency, protecting against accidental deletion, system failures, or cyber threats.
- Saves time and reduces errors. Automation eliminates manual transfers, accelerates processes, and minimizes human mistakes, leading to smoother workflows and reduced downtime.
- Keeps information synchronized. It ensures real-time updates, so teams always work with the most accurate and up-to-date records, which is essential for effective decision-making.
- Strengthens disaster recovery. With a robust replication strategy, businesses can recover swiftly from system crashes or disruptions, minimizing operational impact.
- Boosts performance and scalability. Efficiently managing data flow improves system responsiveness and enables business growth without performance slowdowns.
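To make the trade-offs between the techniques concrete, the snapshot and log-based approaches can be sketched in a few lines of Python. This is a toy model only, not any vendor's implementation: plain dicts stand in for the source and target databases, and a list stands in for the change log.

```python
# Toy contrast of two replication techniques:
# - snapshot replication: copy the whole source state at a point in time;
# - log-based replication: replay only the changes recorded since the last run.

source = {"1": "alice", "2": "bob"}
target = {}
change_log = []   # log-based replication reads pending changes from here


def write(key, value):
    """Apply a write to the source and record it in the change log."""
    source[key] = value
    change_log.append(("upsert", key, value))


def snapshot_replicate():
    """Copy the entire source state; simple but resource-heavy."""
    target.clear()
    target.update(source)


def log_replicate(from_pos):
    """Replay only changes made since `from_pos`; cheap and low-latency."""
    for op, key, value in change_log[from_pos:]:
        if op == "upsert":
            target[key] = value
    return len(change_log)   # new log position for the next run


snapshot_replicate()          # initial full copy of the source
pos = len(change_log)
write("3", "carol")           # a new change arrives at the source
pos = log_replicate(pos)      # only ("upsert", "3", "carol") is shipped
assert target == source       # the replica is consistent again
```

Real tools add conflict handling, ordering guarantees, and delete propagation, but the trade-off is visible even here: the snapshot path re-copies everything, while the log path ships only what changed.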
The next step is to choose the perfect [solution](https://skyvia.com/blog/salesforce-to-salesforce-integration/) that suits your organization's needs and solves its business pain points. This [article](https://skyvia.com/blog/best-data-pipeline-tools/) reviews the top 10 platforms to help companies choose, based on the most important criteria.

Skyvia

[Skyvia](https://skyvia.com/data-integration/replication) is one of the best ELT, ETL, and reverse [ETL platforms](https://skyvia.com/blog/etl-tools/) for cloud data replication on the market in 2025. It's cloud-based, so you can get started quickly with just a device and a Wi-Fi connection. The solution is no-code, so it's simple to use, which saves costs by removing the need to spend additional time and money on staff training. Skyvia also offers [200+](https://skyvia.com/connectors#marketing) connectors, depending on users' needs.

With [Skyvia replication](https://skyvia.com/learn/what-is-data-replication), you can:

- Quickly and effortlessly create an exact copy of the information in a database or warehouse.
- Set up and automate the transfer process in just minutes.
- Ensure seamless incremental updates, keeping your target system consistently refreshed.

Rating

[G2 Crowd](https://www.g2.com/products/skyvia/reviews#survey-response-9097470): 4.8/5 (based on 242 reviews)

Pricing

The platform strikes a good balance between [price](https://skyvia.com/pricing/) and functionality. You can choose the Freemium pricing model to get started.

Skyvia Data Replication Software: 3 Easy Steps to Get Started

1. Create connections to the data source and destination.
2. Create the package for replication and choose the objects.
3. Set up the schedule to automate the replication process.

Go [here](https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html) for more details on how to configure the replication package.
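All three steps are performed in Skyvia's no-code UI, but conceptually a replication package reduces to a small amount of configuration: two connections, a package naming the objects to copy, and a schedule. The Python sketch below is only a mental model with hypothetical names (`Connection` and `ReplicationPackage` are illustrative, not Skyvia's actual API):

```python
# Hypothetical mental model of a replication package; the class and field
# names are illustrative and do NOT correspond to Skyvia's actual API.
from dataclasses import dataclass, field


@dataclass
class Connection:
    connector: str                          # step 1: e.g. a source app or a database
    params: dict = field(default_factory=dict)


@dataclass
class ReplicationPackage:
    source: Connection
    target: Connection
    objects: list                           # step 2: the objects chosen for copying
    schedule_cron: str                      # step 3: when the package runs
    incremental: bool = True                # only changed rows after the first full copy


def describe(pkg: ReplicationPackage) -> str:
    """Summarize what the configured package will do."""
    mode = "incremental" if pkg.incremental else "full"
    return (f"{mode} copy of {', '.join(pkg.objects)} "
            f"from {pkg.source.connector} to {pkg.target.connector} "
            f"on schedule '{pkg.schedule_cron}'")


pkg = ReplicationPackage(
    source=Connection("ZohoDesk"),
    target=Connection("PostgreSQL"),
    objects=["Tickets", "Contacts"],
    schedule_cron="0 2 * * *",              # nightly at 02:00
)
print(describe(pkg))
```

In Skyvia itself, all of this is set up through the wizard rather than in code; the sketch just shows how little information the three steps actually require.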
For instance, you may quickly replicate Zoho Desk data to a database for further analysis and reporting using Skyvia.

Hevo Data

Competitor number two in this data replication ranking is [Hevo Data](https://hevodata.com/). The solution is no-code or low-code, so depending on the task, it saves companies' costs and time and allows zero data loss. The platform provides 150+ sources, like SaaS apps, databases, file storage, and streaming sources, in near real time (a bit fewer than Skyvia, [if compared](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia)). The pipeline is user-friendly and offers health monitoring, meaning you'll know in real time when something goes wrong. Additionally, schema errors are avoided with automatic source schema mapping.

Rating

[G2 Crowd](https://www.g2.com/products/hevo-data/reviews?source=search): 4.4/5 (based on 254 reviews)

Pricing

To compare [pricing](https://hevodata.com/pricing/pipeline/): Hevo Data starts at $239 per month, while Skyvia starts at $15 for the same period. For Hevo Data, you may also use the free plan, with limitations such as 50+ sources and up to 5 users.

Fivetran

The next strong competitor in the data replication tools market is [Fivetran](https://www.fivetran.com/). Users like the intuitive interface, but the other side of the coin is the high price. The solution is low-code, and the UI is convenient, allowing users to easily see what they're doing. An impressive number of connectors (300+) is available for data replication, including a set of databases, data warehouses, and more. Automated real-time data replication is also available here.

Rating

[G2 Crowd](https://www.g2.com/products/fivetran/reviews): 4.2/5 (based on 406 reviews)

Pricing

The pay-as-you-go [pricing](https://www.fivetran.com/pricing) model starts with a free plan and offers a set of subscriptions depending on the user's needs. You can try each plan with a 14-day trial.
IBM InfoSphere

[IBM InfoSphere](https://www.ibm.com/information-server) is also a noteworthy data replication market competitor. What can it offer your business? First, data delivery from IBM Db2, Oracle, SQL Server, PostgreSQL, and more. Next, dynamic, near-real-time delivery of transactional relational data. It also provides real-time integration of inventory, product, customer, and sales information. The platform supports 24/7 worldwide operations with reliable, exceptional performance and synchronization for continuous data availability.

Rating

[G2 Crowd](https://www.g2.com/products/ibm-infosphere-information-server/reviews): 4.1/5 (based on 23 reviews)

Pricing

The pricing depends on a list of factors, like user license type and subscription plan. Keep PVUs (Processor Value Units) in mind while selecting the most convenient pricing option. It's best to contact IBM directly to find the right fit.

Dell RecoverPoint

[Dell RecoverPoint](https://www.dell.com/en-us/dt/data-protection/recoverpoint.htm) is one of the best data safety solutions on the current market. It can protect users' data immediately, anywhere, and anytime, regardless of the underlying storage. The platform offers 30,000+ appliances to protect and recover data in case of any disaster, and it supports the entire Dell storage portfolio (ScaleIO, etc.). It also provides asynchronous and synchronous data replication for block-based storage, can restore applications to any point in time, and allows replicating information over any distance (to decrease traffic capacity consumption).
Dell RecoverPoint also features a 3-site MetroPoint topology, providing disaster recovery capabilities with consistent access to VPLEX Metro for uninterrupted operations.

Rating

[G2 Crowd](https://www.g2.com/products/dell-recoverpoint/reviews): 4.2/5 (based on 29 reviews)

Pricing

Dell RecoverPoint uses flexible [models](https://www.dell.com/en-us/dt/payment-solutions/index.htm#tab0=0) depending on each company's needs. You may select the subscription model or one-time payment licensing. In both cases, the final price depends on the chosen features and capabilities. Also keep in mind the additional fees that may apply to your selection.

Acronis Cyber Backup

[Acronis Cyber Backup](https://www.acronis.com/en-us/products/cyber-protect/backup/) is one more competitive protection and backup solution focused on real-time data replication. One of its standout features is AI-based and blockchain notarization technology, which provides strong protection against ransomware. The platform supports over 20 physical, virtual, cloud, and mobile environments, ensuring comprehensive security. No matter what incident occurs, users can back up files and disk images as a single file, allowing for seamless restoration on a new device without dependency on prior system conditions. With 24/7 protection, Acronis' agent continuously monitors applications, ensuring that any recent changes are backed up and ready for restoration at any time.

Rating

[G2 Crowd](https://www.g2.com/products/acronis-cyber-protect-cloud/reviews): 4.7/5 (based on 1038 reviews)

Pricing

The [pricing](https://www.acronis.com/en-us/products/cyber-protect/purchasing/#step=1) starts with the Standard plan ($85) and includes a 30-day free trial.

Zerto

[Zerto](https://www.zerto.com/) is a robust real-time data protector. Disaster recovery and ransomware resistance are its strongest sides. Thanks to zero RPO (recovery point objective), you can forget about information loss during replication and downtime.
The platform also enhances data replication efficiency and security through integrated WAN optimization, encryption, and QoS. Zerto provides automated testing and compliance reports, making it easier to maintain regulatory standards. Additionally, its advanced analytics help organizations predict future needs, enabling smarter insights management and long-term resilience.

Rating
[G2 Crowd](https://www.g2.com/products/zerto/reviews?source=search) 4.6/5 (based on 73 reviews)

Pricing
There are no fixed [models](https://www.zerto.com/try-or-buy/pricing-and-licensing/) here, so it's best to contact the company directly to choose a suitable one. You can choose between buying the product or using it for free.

Qlik Replicate

Simplicity is what [Qlik Replicate](https://www.qlik.com/us/products/qlik-replicate) is about: you don't need to worry about how to operate it or control the replication process in detail. It enables real-time replication, ensuring seamless and efficient data streaming and ingestion across various warehouses and databases. Additionally, the tool supports large-scale data replication to any desired destination while providing detailed insight control for enhanced accuracy and flexibility.

Rating
[G2 Crowd](https://www.g2.com/products/qlik-replicate/reviews) 4.3/5 (based on 109 reviews)

Pricing
The [price](https://www.qlik.com/us/pricing/data-integration-products) depends on the chosen functionality, but you can start with a free test drive.

Oracle GoldenGate

As for [Oracle GoldenGate](https://www.oracle.com/pl/integration/goldengate/), be ready to work with a real-time [data mesh](https://skyvia.com/blog/data-mesh-vs-data-lake/) solution that ensures continuous information availability for in-depth analysis whenever needed. It simplifies replication and stream processing by eliminating the need for complex compute management to design, execute, and monitor workflows.
High-performance integration enables seamless real-time data movement across diverse platforms, ensuring businesses can act on insights instantly. Its scalability and reliability make it a go-to solution for organizations handling large volumes of critical data.

Rating
[G2 Crowd](https://www.g2.com/products/oracle-goldengate/reviews?source=search) 3.9/5 (based on 34 reviews)

Pricing
The [price](https://www.oracle.com/pl/integration/pricing/) depends on the options you choose.

NAKIVO Backup & Replication

The [NAKIVO](https://www.nakivo.com/) replication solution is a solid competitor that saves users time and secures data with an efficient, friendly approach. It offers a versatile backup selection, supporting virtual machines, physical machines, cloud workloads, and Microsoft 365, making it a comprehensive choice for various IT environments. Additionally, its NAS backup solution allows businesses to protect file share data on NAS, Windows, and Linux systems, while incremental backups help optimize storage and improve efficiency. With fast recovery options and automated backup management, NAKIVO ensures organizations can minimize downtime and maintain seamless operations.

Rating
[G2 Crowd](https://www.g2.com/products/nakivo-backup-replication/reviews?source=search) 4.7/5 (based on 279 reviews)

Pricing
The [price](https://www.nakivo.com/how-to-buy/pricing/) depends on the chosen features and varies from fixed packages to Enterprise Plus.

Tools Comparison

Let's compare all the listed platforms against the essential data replication criteria.

| Platform | Real-Time Replication | Security | Ease of Use | Cost |
|---|---|---|---|---|
| Skyvia | Yes | HIPAA, GDPR, PCI DSS, ISO 27001 and SOC 2 (by Azure) | No-code, easy-to-use wizard | Cost-effective [pricing](https://skyvia.com/pricing/) with a Freemium plan |
| Hevo Data | Yes | SOC 2, HIPAA, GDPR, CCPA | No-code, low-code solution | Usage-based pricing (Hevo events); full-feature 14-day free trial; free plan |
| Fivetran | Yes | HIPAA, GDPR, SOC 2 Type II, Privacy Shield, ISO 27001, PCI DSS Level 1 | Low-code solution | Volume-based pricing via Monthly Active Rows (MAR); 14-day free trial; free plan |
| IBM InfoSphere | Yes | ISO 27001, SOC 2 Type II, HIPAA | Low-code solution | Depends on license type and subscription plan |
| Dell RecoverPoint | Yes | ISO 27001, SOC 2 Type II, HIPAA | No-code solution | Subscription model or one-time payment license |
| Acronis Cyber Backup | Yes | ISO 27001, HIPAA, SSAE 16 | Requires technical knowledge and configuration | 30-day free trial; three paid plans starting from $85 |
| Zerto | Yes | ISO 27001, SOC 2 Type II, GDPR | No-code solution | Not fixed; free plan available |
| Qlik Replicate | Yes | ISO 27001, SOC 2 Type II, GDPR | Intuitive GUI, but configuration steps are required | Flexible pricing; free trial available |
| Oracle GoldenGate | Yes | ISO 27001, GDPR, HIPAA | No-code solution | Flexible pricing depending on user needs |
| NAKIVO Backup & Replication | No | ISO 27001, GDPR, HIPAA | Code-based; requires configuration and setup | Flexible pricing, starting at $1.95 per month |

Choosing the Right Tool for Data Replication

Selecting the best solution depends on each business's needs, budget, and technical requirements. While all the platforms reviewed offer powerful replication capabilities, the right choice depends on several key factors:
- Real-Time Replication. Does your business require instant data synchronization, or can it operate with scheduled updates?
- Security & Compliance. Ensure the tool meets industry standards like ISO 27001, GDPR, HIPAA, and SOC 2 to protect sensitive data.
- Ease of Use. Some platforms provide no-code or low-code solutions, while others require extensive configuration and technical expertise.
- Pricing & Scalability.
Consider whether the platform offers a freemium model, usage-based pricing, or flexible plans to match your growing needs.

Conclusion

After reviewing this set of solid, competitive data replication tools, you can make your choice with confidence. If you're looking for a solution that offers data replication with automatic scheduling and incremental updates, an intuitive, user-friendly UI, and a competitive pricing model with the ability to use the platform for free, try Skyvia in action.

FAQ for Data Replication Tools

What is data replication, and why is it important?
Data replication involves copying data from one location to another to ensure system consistency and availability. It's crucial for disaster recovery, load balancing, and enhancing data accessibility. By maintaining synchronized copies, organizations can prevent data loss and ensure continuous operations.

How does data replication differ from data synchronization?
While both processes involve copying data, replication focuses on creating exact copies across systems, whereas synchronization ensures that data remains consistent across multiple locations, updating changes in real time. Replication is often used for backup and redundancy, while synchronization is vital for real-time data consistency.

What are the main types of data replication?
The primary types include:
– Full-Table Replication. Copying entire datasets.
– Transactional Replication. Replicating individual transactions as they occur.
– Snapshot Replication. Creating point-in-time copies of data.
– Merge Replication. Combining data from multiple sources into a single dataset.
Each type serves different purposes, depending on organizational needs.

What factors should I consider when choosing a data replication tool?
Key considerations include:
– Real-Time Replication Capabilities. Does the tool support immediate data updates?
\u2013\u00a0Security Compliance.\u00a0Does it adhere to standards like GDPR, HIPAA, or SOC 2? \u2013\u00a0Ease of Use.\u00a0Is the interface user-friendly? \u2013\u00a0Supported Data Sources.\u00a0Can it integrate with your existing systems? \u2013\u00a0Cost.\u00a0Does it fit within your budget? Can data replication impact system performance? Yes, improper replication strategies can lead to increased system load, latency, or conflicts. It\u2019s essential to design replication processes that minimize performance issues, such as scheduling replications during off-peak hours or using efficient replication methods. How does real-time data replication work? Real-time replication continuously monitors and copies data changes as they occur, ensuring that all systems have the most up-to-date information. This approach is vital for applications requiring immediate data consistency across platforms. Are there security concerns with data replication? Yes, replicating sensitive data can pose security risks if not handled properly. It\u2019s crucial to ensure that replication processes comply with security standards and that data is encrypted during transfer and at rest. How do I monitor and manage data replication processes? Many data replication tools offer dashboards and alerts to monitor replication status, performance, and potential issues. Regular audits and performance tuning can help maintain efficient replication processes. 
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)
Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Analytics & Reporting](https://skyvia.com/blog/category/analytics-and-reporting/)
Top Data Visualization Conferences 2025
By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) | September 4, 2024
Images are worth a thousand words. Instead of writing long reports, you can create a dashboard that is clear to anyone on an international team. Data visualization provides the mechanisms to create such dashboards. It's an art of converting raw data into something meaningful and visually appealing. One way to learn and master this art is to attend data visualization conferences, where you can discover the best practices and recent trends in visual analytics. This article provides a list of popular data visualization conferences along with all the essential details about them.

Table of Contents
- Benefits of Attending Data Visualization Conferences
- Key Data Visualization Conferences in 2025
- Emerging Trends in Data Visualization
- Tips for Making the Most of Data Visualization Conferences
- Conclusion

Benefits of Attending Data Visualization Conferences

Along with the opportunity to master your skills, data visualization conferences offer other benefits, ranging from discovering new tools in the industry to business growth.
- Networking opportunities. As most data visualization events now take place in physical locations, they help you get to know new people. You can find business partners, clients, suppliers, etc. there. Most organizers put shared breakfasts, lunches, and even afterparties on the agenda.
- Industry trends and innovations. Any data visualization conference aims to highlight industry tendencies. Speakers from global companies and university researchers present their theoretical and practical findings.
- New tools, features, and technologies. The mission of conferences is also to introduce new features in existing [data visualization tools](https://skyvia.com/blog/top-bi-visualization-tools/) and present new services in the industry.
That way, you can learn how to streamline your visual analytics.
- Business growth. Data visualization advances overall business analysis. From a single dashboard, you can understand the areas for improvement within business processes and the opportunities to take advantage of.
- Best practices. At data visualization conferences, experts share their knowledge and experience. In 2023, no-code data integration solutions empowering more effective data visualization were widely discussed at various conferences. The [Skyvia](https://skyvia.com/) cloud platform is one such data integration tool for no-code data preparation for visual analysis.

Key Data Visualization Conferences in 2024 and 2025

1. Forrester Technology & Innovation Summit
Date: September 9-12, 2024
Place: Austin, Texas, US and online
Registration fee: starts from $2795
Event website: [https://www.forrester.com/event/technology-innovation-north-america/](https://www.forrester.com/event/technology-innovation-north-america/)

Forrester has been helping businesses gain a clear vision of the technological landscape. It also empowers companies to put customers at the center of everything, from operations to global strategies. Forrester is famous for the events it organizes for business and technology leaders, and this summit is among them. The Technology & Innovation Summit is dedicated to emerging technologies, enterprise architecture, cloud computing, business intelligence, data visualization and analytics, etc. It will include breakout sessions on these topics, workshops, and case studies. The conference will also provide multiple opportunities for networking during shared breakfasts, coffee breaks, and lunches. The cherry on top of this summit is a one-day women's leadership program promoting women in organizations.

2.
PyBay Conference
Date: September 21, 2024
Place: San Francisco, California, US
Registration fee: starts from $250
Event website: [https://pybay.org/](https://pybay.org/)

The Python libraries Matplotlib and Seaborn are widely used for data visualization. In contrast to standard data visualization tools, such as Power BI and Tableau, Python offers a higher degree of customization and flexibility. At the PyBay conference, hundreds of experts proficient in Python will share their experiences. Speakers will present best practices for Python in software development, data science, data visualization, and other application areas.

3. IEEE VIS 2024
Date: October 13-18, 2024
Place: St. Pete Beach, Florida, US
Registration fee: starts from $520
Event website: [https://ieeevis.org/year/2024/welcome](https://ieeevis.org/year/2024/welcome)

This is probably the most notable event in data visualization. IEEE VIS 2024 will convene researchers from universities and practitioners from governmental and commercial organizations to exchange experience in using data visualization tools. The conference lasts an entire week in mid-October and includes a range of tutorials, workshops, symposiums, colloquiums, award sessions, and even exhibitions. Most activities will be dedicated to innovations and trends in data visualization and visual analytics. For instance, there will be tutorials on large language models (LLMs) for information visualization and workshops on data visualization techniques for different industries.

4. Power BI & Fabric Summit 2025
Date: February 18-19, 2025
Place: Online
Registration fee: $159
Event website: [https://globalpowerbisummit.com/](https://globalpowerbisummit.com/)

Power BI has multiple features and visualizations suitable for data analysis. The annual Power BI & Fabric Summit aims to introduce the latest best practices and new features of this tool.
This event is held 100% virtually and has a rather affordable participation fee, so the summit is a good opportunity for data analysts and BI specialists, both beginners and advanced professionals, to get acquainted with recent developments. The summit speakers are mainly Microsoft Power BI product team members and community experts. They will talk about DAX, Dataflow, new visualization options, Power Query, mobile and desktop apps, Power BI Premium, and other popular Power BI-related topics.

5. International Conference on Information Visualization Theory and Applications (IVAPP)
Date: February 26-28, 2025
Place: Porto, Portugal, and online
Registration fee: starts from €210
Event website: [https://ivapp.scitevents.org/](https://ivapp.scitevents.org/)

Even though IVAPP has returned to an offline format in Portugal, it's still possible to participate online from a remote location. The conference gathers researchers and industry practitioners to exchange theoretical and practical experience. The principal topics covered are data visualization, visual analytics, visualization techniques, and scientific visualization. If you want to present your findings and participate as a speaker, you must meet the academic paper submission deadline. As a listener, you can attend workshops and tutorials to gain more practical experience with data visualization techniques.

6. DATA Festival 2025
Date: March 26-27, 2025
Place: Munich, Germany
Registration fee: €990
Event website: [https://barc.com/events/data-festival/](https://barc.com/events/data-festival/)

This event covers many data-related aspects and the modern technologies associated with them. Speakers from renowned German and global companies share their best practices in data visualization, which supports analytics and informed decision-making. In 2024, a case study showed how a 150-year-old logistics company became data-driven.
This has inspired many tech leaders to try out GenAI and data intelligence platforms for their businesses.

7. Tableau Conference 2025
Date: April 29 – May 1, 2025
Place: San Diego, California, US
Registration fee: starts from $1550
Event website: [https://www.salesforce.com/tableau-conference/](https://www.salesforce.com/tableau-conference/)

Tableau is probably the best choice for data visualization due to the incredible number of visuals available. The Tableau Conference is held annually to present new product features and best practices to data professionals. It comprises a series of events, including breakout sessions, keynotes, tutorials, and workshops. On the first day, there will also be Tableau Certification exam add-ons for becoming a certified Tableau professional.

Emerging Trends in Data Visualization

As the IT industry progresses, new technologies for data visualization appear, giving birth to new trends and approaches to visual analytics. Conferences are usually centered around these new technologies and emerging trends. In 2024, AI in data visualization and data storytelling are popular topics at data visualization conferences.

AI in Data Visualization

Popular data visualization tools, such as Power BI and Tableau, implement AI to enrich existing features and add new ones. For instance, Power BI extensively promotes Natural Language Query (NLQ), allowing non-technical users to clearly understand their data. They just need to write a question in plain language and get a response from the AI-based assistant. Another example of AI implementation in modern data visualization tools is anomaly detection. This primarily refers to outliers far beyond the usual data range. Such anomalies significantly impact the results of visual analysis, so they should be detected and removed. AI can also be used in predictive analytics and forecasting.
It takes a series of data from different timelines and apps, detects tendencies, and forms patterns. Based on these findings, AI can shape probable scenarios for user behavior, market trends, and other crucial business areas.

Interactive and Real-Time Data Visualization

Instead of creating dispersed individual charts, consider comprehensive dashboards. You can design them on your own or use pre-configured templates:
- [Tableau dashboard templates](https://public.tableau.com/app/search/vizzes/dashboard%20templates)
- [Looker Studio dashboard templates](https://lookerstudio.google.com/gallery)

That way, you can observe departmental or overall business performance at a glance. You can even set up a real-time data supply to have the most recent and accurate data at hand. For that, consider [data integration tools](https://skyvia.com/blog/data-integration-tools/). [Skyvia](https://skyvia.com/) is a universal cloud platform designed for a wide range of data-related tasks. It's suitable for data integration, SaaS backup, and other operations on data. To supply your data visualization tools with the most recent data, explore the following Skyvia tools:
- [Import tool](https://skyvia.com/data-integration/import). This visual wizard allows you to build [ETL](https://skyvia.com/learn/what-is-etl) and [Reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) pipelines. Collect data from [190+ sources](https://skyvia.com/connectors) and apply [transformations](https://skyvia.com/learn/what-is-data-transformation) to it. Then, send this data to the preferred BI or data visualization tool.
- [Replication tool](https://skyvia.com/data-integration/replication). This visual wizard allows you to create [ELT pipelines](https://skyvia.com/learn/what-is-elt). With this tool, you can send data copies from chosen apps to the preferred database or data warehouse. This scenario suits those who perform analysis directly in the data warehouse environment.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/). It allows users to build more complex data pipelines involving several sources and apply multistage data transformations like lookup and expression.

Skyvia has a range of advantages for data preparation before analysis and visualization:
- Intuitive interface, which makes Skyvia one of the [top user-friendly ETL solutions](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1).
- Connectors to 190+ databases, data warehouses, SaaS apps, storage systems, and other sources, with the number of connectors constantly growing.
- Scheduled updates at up to 1-minute intervals, which makes your data visualization dashboards near real-time.
- Web-based access with no need for infrastructure management or additional software installations.
- Flexible pricing plans, ranging from the free version to the enterprise edition.

Collect, prepare, and load your data into data visualization tools with Skyvia.

Data Storytelling

This is the next level of data visualization. It explains complex data in a digestible format through infographics or interactive maps combined with descriptive text. Data storytelling is used within business teams as well as for external communications. It helps to discover gaps in business performance and find points for improvement through visuals. At the same time, it can communicate business success to clients and stakeholders.

Tips for Making the Most of Data Visualization Conferences

Conferences are usually large-scale events involving hundreds if not thousands of people. Since they include lots of activities, such events are planned long beforehand. You also need to put some effort into preparation before any data visualization conference and think about how you can benefit from it.

Planning and Preparation

It's highly recommended that you prepare in advance for the selected data visualization conference.
This applies to administrative procedures and subject-oriented preparation. Here are some tips for getting ready for a conference:
- Explore the event website to get details about the location and agenda.
- See whether you need a visa to enter the country where the conference will take place.
- Book a flight and make a room reservation for your stay during the conference.
- Monitor news in the data field, focusing on data visualization topics.
- Watch the highlights from previous years of the same conference.
- If you participate as a speaker at a scientific conference, make sure you follow the paper submission deadlines.

Networking Strategies

Conferences are places where one usually meets new people who can potentially become business partners or employees. Communicating with people you hardly know isn't easy, so it's a good idea to explore networking tips and use them during an event.
- Prepare some conversation starters. Think of some phrases about the common interests that unite the conference members. These could be event-related subjects, such as the topics discussed, data visualization trends, or the conference location. Also, explore some [ideas for small talk](https://youtu.be/8vPXIsAqmjg?si=JpAjL3V4uiTUybD3).
- Use formal language. Using formal language in a professional environment is a rule of thumb.
- Get the list of networking events. The conference agenda usually includes shared breakfast or lunch times as well as table discussions, which are suitable for networking.
- Prepare a brief presentation of yourself and your projects. Write down several sentences about your professional experience and interests.
- Have an exit strategy. The conversation might go in the wrong direction from your point of view. For instance, people may start discussing topics or projects that don't interest you. In such cases, have an exit phrase or two in your pocket.
For instance, you may say that you need to go to the next workshop or training session.
- Take business cards with you. Business card exchange is still the fastest way to share your contact details.
- Visit afterparties. Such events help people interact in a less formal environment. This builds human-to-human relationships, which adds trust to professional collaboration.
- Follow up on conversations. Make sure to add people who interest you on LinkedIn or other social media. That way, you can maintain connections in the future.

Note that most networking opportunities are available only offline. So, if you expect to get to know new people, your physical presence at a conference is a must.

Post-Conference Follow-Up

During data visualization events, you learn a lot of new information. However, it makes no sense unless you put it into practice. When you are back at the office, try out the new approaches you learned during the conference. It's best to do this in test mode, so you'll know exactly which things work for you and which don't. Also, take a look at the video recordings and other materials available on the conference website. This will help you refresh the recently gained knowledge and share it with your colleagues.

Conclusion

Data visualization itself is already a breakthrough phenomenon in the IT industry. It translates complex data into comprehensive dashboards. As the industry evolves, new approaches and trends in data visualization appear. To be the first to discover them, consider attending conferences. Get ready in advance by exploring data visualization trends and planning your trip. Take advantage of the networking and experience-sharing opportunities during the conference. This will help you lead your business ahead of competitors and succeed in the market.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.

[Data Integration](https://skyvia.com/blog/category/data-integration/)
Best Open Source and Paid ETL Tools for PostgreSQL
By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | January 28, 2025
Are you struggling to keep your workflows running smoothly with PostgreSQL? The right [ETL tools](https://skyvia.com/blog/etl-tools/) can remove the headache of moving and transforming data between systems, letting businesses focus on their growth. If your company values customization and control, open-source tools are perfect for hands-on developers. However, premium platforms can streamline integration and save time if you're looking for something easy to use and packed with advanced features. This guide explores the top ETL platforms for PostgreSQL, highlighting their unique strengths and the types of businesses that can benefit most.

Table of Contents
- What is ETL
- What is PostgreSQL
- Integrating Postgres with ETL Tools: Key Benefits
- Open Source ETL Tools for PostgreSQL: Apache Airflow, Apache Kafka, Apache Nifi, HPCC Systems, Singer.io, KETL, Talend Open Studio
- Paid PostgreSQL ETL Tools: Skyvia, Talend Cloud, Informatica PowerCenter, Stitch, Hevo, Integrate.io, Pentaho Data Integration
- Finding Your Ideal Postgres ETL Tool
- Optimizing Workflows with PostgreSQL ETL
- Conclusion

What is ETL

[ETL](https://skyvia.com/learn/what-is-etl) is the backbone of how data moves and gets organized digitally. Think of it as a three-step process:
1. Extract data from different sources (like databases, APIs, or spreadsheets).
2. Transform it (cleaning, formatting, or reshaping it to fit your needs).
3. Load it into a target system, like a database or analytics tool, where it's ready to be used.

ETL helps businesses turn messy, scattered data into something valuable and actionable. Whether syncing sales numbers, cleaning customer records, or preparing reports, it ensures the correct information is in the right place at the right time.
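As an illustration, the three steps can be sketched in plain Python. This is a toy example with made-up file and column names, not any particular tool's implementation:

```python
import csv
import sqlite3

# Extract: read raw records from a CSV export (hypothetical file and columns).
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: clean and reshape each record to fit the target schema.
def transform(rows):
    return [
        {"email": r["email"].strip().lower(), "amount": round(float(r["amount"]), 2)}
        for r in rows
        if r.get("email")  # drop records without an email
    ]

# Load: write the cleaned records into a target database table.
def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (email TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales (email, amount) VALUES (:email, :amount)", rows)
    conn.commit()
```

A real pipeline adds error handling, incremental loading, and scheduling on top of this skeleton, which is exactly the work that dedicated ETL tools take off your hands.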
What Is PostgreSQL

PostgreSQL is a powerful, open-source relational database management system known for its flexibility and reliability. Whether you're managing a small app or running massive enterprise systems, it can easily handle the load. PostgreSQL:
- Supports complex queries.
- Handles tons of information.
- Lets you create custom data types.

Plus, it has advanced features like JSON support, full-text search, and robust security. Developers love its versatility; businesses trust it to keep their datasets organized and accessible.

Integrating Postgres with ETL Tools: Key Benefits

When paired with ETL tools, PostgreSQL unlocks powerful opportunities to streamline and enhance operational workflows. This combination allows businesses to automate processes, ensuring seamless integration with various platforms and applications. PostgreSQL serves as a central hub for managing diverse tasks, transforming raw inputs into structured, actionable insights.

Key benefits:
- Automated Workflows. Eliminate manual intervention by streamlining extraction, transformation, and loading processes.
- Consistency Across Systems. Maintain accurate, synchronized information across platforms and applications, avoiding human errors.
- Custom Transformations. Adapt structures and formats to specific business requirements with flexible mapping capabilities.
- Scalability. Efficiently handle growing volumes and increasingly complex operations as your business grows.
- Improved Analytics. Deliver clean, well-organized inputs to analytics tools for better insights and decision-making.
- Cost and Time Savings. Minimize manual effort and reduce operational expenses with automated, repeatable workflows.
- Enhanced Security. Ensure secure transfers and compliance-ready configurations for handling sensitive information.

Open Source ETL Tools for PostgreSQL

If you chose PostgreSQL as your platform, it's likely because of its open-source nature and cost-effectiveness.
Starting with these no-cost options is a practical way to explore powerful capabilities without a financial commitment. Such tools offer flexibility and customization, making them a great fit for developers and small businesses. Let's kick off our list with free ETL solutions.

Apache Airflow

[Apache Airflow](https://airflow.apache.org/) is a platform for creating, scheduling, and monitoring batch processes. Its modular architecture makes it highly scalable, allowing it to handle even the most complex data pipelines. For distributed execution, it relies on a message queue, a mechanism that enables asynchronous communication by holding messages (such as requests, responses, errors, or general information) until a designated consumer processes them, ensuring smooth coordination across distributed workers. With Apache Airflow, you use Python to define workflows and outline the data extraction, transformation, and loading steps. Here, the power of ETL truly comes into play, enabling efficient and reliable data pipelines.

Rating: [G2 Crowd](https://www.g2.com/products/apache-airflow/reviews) 4.3/5 (based on 87 reviews)

Pros
- A very extensible, plug-and-play platform. It can process thousands of concurrent workflows, [according to Adobe](https://airflow.apache.org/use-cases/adobe/). It can also be used for [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) and other data-processing tasks.
- 75+ provider packages to integrate with third-party providers.
- Uses a web user interface to monitor workflows.

Cons
- Requires Python coding skills to create pipelines.
- There's a steep learning curve if you are transitioning from a GUI tool.
- The community is still small but growing.
- Installation is done via a command-line interface.

Apache Kafka

[Apache Kafka](https://kafka.apache.org/) is a distributed platform designed for real-time data streaming.
It captures information from various sources, such as databases, sensors, mobile devices, and applications, and then directs it to various destinations. Kafka operates with producers that write data to the system and consumers that read and process it. It also offers an [API](https://cwiki.apache.org/confluence/display/KAFKA/Clients) compatible with multiple programming languages, enabling integration with tools like Python and Go or database libraries such as [ODBC](https://skyvia.com/connect/odbc-driver) and JDBC. This approach makes the platform a versatile option for working with PostgreSQL as both a source and a target for data. Additionally, GUI tools are available [here](https://medium.com/enfuse-io/gui-for-apache-kafka-e52698b00c42) and [here](https://dzone.com/articles/kafka-administration-and-monitoring-ui-tools) to simplify management and monitoring. For those who enjoy coding and scripting, it's an excellent choice for PostgreSQL ETL processes.

Below is an example of a console script for writing data:

```shell
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
This is my first event
This is my second event
```

And here's a sample script for reading events:

```shell
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event
```

Rating: [G2 Crowd](https://www.g2.com/products/apache-kafka/reviews) 4.5/5 (based on 121 reviews)

Pros
- 80% of all Fortune 100 companies trust and use Kafka.
- Handles [millions of messages per second](https://developer.confluent.io/podcast/handling-2-million-apache-kafka-messages-per-second-at-honeycomb/?_ga=2.28693877.895122394.1668595136-541733979.1668152127).
- Very fast and highly scalable, with latency as low as 2 ms.
- High availability and fault tolerance.
- Large [community for support](https://lists.apache.org/list.html?users@kafka.apache.org).
Cons
- Requires technical skills to create producers and consumers.
- Like Apache Airflow, there's a steep learning curve if you are transitioning from a GUI ETL tool.

Apache NiFi

If you want a free, open-source ETL with a GUI, [Apache NiFi](https://nifi.apache.org/) is the best bet. It boasts no-code data routing, transformation, and system mediation logic. The user-friendly platform allows even non-developers to create complex workflows easily. NiFi supports various data sources and destinations, offering flexibility for different integration needs. Its drag-and-drop interface streamlines the process of building and managing pipelines, saving time and reducing the risk of errors. So, if you hate coding and scripting but still need powerful ETL capabilities, this is the perfect open-source solution.

Rating: [G2 Crowd](https://www.g2.com/products/apache-nifi/reviews) 4.2/5 (based on 24 reviews)

Pros
- Supports a wide variety of sources and targets, including PostgreSQL.
- No coding of complex transformations.
- Uses a drag-and-drop designer for your pipelines. It also has templates for the most common data flows.
- Can execute large jobs with multithreading. You can also use data splitting to reduce processing time.
- Supports masking of sensitive data.
- Supports encrypted communication.
- You can use NiFi chat, Slack, and IRC for support.

Cons
- The [user community](https://lists.apache.org/list.html?users@nifi.apache.org) is small but growing.

HPCC Systems

[HPCC Systems](https://hpccsystems.com/) uses a lightweight core architecture for high-speed data engineering, making it an excellent choice for efficiently working with large volumes of information. It offers a secure and scalable platform, ensuring integrity and protection while managing complex workflows. With its unique Enterprise Control Language (ECL), users can perform powerful transformations and analytics with minimal coding effort.
The platform also includes integrated components for ingestion, processing, and querying, simplifying the entire pipeline. This blend of performance, simplicity, and security makes it a versatile solution for businesses looking to streamline their information management processes.

Rating: No review details are available.

Pros
- Secure, near real-time data engineering.
- Uses the Enterprise Control Language (ECL), designed for huge data projects.
- Provides a wide range of machine learning algorithms accessible through ECL.
- Provides documentation and training for developing pipelines.

Cons
- Must learn a new language (ECL) to design pipelines.

Singer.io

[Singer.io](http://singer.io) is an open-source framework for building simple, flexible, scalable data pipelines. It follows a straightforward "tap and target" architecture, where taps extract information from sources and targets load it into destinations. The platform uses a standard JSON-based format for data exchange, ensuring compatibility across various systems and applications. One of its standout features is the ability to handle diverse sources, including APIs, databases, and flat files, making it suitable for various ETL needs. The system also supports incremental extraction, allowing only new or updated records to be processed, which improves efficiency and reduces resource consumption. Developers appreciate its simplicity and extensibility, as they can create custom taps or targets in Python to meet specific requirements.
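To make the "tap and target" idea concrete: a tap is just a program that writes Singer messages (SCHEMA, RECORD, STATE) as JSON lines to stdout for a target to consume. A minimal sketch, with an invented `exchange_rates` stream and made-up sample rows:

```python
import json
import sys

STREAM = "exchange_rates"  # hypothetical stream name for this sketch

def tap_messages(records: list[dict]) -> list[dict]:
    """Build the Singer messages for one sync: SCHEMA, RECORDs, then STATE."""
    msgs = [{"type": "SCHEMA", "stream": STREAM,
             "schema": {"properties": {"currency": {"type": "string"},
                                       "rate": {"type": "number"}}},
             "key_properties": ["currency"]}]
    msgs += [{"type": "RECORD", "stream": STREAM, "record": r} for r in records]
    # STATE carries a bookmark so the next run can extract incrementally.
    msgs.append({"type": "STATE", "value": {"rows_synced": len(records)}})
    return msgs

# A tap simply writes these messages to stdout, one JSON object per line,
# so a target (e.g. target-csv) can consume them through a pipe.
for msg in tap_messages([{"currency": "EUR", "rate": 0.94},
                         {"currency": "AUD", "rate": 1.30}]):
    sys.stdout.write(json.dumps(msg) + "\n")
```

Because the contract is just JSON lines over a pipe, any tap can be combined with any target, which is what the pipeline in the next example relies on.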
Here\u2019s a sample Singer ETL script: \u203a pip install target-csv tap-exchangeratesapi\n\u203a tap-exchangeratesapi | target-csv\nINFO Replicating the latest exchange rate data from exchangeratesapi.io\nINFO Tap exiting normally\n\u203a cat exchange_rate.csv\nAUD,BGN,BRL,CAD,CHF,CNY,CZK,DKK,GBP,HKD,HRK,HUF,IDR,ILS,INR,JPY,KRW,MXN,MYR,NOK,NZD,PHP,PLN,RON,RUB,SEK,SGD,THB,TRY,ZAR,EUR,USD,date\n1.3023,1.8435,3.0889,1.3109,1.0038,6.869,25.47,7.0076,0.79652,7.7614,7.0011,290.88,13317.0,3.6988,66.608,112.21,1129.4,19.694,4.4405,8.3292,1.3867,50.198,4.0632,4.2577,58.105,8.9724,1.4037,34.882,3.581,12.915,0.9426,1.0,2017-02-24T00:00:00Z Rating No review details are available. Pros 100+ sources and targets, including PostgreSQL. JSON-based for language-neutral app communication. Develop your own source and your own targets using Python. Uses scripts to process data. Cons Requires technical skills to develop pipelines. KETL [KE](https://sourceforge.net/projects/ketl/) TL is a powerful open-source ETL platform for high-performance data integration and processing. It is tailored to handle complex workflows, offering a scalable and robust architecture supporting enterprise-level operations. One of KETL\u2019s key features is its ability to manage large-scale transformations efficiently, making it suitable for businesses with substantial data volumes. The platform provides built-in support for scheduling , monitoring , and logging , ensuring seamless operation and easier management of ETL processes. It also includes many prebuilt connectors for databases, flat files, and other data sources, allowing for flexible and straightforward integration. Its modular design enables users to extend its capabilities with custom plugins, making it adaptable to unique business needs. Here\u2019s a sample KETL command to show the XML definition of MyJob: Rating No review details are available. Pros Extract and load data that supports JDBC. 
- This includes flat files and relational databases like PostgreSQL.
- Manages the most complex data in minimal time.
- Integrates with security tools to keep your data safe.

Cons
- Hard to find KETL samples and documentation to study.
- You may confuse its documentation with that of similarly named ETL tools (like Kogenta ETL (KETL) and configurable ETL (kETL)).

Talend Open Studio

Note: As of January 31, 2024, Talend has discontinued support for Talend Open Studio. [Here are the top alternatives for 2025](https://skyvia.com/blog/talend-alternatives/). However, the [platform](https://www.talend.com/products/talend-open-studio/) is still powerful, free, and useful for data integration, transformation, and migration. Its easy-to-use drag-and-drop interface is perfect for anyone without a strong coding background. The platform supports various systems, seamlessly connecting to databases, APIs, and cloud services. Its flexible design allows users to tackle anything from basic data tasks to more complex, multi-step workflows. As an open-source tool, it also allows developers to customize and enhance its features, making it a highly adaptable solution for businesses with unique challenges.

Rating: [G2 Crowd](https://www.g2.com/products/talend-open-studio/reviews) 4.3/5 (based on 46 reviews)

Pros
- 900+ data integration components, including connectors, data transformations, and many more.
- Graphical development of pipelines.
- Extend features with the Java programming language.
- Great for small to large datasets.

Cons
- No scheduler for ETL pipelines.
- The app is CPU- and memory-intensive, based on some user reviews.

Paid PostgreSQL ETL Tools

Now that we've covered the free solutions, let's move on to the paid platforms, which provide a lot of value in ease of use, great support, and more.

Skyvia

[Skyvia](https://skyvia.com/data-integration/) is a user-friendly, cloud-based ETL tool that makes working with PostgreSQL a breeze.
Whether syncing data, importing files, or building complex integrations, its no-code interface ensures you can get the job done without being a tech wizard. The platform supports [200+](https://skyvia.com/connectors#marketing) sources, connecting PostgreSQL to everything from CRMs to cloud storage services. What really sets it apart is the ability to schedule recurring imports, synchronize information between [platforms](https://skyvia.com/blog/best-data-pipeline-tools/), and even set up workflows to keep everything running smoothly. The solution also offers powerful transformation abilities to clean and reshape data before it lands in the DBMS.

Rating: [G2 Crowd](https://www.g2.com/products/skyvia/reviews#survey-response-9097470) 4.8/5 (based on 242 reviews)

Pros
- Free plans and trials.
- Simple, drag-and-drop designer.
- Simple and complex data transformations.
- Built-in pipeline scheduler.
- Detailed logs and failure alerts that you can easily comprehend.
- Almost zero-code querying of data sources.
- 100% cloud; no need to install anything.

Cons
- Limits on free usage.

Pricing
- Flexible [freemium model](https://skyvia.com/pricing/).
- Free plan for 10k records/month.
- Starts at $79/month for 1MB.

Talend Cloud

[Talend Cloud](https://talend.com/blog/talend-cloud-management-console-accelerate-your-move-to-the-cloud/) brings the features you love in Open Studio to the cloud. It is a powerful, cloud-based ETL tool that takes the complexity out of managing PostgreSQL integrations. It handles everything from simple data migrations to advanced workflows, all with a user-friendly interface that combines drag-and-drop simplicity with robust functionality. The solution supports 1,000 sources, making it easier to connect PostgreSQL with CRMs, ERPs, and cloud platforms. One of its standout features is its real-time data integration abilities, ensuring your systems are always up-to-date.
With built-in data transformation tools, you can clean, format, and prepare the information without additional software. Plus, its automation features let users schedule tasks, freeing up time and reducing manual effort.

Rating: [G2 Crowd](https://www.g2.com/products/talend-cloud-data-management/reviews?source=search) 4.2/5 (based on 206 reviews)

Pros
- Easy and fast browser-based graphical pipeline designer.
- Extend or customize your pipeline using Python.
- The Talend Trust Score checks data reliability at a glance, over time, or at any point in time.

Cons
- Pricey, according to Gartner reviews.

Pricing
- A wide range of plans. Contact Talend Sales for [pricing](https://talend.com/pricing/).

Informatica PowerCenter

[Informatica PowerCenter](https://www.informatica.com/download.html) is a robust solution designed to handle even the most complex integration workflows. It's an excellent choice for businesses utilizing PostgreSQL and other databases, offering advanced features to extract, transform, and load information across diverse systems. Its scalability ensures it can accommodate large datasets and complex operations, making it a preferred option for enterprise-level organizations. Its vast library of prebuilt connectors truly sets it apart, enabling seamless integration with various platforms, including databases and cloud-based applications. Additionally, it provides powerful transformation options to ensure your information is accurate, consistent, and ready for analysis. With automation capabilities, users can easily schedule repetitive processes, reducing manual work and enhancing overall efficiency.

Rating: [G2 Crowd](https://www.g2.com/products/informatica-powercenter/reviews?source=search) 4.4/5 (based on 85 reviews)

Pros
- Build formulas for data transformation instead of coding.
- Drag-and-drop designer and configuration with keyboard shortcuts.
- Uses parallel data processing to handle huge amounts of data.
- Granular access privileges and flexible permission management for security.
- 24/7 support is available.

Cons
- Runtime logs are a bit challenging to read.
- Transitioning ETL experts need to familiarize themselves with the product's terminology.
- Some user reviews on G2 report the app being unresponsive.

Pricing
- Prepaid subscription based on Informatica Processing Units (IPUs). [Contact](https://www.informatica.com/products/cloud-integration/pricing.html) sales for more details about IPUs.

Stitch

[Stitch](https://stitchdata.com/) is a lightweight and easy-to-use ETL tool for modern data integration. It's perfect for businesses working with PostgreSQL and other databases, offering a straightforward way to extract, transform, and load information across platforms. It handles small datasets or large-scale data pipelines with ease. The solution stands out because of its vast selection of prebuilt integrations, connecting PostgreSQL to hundreds of popular apps, cloud platforms, and DWHs. Despite its simplicity, it provides essential transformation options to ensure the data is ready for analysis.

Rating: [G2 Crowd](https://www.g2.com/products/talend-stitch/reviews?source=search) 4.4/5 (based on 68 reviews)

Pros
- Simple interface to quickly create pipelines.
- Enterprise-grade security and data compliance.
- Extensible platform based on the Singer open-source framework.

Cons
- No free version.

Pricing
- Standard [pricing](https://www.stitchdata.com/pricing/) starts at $100/month with 1 destination and 10 sources.

Hevo

[Hevo](https://hevodata.com/) is a no-code ETL tool that simplifies integration and makes working with PostgreSQL and other platforms effortless. It provides a seamless way to extract, transform, and load data in real time, making it an excellent choice for businesses that value speed and efficiency. With its scalable architecture, Hevo can handle everything from small datasets to enterprise-level workflows.
What sets Hevo apart is its fully automated pipeline setup, enabling users to connect PostgreSQL with various sources without coding. Its transformation abilities ensure your data is clean, consistent, and analysis-ready. It also offers robust scheduling and monitoring features, so data syncs happen on time without manual intervention.

Rating: [G2 Crowd](https://www.g2.com/search?utf8=%E2%9C%93&query=Hevo&product_id=56648&product_id=158257&product_id=20265&product_id=18962&product_id=13574) 4.4/5 (based on 254 reviews)

Pros
- 150+ connectors, of which 50 are free.
- Flexible transformation using Python.
- Single-row testing before deployment.
- Easy-to-use forms with a schema mapper and keyboard shortcuts.
- Processes huge amounts of data with horizontal scaling.
- 24/7 support, plus resource guides and video tutorials.

Cons
- PostgreSQL is not included in the free connectors.
- Registration is not allowed with personal email addresses (Outlook, Gmail) or .edu addresses.
- Requires knowledge of Python to do transformations.
- No drag-and-drop designer for pipelines.

Pricing
- The [pricing](https://hevodata.com/pricing/pipeline/) is flexible, with a 14-day free trial.

Integrate.io

[Integrate.io](https://www.integrate.io/) is a cloud-based ETL tool that is an especially good fit for businesses using PostgreSQL and other platforms. It offers a no-code interface, allowing both technical and non-technical teams to extract, transform, and load data effortlessly. Its scalability ensures it can handle small tasks and complex enterprise-level workflows. Its vast library of prebuilt connectors enables seamless integration between PostgreSQL and various cloud applications, databases, and data warehouses. Its robust transformation tools help ensure that your data is accurate, consistent, and ready for analysis. With automation features like scheduled workflows and real-time monitoring, the solution minimizes manual effort while maximizing efficiency.
Rating: [G2 Crowd](https://www.g2.com/search?utf8=%E2%9C%93&query=Integrate.io+) 4.3/5 (based on 201 reviews)

Pros
- Powerful, code-free data transformation.
- Connects to major databases, including PostgreSQL, and other sources.
- ETL, reverse ETL, ELT, CDC, and REST API.
- The UI is easy and suits beginners and experts alike.

Cons
- No on-premise solution.
- No real-time data synchronization capabilities.
- Does not support pure data replication use cases.
- Business emails only to get started.
- Limited connectors.

Pricing
- The [pricing](https://www.integrate.io/blog/how-integrateio-pricing-works/) is based on the number of connectors. A 14-day trial is available.

Pentaho Data Integration

[Pentaho Data Integration](https://pentaho.com/products/pentaho-data-integration/) (PDI) helps businesses seeking to simplify the integration process, especially with platforms like PostgreSQL. It offers a rich set of ETL features for handling tasks of any complexity, from small projects to large-scale enterprise workflows. Its scalable and flexible architecture allows businesses to manage growing data needs efficiently. One of PDI's standout features is its intuitive drag-and-drop interface, which lets users build pipelines without extensive coding knowledge. It also provides advanced transformation abilities to easily clean, format, and prepare data for analysis.

Rating: [G2 Crowd](https://www.g2.com/search/products?max=10&query=Pentaho+Data+Integration+) 4.3/5 (based on 15 reviews)

Pros
- Codeless pipeline development.
- Supports streaming data.
- Wide range of connectors.
- Enterprise-scale load balancing and scheduling.
- Use R, Python, Scala, and Weka for machine learning models.
- Uses Pentaho security or advanced security providers.
- 24/7 support.

Cons
- Technical documentation needs improvement.
- Pricey, according to some reviews on Gartner Peer Insights.
- No built-in masking for sensitive data, though a scripting transformation is possible.
Pricing
- Try the [Pentaho Community Edition](https://www.hitachivantara.com/en-us/products/lumada-dataops/data-integration-analytics/pentaho-community-edition.html) for free.
- 30-day free trial of Pentaho Enterprise Edition.
- $100/user/month to process 5 million rows. You can also adjust your plan as you grow.

Finding Your Ideal Postgres ETL Tool

Choosing the perfect ETL solution for PostgreSQL boils down to what you need most, whether that's handling massive data loads, staying within a budget, or working with the team's technical skills. Look for a tool that:
- Scales with your growth;
- Integrates seamlessly with your data sources;
- Doesn't make your head spin during setup.

If you're part of a small team, open-source platforms can do the job without breaking the bank. On the other hand, bigger enterprises might lean toward paid solutions packed with advanced features for high-performance workflows. The key is finding a tool that aligns with your goals so you can streamline data integration and unleash the full power of PostgreSQL.

Optimizing Workflows with PostgreSQL ETL

Imagine you've found the tool of your dreams. The next step is optimization, so let's begin.
- Define Clear ETL Processes. Start by outlining data extraction, transformation, and loading requirements. Define specific tasks like data cleaning, validation, and enrichment to maintain data consistency and accuracy.
- Leverage PostgreSQL Extensions. Use extensions like postgres_fdw for handling external data and pg_partman for efficient table partitioning.
- Automate ETL Workflows. Select automation platforms like Skyvia or Apache Airflow to streamline repetitive tasks, reducing manual intervention and errors.
- Optimize Query Performance. Implement indexing, parallel processing, and query optimization techniques to enhance speed and scalability.
- Monitor ETL Pipelines. Use tools to monitor pipeline performance, identify bottlenecks, and ensure real-time data accuracy.
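One common optimization for automated workflows is incremental loading: instead of reloading everything on each run, keep a watermark (for example, the latest `updated_at` seen) and upsert only rows changed since then. A minimal sketch in plain Python, where `sqlite3` stands in for PostgreSQL (Postgres accepts the same `INSERT ... ON CONFLICT` upsert syntax) and the `users` table and sample rows are invented for illustration:

```python
import sqlite3

# Source rows with a modification timestamp; only rows newer than the
# saved watermark are extracted on each run.
SOURCE = [
    {"id": 1, "name": "alice", "updated_at": "2025-01-10"},
    {"id": 2, "name": "bob",   "updated_at": "2025-01-20"},
    {"id": 2, "name": "bobby", "updated_at": "2025-01-25"},  # later revision
]

def run_incremental(conn: sqlite3.Connection, watermark: str) -> str:
    """Upsert rows changed since `watermark`; return the new watermark."""
    conn.execute("CREATE TABLE IF NOT EXISTS users "
                 "(id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")
    changed = [r for r in SOURCE if r["updated_at"] > watermark]
    for r in changed:
        # PostgreSQL supports this same ON CONFLICT upsert.
        conn.execute(
            "INSERT INTO users (id, name, updated_at) "
            "VALUES (:id, :name, :updated_at) "
            "ON CONFLICT(id) DO UPDATE SET "
            "name = excluded.name, updated_at = excluded.updated_at",
            r)
    conn.commit()
    return max((r["updated_at"] for r in changed), default=watermark)

conn = sqlite3.connect(":memory:")   # stand-in for a PostgreSQL connection
wm = run_incremental(conn, "")       # first run: everything is new
wm = run_incremental(conn, wm)       # second run: nothing left to process
print(conn.execute("SELECT name FROM users ORDER BY id").fetchall())
```

Most of the tools above implement exactly this pattern for you (Singer calls the watermark "state"); the sketch just shows why the second run is nearly free.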
Conclusion

In choosing from this list of PostgreSQL ETL tools, you must balance usability, performance, security, and price. Whether you're new to data integration or an established ETL expert on another product, this list will help you choose what's right for your use case. If you want a free tool with a really good, easy-to-use interface, try out Skyvia's [free plan](https://app.skyvia.com/register) before moving to the paid versions, and integrate your on-premise or cloud PostgreSQL database easily with other online services.

FAQ for ETL Tools for PostgreSQL

Is PostgreSQL an ETL tool?
PostgreSQL is not an ETL tool but a powerful database management system. However, it can be used in ETL workflows for data transformation and loading when paired with ETL tools like Skyvia, Apache Airflow, or Talend.

Why do we need to integrate PostgreSQL with ETL tools?
Integrating PostgreSQL with ETL tools simplifies data extraction, transformation, and loading from multiple sources. ETL tools enhance automation, reduce manual errors, and improve the efficiency of managing complex data workflows.

Can PostgreSQL be used for data warehousing needs?
Yes, PostgreSQL is often used for data warehousing due to its robust querying capabilities, scalability, and support for large datasets. Features like table partitioning and indexing make it suitable for analytical workloads.

What are some key extensions for PostgreSQL ETL workflows?
Popular extensions include postgres_fdw for integrating external data sources, pg_partman for managing partitions, and pg_stat_statements for monitoring query performance during ETL operations.

What are the best ETL tools for PostgreSQL?
Top ETL tools for PostgreSQL include Skyvia, Apache Airflow, Talend, Hevo, and Integrate.io. Each offers unique features like automation, cloud integration, and user-friendly interfaces.
[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)
Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.

[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Top FiveTran Alternatives & Competitors in 2025 [Free & Paid]

By [Babu Tharikh](https://skyvia.com/blog/author/tharikh/) | March 6, 2023
Data is a goldmine, and it's best to have it on demand on whichever system you need. You might have heard of the Fivetran tool, which helps load data into a data warehouse using its pre-built connectors. It provides seamless integration with programs, databases, files, and events. The ready-to-use connectors save time and effort by creating an automated data flow on your schedule. The demand for better business intelligence and analytics is driving the growth of these tools. The market size for business intelligence and analytics software applications is forecast to increase worldwide over the next few years, from [15.2 billion U.S. dollars in 2020 to more than 18 billion in 2025](https://www.statista.com/statistics/590054/worldwide-business-analytics-software-vendor-market/).

But did you know there are plenty of alternative Fivetran competitors that can suit your business purpose, or even perform better and deliver more value at an economical cost? In this article, you can learn about the top Fivetran competitors and narrow down choices that could best meet your requirements. Before we dive into the comparative analysis details, here's a chart that gives you a snapshot view of Fivetran alternatives and an overall idea of offerings from companies like Fivetran. Here's the comparison list of the ten ELT and [ETL tools](https://skyvia.com/blog/etl-tools/).

| Tool | Categories | Stage | Target segment | Deployment | Business model | Pricing | Location |
|---|---|---|---|---|---|---|---|
| Skyvia | ELT tool | Late stage | Enterprise, Mid-size | SaaS | Commercial | Freemium | Prague, Czech Republic |
| Hevo Data | ELT tool | Late stage | Enterprise, Mid-size | SaaS | Commercial | Freemium | California, US |
| Stitch | ELT tool | Mid-stage | Enterprise, Mid-size | SaaS | Open core | Contact sales | Philadelphia, US |
| Matillion | ELT tool | Late stage | Enterprise, Mid-size | SaaS | Commercial | Freemium | Manchester, UK |
| Airbyte | ELT tool | Early-stage | Enterprise, Mid-size | SaaS | Open Source | Contact sales | San Francisco, US |
| Integrate.io | Data Warehousing, ETL and reverse ETL, ELT | Mid-stage | Enterprise, Mid-size | SaaS | Commercial | Free Trial, Contact sales | California, US |
| Talend | ELT Tools, Data Privacy and Governance | Late stage | Enterprise, Mid-size | SaaS | Commercial | Free Trial | California, US |
| Informatica | Data Cataloging, ETL Tools | Late stage | Enterprise, Mid-size | SaaS | Commercial | Free Trial | California, US |
| AWS Glue | ETL tool | Mid-stage | Enterprise, Mid-size | SaaS | Commercial | Contact sales | US |
| Rivery | Data Modelling and Transformation, ELT Tools, Workflow Orchestration | Mid-stage | Enterprise, Mid-size | SaaS | Commercial | Free Trial, Contact sales | New York, US |

Now that you have some idea of the list of tools, let's look at the Fivetran alternatives with their pros and cons and get an idea of their pricing.

Table of Contents
- Skyvia
- Hevo Data
- Stitch
- Matillion
- Airbyte
- Integrate.io
- Talend
- Informatica
- AWS Glue
- Rivery

Skyvia

[Skyvia](https://skyvia.com/) is a robust cloud-based data platform offering businesses and organizations a simple yet comprehensive solution for data integration, backup, querying, and data connectivity. It simplifies and accelerates retrieving information, maintaining cloud-based data, and accessing diverse data sources. Skyvia can also be used without coding, as a simple wizard guides users through the procedure. This functionality makes it simple to work with cloud-based data. [Know more about Skyvia](https://skyvia.com/data-integration/).

Pros
- Scheduled bidirectional data integration to/from multiple sources.
- Automatic mapping saves a lot of time.
- All connectors are included in the price.

Cons
- The synchronization process could be a little bit faster.
Email notification isn\u2019t available when integration is ready. Pricing Skyvia has a FREE plan for Data integration, Backup, Query, and connect, and it also has plans ranging from 19$ per month to 1999$ per month based on records and scheduling. You can get offers of 20% off on all monthly subscriptions. Hevo Data [Hevo](https://hevodata.com/) is a codeless, fully-automated data pipeline solution that expedites and simplifies data transfer from all sources to your warehouse. Hevo enables the reliable and real-time loading of data from any source, including relational databases, SaaS apps, S3 buckets, files, and NoSQL, to any of your warehouses, such as Snowflake, Google BigQuery, and Amazon Redshift. Moreover, Hevo supports 100+ pre-built connectors spanning SDKs, cloud storage, streaming services, databases, and SaaS applications. Pros With only a few clicks, configuration and setup are made simple. Possibility to set up ETL without coding skills. Excellent customer service team with friendly individuals. Integration of data without hiccups. Excellent user interface. Cons Not being able to schedule a pipeline job for a specific time of the day. Confusing transformations. High CPU usage. Tricky in the beginning. Pricing One of the three pricing plans offered by Hevo Data is free. This plan permits the use of unlimited data sources and models. You receive email support, a single sign-on, and a total number of user seats. With the free plan, consumers can experience up to one million events. The business plan\u2019s price is negotiable and established in conjunction with the client by the Hevo team. Stitch [Stitch Data](https://www.stitchdata.com/) is an open-source tool for integrating data from multiple systems (a warehouse) into a single platform. It is an ETL (extract, transform, and load) tool that transfers data from various systems to one and simplifies the reporting process. 
Like other solutions on the market, it extracts data from many channels and campaigns and compiles it into a centralized dashboard.

**Pros**

- Accurate sales and product reports.
- Easy-to-generate reports.
- Effective for saving time.
- Fast integration.
- Great interface and friendly GUI.

**Cons**

- It's challenging to traverse the inventory list.
- Learning can be time-consuming.
- The absence of a designated dashboard sometimes causes customer support delays.

**Pricing**

- Standard : the primary plan. You have access to a single data warehouse and can only receive support via chat. The price ranges from $100 to $1,250 per month based on the number of millions of rows required.
- Stitch Unlimited : a bespoke fee exceeding $1,250 per month, with five data warehouses and platinum customer support.
- Stitch Unlimited Plus : unlimited integrations, more rights, and impeccable customer service.

## Matillion

[Matillion](https://www.matillion.com/) Data Loader simplifies data migration into a cloud data warehouse of your choosing, such as Amazon Redshift, Google BigQuery, or Snowflake. The free SaaS-based data integration tool offers immediate access to your data, enabling you to innovate more rapidly and make more intelligent, timely business decisions. It supports your migration efforts by integrating with numerous popular data sources at no extra charge.

**Pros**

- Matillion's robust GUI-based data transformation capabilities give businesses a straightforward method for organizing queries and their linkages for data modeling.
- The product can also be deployed on-premises, a distinctive feature of the platform.
- Since data loading and transformation are both available on the Matillion platform, it may be easier to apply data governance best practices than with an independent replication-and-transformation strategy.

**Cons**

- Matillion has a smaller selection of connectors than other leading ETL systems.
- The user experience centers on its GUI-based transformation features.
- Because Matillion is capable of more than data replication, it may have fewer partnerships for other capability sets than competing ETL solutions.

**Pricing**

Matillion offers a free plan. Paid usage is priced in credits, beginning at $2.00 per credit. Matillion credits pay for Virtual Core hours when executing Matillion instances; one credit corresponds to one Virtual Core hour. Annual rates are also available.

## Airbyte

[Airbyte](https://airbyte.com/) is an open-source data integration solution with pre-built API and UI connectors. More than 800 businesses use the platform to synchronize their data in real time. Companies can also self-host Airbyte, eliminating third-party involvement. Using features like the optional normalized schema, engineers can choose raw data to begin their normalization processes.

**Pros**

- Airbyte provides an open-source data integration platform: in contrast to many ELT vendors, clients can use Airbyte Cloud (a SaaS service) or deploy the open-source version of Airbyte themselves.
- It provides a CDK that expedites connector development, as an alternative to writing code from scratch.
- Over twenty locations are supported.
- Useful when you need data loaded into a custom-built platform.

**Cons**

- For long-tail integrations built with the Airbyte CDK, your team is responsible for maintenance, support, and fixing anything that breaks.
- Many of Airbyte's connectors are either in development or not yet production-ready.
- Airbyte's price structure is not transparent: with a credit-based, volume-based consumption approach, it can be challenging to estimate usage and control expenses as data volumes increase.

**Pricing**

Airbyte offers a volume-based model with three pricing plans, one of which is free. The enterprise version provides custom pricing and advanced features such as priority support, onboarding assistance, etc.
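Because credit-based, volume-based pricing can be hard to estimate, a back-of-the-envelope calculator helps when budgeting. The sketch below is a generic estimator only: the credits-per-million-rows and price-per-credit figures are hypothetical placeholders, not any vendor's actual rates.

```python
def estimate_monthly_cost(rows_per_month: int,
                          credits_per_million_rows: float,
                          price_per_credit: float) -> float:
    """Rough cost estimate for credit-based, volume-based pricing."""
    credits = (rows_per_month / 1_000_000) * credits_per_million_rows
    return credits * price_per_credit

# Placeholder rates for illustration only; check the vendor's pricing page.
cost = estimate_monthly_cost(rows_per_month=20_000_000,
                             credits_per_million_rows=6,
                             price_per_credit=2.50)
print(f"${cost:,.2f}")  # 20M rows * 6 credits/M * $2.50/credit = $300.00
```

Plugging your own projected row volumes into a model like this makes it easier to see how costs grow as data volumes increase.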
## Integrate.io

[Integrate.io](https://www.integrate.io/) is an e-commerce-focused data warehouse integration platform. It provides affordable access to ETL and reverse ETL, ELT and CDC, API management, data warehouse monitoring, and data observability.

**Pros**

- Simple to use.
- Highly customizable.
- Drag-and-drop UI for easy use.
- Easy third-party platform integration.
- Excellent customer service team.

**Cons**

- It may lack ease of use for some users.
- Debugging can be challenging when error logs lack specificity.
- Interaction between fields or packages is slow.

**Pricing**

Integrate.io doesn't provide a free plan, but it offers a free trial. Its subscription model for the no-code data pipeline platform is based on the number of node hours, ranging from $1,199 to $5,999 per month.

## Talend

[Talend](https://www.talend.com/) is a data management application built on Java code with a workflow user interface. Talend delivers a package of enterprise [data management tools](https://skyvia.com/blog/best-data-management-tools/) . While Talend supports more than a thousand non-native connectors, its fully coded, open-source architecture has a steep learning curve for customers unfamiliar with its interface. Talend's per-seat pricing model matters for growing data teams seeking to scale.

**Pros**

- Robust data orchestration capabilities.
- AI/ML capabilities that allow data scientists to model data.
- Data governance.
- Big data streaming.
- The capacity to treat sources as destinations.

**Cons**

- Reliance on desktop design.
- Infrastructure costs.
- Complex workflow design and configuration.
- Unintuitive connectors.

**Pricing**

Talend Cloud costs $12K per user per year, or $1,170 per user per month, for unlimited usage. Talend Data Fabric has custom pricing that is often more expensive than Talend Cloud.
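Per-seat pricing (like Talend's) and flat node-hour subscriptions (like Integrate.io's) scale along different axes: one grows with headcount, the other with a fixed monthly fee. A quick annualization sketch, using the list figures above as inputs (actual quotes vary and usually involve negotiation):

```python
def annual_per_seat(seats: int, per_seat_per_year: float) -> float:
    """Annual cost when pricing scales with the number of users."""
    return seats * per_seat_per_year

def annual_flat_monthly(monthly_fee: float) -> float:
    """Annual cost for a flat monthly subscription."""
    return monthly_fee * 12

# Figures taken from the pricing sections above.
talend_5_seats = annual_per_seat(seats=5, per_seat_per_year=12_000)
integrate_entry = annual_flat_monthly(1_199)
integrate_top = annual_flat_monthly(5_999)
print(talend_5_seats, integrate_entry, integrate_top)  # 60000 14388 71988
```

Running the numbers this way shows why team size matters: per-seat plans can overtake a flat subscription quickly as the team grows.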
## Informatica

[Informatica](https://www.informatica.com/) Big Data is database management software that helps enterprises capture and process large amounts of data in real time. Informatica Big Data is a cloud-ready platform, enabling users to manage data assets from any location, at any time, on any device, without the limits of on-premise Hadoop distributions.

**Pros**

- Effective GUI interfaces for session monitoring, job scheduling, [ETL design](https://skyvia.com/blog/etl-architecture-best-practices/) , debugging, administration, etc.
- Queued messaging.
- Third-party application data.
- Mainframe and file-based data.
- XML and unstructured data.
- Access to a wide variety of company data sources.
- Load balancing and parallel processing.

**Cons**

- In the Repository Manager, moving an object from one folder to another is impossible.
- You cannot import XML export files.

**Pricing**

Informatica's products can include several optional components. Integration Cloud's base edition begins at $2,000 per month; the cost of add-on tiers is not published. Informatica offers 30-day free trials on the majority of its products.

## AWS Glue

[AWS Glue](https://aws.amazon.com/glue/) is a managed extract, transform, and load (ETL) service that simplifies data preparation and loading for analytics. It allows users to create and execute ETL jobs in the AWS Management Console. When users point it at AWS-hosted data, AWS Glue identifies the data and stores the related metadata (e.g., table definitions and schemas) in the AWS Glue Data Catalog. Once cataloged, data is immediately searchable and queryable.

**Pros**

- It is swift, easy, and intuitive.
- As a managed service, it leaves only a few underlying details for you to take care of.
- It is a pay-as-you-go service, so there is no need to provision capacity in advance.

**Cons**

- AWS Glue runs on Apache Spark, so developers need a good grasp of Spark to make the best of the service.
- AWS Glue crawlers sometimes mismatch the data in the files.
- Sample code is very basic and only available for some scenarios.

**Pricing**

AWS Glue follows a pay-as-you-go pricing model: $0.44 per DPU-hour for each Apache Spark or Spark Streaming job, billed per second with a 1-minute minimum (Glue version 2.0 and later) or a 10-minute minimum (Glue version 0.9/1.0).

## Rivery

[Rivery](https://rivery.io/) is a cloud-based service that provides business intelligence tools to manage and automate data pipelines for small to large companies. A single dashboard lets users get insight into corporate operations, evaluate organizational performance, and generate activity reports using key performance indicators (KPIs).

**Pros**

- The learning curve with Rivery is remarkably flat.
- Rivery's support is professional, responsive, and efficient.
- It manages incremental extraction from systems like Salesforce or NetSuite.

**Cons**

- When building complex scripts, rearranging the various building blocks via drag and drop can be frustrating.
- Lack of different "roles" for users.

**Pricing**

Rivery offers both on-demand pay-as-you-go pricing and annual contracts; all plans include both options. The credit-based pricing starts at $0.75 per credit. For database sources, cost is based on the total amount of data transferred.

If you want a detailed view of Fivetran and other [data integration tools](https://skyvia.com/blog/data-integration-tools/) , we have you covered! Compare features, benefits, data sources and destinations, and pricing plans to [choose the best ETL tool for your business case](https://skyvia.com/etl-tools-comparison/#Skyvia) .

## Conclusion

Knowing about the various alternatives to Fivetran and their benefits is critical to making an informed buying decision.
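When weighing alternatives against your own requirements, a simple weighted scoring matrix can make the comparison concrete. The criteria, weights, tool names, and scores below are purely illustrative placeholders; substitute your own priorities.

```python
# Illustrative weighted scoring matrix for shortlisting ETL/ELT tools.
# Criterion weights sum to 1.0; per-tool scores are on a 1-5 scale.
weights = {"connectors": 0.4, "pricing": 0.35, "support": 0.25}

scores = {
    "Tool A": {"connectors": 5, "pricing": 4, "support": 4},
    "Tool B": {"connectors": 4, "pricing": 3, "support": 5},
}

def weighted_score(tool_scores: dict) -> float:
    """Combine per-criterion scores using the global weights."""
    return sum(weights[c] * s for c, s in tool_scores.items())

ranked = sorted(scores, key=lambda t: weighted_score(scores[t]), reverse=True)
for tool in ranked:
    print(tool, round(weighted_score(scores[tool]), 2))  # Tool A ranks first here
```

Adjusting the weights to reflect what matters most to your team (connector coverage, budget, support needs) is usually more useful than comparing raw feature counts.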
We recommend shortlisting a few and then making an in-depth comparison against your requirements to help you make an informed choice. Skyvia is a cloud-based data integration, backup, and management platform for businesses of all sizes, so we'd encourage you to try Skyvia.

[Babu Tharikh](https://skyvia.com/blog/author/tharikh/) , Salesforce Technical Writer

[Data Integration](https://skyvia.com/blog/category/data-integration/) 10 Best Informatica Alternatives for Data Integration in 2025 By [Edwin Sanchez](https://skyvia.com/blog/author/edwins/) , August 4, 2023
Are you on the hunt for Informatica competitors and alternatives? Then you're in the right place. Informatica is a data integration platform that has been around since 1993, with over 5,000 enterprise customers, including Dolby and Twitch. Its maturity in the market may make it a strong contender for your consideration. Informatica also looks good on paper, boasting a friendly interface for both cloud and on-premise settings, and it comes equipped with an AI-powered recommendation engine called Claire AI that assists with data mapping, quality, and transformation. However, reviews of the platform are mixed. You should also be aware that you need a work email to register, which you may find restricting. While there's a free option available, it has limited support and features, and customers find the paid versions somewhat pricey. There are many alternative integration tools aside from Informatica. Whether you're turned off by the price, features you don't need, or limits on the free version, there are plenty of other tools for you. So, let's check out which of these suits your needs.

Table of Contents: Informatica Competitors and Alternatives Compared, Skyvia, Fivetran, AWS Glue, Talend, Integrate.io, Alteryx, Stitch, Pentaho, SAP HANA Cloud, Boomi, Conclusion

## Informatica Competitors and Alternatives Compared

Before proceeding, let's look at your options in the following table.
| Integration tool | Focus | Connectors | User interface | Support | Pricing |
| --- | --- | --- | --- | --- | --- |
| Skyvia | ETL, ELT, data ingestion, workflow automation | 180+ | No-code drag-and-drop | Email, chat, forum, documentation | Free, paid plans |
| Fivetran | ELT, data ingestion | 300+ | Simple web interface | Email, support portal, documentation | Free, paid plans |
| AWS Glue | ELT, ETL, reverse ETL, streaming | JDBC-compatible and Amazon ecosystem connectors | Simple web interface | AWS Support Center | Paid plans |
| Talend | Data integration and management platform | 1000+ | Graphical, drag-and-drop | Online ticketing, community, documentation | Free, paid plans |
| Integrate.io | ELT, ETL, reverse ETL | 150+ | Graphical, drag-and-drop | Email, chat, phone, Zoom, documentation | Paid plans |
| Alteryx | ETL, data prep | 80+ | Graphical, drag-and-drop | Email, phone, community, documentation | Paid plans |
| Stitch | ETL | 140+ | Simple web interface | Chat, support portal, community, documentation | Paid plans |
| Pentaho | ETL, streaming data | 40+ | Graphical | Email, phone, support portal, documentation | Free, paid plans |
| SAP HANA Cloud | DBaaS, replication, ETL | Major databases, ODBC-compatible, REST and SOAP | Graphical, drag-and-drop | Support portal, phone, chat, community, documentation | 90-day free, paid plans |
| Boomi | Data integration, ETL, workflow automation | 240+ | Graphical | Email, phone, chat, documentation | Paid plans |

Now that you have some idea about Informatica's competitors, let's explore the power of each.

## Skyvia

[Skyvia](https://skyvia.com/) is a fantastic [data integration](https://skyvia.com/solutions/) tool with no code needed. Devart launched this product in 2014 for cloud data integration and backup. Skyvia has thousands of free users and over 2,000 paying customers, including big names like Hyundai and General Electric.

**Pros**

- Known for its ease of use and easy adoption; see G2 Crowd [user reviews](https://www.g2.com/products/skyvia/reviews) .
- [180+ stable connections](https://skyvia.com/connectors/) .
- Drag-and-drop components in a pipeline design canvas, including transformations.
- No coding required, so implementation is easy for both technical and business users.
- Freemium tier with access to all connectors.
- Azure ecosystem security certifications (HIPAA, GDPR, PCI DSS, ISO 27001, and SOC 2).

**Cons**

- Limited free tier.

**Pricing**

Skyvia offers a free plan and a 14-day trial for the paid plans. There are no sales commitments; you can upgrade or downgrade at any time. Check out a detailed price comparison [here](https://skyvia.com/pricing/) for more information.

## Fivetran

[Fivetran](https://www.fivetran.com/) is another Informatica alternative that supports data ingestion and ELT. It started in 2013 and now has 2,500+ customers worldwide; Lufthansa, Morgan Stanley, and more trust Fivetran for their data integration needs.

**Pros**

- A big list of 300+ connectors.
- Pre-built and QuickStart data models.
- Low-code solutions.
- Free tier for 500,000 monthly active rows.

**Cons**

- Complex transformations need dbt and SQL coding.
- The Free and Starter plans get the fewest security features.
- Some connectors are not free.
- Mixed reviews on customer support.

**Pricing**

Fivetran has a free tier for 500,000 monthly active rows (MAR). Paid plans follow volume-based pricing with a 14-day free trial.

## AWS Glue

[AWS Glue](https://aws.amazon.com/glue/) is a data integration tool from Amazon. Aside from ETL, ELT, and replication, it also supports batch and stream processing.

**Pros**

- Seamless integration with other AWS products.
- Low-code, no-code solutions.
- Built-in security from Amazon Web Services.

**Cons**

- Complex integrations may need coding.

**Pricing**

With AWS Glue, you pay only for what you use, with no upfront costs or commitments. Its free tier also covers the first million AWS Glue Data Catalog requests per month.

## Talend

[Talend](https://talend.com/) is a data platform that offers cloud and on-premise data integration solutions. The company launched in 2005.
It serves thousands of customers, including Toyota, eBay, and more. Talend has an open-source solution called Talend Open Studio and a full-featured tool called Talend Data Fabric.

**Pros**

- The most connectors on this list: 1,000+.
- Both open-source and commercial offerings.
- Drag-and-drop graphical user interface.
- Free Talend Open Studio.

**Cons**

- Limited data integration features in Talend Open Studio.
- Pricey commercial offering.

**Pricing**

Talend offers a free desktop integration tool called Talend Open Studio. The commercial offering uses volume-based pricing with a free 14-day trial.

## Integrate.io

[Integrate.io](https://www.integrate.io/) is a SaaS data platform that launched in 2021 as a combination of products merged over the years since 2012: xPlenty, FlyData, Intermix.io, and DreamFactory. It has hundreds of customers from different sectors.

**Pros**

- 150+ connectors.
- Drag-and-drop user interface.
- Low-code and no-code solutions.
- Create your own REST APIs for your data sources.
- 220 data transformation components.

**Cons**

- You can only use 2 connectors for ETL; pay extra for more connectors.
- No free plan, but a 14-day free trial.
- Pricey.

**Pricing**

Integrate.io has separate pricing for [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) . There are 3 pricing plans for ETL based on features and a fixed number of 2 connectors; pay an extra annual fee for more connectors. ELT pricing is volume-based with an unlimited number of connectors.

## Alteryx

[Alteryx](https://www.alteryx.com/) Designer claims to be an all-in-one tool, covering everything from data preparation to analytics and reporting. Coding is optional but needed in complex scenarios. It has two editions: Designer Desktop, which works on Windows only, and Designer Cloud, which is cross-platform.

**Pros**

- Drag-and-drop user interface.
- 80+ connectors.
- Low-code and no-code solutions.
- Machine learning and predictive analytics.
**Cons**

- Separate platform fee for new customers of Alteryx Designer Cloud.
- May need Python coding in certain complex scenarios.
- Pricey, according to some reviews.

**Pricing**

Alteryx offers a free trial of an unspecified period for both Designer Desktop and Cloud. You need to contact their sales team for a quote.

## Stitch

[Stitch](https://stitchdata.com/) is a cloud-first ETL tool built on the open-source Singer framework. It started in 2016, and today 3,000+ customers trust it. Talend acquired Stitch in 2018, and in May 2023, Qlik acquired Talend.

**Pros**

- 140+ connectors.
- Simple web interface.
- Extend the functionality with Singer ETL.
- Integrates with Talend transformations.

**Cons**

- No free plan.
- The lowest tier allows 1 destination only.
- Some reviewers feel customer support needs improvement.
- Pricey for small companies.

**Pricing**

Stitch has 3 pricing tiers with a 14-day free trial. Each tier differs in the number of sources and destinations; support level, security, and more improve on higher plans.

## Pentaho

[Pentaho Data Integration](https://www.hitachivantara.com/en-us/products/lumada-dataops/data-integration-analytics.html) is an ETL tool with both open-source and commercial offerings. Launched in 2004 and now part of Hitachi Vantara, it serves thousands of customers worldwide.

**Pros**

- Simple graphical user interface.
- Low-code, no-code solutions.
- Supports Hadoop and Spark big data scenarios.
- The Community Edition is free and open source.

**Cons**

- Only 68% recommend this tool in Gartner Peer Insights.
- Only 40+ connectors, though that may be enough for some scenarios.
- Limited features in the Community Edition.

**Pricing**

Try the Community Edition for free, with limited features, or the Enterprise Edition with a 30-day free trial. Contact their sales team for a quote.

## SAP HANA Cloud

[SAP HANA Cloud](https://www.sap.com/products/technology-platform/hana.html) is a Database as a Service (DBaaS) with data integration features. It uses SAP Data Services to integrate various data sources into the SAP HANA database.
**Pros**

- In-memory technology for faster data processing.
- Real-time analytics.
- Machine learning features.
- Seamless integration with other SAP products.

**Cons**

- Complex setup and deployment.
- High price.
- Steep learning curve.

**Pricing**

SAP HANA Cloud has a free tier, limited to community support and up to 90 days. Pricing for paid tiers is consumption-based, and trial accounts are offered for personal exploration. Visit the SAP Discovery Center for more information.

## Boomi

[Boomi](https://boomi.com/) AtomSphere is an iPaaS, or [Integration Platform as a Service](https://skyvia.com/blog/what-is-ipaas/) . It can integrate cloud and on-premise data sources without coding and supports real-time data integration.

**Pros**

- 240+ connectors.
- Low-code solutions.
- Pre-built transformations.
- Mostly positive feedback on the user interface.

**Cons**

- Phone and chat support are not available on the Standard paid plan.
- Scheduling of pipelines is not possible during the 30-day free trial.

**Pricing**

Pricing depends on the number of connectors, workflows, and other factors. You can take a 30-day free trial before switching to a paid plan; contracts are annual.

## Conclusion

We brought you 10 Informatica competitors and alternatives in this article. They all have varied features and pricing, so remember to try their free tiers or free trials. Of all the Informatica competitors, Skyvia has the highest rating on both G2 and Gartner Peer Insights; check the comparison table again, then try it out and see why reviewers rate it so highly. [Register today](https://app.skyvia.com/register) and start working immediately.
[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, the SQL Server BI stack, Power BI, and SharePoint. Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

[Data Integration](https://skyvia.com/blog/category/data-integration/) Jitterbit Alternatives and Competing Platforms By [Aveek Das](https://skyvia.com/blog/author/aveekd/) , May 16, 2023
In recent years, the adoption of cloud technologies and Big Data has skyrocketed. Almost [70% of companies](https://capitaltechsearch.com/blog/companies-are-increasingly-reliant-on-big-data/) claim that Big Data has become a crucial part of their IT landscape, and most of the tools we use today either run in the cloud or are actively supported by cloud platforms. Data processing in the cloud comprises [data integration](https://skyvia.com/data-integration/replication) , validation, transformation, modeling, and warehousing. This article covers cloud [data integration tools](https://skyvia.com/blog/data-integration-tools/) , their benefits, and the business value such tools generate. We'll also look at Jitterbit as a data integration tool, its pros and cons, followed by some of its competitors.

Table of Contents: What is Jitterbit Software?, How Jitterbit Facilitates Workflow Automation and Integrations, Top 10 Jitterbit Competitors (Skyvia, Fivetran, Stitchdata, Dell Boomi, Celigo, Mulesoft, Matillion, Talend, Hevo Data, Informatica), Conclusion

## What is Jitterbit Software?

[Jitterbit](https://www.jitterbit.com/) is a cloud-based data integration platform that allows users to connect to multiple data sources in the cloud or on-premises, perform data manipulation, and deliver enriched datasets quickly without the hassle of setting up any local infrastructure. The platform lets users connect multiple systems together with the help of its API integrations and also automates business processes that do not need manual intervention. Some of the major features of Jitterbit are as follows:

- Cloud Studio . Allows users to design and develop application integrations without writing any code.
- API Manager . Enables users to publish and manage APIs.
- App Builder . Provides a low-code interface to build and deploy web and mobile applications with no prior experience.
- Management Console . A centrally managed console to control and monitor workflow integrations.

### Benefits of Jitterbit

Let's take a look at some of the key benefits of Jitterbit. As an established player in the field of data integration, it has very promising support for high-performance parallel processing of [structured and unstructured data](https://skyvia.com/blog/structured-vs-unstructured-data/) . It also provides out-of-the-box integration with machine-learning algorithms that allow users to enrich their data without implementing any code. Jitterbit also has a component that eases data cleansing between multiple systems, and for experienced developers, it opens its platform to code for complex data processing jobs.

### Drawbacks of Jitterbit

- Migrating pipelines between environments creates duplicate artifacts that add overhead and need to be cleaned up later.
- Debugging is at times challenging: no checkpoints are available to let developers debug pipelines effectively.
- The Standard edition of Jitterbit provides access to only 2-3 connectors, which might not be sufficient for starters.

## How Jitterbit Facilitates Workflow Automation and Integrations

Jitterbit allows users to build integrations with various data sources and tools. Some of the interesting use cases are discussed below.

### Workflow Automation

Jitterbit provides a low-code solution for developing and automating manual workflows. Repetitive tasks, such as HR approvals and process management, can be incorporated into Jitterbit to empower employees with better, more efficient decision-making.

### Salesforce Integrations

Jitterbit provides a native data loader for Salesforce that allows it to connect to and extract data from Salesforce easily. Salesforce is a popular CRM tool used by many leading companies in the world.
With the help of its connector, Salesforce data can be integrated with other enterprise tools such as NetSuite, Oracle, or SAP. Integrations with popular marketing tools, such as HubSpot and Mailchimp, also prove to be a powerful add-on to Jitterbit. Although Jitterbit has established itself as a major player in the world of data integration, it is not a silver bullet. There is a plethora of popular data integration tools that can serve as Jitterbit alternatives, some of which are also direct Jitterbit competitors. Let's take a look at some of them.

## Top 10 Jitterbit Competitors

### Skyvia

[Skyvia](https://skyvia.com/solutions/) is one of the leading data integration providers today. With its universal cloud data platform, Skyvia provides a powerful data extraction web interface that is very easy to set up and use. There are [170+ source connectors](https://skyvia.com/connectors/) , including various cloud applications (CRMs, e-commerce, etc.), databases, and data warehouses, from which data can be extracted, fetched, and synchronized.

**Key features**

Skyvia is a one-stop shop for business analytics data needs:

- extraction, transformation, and loading: [ETL, ELT](https://skyvia.com/blog/elt-vs-etl/) , and [reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) ;
- synchronization in both directions;
- migration services;
- data management;
- sharing data as a REST API;
- cloud data backups, etc.

**Use cases**

- Data integration and transformation.
- Data orchestration and processing.
- Workflow automation.

**Budget**

Skyvia provides a Free plan to get started, with some limitations on the number of records that can be processed. Paid plans, such as Basic and Standard , open up many more options, and Business and Enterprise plans cover heavy usage.

**Industry**

- Banking and financial sector.
- Retail.
- Telecommunication.
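"Sharing data as a REST API" typically means exposing an OData-style endpoint that any HTTP client can query with standard options like `$filter`, `$select`, and `$top`. The sketch below only constructs such a query URL; the endpoint address and entity name are hypothetical placeholders, not a real Skyvia URL.

```python
from typing import Optional
from urllib.parse import urlencode

def build_odata_query(base_url: str, entity: str,
                      filter_expr: Optional[str] = None,
                      select: Optional[str] = None,
                      top: Optional[int] = None) -> str:
    """Build a standard OData query URL from common query options."""
    params = {}
    if filter_expr:
        params["$filter"] = filter_expr
    if select:
        params["$select"] = select
    if top is not None:
        params["$top"] = str(top)
    query = urlencode(params, safe="$,")  # keep $ and , readable in the URL
    return f"{base_url.rstrip('/')}/{entity}" + (f"?{query}" if query else "")

# Hypothetical endpoint and entity, for illustration only.
url = build_odata_query("https://connect.example.com/odata/v4", "Orders",
                        filter_expr="Amount gt 100",
                        select="Id,Amount",
                        top=10)
print(url)
```

Any HTTP client (curl, `requests`, a BI tool) can then fetch the resulting URL, which is what makes exposing data this way useful for downstream consumers.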
Fivetran

[Fivetran](https://www.fivetran.com/) is a cloud-based ELT tool that enables users to extract, load, and transform data from various data sources. As it is a SaaS application, there is no component to be installed on your servers. Fivetran provides reliable connectors for Snowflake, Redshift, and other MPP warehouses, which makes it one of the best platforms to start data integration with. Fivetran also integrates with dbt, allowing users to perform analytics on top of the data being processed.

Key features

- Lots of data connectors available.
- Near real-time data replication and synchronization.
- Automated data reporting.

Use cases

- Real-time and batch data processing.
- Social media analytics.

Budget

Fivetran offers a Free plan for individuals to start processing up to 500K monthly active rows (MAR). Based on the use case and the data volume to process, customers can choose from the Starter, Standard, or Enterprise plan. For example, with the Standard plan, processing 1M records per month would be priced at 526.32 USD.

Industry

- Retail and CPG.
- Financial services.
- Manufacturing.

To learn more about the comparison between Skyvia and Fivetran, check out this [guide](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia).

Stitch

[Stitch](https://www.stitchdata.com/) is a cloud-based ETL tool that provides access to over 100 data sources. It allows organizations to automate their data processing workflows from on-premise and cloud data sources to data warehouses and lakes. Stitch provides robust data replication and transformations using its Singer-based replication engine. That said, Stitch can also be combined with open-source Singer taps and targets to run a data pipeline locally.

Key features

- Connections with multiple data sources.
- Data encryption by default while loading data to data warehouses.
- Integration with open-source Singer taps and targets.

Use cases

- Data orchestration and extensibility.
- Data transformation and data quality.

Budget

With the Standard plan, customers can get started with a two-month free trial, after which pricing starts at 100 USD per month to process 5 million rows. Pricing for the Advanced plan starts at 1,250 USD per month and for Premium at 2,500 USD per month, with the capacity to process up to 1 billion rows per month.

Industry

- E-commerce and retail.
- Financial services.
- Software and IT.

To learn more about the comparison between Skyvia and Stitch, check out this [guide](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia).

Dell Boomi

[Boomi](https://boomi.com/) is a cloud-based data integration platform developed by Dell. It serves a wide range of needs, from data integration and application integration to API management. Dell Boomi can be used on-premise or as a SaaS application. It provides easy integrations with major cloud vendors such as AWS and Azure. The visual ETL designer makes it easy for users to develop complex data transformations without writing any code.

Key features

- Powerful data integration tool.
- Functionality to manage an enterprise data hub.
- API management solution that allows users to create and deploy custom API solutions.

Use cases

- API management.
- Master data hub.
- Data catalog and preparation.

Budget

Dell Boomi has multiple products in its portfolio that are priced differently. Boomi Integration has subscription-based pricing depending on the features and the number of connectors required. Customers can request quotes from the sales team based on their requirements.

Industry

- Banking and financial services.
- Healthcare and life sciences.
- Retail and public sector.

To learn about the comparison between Skyvia and Dell Boomi, check out this [guide](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia).

Celigo

[Celigo](https://www.celigo.com/) is an iPaaS that provides application integration between various SaaS applications.
While it offers a visual drag-and-drop editor to integrate applications, advanced users can also write scripts and use code injection to customize their workflows.

Key features

- Low-code application integration platform.
- Built-in process automation tools.
- Pre-built solutions available from the Celigo marketplace.

Use cases

- POS terminal integration.
- Integration with CRM systems such as Microsoft Dynamics 365.

Budget

Celigo provides a Free plan for users to get started with. The Standard and Premium plans are priced at 600 USD and 1,200 USD per month, respectively, while the Enterprise plan is priced at 2,500 USD per month.

Industry

- IT.
- Financial services.
- E-commerce.

To learn more about the comparison between Skyvia and Celigo, check out this [guide](https://skyvia.com/etl-tools-comparison/celigo-alternative-skyvia).

MuleSoft

Another popular tool in this segment is [MuleSoft](https://www.mulesoft.com/). Initially, it provided middleware services and later moved into Integration-as-a-Service. MuleSoft provides Anypoint Studio, which can be used as an IDE to develop integrations with various data sources and CRM tools using pre-built connectors. Transformations such as filters, joins, and normalization are available as part of the IDE, which makes it easier to build custom integrations. In addition, several products within the MuleSoft ecosystem make it one of the most robust platforms for data integration.

Key features

- Develop integrations with APIs using the Anypoint API Designer.
- Integration with other tools within the MuleSoft ecosystem.
- Run MuleSoft applications in the cloud using CloudHub 2.0.

Use cases

- Salesforce CRM integrations.
- Data orchestration and processing.

Budget

MuleSoft Anypoint offers three tiers of pricing: Gold, Platinum, and Titanium. Based on the use case and data processing requirements, customers can request a quote from MuleSoft.

Industry

- Financial services.
- Government.
- Healthcare.
To learn more about the comparison between Skyvia and MuleSoft, check out this [guide](https://skyvia.com/etl-tools-comparison/mulesoft-alternative-skyvia).

Matillion

[Matillion](https://www.matillion.com/) is an ELT (extract-load-transform) tool that runs in the cloud and is built with cloud data warehouses in mind. It has native support for Snowflake, Amazon Redshift, Azure Synapse, Google BigQuery, and Delta Lake on Databricks. With its intuitive pipeline designer, Matillion provides an easy-to-use interface to manage complex data pipelines. It allows users to run performance tests and integrate with version control systems such as Git.

Key features

- Native integration with Snowflake.
- Universal connectivity with major CRM tools and data sources.
- Robust documentation for most of its connectors.

Use cases

- Data integration and transformation.
- Enterprise data platform.
- Data lakehouse.

Budget

Matillion provides a Free plan to start with, processing up to 1 million rows per month. The other plans, Basic, Advanced, and Enterprise, are priced at 2 USD, 2.5 USD, and 2.7 USD per credit, respectively. Matillion credits are Matillion's unit of data processing.

Industry

- Retail.
- Banking and financial sector.

To learn more about the comparison between Skyvia and Matillion, check out this [guide](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia).

Talend

Another major player in the field of data integration, [Talend](https://www.talend.com/) is an enterprise-grade tool used by major banks and insurance companies. Talend provides a plug-and-play interface that allows components to be stitched in and out while designing complex data processing workflows. Talend also has an open-source product known as Talend Open Studio for Data Integration. In the background, Talend generates Java code that can also leverage other open-source libraries.
This also allows developers to inject custom Java code into Talend pipelines to customize their workflows.

Key features

- No-code platform.
- Easy integration with Java libraries.
- Support for many databases and data warehouses.

Use cases

- Data integration and data quality.
- Change Data Capture.
- Data catalog.

Budget

Talend offers multiple plans, such as Stitch, Data Management Platform, Big Data Platform, and Data Fabric. Based on the use case and team size, customers can choose from these plans and get quotes from Talend.

Industry

Talend is used by industries such as:

- Financial services.
- Healthcare.
- Retail.
- Government.

To learn about the comparison between Skyvia and Talend, check out this [guide](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia).

Hevo Data

Hevo Data is an end-to-end cloud-based data pipeline platform that allows users to extract data from most of the popular data sources, including SaaS applications and databases. Hevo Data comes with two popular products: Hevo Pipeline for data extraction and Hevo Activate for reverse ETL. Hevo Pipeline provides an easy-to-use interface to connect to source systems and start extracting data. It has connectors available for most common data sources, including Google BigQuery, AWS Redshift, Databricks, etc. Additionally, Hevo Data allows users to monitor their data pipelines so that, in case of any issues, the root causes can be easily detected and mitigated. There is a free tier that allows users to process up to 1 million records; later, you can migrate to any of the usage-based plans if required.

Key features

- No-code platform.
- In-depth documentation.
- 150+ data source connectors.
- Observability and monitoring.

Use cases

- Social media analytics.
- Competitive intelligence platforms.
Budget

Hevo Data Pipeline offers a Free plan for users to get started and a 14-day trial for the Starter plan, which, at the time of writing, is priced at 239 USD per month for processing 5 million events. For large data teams, Hevo Data also offers a negotiable Business plan.

Industry

Hevo Data serves a wide variety of industries, such as:

- EdTech.
- Food delivery platforms.
- IT and software.

To learn more about the differences between Skyvia and Hevo Data, check out this [guide](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia).

Informatica

[Informatica](https://www.informatica.com/) has been one of the leading providers of [ETL tools](https://skyvia.com/blog/etl-tools/) for the past few decades. It provides on-premise as well as cloud-based solutions for data integration. Informatica is primarily focused on enterprise customers that need to process large volumes of data with complex security and compliance requirements in mind. In addition to data integration tools, Informatica also provides solutions for data governance and master data management.

Key features

- Enterprise-grade ETL solution provider.
- Pre-paid approach: purchase capacity before executing jobs on the cloud.
- Efficient integration with APIs and other SaaS applications.

Use cases

- Data integration and master data management.
- Data integration governance across systems.

Budget

Informatica charges its customers based on IPUs. An IPU is a unit of processing that a customer purchases before processing data. Check out the [pricing page](https://www.informatica.com/products/cloud-integration/pricing.html) for more details.

Industry

Some of the major industries supported by Informatica are:

- Financial services.
- Life sciences.
- Retail.
- Telecommunication.

To learn more about the comparison between Informatica and Skyvia, check out this [guide](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia).
Conclusion

Data integration is a complex subject, and there are a lot of tools and techniques for implementing a robust ETL application. In this article, we discussed some of the top data integration providers and how they compare with Skyvia. [Skyvia Connectors](https://skyvia.com/connectors/) provide a way to extend Skyvia's tool base and connect to a plethora of applications.

[Aveek Das](https://skyvia.com/blog/author/aveekd/), Senior Data Engineer

Top 12 Marketing Analytics Tools and Software for 2025

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/), September 25, 2024
In today's fast-paced digital reality, marketing is all about data. From tracking customer behavior to measuring campaign performance, data-driven insights help marketers make smart decisions and get the most out of their efforts. [McKinsey's report](https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/marketing-analytics-it-works-so-why-aren039t-more-companies-using-it) says that companies that use marketing analytics are 15% more likely to see a positive return on investment (ROI) from their marketing efforts than those that don't. 15% may seem like a slight difference, but for medium and large enterprises it's a precious amount, and even for a startup a 15% difference in revenue means a lot. According to another [McKinsey study](https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-secret-to-great-marketing-analytics-connecting-with-decision-makers), companies in the top quartile of analytics performance are much better at attracting new customers and retaining existing ones. These statistics demonstrate how marketing analytics tools provide insights into customer behavior and campaign effectiveness and help businesses predict future trends. With the right tools, companies can track every click to understand the audience better and tailor their strategies. And this is just a small example. Let's explore how these tools can make marketing efforts smarter, more efficient, and more successful.
Table of Contents

- Why Marketing Analytics is Crucial in 2024
- The Evolution of Marketing Analytics Tools
- The Best Digital Marketing Analytics Tools to Use: Google Analytics 4, HubSpot Analytics, SEMrush, Adobe Analytics, Mixpanel, Maropost, Tableau, Moz Pro, Ahrefs, CleverTap, Domo, Supermetrics, Outreach
- Find the Perfect Marketing Analytics Solution Tailored to Your Business
- Final Thoughts

Why Marketing Analytics is Crucial in 2024

Marketing analytics has never been more essential than now. With the rapid evolution of technology and changing consumer behaviors, businesses need to stay ahead of the curve by using data-driven insights rather than making decisions blindly. Let's consider why marketing analytics is a must-have for this season and the years to come, even though we have yet to discover what AI technologies will bring in the future.

Harnessing Generative AI for Predictive Insights. Generative AI transforms how marketers interact with data by analyzing past trends and predicting future outcomes. This approach helps businesses anticipate customer needs and shape strategies proactively, giving them a competitive edge in the market.

Shift to First-Party Data. As third-party cookies phase out, the focus is shifting toward first-party data, which allows marketers to build more personalized and trustworthy connections with their audience.

Real-Time Data Analytics. Analyzing data in real time allows marketers to adjust their strategies on the fly. This dynamic approach helps optimize campaigns continuously, making them more responsive to changing market conditions and customer behaviors.

Impact on Profitability. According to [WinSavvy](https://www.winsavvy.com/intersection-of-marketing-and-statistics-trends/)'s research, companies that implement data-driven marketing strategies are up to six times more likely to be profitable year-over-year than those that don't use data effectively.
So, the reality is that [92%](https://www.winsavvy.com/intersection-of-marketing-and-statistics-trends/) of marketing professionals believe data and analytics are vital for business success. This reliance on data helps create more targeted campaigns, understand customer behavior, and achieve better results overall.

The Evolution of Marketing Analytics Tools

Today, marketing means data-driven insights, and analytics tools have been at the forefront of this transformation. But let's take a quick trip down memory lane to see how marketing analytics tools have evolved over the years and why they're more powerful than ever in 2024.

The Early Days: Basic Web Analytics

In the early days, marketing analytics was mostly about counting website visits and tracking basic metrics like page views and bounce rates. Tools like Google Analytics were a game-changer, giving marketers their first glimpse into how people interacted with their websites.

Rise of Multi-Channel Analytics

As marketing grew more complex with multiple channels, like email, social media, and paid ads, analytics tools evolved to keep up. Platforms started integrating data from various sources, giving marketers a more comprehensive view of their efforts. According to Statista, 74% of marketers now rely on [multi-channel](https://skyvia.com/webinars/multi-vs-omni-e-commerce-channel-strategy) analytics to understand the entire customer journey, making it a staple in modern marketing strategies.

Real-Time Analytics and Personalization

Another significant leap has been the move to real-time analytics, allowing marketers to adjust their strategies immediately. Real-time data means users can see how a campaign is performing right now and tweak it for better results. According to [HubSpot](https://www.hubspot.com/products/marketing/analytics), companies using real-time analytics are 30% more likely to outperform their competitors in customer engagement and conversion rates.
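To make the idea of real-time analytics concrete, here is a small sketch (plain Python with made-up event data, not tied to any of the tools discussed here; real platforms do this at scale with stream processors) of a sliding-window conversion-rate metric that a live dashboard could update as events arrive:

```python
import time
from collections import deque
from typing import Optional


class RollingConversionRate:
    """Conversion rate over a sliding time window (illustrative sketch)."""

    def __init__(self, window_seconds: float = 3600.0):
        self.window = window_seconds
        self.events = deque()  # (timestamp, converted: bool) pairs

    def record(self, converted: bool, now: Optional[float] = None) -> None:
        """Register one visit; converted=True counts as a conversion."""
        now = time.time() if now is None else now
        self.events.append((now, converted))
        self._evict(now)

    def rate(self, now: Optional[float] = None) -> float:
        """Current conversion rate; stale events fall out of the window."""
        now = time.time() if now is None else now
        self._evict(now)
        if not self.events:
            return 0.0
        return sum(1 for _, c in self.events if c) / len(self.events)

    def _evict(self, now: float) -> None:
        # Drop events older than the window from the left of the deque.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()
```

A campaign dashboard would call `record()` for each incoming event and re-render `rate()` every few seconds, which is exactly the "see it now, tweak it now" loop described above.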
The Shift to Predictive Analytics and AI

Now, we're in the era of predictive analytics and AI-driven insights. Modern tools don't just tell us what happened; they predict what will happen next. Advancements in machine learning and big data have fueled this shift. A study by [Deloitte](https://www2.deloitte.com/ua/uk.html) found that 67% of high-performing marketing teams are using predictive analytics to drive decisions, showing just how essential these tools have become.

The Future: Integrated, Automated, and Smart

Looking ahead, the future of marketing analytics means integration and automation. Tools are becoming smarter and more intuitive, integrating seamlessly with CRM systems, ad platforms, and other marketing technologies. Gartner predicts that by 2025, over 80% of marketers will rely on automated analytics processes to streamline their workflows and drive efficiency.

The Best Digital Marketing Analytics Tools to Use

Nowadays, having the right analytics tools is like having a secret weapon. Ranging from classic tools like Google Analytics that help users understand website traffic to more advanced platforms like HubSpot and Adobe Analytics that offer in-depth insights into customer behavior across multiple channels, there's a tool for every need and budget. Here's the list of 12 top digital marketing analytics tools for 2024, plus a bonus tool.

Google Analytics 4

[Google Analytics 4](https://support.google.com/analytics/answer/10089681) is the latest version of Google's popular analytics platform, which provides insights into user behavior across websites and apps. It offers advanced tracking capabilities, machine learning insights, and a more flexible data model than its predecessor. With GA4, businesses can get a more holistic view of the customer journey, seamlessly tracking interactions across multiple devices and platforms. It also introduces enhanced privacy controls.
This makes it easier to comply with data regulations while gaining valuable insights. GA4's deeper integration with Google Ads allows for more precise audience targeting and campaign optimization.

Best for

Website traffic analysis, user behavior insights, and conversion tracking.

Pros

- In-depth insights and cross-platform tracking.
- Event-based data model.

Cons

- A steep learning curve.
- Limited integration with some third-party tools.

Pricing

Free; the premium version (Google Analytics 360) starts at $150,000/year.

Integrations

Native integration with the Google ecosystem, including Google Ads, Google Search Console, BigQuery, and more.

HubSpot Analytics

Part of the [HubSpot](https://www.hubspot.com/) platform, HubSpot Analytics offers robust tools for tracking website performance, marketing campaigns, and customer interactions. It integrates well with HubSpot's CRM, providing a unified view of marketing and sales data. This seamless integration enables businesses to track the entire customer lifecycle, from the first touchpoint to the final sale, all in one place. HubSpot Analytics also provides intuitive dashboards and customizable reports, allowing teams to monitor key metrics and make data-driven decisions easily. The platform's built-in automation tools help streamline marketing efforts so that businesses can focus on growing their customer base.

Best for

Inbound marketing, CRM integration, and lead management.

Pros

- User-friendly interface, great for CRM integration.
- Detailed contact analytics.

Cons

- The tool can be pricey for smaller businesses.
- Advanced features require higher-tier plans.

Pricing

Free basic plan; paid plans start at [$15](https://www.hubspot.com/pricing/marketing/starter?hubs_content=www.hubspot.com%2Fproducts%2Fmarketing%2Fanalytics&hubs_content-cta=nav-pricing&term=annual)/month.

Integrations

Native integrations provide smooth data flow into Google Analytics, Salesforce, WordPress, and other tools in companies' data stacks.
SEMrush

SEMrush is a comprehensive SEO and digital marketing tool that helps businesses improve their online visibility through competitor analysis, keyword research, and website audits. It offers backlink analysis, content optimization, and social media tracking, giving businesses a full suite of tools to boost their online presence. With SEMrush, companies can uncover valuable insights into their competitors' strategies, identify growth opportunities, and fine-tune their marketing campaigns for better performance. The platform's detailed reporting and customizable dashboards make it easy to track progress and measure the effectiveness of SEO efforts, helping businesses stay ahead in the competitive digital landscape.

Best for

SEO, PPC, and content marketing strategy.

Pros

- Extensive keyword research.
- Competitor analysis and SEO tools.

Cons

- The tool can be overwhelming due to the number of features.
- Higher costs for premium features.

Pricing

Starts at $139.95/month.

Integrations

Native integration with Google Analytics, Google Search Console, and social media platforms.

Coupler.io

Coupler.io is a reporting automation and analytics solution for multi-channel marketing analysis. This [Supermetrics alternative](https://blog.coupler.io/supermetrics-alternative/) helps users turn raw data into actionable, auto-updated reports. Its no-code data connectors and robust data transformation module simplify the collection and management of PPC, SEO, CRM, and other marketing data. Coupler.io also offers in-app visualization and over 100 customizable dashboard templates for Looker Studio, Power BI, and spreadsheets, providing marketers with holistic performance analytics. Its AI Insights assist with quick trend analysis and data interpretation, offering improvement recommendations.

Best for

Automating multi-channel marketing reporting.

Pros

- Pre-defined datasets and transformations.
- Fast in-app dashboards.
- AI assistance in analytics.

Cons

- Learning curve.
- Limited free plan.

Pricing

Starts at $24/month.

Integrations

Native integration with Meta Ads, Pipedrive, Google Analytics, Shopify, and many others.

Adobe Analytics

Adobe Analytics provides enterprise-level analytics solutions with robust features for real-time data analysis, predictive analytics, and advanced segmentation. It empowers businesses to gain deep insights into customer behavior across all digital channels, from websites to mobile apps. Its scalable architecture suits organizations of all sizes, from large enterprises to growing businesses looking to use advanced analytics.

Best for

Deep data insights and custom reporting.

Pros

- Highly customizable, with powerful real-time analytics.
- Predictive capabilities.

Cons

- Expensive.
- Requires a high level of technical expertise to use all the functionality.

Pricing

Custom pricing; contact Adobe for details.

Integrations

Native integration with Adobe Experience Cloud and CRM platforms. Allows integration with other third-party tools.

Mixpanel

[Mixpanel](https://mixpanel.com/) focuses on product and user analytics, helping companies understand engagement and retention. It's excellent for tracking events and building funnels to optimize user experience. With its robust real-time analytics, Mixpanel allows businesses to dive deep into user interactions, uncovering the behaviors that drive conversions and long-term engagement. The platform also offers advanced segmentation and cohort analysis, enabling companies to tailor their products and marketing efforts to specific user groups. Mixpanel's customizable dashboards make it easy to visualize key metrics and measure the impact of product changes.

Best for

Product analytics and user engagement tracking.

Pros

- Intuitive interface.
- Powerful event tracking.
- Real-time insights.

Cons

- Limited reporting features compared to some competitors.

Pricing

Free tier available; paid plans start at $28/month.
Integrations

Mixpanel primarily uses native integrations for tools like Slack, Salesforce, and Segment. In some cases, particularly with more specialized or less common tools, third-party connectors or APIs might be used to achieve the desired integration.

Maropost

[Maropost](https://www.maropost.com/) offers marketing automation and analytics tools for email, SMS, and social media marketing. It's ideal for businesses looking to streamline their marketing communications. With Maropost, companies can manage all their campaigns from a single platform, ensuring consistent messaging across channels. Maropost also provides in-depth analytics and reporting, giving businesses the insights they need to optimize their strategies and drive higher ROI.

Best for

Email marketing and marketing automation.

Pros

- Robust automation features.
- Good customer support.
- Easy to use.

Cons

- Limited customization options.
- The solution can be costly.

Pricing

Custom [pricing](https://www.maropost.com/platform/plans/) based on users' needs.

Integrations

Native integrations with Shopify, Salesforce, and various e-commerce platforms.

Tableau

[Tableau](https://www.tableau.com/) is a leading data visualization tool that helps users see and understand their data through interactive, shareable dashboards. It's highly customizable and integrates well with various data sources. With its drag-and-drop interface, even non-technical users can create complex visualizations that reveal deep insights into their data. Tableau's analytics capabilities, like advanced calculations and trend analyses, allow users to dive deeper into their data and uncover actionable insights. Whether users explore data on their desktops or share it across the organization, Tableau makes it easy to collaborate and make data-driven decisions.

Best for

Data visualization and business intelligence.

Pros

- Powerful visualization capabilities.
- Wide range of data connectors.

Cons

- Steep learning curve.
- The solution can be expensive for full functionality.

Pricing

Starts at $35/month per user.

Integrations

Native integration with Salesforce, Google Analytics, Excel, and many more.

Moz Pro

[Moz Pro](https://moz.com/products/pro) provides tools for SEO, including keyword research, website audits, and backlink analysis. It helps businesses improve their search engine rankings and online visibility by offering detailed insights into how a website performs in search engines. With features like rank tracking and on-page optimization suggestions, Moz Pro makes it easier for businesses to identify opportunities to enhance their SEO strategy. The intuitive interface and in-depth reports help users understand their SEO data, making it easier to stay ahead of the competition in search engine rankings.

Best for

SEO and inbound marketing.

Pros

- Excellent keyword research tools.
- Comprehensive SEO tracking.

Cons

- Some users find the interface dated.
- Limited features in lower-priced plans.

Pricing

Starts at [$49](https://moz.com/products/pro/pricing)/month.

Integrations

Native integration with Google Analytics, Google Search Console, HubSpot, and Google Data Studio.

Ahrefs

[Ahrefs](https://ahrefs.com/) is a popular tool for SEO and content marketing, known for its robust backlink analysis and keyword research capabilities. It's a favorite among marketers for its comprehensive tools that help uncover valuable insights into competitors' strategies and identify opportunities to improve search engine rankings. Ahrefs' Site Explorer allows users to dive deep into any website's backlink profile, providing detailed information on where links come from and the quality of those links. Ahrefs also offers powerful keyword research tools that help users discover high-volume keywords and track their performance over time.

Best for

Backlink analysis, keyword research, and competitor analysis.

Pros

- Excellent data accuracy.
- Extensive backlink database.
- Easy to use.

Cons

- The platform can be pricey, especially for smaller businesses.

Pricing

Starts at $129/month.

Integrations

Native integration with Google Analytics, Google Search Console, Google Data Studio, and Excel/Google Sheets.

CleverTap

[CleverTap](https://clevertap.com/) is a customer engagement and retention platform that combines analytics, personalization, and automation to help businesses retain and grow their user base. It offers a comprehensive set of tools that allow companies to understand user behavior in real time and deliver personalized experiences across multiple channels. With CleverTap, businesses can segment their audience based on behavior, preferences, and demographics, enabling highly targeted marketing campaigns. The platform triggers personalized messages in real time through push notifications, in-app messages, or email. CleverTap provides robust analytics to track the effectiveness of these campaigns, helping businesses optimize their strategies for better engagement and retention.

Best for

Mobile app analytics and user retention strategies.

Pros

- Powerful segmentation, real-time insights, and engagement automation.

Cons

- Complex setup.
- Can be expensive for smaller teams.

Pricing

Custom pricing based on usage.

Integrations

Native integration with Google Analytics, Firebase, Segment, Shopify, Amazon Redshift, and more.

Domo

[Domo](https://www.domo.com/) is a business intelligence and data visualization platform that provides real-time data insights and easy-to-create dashboards. It helps businesses make data-driven decisions quickly by bringing all their data together in one place. Domo allows users to create custom dashboards without extensive technical skills, making it accessible to a wide range of users. With its real-time data capabilities, Domo ensures that businesses always work with the most up-to-date information.
Its collaborative features make it easy to share insights across teams, fostering a data-driven culture throughout the organization.

**Best for:** Data visualization and business intelligence for large datasets.

**Pros:**
- The solution is scalable.
- It integrates with a vast number of data sources.
- User-friendly dashboards.

**Cons:**
- High cost.
- The platform can be overwhelming due to its feature richness.

**Pricing:** Custom [pricing](https://www.domo.com/pricing); contact Domo for details.

**Integrations:** Native integration with Salesforce, Google Analytics, AWS, and many others.

**Supermetrics**

[Supermetrics](https://supermetrics.com/) pulls all users' marketing data into one place, like Google Sheets, Excel, or a data warehouse, making tracking and analyzing performance across multiple platforms easier. With Supermetrics, users can automate the data collection process, eliminating the need for manual data entry and reducing the risk of errors. The platform's seamless integration with popular data tools makes it simple to create custom reports and dashboards that provide a clear overview of marketing performance. Supermetrics also provides advanced scheduling features so users can set up automated data refreshes, ensuring they always have the latest insights.

**Best for:** Aggregating data from multiple marketing platforms for easy reporting.

**Pros:**
- The platform simplifies data aggregation.
- It supports numerous data sources.

**Cons:**
- Limited customization options.
- Pricing can increase with more connectors.

**Pricing:** Starts at $39/month.

**Integrations:** Native integration with Google Ads, Google Analytics, Facebook Ads, HubSpot, and many more.

**Outreach**

Last but not least: since marketing and sales processes are interdependent and inseparable, let's also consider [Outreach,](https://www.outreach.io/) which is a tool that analyzes sales data.
This powerful sales engagement platform helps businesses optimize their sales processes by providing deep insights into customer interactions and sales performance. Outreach's advanced analytics and reporting features enable sales teams to identify what's working, what's not, and where there's room for improvement. With Outreach, sales teams can align more closely with marketing efforts, ensuring a unified approach to driving growth and success.

**Best for:** Helping sales teams improve their productivity, streamline workflows, and increase engagement with prospects through personalized communication at scale.

**Pros:**
- Outreach automates repetitive tasks like follow-up emails, reminders, and assignments.
- The tool supports email, phone, and social media outreach, allowing for a cohesive and multi-faceted sales strategy.

**Cons:**
- Initial setup and customization can be time-consuming, especially for companies with unique workflows or complex sales processes.
- The price might be a barrier for smaller teams or startups with limited budgets.

**Pricing:** Pricing plans are flexible and depend on users' requirements.

**Integrations:** The tool natively integrates with CRMs like Salesforce and Microsoft Dynamics and platforms like LinkedIn Sales Navigator, Gmail, Outlook, and Slack, ensuring that all the sales data is synchronized and up-to-date.

**Find the Perfect Marketing Analytics Solution Tailored to Your Business**

First and foremost, the service selection depends on the business size and needs. But there's always more to consider, so here are the steps to find the perfect match.

1. **Identify business goals.** Define what you need from an analytics tool. Are you focused on SEO, email marketing, or social media, or are you combining all three? Knowing these goals will help narrow down the options.
2. **Consider the team's needs.** Decide who will use the tool. If the team isn't tech-savvy, a user-friendly platform like HubSpot Analytics is a better choice. Tools like Adobe Analytics or Tableau might fit a more advanced team.
3. **Budget matters.** Marketing analytics tools come in all price ranges, from free options like Google Analytics to high-end solutions like Adobe Analytics. Assess the budget and choose a tool that provides the best value without stretching resources.
4. **Test and adapt.** Don't be afraid to try out different tools. Many offer free trials, so users can test them and see how well they align with their needs.
5. **Look at integrations.** A great marketing analytics tool should seamlessly integrate with the existing systems. In this step, [data integration platforms](https://skyvia.com/blog/data-integration-tools/) become your best friends. For instance, the [Skyvia](https://skyvia.com/data-integration) data integration tool can connect [190+](https://skyvia.com/connectors) data sources, like CRMs, email marketing platforms, e-commerce systems, etc., to the analytics tools. This ensures that all the data flows smoothly into one place, giving you a comprehensive view of your marketing efforts. On top of that, the solution is user-friendly and doesn't require a single line of code. According to [G2 Crowd's](https://www.g2.com/categories/etl-tools?rank=1&tab=easiest_to_use#rank-1) latest rating, Skyvia is among the top three easiest-to-use [ETL tools](https://skyvia.com/blog/etl-tools/). The [TrustRadius](https://www.trustradius.com/products/skyvia/reviews#overview) reviews also show it as a top-rated data integration tool.

**Final Thoughts**

Integrating all the company's marketing data can be challenging, especially while using multiple platforms that offer a range of capabilities tailored to various marketing needs, from basic analytics and reporting to advanced predictive insights and automation.
No matter if you're just getting started with marketing analytics or looking to upgrade the existing setup, remember that the right tools can make all the difference in turning data into insights and insights into action.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
If yes, you're already one of the lucky ones working with such tools: your business is efficient and productive, and your data is accurate and reliable, so you can make decisions knowing what's actually happening. If your answer is "Ahh, not sure," let's see how these tools help your business and which ones you need. There's a whole ecosystem of tools to solve your business pains, but let's start with [Matillion](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia) as one popular solution and compare it with some alternatives.

**Matillion**

Let's see what [Matillion](https://www.matillion.com/) is, what it does well, and how it's commonly used:

- The platform is a cloud-based ETL solution focused on easy data extraction, transformation, and BI.
- It offers 140+ connectors and a custom connectivity framework that lets clients create connectors for practically any REST API data source.
- Matillion ETL is hosted on the AWS, GCP, or Azure platforms.
- The intuitive UI avoids extra costs for user training. You can work in real time and integrate with Git.
- The virtual-machine approach offers complete control of your data because it always stays in your environment.

**Use cases:**
- Cloud data warehousing (for Amazon Redshift, Snowflake, Google BigQuery, and Azure Synapse).
- Data integration and transformation.
- Data analysis and BI.

**What might users not like? (Possible drawbacks):**
- Java heap space errors due to hard limits on EC2 instances.
- Parts of the software can feel like they are still in development.

Still not yours? OK, let's dive deeper. Even in the same industry, businesses' needs differ.
It may be company size, budget, or the number of team members with the necessary skills, so let's review some more options that may cover your business's needs:

- Skyvia
- Fivetran
- Stitch
- Dell Boomi
- Jitterbit
- Celigo
- MuleSoft
- Talend
- Hevo
- Informatica

**Skyvia**

[Skyvia](https://skyvia.com/) is also a cloud-based ETL solution, supporting data integration, migration, and backup. Its [Gartner rating](https://www.gartner.com/reviews/market/data-integration-tools/vendor/devart/product/skyvia) is slightly higher (4.8) than Matillion's (4.4). Both platforms solve the data warehousing case, but Skyvia is no-code and more cost-effective.

| Skyvia | Matillion |
| --- | --- |
| Volume- and feature-based pricing models. | Subscription-based model (more expensive for short-term users). |
| Freemium plan available for small businesses and users with a limited budget. | Free trial available. |

Regarding the use cases, Matillion fits some specific scenarios, but Skyvia's solution is more flexible.

**Why is it good enough? (Benefits):**
- No-code platform for both ETL and ELT.
- User-friendly UI.
- Support for 180+ sources.
- Cloud-to-cloud workflow automation and data management with SQL and CSV export and import capabilities.
- More flexible and cost-effective pricing.
- No additional software or hardware needed to get started.
- More freedom for users with Salesforce (SFDC) data tasks.

**Use cases:**
- Skyvia is a competitive player for data integration in the financial, accounting, and e-commerce areas. It lets companies streamline operations and boost productivity by integrating their platforms with various hybrid ecosystems, like CRM, accounting, etc.
- Skyvia's data synchronization features help reduce errors while automating your daily routine.
- The ability to connect data from different sources, like email, social media, and the web, to the CRM is another essential advantage of this system.

**What might users not like? (Possible drawbacks):**
- Only 180+ connectors supported.
- The Freemium pricing plan may not cover all your needs.

[Skyvia vs Matillion](https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia)

**Fivetran**

Similarly to Matillion and Skyvia, [Fivetran](https://www.fivetran.com/) is an ETL solution that extracts data from databases, events, applications, files, etc. It automates data pipeline creation, which saves time and resources and lets teams focus on data analysis. The platform facilitates more coordinated decision-making inside your company by integrating data from multiple sources into a single location.

**Why is it good enough? (Benefits):**
- Easy data integration from 160+ sources and simple map creation facilitate data management.
- Data pipeline automation decreases manual coding and saves time and costs.
- With its special mapping solution, Fivetran allows adding connections to data at any time.

**Use cases:**
- Real-time data warehousing and analysis.
- BI and reporting.
- Data integration and replication.

**What might users not like? (Possible drawbacks):**
- Data synchronization time may be an issue for data refreshes when working in real time.
- The data management capabilities aren't as vast and varied as in some other apps.

In other words, Fivetran is flexible enough to be used successfully in healthcare, finance, retail, or any other area working with sensitive data. Real-time data access improves the analysis abilities of any business.

[Skyvia vs Fivetran](https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia)

**Stitch**

[Stitch](https://ua.stitchdata.com/) is pretty close to the previously described solutions. It's cloud-based, no-code, and supports 130+ connectors, a bit fewer than the other systems offer. The pricing is volume-based with a free 2-week trial, but unlike Skyvia, there is no free version. Data transformation here is limited: you can only transform your data to make it compatible with the destination. No extra features are available.

Why is it good enough?
**(Benefits):**
- The system quickly replicates data in real time from 130+ databases and SaaS platforms to cloud data warehouses, helping to centralize it and prepare it for analytics.
- The UI is friendly for non-technical users.

**Use cases:**
- Data warehousing and analytics (Google Ads, Facebook, Salesforce, etc.).
- Big data ingestion.
- Transformation of digital info into valuable BI.
- Improvement of data uploads to the destination.

**What might users not like? (Possible drawbacks):**
- It's not suitable for complex data integration jobs that require custom coding.
- The solution can't cover the pains of large-scale data integration because of the limited options available.

Despite the drawbacks, you can use Stitch in e-commerce, retail, healthcare, finance, and banking. As usual, everything depends on what you need right now.

[Skyvia vs Stitch](https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia)

**Dell Boomi**

[Boomi](https://boomi.com/) is a cloud-native, low-code, real-time intelligent platform that supports cloud-to-cloud, SaaS-to-SaaS, cloud-to-on-premises, on-premises-to-on-premises, and B2B data integration. It supports only 90+ sources, and pricing depends on the number of connectors included. As a plus, there is a 30-day free trial version.

**Why is it good enough? (Benefits):**
- Errors caused by human input are minimized thanks to automated data processing.
- Data silos are reduced by integrating various apps.
- The system's flexibility and scalability allow it to support a wide set of integration patterns.
- A robust data processing security level (in the cloud or on-premises).

**Use cases:**
- Business workflow automation with pre-built connectors for common apps like NetSuite, Shopify, and so on.
- Data management and governance with Boomi's MDM solution.
- Document flow automation and optimization of business processes.

What might users not like?
**(Possible drawbacks):**
- The solution is rather complicated, especially for non-technical users, and may require extra time and costs for training.
- Limited customization and flexibility: the cost of using the platform may increase while its flexibility and customization abilities go down.

[Skyvia vs Boomi](https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia)

**Jitterbit**

[Jitterbit](https://www.jitterbit.com/) is a low-code, cloud-based iPaaS for workflow automation, regardless of business size. It provides effortless data management for cloud, on-premises, and legacy applications from a single platform and scales easily. The solution supports about 120+ connectors, and the price depends on how many you need.

**Why is it good enough? (Benefits):**
- The service's flexibility and customizability allow designing workflows according to particular business needs.
- The user-friendly UI saves costs and time.
- Merging and utilizing data from multiple sources also saves time and improves decision-making.
- Automated data synchronization and real-time updates decrease human involvement and thus the risk of errors.

**Use cases:**
- Integration of Salesforce with popular third-party applications, for instance, ERP systems like NetSuite, Oracle, SAP, and Infor.
- Quick integration of SaaS, on-premises, and cloud apps.
- Cloud data migration.
- API integration with third-party apps.

**What might users not like? (Possible drawbacks):**
- The platform may not fit small businesses with a limited budget because of its high price.
- The solution's setup and configuration may be challenging and require significant technical expertise.

[Skyvia vs Jitterbit](https://skyvia.com/etl-tools-comparison/jitterbit-alternative-skyvia)

**Celigo**

[Celigo](https://www.celigo.com/) is a low-code iPaaS data integration service that directly supports connecting users' apps to NetSuite for accurate data syncing and automation in real time.
A few flagship offerings, such as Smart Connectors, full iPaaS platform access, integration wizards, and templates, help quickly connect apps, sync data, and automate business processes. The solution supports 200+ sources, and its pricing model includes unlimited flows free for 30 days.

**Why is it good enough? (Benefits):**
- Business process optimization: measure key performance metrics and gain insights into your business operations to scale while keeping costs as low as possible.
- Improved data accuracy and integrity: data integration is automated, so manual data entry is unnecessary.
- Real-time monitoring and alerts enhance your operations visibility for more informed decision-making.

**Use cases:**
- Integration of e-commerce platforms with ERP systems.
- Automation of data entry and management processes: an everyday use case with prebuilt flows, settings, and configurations.
- HR business process automation.
- Front-office and back-office processes (with support and recommendations from real human experts).

**What might users not like? (Possible drawbacks):**
- Challenging setup and integration.
- The complexity of EDI workflows for supply chain automation may not be cost-effective.
- Limited customization options for unique decentralized systems and use cases.
- A high cost of implementation and maintenance compared to the similar solutions described above.

[Skyvia vs Celigo](https://skyvia.com/etl-tools-comparison/celigo-alternative-skyvia)

**MuleSoft**

[MuleSoft](https://www.mulesoft.com/integration-solutions/soa/sap) is an API management and integration platform that connects apps, data, and devices across on-premises and cloud environments. It's a strong tool for communication among people, processes, systems, and technology, allowing corporations to integrate across the enterprise and work successfully. The solution supports 200+ sources; its price range depends on the data volume and features you'd like to use.
**Why is it good enough? (Benefits):**
- Quick launch of new projects.
- The possibility of reducing maintenance and operational costs.
- Enhanced scalability and flexibility with iPaaS and CloudHub infrastructure management.
- A strong ability to connect and share data to create a complete customer view.

**Use cases:**
- Integration of legacy systems with the modern world.
- Business process and workflow automation.
- Data exchange and communication between a zoo of disparate systems.

**What might users not like? (Possible drawbacks):**
- The platform's complexity and technical expertise requirements. If your business doesn't have an IT team for this, it's not your solution.
- The high cost of implementation and maintenance is a blocker for small companies with limited budgets.

[Skyvia vs MuleSoft](https://skyvia.com/etl-tools-comparison/mulesoft-alternative-skyvia)

**Talend**

[Talend](https://ua.talend.com/) is a low-code integration system, available in the cloud and on-premises, that combines data integration, data quality, data governance, and data virtualization, supporting the entire data lifecycle. It provides open-source capabilities, or you can use the full commercial suite, Talend Data Fabric. The connector offering here is solid (about 1,000), and the pricing model depends on your subscription.

**Why is it good enough? (Benefits):**
- Efficient teamwork with Data Fabric saves time on data integration and improves productivity.
- Improved data quality and accuracy with an end-to-end data management platform.
- Data quality control for cost savings, speed, and performance improvement.

**Use cases:**
- Data integration and migration.
- Cloud data warehousing.
- Big data processing and analysis.

**What might users not like? (Possible drawbacks):**
- The software's complexity can increase the time and costs spent on staff training.
- Limited connectivity to some data sources.
- Issues with large data volumes.
[Skyvia vs Talend](https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia)

**Hevo**

[Hevo](https://hevodata.com/) is a no-code data integration tool with 150+ supported sources. Depending on the product, it offers ETL, ELT, and reverse ETL functionality. It's a good fit for those who need to pull data from all their sources into a warehouse and run transformations for analytics. The pricing is event-based and includes a free version.

**Why is it good enough? (Benefits):**
- Integration with databases, SaaS systems, cloud storage, SDKs, and streaming services for further connection with Google BigQuery and other analytics tools.
- Faster data analysis, up and running in a few minutes thanks to automated data flow.

**Use cases:**
- Optimization of ETL processes for BI.
- Real-time data warehousing for analytics.
- Cloud data operations.

**What might users not like? (Possible drawbacks):**
- Inability to manage extensive data volumes.
- Limited support for real-time or streaming data.
- A complex data mapping process.
- A pricing model that is high for limited budgets compared to the competitors.

[Skyvia vs Hevo](https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia)

**Informatica**

[Informatica](https://www.informatica.com/) is an iPaaS service for ETL and ELT, available in the cloud and on-premises. It allows connecting to and fetching data from heterogeneous sources and loading it into a target system. 180+ connectors are available, and the price is flexible depending on your business needs.

**Why is it good enough? (Benefits):**
- You can integrate data from cloud-based and on-premises sources and synchronize it across different applications.
- You can manage your data quality dimensions.
- You can base your decision-making on data-driven insights.

**Use cases:**
- Data warehousing and BI.
- Cloud data integration.
- MDM.

**What might users not like? (Possible drawbacks):**
- Difficult cost estimation.
- The platform's complexity for users.
- Limited customization options.
[Skyvia vs Informatica](https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia)

**The Win-Win Choice**

All the platforms are pretty good in terms of capabilities and pricing. Of course, the right choice depends on each company's needs, because requests differ even within the same industry. We recommend trying the one where the feature set and the pricing model are a win-win: Skyvia. Its functionality is more than enough to solve various business pains, and its pricing ranges from Freemium to Enterprise packages, which is a comfortable and flexible option for any business. Ready to start? Try the Freemium version.
This article gives you the 10 best alternatives to Mulesoft:

- Mulesoft Alternatives Comparison Table
- Skyvia
- Hevo Data
- Stitch
- Matillion
- Airbyte
- Integrate.io
- Talend
- Informatica
- AWS Glue
- Rivery
- Conclusion

Let's begin.

**Mulesoft Alternatives Comparison Table**

The following is a summary of Mulesoft alternatives:

| Mulesoft Alternative | Connectors | User Interface | Integration Category | Free Tier |
| --- | --- | --- | --- | --- |
| [Skyvia](https://skyvia.com/) | 150+ | Graphical | ETL, ELT | Yes |
| [Hevo Data](https://hevodata.com/) | 150+ | Form | ETL, ELT | Yes |
| [Stitch](https://www.stitchdata.com/) | 130+ | Form | ELT | No |
| [Matillion](https://www.matillion.com/) | 150+ | Graphical | ELT | Yes |
| [Airbyte](https://airbyte.com/) | 300+ | Form | ELT | Yes |
| [Integrate.io](https://www.integrate.io/) | 220+ | Graphical | ETL, ELT | No |
| [Talend](https://www.talend.com/) | 900+ | Graphical | ELT | Yes |
| [Informatica](https://www.informatica.com/) | – | Graphical | ETL, ELT | No |
| [AWS Glue](https://aws.amazon.com/glue/) | – | Graphical | ETL, ELT, streaming | Yes |
| [Rivery](https://rivery.io/) | 200+ | Form | ELT | No |

**Skyvia**

Skyvia is our first Mulesoft competitor. It's a universal cloud data platform. It handles integration, replication, backups, and more. What stands out in Skyvia is the ease of use and simplicity in designing [data pipelines](https://skyvia.com/blog/what-is-data-pipeline/). Experienced [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) enthusiasts can adapt right away to its user interface.

**Pros**
- Connectors: 180+ and growing.
- User Interface: Simple, drag-and-drop designer.
- Data Transformations: Simple and complex using expression builders.
- Scheduler: Yes.
- Coding: Almost zero-code querying of data sources.
- 100% Cloud: Yes, with support for on-premise data sources using the Skyvia agent.
- Support: [Support portal](https://support.skyvia.com/), live chat, and email.
- Documentation: Online technical documentation, tutorials, and webinars.
- Free Access and Trials: Yes.

**Cons**
- Limits on free usage.
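The Integration Category column distinguishes ETL tools, which transform data in the pipeline before loading it, from ELT tools, which load raw data first and transform it inside the warehouse with SQL. Here is a minimal Python sketch of that difference; the in-memory sqlite3 database merely stands in for a real warehouse such as BigQuery or Snowflake, and the email-cleanup rule and table names are invented for illustration:

```python
import sqlite3

# Two raw records as they might arrive from a source connector.
rows = [{"email": " Ada@Example.com "}, {"email": "bob@example.com"}]

# ETL: clean the data in the pipeline, then load the result.
etl_rows = [{"email": r["email"].strip().lower()} for r in rows]

# ELT: load the raw data first, then transform inside the warehouse
# with SQL (sqlite3 is a stand-in for a cloud warehouse here).
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE raw_contacts (email TEXT)")
wh.executemany("INSERT INTO raw_contacts VALUES (?)",
               [(r["email"],) for r in rows])
wh.execute("""CREATE TABLE contacts AS
              SELECT lower(trim(email)) AS email FROM raw_contacts""")
elt_rows = [row[0] for row in
            wh.execute("SELECT email FROM contacts ORDER BY email")]

print(etl_rows)  # [{'email': 'ada@example.com'}, {'email': 'bob@example.com'}]
print(elt_rows)  # ['ada@example.com', 'bob@example.com']
```

Either way the warehouse ends up with the same cleaned table; the practical difference is where the compute runs and whether the raw data is preserved in the destination.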
**Pricing**
- Flexible [freemium model](https://skyvia.com/pricing/).
- Free plan for 10k records/month.
- Starts at $15/month for 100k records.

**Hevo Data**

[Hevo](https://docs.hevodata.com/) is an end-to-end data pipeline that promises operational intelligence to business tools.

**Pros**
- Connectors: 150+.
- User Interface: Easy-to-use forms with schema mapper and keyboard shortcuts.
- Data Transformations: Flexible transformation using Python.
- Scheduler: Yes.
- 100% Cloud: Yes.
- Support: 24/7 live technical support.
- Documentation: Provides resource guides and video tutorials.
- Free Access and Trials: Yes.

**Cons**
- Registration does not allow personal email addresses (Outlook, Gmail) or .edu addresses.
- Only 50 connectors in the Free Plan.
- Requires knowing Python to do transformations.
- No drag-and-drop designer for pipelines.

**Pricing**
- 1 million events or fewer are free using free connectors. Events can be new or changed records.
- 14-day free trial.
- The Starter plan starts at $239/month for 5 million events.
- See their [pricing page](https://hevodata.com/pricing/pipeline/) for more details.

**Stitch**

[A Talend product](https://www.stitchdata.com/) with a simple ELT self-service interface. It's for the quick movement of data into data warehouses for analysis.

**Pros**
- Connectors: 130+.
- User Interface: Simple interface for quick pipeline creation.
- Scheduler: Yes.
- Coding: No coding needed.
- 100% Cloud: Yes.
- Support: Contact form and chat for all customers. Phone support for Enterprise customers.
- Documentation: Online technical documentation.
- Free Trials: Yes.

**Cons**
- No free version.

**Pricing**
- 14-day free trial.
- Standard pricing starts at $100/month with 1 destination and 10 sources.
- See more of the pricing plans [here](https://www.stitchdata.com/pricing/).

**Matillion**

[Matillion](https://www.matillion.com/) offers 2 ways to integrate data: Matillion ETL and Matillion Data Loader. The difference between the two is the transformation of data.
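The "transformation" these tools keep referring to, whether Hevo's Python transformations or the transform step that separates Matillion ETL from Matillion Data Loader, usually boils down to a small function applied to each record in flight. A generic, illustrative sketch (the `transform` name and dict-shaped record are invented for this example, not any vendor's actual API):

```python
def transform(record: dict):
    """Apply one in-flight transformation to a record.

    Illustrative only: normalizes an email field and drops records
    without one. Returning None means "skip this record".
    """
    email = record.get("email", "").strip().lower()
    if not email:
        return None
    return {**record, "email": email}

# Run the transform over a tiny batch and keep only surviving records.
cleaned = [r for r in map(transform, [
    {"id": 1, "email": " Ada@Example.com "},
    {"id": 2, "email": ""},
]) if r is not None]
print(cleaned)  # only the record with a usable email survives
```

Pipelines without such a step (Matillion Data Loader, or a pure ELT tool) simply load records untouched and leave any cleanup to SQL in the destination.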
There's no transformation of data in Matillion Data Loader before loading.

**Pros**
- Connectors: 150+.
- User Interface: Easy-to-use drag-and-drop interface.
- Data Transformations: Transformation components, including an expression editor.
- Scheduler: Yes.
- 100% Cloud: Yes.
- Support: [Support portal](https://support.matillion.com/s/) and the Matillion community.
- Documentation: Technical documentation, training courses, and videos.
- Free Access and Trials: Yes.

**Cons**
- Matillion ETL is not included in the free tier.
- Limited Matillion Data Loader features on the free tier.

**Pricing**
- Free tier, plus trials for the Basic, Advanced, and Enterprise plans.
- Pay by Matillion credits; credits start at $2.00 per credit.
- Visit [Matillion pricing](https://www.matillion.com/pricing/) for more details.

**Airbyte**

[Airbyte](https://airbyte.com/) is one of the Mulesoft alternatives that is an open-source ELT platform.

**Pros**
- Connectors: 300+, with an option to create your own.
- User Interface: Simple, form-based interface.
- Data Transformations: Using SQL and dbt.
- Scheduler: Yes.
- 100% Cloud: Yes, but the free tier can be deployed to on-premise servers.
- Support: [Support tickets](https://support.matillion.com/s/) and the Airbyte global community.
- Documentation: Technical documentation and tutorials.
- Free Access and Trials: Yes.

**Cons**
- Free access requires installation on-premises or with your cloud provider of choice, which requires some technical skills.

**Pricing**
- Free Open Source, and 14-day free trials for Cloud and Enterprise.
- Pricing is credit-based and starts at $2.50/credit.
- See their [pricing page](https://airbyte.com/pricing) for more details.

**Integrate.io**

[Integrate.io](https://www.integrate.io/) is a low-code data warehouse integration platform. It's an ETL and ELT platform.

**Pros**
- Connectors: 220+.
- User Interface: Drag-and-drop pipeline designer.
- Data Transformations: Transformation components and an expression editor.
- Scheduler: Yes.
- 100% Cloud: yes.
- Support: 24/7 email, chat, phone, and Zoom support.
- Documentation: technical documentation and webinars.
- Free Trials: yes.

**Cons**

- No real-time data synchronization capabilities.
- Does not support pure data replication use cases.
- No on-premises data solution.
- Pricey.

**Pricing**

- 14-day free trial; no free tier.
- The Starter plan is $15,000/year. See the [pricing](https://www.integrate.io/pricing/) page for more details.

**Talend**

[Talend](https://www.talend.com/) takes a unified approach to data integration, data quality, and data sharing, integrating both on-premises and cloud data. In January 2023, [Qlik announced](https://www.talend.com/about-us/press-releases/qlik-intends-to-acquire-talend/) its intent to acquire Talend.

**Pros**

- Connectors: 900+.
- User Interface: graphical pipeline designer.
- Data Transformations: transformation components.
- Scheduler: yes.
- 100% Cloud: Talend Open Studio is an app for Windows and Mac; the commercial products are 100% cloud.
- Support: 24/7 support through chat and a support portal.
- Documentation: articles and webinars.
- Free Access and Trials: yes.

**Cons**

- Pricey, according to Gartner reviews.
- Limited technical documentation.

**Pricing**

- Talend Open Studio is open source and free.
- Free trials of the commercial products.
- No price figures on the pricing page; see the [pricing page](https://www.talend.com/pricing/) for details.

**Informatica**

[Informatica Data Integration](https://www.informatica.com/products/cloud-data-integration.html) is one of Informatica's product lines. It lets you build intelligent ETL and ELT for the enterprise.

**Pros**

- Connectors: a vast array of connectors.
- User Interface: graphical pipeline designer.
- Data Transformations: pre-built transformations and formulas.
- Scheduler: yes.
- 100% Cloud: yes.
- Support: Informatica Community and Global Customer Support; 24/7 phone support is available with premium support plans.
- Documentation: online technical documentation.
- Free Trials: yes.
**Cons**

- No free tier.
- Runtime logs are a bit challenging to read.
- ETL experts transitioning to the product need to learn its terminology.

**Pricing**

- 30-day free trial.
- Prepaid subscription based on Informatica Processing Units (IPUs). Contact Informatica sales for details about IPUs or visit the pricing page [here](https://www.informatica.com/products/cloud-integration/pricing.html).

**AWS Glue**

[AWS Glue](https://aws.amazon.com/glue/?nc1=h_ls) is Amazon's serverless data integration service. You can use it for ETL, ELT, and streaming.

**Pros**

- Connectors: JDBC-compatible connectors and Amazon ecosystem connectors. Check [here](https://docs.aws.amazon.com/glue/latest/dg/glue-connections.html) for more details.
- User Interface: graphical pipeline designer with drag-and-drop.
- Data Transformations: pre-built transformation components and custom transforms using Python and JSON.
- Scheduler: yes.
- 100% Cloud: yes.
- Support: Developer and Business support, including 24/7 access to engineers with architectural guidance.
- Documentation: online technical documentation, webinars, and video tutorials.
- Free Access: yes.

**Cons**

- Customizing code generated by the designer requires knowing Spark and [Python](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python.html) (or [Scala](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-scala.html)).
- Integrations work well within the AWS ecosystem, but external data sources are tricky to integrate.

**Pricing**

- Always-free AWS Glue 1 Million tier: 1 million objects stored in the AWS Glue Data Catalog and 1 million requests/month.
- Beyond that, an hourly rate billed by the second; rates may depend on the region and the type of AWS Glue job. See the [pricing page](https://aws.amazon.com/glue/pricing/) for more details.

**Rivery**

Finally, [Rivery](https://rivery.io/) provides a unified solution for building ELT pipelines and orchestrating workflows.

**Pros**

- Connectors: 200+, with a custom data source option.
- User Interface: simple wizard-based, fill-in-the-blanks forms, with starter kits for quick pipeline development.
- Data Transformations: using SQL and Python scripts.
- Scheduler: yes.
- 100% Cloud: yes.
- Support: three support subscriptions (Starter, Professional, and Enterprise), each with different response times. See the [support page](https://docs.rivery.io/docs/working-with-rivery-support) for more details.
- Documentation: online technical documentation and community.
- Free Trials: yes.

**Cons**

- No free tier.
- No advanced scheduling on the Starter plan.

**Pricing**

- 14-day free trial.
- You pay by Rivery Pricing Unit (RPU). The Starter plan starts at $0.75/RPU credit. Check the details and the pricing calculator [here](https://rivery.io/pricing/).

**Conclusion**

Do you need ETL, ELT, or streaming? These 10 MuleSoft alternatives have you covered for every integration need. Each has its pros and cons, its own unique feature set, years on the market, and very loyal customers.

For ease of use, flexibility, and low-code needs, try out Skyvia. It's free to try. [Register here](https://id.skyvia.com/core/register/) and start building pipelines today.

[Edwin Sanchez](https://skyvia.com/blog/author/edwins/) is a software developer and project manager with 20+ years of software development experience. His most recent technology preferences include C#, SQL Server BI Stack, Power BI, and SharePoint.
Edwin combines his technical knowledge with his content writing skills to help a new breed of technology enthusiasts.

[Data Integration](https://skyvia.com/blog/category/data-integration/)

Selecting the Best No-Code ETL Tools

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | April 3, 2025

Modern businesses deal with tons of information, but users don't want to waste time writing complex scripts or building pipelines from scratch. In such cases, no-code [ETL tools](https://skyvia.com/blog/etl-tools/) have become our new best friends. These platforms let people move, clean, and sync data across systems with just a few clicks. No technical background is required! Whether you're in sales ops, marketing, analytics, or another role, they help you focus on insights instead of infrastructure. In this guide, we'll walk through the best no-code ETL tools for 2025 and help companies choose the one that fits their needs.
But before we get into the tools themselves, let's quickly break down what ETL actually is and why it matters.

Table of Contents

- What Is ETL, and What Are ETL Tools?
- What is a No-Code Data Pipeline?
- List of the Best No-Code ETL Tools: Skyvia, Integrate.io, Keboola, Fivetran, Stitch, Matillion, Osmos
- Comparison of Tools
- Benefits of No-Code Data Integration
- Summary

**What Is ETL, and What Are ETL Tools?**

Extract, Transform, Load ([ETL](https://skyvia.com/learn/what-is-elt)) is a core process that helps businesses move data from one place to another and make it actually usable along the way. First, users extract data from their sources (Salesforce, Shopify, spreadsheets, etc.). Then, they transform it by cleaning, mapping, or reshaping it. Finally, they load it into a destination like a data warehouse, dashboard tool, or another app where it can do some good.

[ETL tools](https://skyvia.com/blog/etl-tools/) automate all that, so we're not manually exporting CSVs or stitching things together with scripts. And here's where things get even better: no-code ETL tools make this process doable with drag-and-drop builders and guided flows. No developers are required. Of course, some teams want a bit more flexibility, and that's where low-code ETL tools come in. They let organizations customize things further while still keeping it user-friendly.

**No-Code ETL Tools**

Such systems are about speed, simplicity, and accessibility. They're perfect for non-technical users who need to move and manage data without relying on developers. You can build reliable data pipelines in minutes with:

- Intuitive drag-and-drop interfaces.
- Prebuilt connectors.
- Guided workflows.

These platforms solve common business pains such as manual data exports, scattered reporting, and syncing between tools like Salesforce, HubSpot, BigQuery, or Snowflake. They're especially helpful for small teams, marketing ops, sales ops, and anyone who wants insights without coding.
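The extract, transform, load sequence described above can be sketched in a few lines of plain Python. This is a minimal illustration with in-memory data: the CSV fields, cleanup rules, and the SQLite destination are stand-ins, not any particular tool's pipeline.

```python
import csv
import io
import sqlite3

# Extract: read rows from a source (a CSV export stands in for a real connector).
raw = """name,email,amount
Alice, ALICE@EXAMPLE.COM ,100
Bob,bob@example.com,250
"""
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: clean and reshape each record (trim whitespace, lowercase emails,
# cast amounts to numbers).
cleaned = [
    {
        "name": r["name"].strip(),
        "email": r["email"].strip().lower(),
        "amount": int(r["amount"]),
    }
    for r in rows
]

# Load: write the records into a destination table (an in-memory SQLite
# database stands in for a warehouse).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (name TEXT, email TEXT, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (:name, :email, :amount)", cleaned)

print(db.execute("SELECT email, amount FROM orders").fetchall())
# → [('alice@example.com', 100), ('bob@example.com', 250)]
```

A no-code tool performs these same three steps; the difference is that the extract connector, mapping rules, and destination are configured visually instead of coded.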
**Low-Code ETL Tools**

These platforms are the sweet spot between power and accessibility. They give users the flexibility to build more complex, customized data workflows while still keeping things faster and simpler than writing full code from scratch. You might still use SQL, Python snippets, or scripting here and there, but most of the heavy lifting is done through visual interfaces and prebuilt modules.

Low-code ETL tools are ideal for growing teams, data engineers, or tech-savvy analysts who need to handle:

- Bigger volumes.
- Trickier logic.
- Hybrid (cloud + on-prem) data setups.

If no-code tools feel a bit too limiting, but full-blown coding feels like overkill, low-code ETL is probably your best bet.

**What is a No-Code Data Pipeline?**

Now, let's talk about the automated workflow that moves data from one system to another without writing code. Consider it a smart assembly line that:

- Grabs data from tools like Salesforce or Shopify,
- Transforms it (if needed),
- Sends it to destinations like a data warehouse, spreadsheet, or dashboard.

The magic is that users can build it all with visual tools: drag, drop, click, done. No developers, no scripts, no late-night debugging. No-code pipelines are a game-changer for non-technical teams who still need reliable, real-time data to make decisions. They help:

- Eliminate manual exports.
- Reduce human error.
- Save hours of repetitive work.

So companies can focus on what actually moves the needle.

**List of the Best No-Code ETL Tools**

As said above, no-code ETL tools make it easy to pull data from apps, databases, or cloud services and send it wherever users need it. They also are:

- Cost-effective. Companies don't need to hire a team of engineers to get clean, connected data.
- User-friendly. Business teams can build and manage pipelines themselves with no technical background needed.
- Fast to deploy. Most platforms let users get up and running in under an hour.
- Secure by design.
Many tools include built-in data masking, field-level encryption, and compliance support for GDPR, HIPAA, and CCPA to help protect sensitive customer info.

Below, we'll walk through the top no-code ETL tools to help each business find the one that fits its workflow, team, and budget.

**Skyvia**

[Skyvia](https://skyvia.com/data-integration) is a universal SaaS (Software as a Service) platform for data-related tasks like integration, backup, synchronization, and API management, all without coding. The service is cloud-based, so it requires no software except a web browser. The platform offers the following tools for various integration scenarios:

- The [Import](https://docs.skyvia.com/data-integration/import) tool loads data from one cloud app, database, or CSV file to another cloud app or database, covering ETL and reverse ETL scenarios.
- [Export](https://docs.skyvia.com/data-integration/export/) extracts data from a cloud app or database and loads it into a CSV file on local or cloud storage (like Dropbox, OneDrive, etc.).
- The [Replication](https://docs.skyvia.com/data-integration/replication) operation creates an exact copy of a cloud app in a database, which corresponds to the ELT scenario, and keeps it up to date automatically.
- [Synchronization](https://docs.skyvia.com/data-integration/synchronization) connects two sources (databases and cloud apps) and synchronizes data in both directions.
- [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) packages suit complex, conditional data pipelines with several sources. Data Flow handles the transformations and movement, while Control Flow manages orchestration logic like loops, conditions, and execution sequences.

Let's have a look at Skyvia's ETL tool based on our evaluation criteria.
| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 10/10 | Skyvia's interface is a hidden gem for both programmers and non-technical users. Existing users appreciate its easy-to-understand interface with a clear structure and drag-and-drop functionality. |
| Data Source and Destination Compatibility | 9/10 | The tool currently works with around 200+ connectors: cloud apps, relational databases, and data warehouses. Additionally, Skyvia offers an [OData connector](https://docs.skyvia.com/connectors/cloud-sources/odata_connections.html) for sources supporting the OData protocol and a REST connector for sources with a REST API. |
| Data Transformation Capabilities | 10/10 | Data transformations are possible with multiple mapping settings: map a target column to a source column or an expression; set a target column to a constant; obtain a value for a target column from a target or source object through lookup; when loading several source objects, specify the relation between them so the system builds the corresponding relation for target data (available only for foreign key fields); zip file mapping for importing a CSV file together with a .zip file of binary data. |
| Scalability | 10/10 | Skyvia is suitable for any business size and type. There are plans with basic integration scenarios and simple data mapping settings, as well as plans with advanced options and a practically unlimited number of data records for processing and transfer. |
| Security and Compliance | 10/10 | Skyvia resides on Microsoft Azure cloud servers kept in secure locations and is compliant with industry standards such as SOC 2 and ISO 27001. It encrypts users' data with the AES 256-bit standard and complies with GDPR regulations and HIPAA requirements. |
| Pricing | 10/10 | The service offers a [free plan](https://skyvia.com/pricing/) for those looking for a simple ETL tool with a basic set of features. Businesses that want more data integration options and mapping functions can choose the Standard, Professional, or Enterprise plans. [See pricing details here](https://skyvia.com/pricing/). |
| Summary | 9.9/10 | Skyvia could be an ideal no-code ETL tool for everyone; waste no time exploring it! |

**Integrate.io**

[Integrate.io](https://www.integrate.io/) provides various solutions, including a no-code ETL and reverse ETL tool. It allows companies to integrate data and prepare it for further use in statistical analysis, data actualization, and other processes. With built-in connectors for cloud apps, databases, and storage platforms, it simplifies even complex data workflows. Its drag-and-drop interface and scheduling features make it easy to automate data movement without coding. Let's have a look at Integrate.io's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 9/10 | The tool is easy to navigate; tasks are created within five main tabs: Pipeline, Destination, Source, Observability, and Settings. Each tab's name indicates which kind of operation a user can perform there. |
| Data Source and Destination Compatibility | 8/10 | The Integrate.io ETL tool supports around 200+ connectors. |
| Data Transformation Capabilities | 10/10 | Many options for data transformations, the most popular being: Aggregate (groups input data by a certain criterion), Assert (checks data for compliance with a set of conditions), Clone (splits a dataflow into several), and Filter (checks for duplicate records). |
| Scalability | 10/10 | Integrate.io offers unlimited integration packages in the workflow, so it suits any business. |
| Security and Compliance | 10/10 | During data integration, particular fields can be encrypted using AWS Key Management Service (KMS). The solution also allows setting up a reverse SSH channel on Windows to connect to remote databases secured with extra protection options. |
| Pricing | 8/10 | Pricing starts at $15,000/year for the simplest plan and may vary depending on the particular company's requirements. |
| Summary | 9.2/10 | |

**Keboola**

[Keboola](https://www.keboola.com/) is another ETL solution for data integration without coding skills, with low-code options for those willing to add more customizability to data processing. The service helps non-technical users easily collect data from various sources, schedule data collection, and load it to the required destination. It also offers built-in transformation capabilities, version control, and collaboration features that are especially useful for teams. With support for both structured workflows and custom scripts, the platform scales well from small data tasks to enterprise-level pipelines. Let's have a look at Keboola's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 10/10 | An intuitive UI makes it easy to navigate and perform any kind of data-related operation. |
| Data Source and Destination Compatibility | 9/10 | More than 200 connectors are already implemented in Keboola, so users can easily connect to CRMs, SaaS platforms, cloud apps, databases, and a variety of other tools. However, some of these connectors require manual configuration, and others are community-built, which can vary in stability and support. Keboola does support custom connector development, which boosts flexibility but may not be considered "out-of-the-box" by all users. |
| Data Transformation Capabilities | 7/10 | While Keboola generally sticks to a no-code approach, running transformations requires the SQL, Python, or R programming language. |
| Scalability | 10/10 | Keboola suits any business type and size. |
| Security and Compliance | 9/10 | The tool implements modern web security standards and complies with data protection regulations. |
| Pricing | 10/10 | Keboola offers a free plan with one hour of computational capacity per month; beyond that, charges depend on the extra computational capacity consumed. |
| Summary | 9.3/10 | |

**Fivetran**

[Fivetran](https://www.fivetran.com/) focuses on real-time data replication and transfer between sources. This helps increase the operational capacity of a particular team or the entire business and strengthens the basis for analytics. It offers fully managed connectors that automatically adapt to schema changes, reducing maintenance overhead. With built-in transformations and robust destination support, the solution ensures that data is analytics-ready as soon as it lands. It's especially valuable for companies looking to scale their data infrastructure without spending on engineering resources. Let's have a look at Fivetran's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 9/10 | It's possible to build ETL pipelines and set up everything for data movement in minutes without coding knowledge. |
| Data Source and Destination Compatibility | 9/10 | Fivetran supports over 700 data sources. |
| Data Transformation Capabilities | 8/10 | Fivetran emphasizes ELT processes and integrates with dbt (data build tool) for transformations, which requires a basic understanding of SQL. This approach offers flexibility but may require some technical proficiency for complex transformations. |
| Scalability | 10/10 | Both small businesses and large enterprises would find Fivetran useful in their data stack. |
| Security and Compliance | 10/10 | Fivetran meets all the requirements for digital data privacy and security standards. |
| Pricing | 8/10 | There is a free plan for those working with small amounts of data. The company also offers Starter, Standard, and Enterprise plans. |
| Summary | 9.2/10 | |

**Stitch**

To move data from any source to a data warehouse, there's another no-code ETL tool called [Stitch](https://www.stitchdata.com/). It allows setting up an ETL data pipeline in minutes and applying the needed transformation settings. It supports a wide range of sources and integrates with major data warehouses like Snowflake, BigQuery, and Redshift. The platform is designed for analysts and data teams who want fast setup without ongoing maintenance. While it focuses on simplicity, it also allows custom transformations through integration with tools like dbt. Let's have a look at Stitch's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 9/10 | Like the other tools above, Stitch follows a no-code concept for designing ETL data pipelines. Everything is done in a visual wizard, where an integration (source) and destination (target) are selected. The other pipeline parameters don't require any coding knowledge either. |
| Data Source and Destination Compatibility | 9/10 | Stitch supports around 160+ SaaS apps, online services, and databases. |
| Data Transformation Capabilities | 7/10 | Stitch's approach is to keep data as close to its original format as possible; it performs only the transformations needed to ensure data is compatible with the destination. |
| Scalability | 10/10 | Stitch suits small businesses as well as fast-growing organizations. |
| Security and Compliance | 10/10 | The service runs on AWS, which makes it compliant with the ISO/IEC 27001 security standard. It also offers SSH tunnels and IP whitelisting for sources that support such functions, and complies with GDPR, HIPAA, and Privacy Shield regulations, so users' data is stored and processed safely. |
| Pricing | 8/10 | Pricing begins at $100/month for the Standard plan, which includes up to 5 million rows per month. The Advanced plan is $1,250/month and the Premium plan $2,500/month, both billed annually. While the entry-level pricing is competitive, costs can escalate with higher data volumes, which may be a consideration for larger organizations. |
| Summary | 9/10 | |

**Matillion**

Whether you need a single source of truth that keeps data in one place or data prepared for business analysis, [Matillion](https://www.matillion.com/) comes in handy. It also makes data ready for consumption in business apps and helps get insights out of it. With deep integration into cloud data warehouses like Snowflake, BigQuery, and Redshift, it streamlines both data ingestion and transformation at scale. Its flexible, low-code environment supports powerful SQL-based workflows while offering a visual interface for faster pipeline development. Let's have a look at Matillion's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 9/10 | Matillion offers an intuitive interface that allows users to set up data pipelines efficiently. The platform's visual, low-code approach enables both technical and non-technical users to design, deploy, and manage ETL processes with minimal effort. |
| Data Source and Destination Compatibility | 9/10 | Over 100 pre-built connectors for popular data sources, plus an option to create a custom connector using a REST API. |
| Data Transformation Capabilities | 9/10 | A comprehensive set of pre-built transformation components, such as Aggregate, Pivot, Rank, and Distinct, supports complex data manipulation without coding. For advanced customization, users can implement custom scripts, balancing no-code and low-code approaches. |
| Scalability | 10/10 | There are no restrictions on the size or industry of companies willing to add Matillion to their data stack. |
| Security and Compliance | 9/10 | Matillion operates on secure cloud platforms and adheres to industry-standard security practices, offering features such as role-based access control and audit logging. However, specific compliance certifications (e.g., ISO/IEC 27001) are not explicitly mentioned in the available sources. |
| Pricing | 8/10 | A pay-as-you-go pricing model using Matillion Credits, with costs starting at $2.00 per credit. This flexible approach lets businesses pay for resources based on their current needs; however, the initial monthly pricing begins at $1,000, which may be a consideration for smaller organizations. |
| Summary | 9/10 | |

**Osmos**

[Osmos](https://www.osmos.io/) automates data ingestion and processing with no-code data pipelines. All you need to do is set up the source and target applications, define data mapping transformations, and set parameters for data cleaning. Its user-friendly interface allows business users to upload, clean, and transform data without involving engineering teams. The platform also includes AI-powered data mapping and anomaly detection to reduce errors and manual work. It's especially useful for businesses regularly importing messy external data from partners, vendors, or customers. Let's have a look at Osmos's ETL tool based on our evaluation criteria.

| Parameter | Grade | Description |
| --- | --- | --- |
| Ease of Use | 9/10 | Osmos offers a user-friendly, no-code platform that simplifies data ingestion and transformation. Users can set up data pipelines through a visual interface without coding skills. |
| Data Source and Destination Compatibility | 6/10 | A limited number of pre-built connectors for popular data sources and destinations; however, users can create custom connectors using REST APIs to extend compatibility. |
| Data Transformation Capabilities | 8/10 | Osmos performs standard data mapping as well as formula-based transformations and quick fixes for data cleanup. |
| Scalability | 9/10 | Suitable for small teams, rapidly growing businesses, and enterprise-level companies. |
| Security and Compliance | 9/10 | Osmos meets industry-standard security requirements for application architecture, communication, and data encryption, and respects data privacy laws to comply with legal requirements. |
| Pricing | 7/10 | A free Starter plan for small teams with moderate data volumes, charging $0.0010 per record for up to 100,000 records per month. The Scale plan is $500/month plus data costs, with additional features and higher usage limits. While the pricing structure is transparent, costs can accumulate with higher data volumes, which may be a consideration for larger organizations. |
| Summary | 8.2/10 | |

**Comparison of Tools**

Let's sum up everything mentioned about the no-code ETL tools for building data pipelines without programming. All services offer high usability and scalability, which makes them suitable for any organization. The same goes for security and compliance: all platforms offer high standards of data protection, which makes them safe and credible.

| Parameter | Skyvia | Integrate.io | Keboola | Fivetran | Stitch | Matillion | Osmos |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ease of Use | 10/10 | 9/10 | 10/10 | 9/10 | 9/10 | 9/10 | 9/10 |
| Data Source and Destination Compatibility | 9/10 | 8/10 | 9/10 | 9/10 | 9/10 | 9/10 | 6/10 |
| Data Transformation Capabilities | 10/10 | 10/10 | 7/10 | 8/10 | 7/10 | 9/10 | 8/10 |
| Scalability | 10/10 | 10/10 | 10/10 | 10/10 | 10/10 | 10/10 | 9/10 |
| Security and Compliance | 10/10 | 10/10 | 9/10 | 10/10 | 10/10 | 9/10 | 9/10 |
| Pricing | 10/10 | 8/10 | 10/10 | 8/10 | 8/10 | 8/10 | 7/10 |
| Summary | 9.9/10 | 9.2/10 | 9.3/10 | 9.2/10 | 9/10 | 9/10 | 8.2/10 |

**Benefits of No-Code Data Integration**

Such systems make powerful data operations accessible to everyone, so users can move faster, work smarter, and stay focused on what matters.
Let\u2019s look at the advantages this integration may provide. Faster Implementation. Get up and running in hours, not weeks. Drag-and-drop interfaces and prebuilt connectors eliminate setup delays. Reduced IT Dependency. Business teams can build and manage pipelines independently, freeing up IT for more strategic or complex work. Cost Efficiency. Skip expensive dev time. No-code tools automate data movement and transformation with minimal upkeep. Scalability. Easily scale from a few thousand rows to millions. Most platforms handle additional sources, destinations, and workflows as you grow. Improved Data Accuracy. Say goodbye to manual exports and human error. Automated processes deliver consistent, reliable data for reporting and analytics. Built-In Security and Compliance. Many platforms include encryption, access controls, and regulatory compliance out of the box; no extra setup is required. Summary Not every ETL tool offers powerful data transformations and connectivity options, which could be crucial for companies as they would want to connect data sources they use on a daily basis. Based on our subjective evaluation of no-code ETL tools, Skyvia has one of the highest scores for each of the technical parameters as well as for the pricing component. So don\u2019t hesitate to start your journey with Skyvia today! FAQ for No-Code ETL Tools What is no-code ETL? No-code ETL is a data integration approach that lets users extract, transform, and load data between systems using visual tools without writing code. It\u2019s ideal for non-technical users who need to manage data flows quickly and efficiently. What is a no-code data pipeline? A no-code data pipeline is an automated workflow built using drag-and-drop interfaces to move and prepare data from one system to another. It helps teams clean, map, and sync data without relying on developers or custom scripts. What\u2019s the difference between no-code and low-code ETL? 
No-code ETL tools require zero programming; everything is handled through visual interfaces. Low-code ETL tools allow visual design but include optional scripting or SQL for more advanced logic and flexibility.

What are the best no-code ETL tools in 2025?

Top no-code ETL tools include Skyvia, Stitch, Integrate.io, Osmos, and Workato. These platforms offer easy setup, prebuilt connectors, and strong support for cloud apps and databases.

Who should use a no-code ETL tool?

No-code ETL tools are perfect for marketing ops, sales teams, analysts, and SMBs. Basically, anyone who needs to move and clean data but doesn't want to (or can't) code.

Can no-code tools scale with growing data needs?

Yes! Many no-code platforms are built to scale, supporting large volumes, multiple connectors, and complex workflows as your business grows.

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
Top SaaS Integration Platforms: Examples and Importance
By [Aveek Das](https://skyvia.com/blog/author/aveekd/), August 12, 2024

Businesses are always looking for ways to run more smoothly and accomplish more. One tool that helps them achieve better performance is the SaaS integration platform. SaaS stands for Software as a Service; a SaaS integration platform connects different software tools and ensures their integration works without disruption. It handles tasks like transferring information between the integrated tools and automating processes. This article explains what SaaS integration is and how it works. We also examine the benefits of SaaS integration platforms, the important factors that help you choose the right platform, and future trends in these platforms.
Table of Contents

- What is SaaS integration?
- How does SaaS integration work?
- Benefits and Importance of SaaS Integration Platforms
- Top SaaS Integration Platforms
- How to Choose the Right SaaS Integration Platform
- Future Trends in SaaS Integration Platforms
- Final Thoughts

What is SaaS integration?

SaaS integration is about making different apps work well together. SaaS integration platforms link various business tools, much like your smartphone syncs your contacts, emails, and calendars across apps. Say you run an online store that uses separate apps for inventory management, sales analytics, and customer support. A SaaS integration platform links these tools: it automatically updates inventory, analyzes sales data, and alerts support teams when a customer places an order. This connectivity saves time, keeps operations organized, and enhances efficiency across all software tools.

How does SaaS integration work?

SaaS integration connects different software applications so they can exchange data and combine their functionality seamlessly.

- Connecting apps: SaaS integration links various apps, such as CRM, accounting software, and project management tools. These apps may run on different platforms or come from different vendors. Integration usually relies on APIs that allow the applications to communicate with each other.
- Data exchange: Once the integration is in place, apps communicate and share information with each other. For example, customer data entered into the CRM system can automatically be updated in the accounting software. This ensures every team has the latest information without entering it manually.
- Automation: Many SaaS integration platforms provide automation features that remove tasks which previously required manual intervention. Tasks like transferring data between systems or triggering actions based on specific events (like a new customer sign-up) can happen automatically.
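Under the hood, the "connecting apps" and "data exchange" steps usually come down to API calls like the following. This is a minimal hand-rolled sketch of what an integration platform automates for you; the endpoint URLs and field names are hypothetical stand-ins, not any real product's API:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoints -- stand-ins for real CRM and accounting-app APIs.
CRM_API = "https://crm.example.com/api/contacts"
ACCOUNTING_API = "https://accounting.example.com/api/customers"

def to_customer(contact: dict) -> dict:
    """Map a CRM contact record onto the schema the accounting app expects."""
    return {
        "name": contact["full_name"],
        "email": contact["email"],
        "external_id": contact["id"],  # keeps a link back to the CRM record
    }

def sync_new_contacts(since: str) -> int:
    """Copy contacts created in the CRM after `since` into the accounting app."""
    url = CRM_API + "?" + urllib.parse.urlencode({"created_after": since})
    with urllib.request.urlopen(url) as resp:
        contacts = json.load(resp)
    for contact in contacts:
        body = json.dumps(to_customer(contact)).encode()
        req = urllib.request.Request(
            ACCOUNTING_API, data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # raises on HTTP errors, so failures surface
    return len(contacts)
```

An integration platform replaces this kind of glue code with a visual mapping step and handles scheduling, retries, and error reporting for you.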
Benefits and Importance of SaaS Integration Platforms

Integrating a SaaS platform with your apps can provide several benefits, listed below:

- Scalability: SaaS integration platforms can handle massive data volumes. They help businesses streamline their operations and expand their functionality without performance issues.
- Cost efficiency: Most SaaS integration platforms operate on a subscription rather than a traditional license model. They offer various pricing options, allowing businesses to choose a subscription based on their budget, data volume, user access, etc.
- Flexibility: SaaS platforms allow integration with various tools and technology stacks. This flexibility enables businesses to connect their tools of choice and integrate all the data in one place.
- Accessibility: SaaS applications are cloud-based, making them accessible from anywhere with an internet connection. This accessibility fosters collaboration and allows remote work without compromising functionality.
- Security: SaaS integration platforms ensure data protection from external attacks. They often have dedicated security teams that monitor such threats and ensure regulatory compliance.
- Support: SaaS platform teams ensure you have all the support you need for troubleshooting. Many platforms provide detailed documentation and real-time support to help users resolve their issues.
- Usability: The platforms are designed with user-friendly interfaces, making it easy to set up integrations. Many offer low-code or no-code integration, letting non-technical users manage integrations on their own without needing developers.
- Easier Collaboration: SaaS integration platforms let teams share data and collaborate easily, boosting productivity and keeping everyone aligned.
- Unified Data Source: Integrating tools with a SaaS platform centralizes all your data, making it the ultimate source of truth.
You don't need to search for data across different systems.

- Data Accuracy: These platforms help improve data accuracy by reducing manual input errors and ensuring information stays consistent across all integrated systems.

Top SaaS Integration Platforms

Skyvia

[Skyvia](https://skyvia.com/) is a cloud data platform consisting of several integrated products that solve various data-related tasks, including data integration, query, backup, and automation.

Key Features

- Supports bi-directional synchronization, ETL, ELT, reverse ETL, and replication across various data sources.
- Provides automated backup and restore capabilities for cloud data.
- Integrates with 190+ popular cloud applications and databases, such as Salesforce, Dynamics 365, QuickBooks, MySQL, PostgreSQL, etc.
- Allows scheduling of integration tasks and workflows.
- Includes mapping, filtering, and transformation capabilities to ensure data accuracy and consistency.

Pros

- Easy to set up and use with a user-friendly interface.
- Supports a variety of data integration scenarios, including ETL (Extract, Transform, Load) and replication.
- Built-in data mapping and transformation capabilities.
- Offers data backup and restore functionality.
- Provides simple and flexible pricing plans.

Cons

- Advanced features may require higher-tier plans.
- More video tutorials would be useful.

Zapier

[Zapier](https://zapier.com/) is an automation platform that connects with over 7,000 apps to automate workflows. Zapier offers three main products: Zaps, Tables, and Interfaces.

Key Features

- Provides integration with over 7,000 apps across different segments.
- Enables task automation using triggers and actions.
- Offers workflow templates for common integration scenarios.
- For custom workflows, users can use webhooks and API integrations.

Pros

- Provides an extensive library of supported apps and services.
- Supports multi-step workflows (Zaps) and conditional logic.
- Easy-to-use interface with a drag-and-drop editor.
Cons

- Multi-step Zaps and premium apps require higher-tier plans.
- More expensive than most other SaaS integration platforms.
- Limited customization options for complex workflows.

Make

[Make](https://www.make.com/en) (formerly Integromat) is a powerful automation and integration platform that connects various apps and services through visual workflows.

Key Features

- Visual workflow builder with drag-and-drop functionality.
- Conditional logic and branching for workflow automation.
- Support for HTTP requests and JSON data formats.
- Built-in [tools for data mapping](https://skyvia.com/blog/best-data-mapping-tools/), filtering, and transformation.

Pros

- Visual builder with a wide range of modules for creating complex workflows.
- Supports advanced data manipulation and transformation.
- Extensive app integrations, including APIs and webhooks.
- Includes scheduling and error-handling features.

Cons

- Steep learning curve for users new to automation and visual workflow builders.
- Pricing can be high for heavy users or large enterprises.
- Complex workflows may require additional time to set up and optimize.

Workato

[Workato](https://www.workato.com/) is an integration platform offering automation and integration capabilities for cloud and on-premise applications.

Key Features

- Enterprise-grade integrations with cloud and on-premise systems.
- Workflow automation with triggers, actions, and approvals.
- Pre-built connectors and recipes for common integration scenarios.

Pros

- Designed for enterprise needs with robust security and compliance features.
- Supports complex integrations and business process automation.
- Extensive library of connectors for popular enterprise apps.

Cons

- Non-transparent pricing; requires contacting the sales team.
- Steep learning curve for advanced features and customizations.

Boomi

[Boomi](https://boomi.com/) is an [integration platform as a service](https://skyvia.com/blog/what-is-ipaas/) that supports cloud-to-cloud, cloud-to-on-premise, and on-premise-to-on-premise integrations.
Key Features

- Integration platform as a service for cloud and hybrid environments.
- API management and lifecycle management for integration projects.
- Data hub for centralized data management and governance.
- Real-time integration with event-driven architecture support.

Pros

- Scalable for enterprise use with high availability and performance.
- Comprehensive connectivity options with pre-built connectors and APIs.
- Low-code development environment for faster integration and deployment.
- Built-in features for data governance and compliance.

Cons

- Complex pricing structure based on usage and connectors.
- Requires specialized knowledge for setup and customization.
- Limited support for niche applications compared to specialized platforms.

How to Choose the Right SaaS Integration Platform

You need to ensure the platform provides everything your business needs, so consider several critical factors before choosing a SaaS integration platform. Some of the factors are listed below:

- Cost and Value: Consider the overall financial investment, including subscription fees, setup costs, and any additional expenses for features or usage limits. Evaluate these against the platform's capabilities to determine its worth to your business.
- Integration Capabilities: Look into the platform's integration ecosystem and partnerships. Look for a platform that connects with your current tools and prepares your business to integrate with your future requirements.
- Support and Documentation: Assess how the platform provides troubleshooting support, how well it documents its integration processes, and how actively the community engages with the platform. Practical support is crucial for resolving issues promptly and ensuring smooth operations.
- Performance Metrics: Scrutinize reviews and performance metrics to gauge reliability. Look for uptime guarantees and benchmarks that meet your business standards.
- Security and Compliance: Prioritize platforms with robust security measures to protect your data. Ensure they adhere to industry regulations and standards to mitigate potential risks effectively.

By evaluating these factors thoroughly, you can select a SaaS integration platform that supports your business growth and operational efficiency.

Future Trends in SaaS Integration Platforms

- AI and Machine Learning Integration: AI and machine learning capabilities are spreading to many areas, including SaaS integration platforms. These platforms are starting to use AI to automate tasks, predict trends, and make things easier for users. For example, they can automatically analyze data, use chatbots to help customers, and give personalized suggestions based on your needs.
- Increased automation and integration: Imagine you started using the latest tools to help with different business needs. SaaS platforms must continually grow their list of integrations, and these integrations should all provide automation. This means less hands-on work for people, smoother operations, and more time for companies to create new ideas and plans.
- Enhanced customizations: SaaS platforms are getting better at letting each user personalize their software. You might be able to change how things look and work together, or even add new features that fit your needs. This flexibility helps businesses work more efficiently because they can tailor the software to exactly how they do things, making everyone's job easier.
- Event-driven integrations: Imagine getting instant notifications when something important happens, like receiving a message when your package arrives. In software, event-driven integrations enable SaaS platforms to react in real time to events across integrated applications.
For example, when a new customer signs up (the event), the platform can trigger actions such as updating customer data in the CRM and sending a welcome email, ensuring a seamless and immediate response to business events. Event-driven integrations can also be applied to developer tools that ease the software development lifecycle.

Final Thoughts

SaaS integration platforms do wonders for businesses, making all your software tools work together smoothly. They help companies complete tasks faster and collaborate better across teams. They also help you make smarter decisions using the data unified by the integrations. Using these platforms can help your business thrive in today's digital world.

[Aveek Das](https://skyvia.com/blog/author/aveekd/), Senior Data Engineer

Most Popular Salesforce Reporting Tools in 2025
By [Olena Romanchuk](https://skyvia.com/blog/author/olenar/)
December 29, 2023

In today's highly competitive business world, strategic insights are more critical than ever. An excellent report helps businesses succeed by turning data into useful information. Global CRM platforms, such as Salesforce, keep up with trends in meeting reporting needs. At the same time, third-party specialized reporting tools remain popular among organizations. In this article, we talk about reporting in general, review the most popular reporting tools for Salesforce, and share some helpful ideas on effective reporting.

Table of Contents

- Introduction to Salesforce Reporting
- TOP Salesforce Reporting Builders
- Salesforce Reporting Tools: A Feature-by-Feature Comparison
- The Benefits of Using a Salesforce Reporting Tool
- How to Build an Advanced Salesforce Report
- How Skyvia Enhances Your Salesforce Reporting Experience
- Exporting Data to a File
- Creating a Cloud Data Source Copy in the Database
- Getting Real-time Data With Skyvia
- Wrapping Up Salesforce Reporting Insights

Introduction to Salesforce Reporting

When it comes to Salesforce, analysis and visibility are vital for teams. Companies analyze progress, find obstacles, and improve strategies to succeed in business. Reports present Salesforce data in a readable form. They help users analyze and visualize data for further use to achieve global business goals.
Reports assist organizations in using data to make decisions, track accomplishments, and assess company effectiveness effortlessly. Salesforce users' reporting needs depend on the data they work with, the teams they work with, and the goals they face.

- Account and Contact reports help sales teams learn information about customers.
- Administrative reports help users analyze data about tasks and appointments and explore general data about Salesforce users' activities, Salesforce documents, pending approvals, and more.
- Marketing teams use various reports in Salesforce to track marketing indicators, manage leads and opportunities, and evaluate campaign success.
- Sales engagement reports help track sales representatives' efforts and achievements.
- Teams use opportunity reports to retrieve information about opportunities, including owners, accounts, stages, amounts, and more.

Salesforce users often build custom reports for more flexibility. Many tools offer various reporting capabilities; let's review the top five you can consider for your business.

TOP Salesforce Reporting Builders

We have collected the characteristics and advantages of the most popular reporting software to help you pick the best tool to build the most compelling reports for your Salesforce organization.

Native Salesforce Reporting

[Salesforce](https://www.salesforce.com/) offers built-in software for analysis and reporting, letting you keep Salesforce data and reports in one place. Native Salesforce reporting provides a comprehensive and user-friendly solution for analyzing and visualizing data within Salesforce, with seamless integration and real-time access to your data. The key benefits of native Salesforce reporting are the following.

Native Salesforce Reporting Benefits

- Seamless integration with Salesforce objects and real-time access to data.
- Intuitive Salesforce report builder.
- Wide range of report types and customization capabilities.
- Seamless dashboard integration and report sharing.
- Security and audit trails. Salesforce protects sensitive data and ensures transparency in controlling access to and actions with the reports.

Native Salesforce Reporting Cons

- Quite limited advanced reporting capabilities.
- Limited cross-object reporting and external data integration.
- Narrow visualization options.

Tableau for Salesforce Reporting

[Tableau](https://www.tableau.com/) is a data analytics and visualization platform. It offers a wide range of advanced features for your Salesforce data. Tableau integrates seamlessly with Salesforce, enabling users to connect directly to Salesforce data and create rich visualizations without complex data extraction processes. It is user-friendly and comfortable for users of any level of technical expertise.

Tableau Benefits

- Advanced features for data visualization and reporting.
- Sophisticated charts, graphs, and dashboards enable interactive reporting.
- Direct integration with Salesforce.
- Data blending and cross-database joins allow you to analyze data from multiple sources beyond Salesforce.
- Flexibility and customization capabilities. You can tailor any dynamic and interactive report for your Salesforce data.

Tableau Cons

- It may be quite expensive for large organizations.
- Steep learning curve for those who are new to business intelligence and data visualization.
- Limited data transformation capabilities.

Klipfolio for Salesforce Reporting

[Klipfolio](https://www.klipfolio.com/) is a cloud-based dashboard platform designed to empower your data reporting. By leveraging Klipfolio for Salesforce data reporting, organizations benefit from a versatile and user-friendly platform that enhances the visualization, analysis, and sharing of Salesforce data.

Klipfolio Benefits

- Customization capabilities.
It allows you to tailor dashboards with various visualizations, heat maps, scatter plots, and gauge charts unique to your individual reporting needs.

- Flexible data manipulation. You can prepare, transform, and apply calculations to your data for the report.
- Easy to use and integrate with Salesforce and other supported connectors.
- Automated reporting. Klipfolio supports scheduling reports and automatically distributing them to users by email and shared links.

Klipfolio Cons

- Steep learning curve. Users who are new to advanced reporting tools may need some time to get used to it.
- Data integration limitations.

Smartsheet for Salesforce Reporting

[Smartsheet](https://www.smartsheet.com/) is a cloud tool for collaboration, reporting, and management. You can integrate Smartsheet with Salesforce to enhance the data analysis process. Here are some benefits of using Smartsheet for Salesforce reporting.

Smartsheet Benefits

- Flexible and customizable reports and dashboards that you can tailor to specific business needs.
- Easy integration with Salesforce and real-time reporting.
- Smartsheet allows multiple users to work on reports simultaneously.
- Forms and data collection. Smartsheet supports forms to collect user or stakeholder data and integrate it with Salesforce.
- Gantt chart functionality, enabling users to visualize project timelines and dependencies.
- Audit trails.

Smartsheet Cons

- Steep learning curve. Advanced reporting and visualization capabilities require time to learn.
- Data transformation limitations.

G-Connector for Salesforce Reporting

[G-Connector](https://www.xappex.com/g-connector/) is a tool that functions as a virtual bridge between Salesforce and Google Sheets, enabling direct bi-directional synchronization between them. It allows you to share Salesforce reports with users outside your organization and ingest data back into Salesforce.

G-Connector Benefits

- Bi-directional data flow.
- Automatic and manual data exchange.
- Intuitive drag-and-drop interface.

G-Connector Cons

- Limited visualization and advanced reporting features.
- Limited flexibility and customization capabilities.

Skyvia for Salesforce Reporting

[Skyvia](https://skyvia.com/) is an all-embracing SaaS platform that provides solutions for various data-related tasks. Skyvia enables querying data from Salesforce or any other supported data source into Excel or Google Sheets reports using [Skyvia Query](https://skyvia.com/query/) tools. A user-friendly query builder allows non-technical people to execute complex SQL queries against multiple Salesforce objects. You can create Salesforce reports in Excel and Google Sheets using the [Skyvia Query Excel Add-in](https://skyvia.com/excel-add-in/) and [Skyvia Query Google Sheets Add-in](https://skyvia.com/google-sheets-addon/). Skyvia ETL, Reverse ETL, and ELT tools can also enhance your reporting experience with other tools; find more details below.

Skyvia Benefits

- Powerful data integration capabilities.
- Connectivity services.
- Cross-object reporting, automatic and manual data exchange.
- Great data transformation opportunities.

Skyvia Cons

- Limited visualization capabilities.

Salesforce Reporting Tools: A Feature-by-Feature Comparison

As you can see, there are various reporting opportunities for Salesforce users. We listed the top tools for Salesforce data analysis, summed up their most important benefits, and compared the features of each tool. See the comparison table below.
| Feature | Tableau | Klipfolio | Smartsheet | Native Salesforce Reporting | G-Connector |
|---|---|---|---|---|---|
| Data Visualization | Yes | Yes | Yes | Yes | No |
| Real-Time Reporting | Yes | Yes | Yes | Yes | No |
| Custom Dashboards | Yes | Yes | Yes | Yes | No |
| Data Integration | Yes | Yes | Yes | Yes | Yes |
| Collaboration Features | Yes | Yes | Yes | Yes | No |
| Automated Reporting | Yes | Yes | Yes | Yes | Yes |
| Mobile Accessibility | Yes | Yes | Yes | Yes | No |
| User-Friendly Interface | Yes | Yes | Yes | Yes | No |
| Advanced Analytics | Yes | No | No | Yes | No |
| Budget-Friendly | No | Yes | Yes | Yes | Yes |
| Security Features | Yes | Yes | Yes | Yes | Yes |
| Scalability | Yes | Yes | Yes | Yes | No |
| Customer Support | Yes | Yes | Yes | Yes | No |

The Benefits of Using a Salesforce Reporting Tool

Users face various challenges when preparing reports. The data may originate from multiple sources. Data must always be relevant, fresh, and available in real time. It is crucial to be able to customize a report and transform data if needed. Report visibility and appearance also play an important role. And last but not least are data security and compliance with safety standards. The ability to meet all these challenges forms the list of features that make Salesforce reporting tools valuable.

- Data aggregation and integration. The ability to consolidate data of different structures from multiple sources may solve the problem of manual integrations and transformations.
- Customizable reporting allows users to tailor reports for specific business needs.
- Real-time insights. A report that delivers real-time data eases the pain of constantly keeping data up to date.
- Advanced visualization does all the work of transforming raw data into meaningful insights.
- Automated reporting and scheduling help organize report delivery.
- Audit trails and other security features help keep data safe.
- User-friendly interface. A comfortable Salesforce report builder saves time on learning and exploring the reporting or analytics tool.

How to Build an Advanced Salesforce Report

Reporting is a process.
It requires more than just a user-friendly tool. Although an effective reporting tool is essential, it is not enough for an excellent Salesforce report. Below, we outline ten steps to make your report stand out.

STEP 1. Clarify Your Objectives
A good plan is a significant part of success. Knowing the data and desired results makes building a report easier.

STEP 2. Simplify Your Data Sources
Having diverse data sources and varied data structures is wonderful and expands the possibilities for your Salesforce report. But more doesn't mean better: an overloaded report risks losing meaning. Accurately define the major data sources and fully understand what exact data you need from them.

STEP 3. Establish Key Performance Indicators
KPIs reflect the achievements of specific processes, teams, and functions. Determine the measurements that reflect performance most accurately.

STEP 4. Select Appropriate Data Visuals
The design of your Salesforce report determines how the audience perceives it. It has to be clear and transparent. Accurate data, combined with precise design, makes your report tell a story.

STEP 5. Delve into Detailed Metrics
Observing long-term goals and overall perspectives is helpful for strategic insights. However, you should also pay attention to short-term monitoring and specific details. This helps you detect performance bottlenecks and understand your strengths and weaknesses.

STEP 6. Utilize Multiple Dashboards
Salesforce produces all-embracing data. Sometimes it is helpful to split your data into multiple dashboards, or to use several dashboards to display the same data from different perspectives.

STEP 7. Create a Visual Narrative
Use logic in your data and visual illustrations. Your report must be consistent.
All the points, goals, KPIs, and dashboards in your Salesforce report should flow logically and lead to a reasonable conclusion.

STEP 8. Provide Broad Data Accessibility
Enhance the report with flexibility. Divide dashboards by audience role and make the report available online 24/7. Ensure your Salesforce report works on different devices.

STEP 9. Consider Both Immediate and Future Implications
Include historical data and future prognosis in your report along with real-time analytics.

STEP 10. Monitor, Evaluate, Tweak, and Iterate
Track the relevance of your report and evaluate its effectiveness. It is helpful to occasionally ask your report audience for feedback and make adjustments according to the results. This way, your Salesforce report will be continuously improved.

How Skyvia Enhances Your Salesforce Reporting Experience

It seems like we have covered all the possible subtleties of the reporting topic. Yet, there are more ways to enrich reporting in Salesforce. To get data from sources not supported by the mentioned reporting tools, Skyvia is here for you. [Skyvia](https://skyvia.com/) is a cloud platform that performs various data-related tasks. Haven't heard about Skyvia yet? Register now and get acquainted with its features.

Most reporting tools support various data sources like files, databases, storage services, and more. But when it comes to cloud platforms and tools, their list is quite limited. For instance, Tableau does not support a connection to Stripe. However, Skyvia can help you [connect any data source with Tableau](https://skyvia.com/data-integration/tableau) in several ways. Skyvia supports nearly 200 tools to connect to. Below, we describe how to get data from a source not supported by popular Salesforce analytics tools using Skyvia capabilities. You have several options to do this.
With Skyvia, you can export data from a cloud source to a file and then ingest it with the reporting tool. Or you can create a copy of the cloud data source in a database or a cloud data warehouse, like Redshift or Snowflake. This can be useful even for data sources that can be connected directly, like Salesforce, because databases, and especially data warehouses, provide much higher performance and flexibility for data analysis. Another way is to create an OData API layer for your data source and connect reporting tools to your data via OData. Skyvia's OData endpoints work directly with the source data and support many OData protocol features. To start leveraging Skyvia, you need an active Skyvia account. Register now and go for an easy ETL, ELT, and Reverse ETL journey.

Exporting Data to a File

As most reporting tools support file data sources, Skyvia allows exporting data from a cloud source to a file and putting this file into a storage service. This option is the easiest way to create a report from any cloud data source, even if your analytics tool doesn't support it.

1. Go to [Skyvia](https://app.skyvia.com/).
2. Click +NEW and select Export to create a new Export integration.
3. Select the source. Select an existing connection or create a new one. You can do it directly in the integration editor or create it in advance by clicking +NEW.
4. Set the Target Type. Select CSV Download Manually to get the file manually or CSV To Storage Device to save the file in a storage service. Skyvia supports multiple [storage services](https://skyvia.com/connectors/#storage).
5. Add the export task by clicking Add new. Export a separate source object or query the data with a custom SQL command in the Advanced Task Mode.
6. Name the file and adjust the fields' order and names.
7. Save the integration and run it.

When the run is completed, you get the new file in your storage, ready to be processed by your reporting tool.
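Once the exported CSV lands in storage, any tool or script that reads CSV files can consume it. As a quick illustration, here is a sanity check you might run on an export before wiring it into a reporting tool; the object name, columns, and sample rows are hypothetical:

```python
import csv
import io

# Hypothetical sample of an exported Salesforce Opportunity CSV.
# In practice you would open the downloaded file instead of this in-memory text.
exported = io.StringIO(
    "Name,StageName,Amount\n"
    "Acme Renewal,Closed Won,12000\n"
    "Globex Upsell,Negotiation,8000\n"
)

# DictReader uses the header row as keys, so rows come back as dicts.
rows = list(csv.DictReader(exported))
total = sum(float(r["Amount"]) for r in rows)  # quick total as an export check
```

A check like this (row count, column names, a summed measure) catches truncated or mis-mapped exports before they reach a dashboard.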
Creating a Cloud Data Source Copy in a Database

If working with files is not an option for your business needs, Skyvia offers another approach to accessing data in unsupported sources. Reporting tools support various databases and data warehouses, so another Skyvia [data integration](https://skyvia.com/data-integration/) tool, [Replication](https://skyvia.com/data-integration/replication), creates a copy of your cloud source data in a [database](https://skyvia.com/connectors/#db) or a data [warehouse](https://skyvia.com/connectors/#dwh). You can access this database directly from your reporting tool. To replicate the cloud data to a database, perform the following steps.

1. Go to [Skyvia](https://app.skyvia.com/).
2. Click +NEW and select Replication.
3. Select the cloud app as a Source and a database or a data warehouse as a Target. You can create connections in advance or directly in the integration editor.
4. Select the objects you want to replicate.
5. Save the integration and run it.

As a result, your cloud app data is available to your reporting tool in your database. Moreover, Skyvia can keep the database up to date by automatically applying updates from the source.

Getting Real-time Data With Skyvia

Another alternative for connecting to an unsupported data source with Skyvia is [OData](https://skyvia.com/solutions/odata-solutions). Skyvia can provide real-time access to almost any cloud data source via OData, and popular reporting tools, such as Tableau, support direct connections over it. You create an OData endpoint for your cloud app in Skyvia and use this endpoint as a data source in your reporting and analysis tool: connect to the cloud data source in Skyvia, create an OData endpoint, and follow the wizard instructions. After that, connect to this endpoint from the reporting tool and enjoy real-time data access.

Wrapping Up Salesforce Reporting Insights

Good reporting is essential for evaluating business effectiveness.
Salesforce reporting is a complicated process that demands a lot of effort. There is a variety of report types in Salesforce, and there are many tools designed to simplify and optimize the reporting process. The top five tools are the built-in Salesforce reporting tool, Klipfolio, Tableau, Smartsheet, and G-Connector. Every tool has its own features and benefits. Most of them offer visualization, customization, and advanced analytics capabilities, and they all support a direct connection with Salesforce, easing the pain of creating powerful reports.

Besides a good reporting tool that does the calculations and visualizes your data, there is also work for you: we listed ten helpful reporting tips that can help you build an excellent report.

Despite the capabilities of popular Salesforce reporting tools, there is still room for improvement. The mentioned tools may not support some cloud data sources or may offer limited opportunities for data integration and consolidation. There is a solution to these problems. Skyvia provides ETL, ELT, and reverse [ETL solutions](https://skyvia.com/blog/etl-tools/) for various data-related tasks. It can give you access to the cloud apps that you can't reach directly from a reporting tool. With Skyvia, you can export data to a file and ingest this file in your reporting tool. Skyvia Replication lets you get a copy of cloud app data into a database supported by a reporting tool, providing better performance for data analysis. Moreover, with Skyvia, you can have real-time access to any cloud app from your reporting tool using OData. Don't hesitate to [register](https://app.skyvia.com/) in Skyvia and explore various opportunities for data management and reporting.
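As a closing technical note on the OData option above: an OData endpoint is queried like any other OData service over HTTP. The sketch below composes such a query with Python's standard library; the endpoint URL and entity name are hypothetical placeholders, not real Skyvia values.

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used only by the commented-out fetch below

def build_odata_query(endpoint, entity, top=10):
    """Compose an OData query URL that requests the first `top` records of
    `entity` in JSON format, using the standard $top/$format query options."""
    params = urlencode({"$top": top, "$format": "json"}, safe="$")
    return f"{endpoint.rstrip('/')}/{entity}?{params}"

# Hypothetical endpoint and entity set
url = build_odata_query("https://example.com/odata/v4/service", "Accounts", top=5)
print(url)

# A reporting tool (or any HTTP client) would then fetch it, e.g.:
# with urlopen(url) as resp:
#     payload = resp.read()
```

Reporting tools with native OData support (such as Tableau) do the equivalent of this internally, so usually you only paste the endpoint URL into the tool.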
[Olena Romanchuk](https://skyvia.com/blog/author/olenar/)

Olena is a skilled writer with a unique blend of technical and FMCG industry expertise. She began her career at Skyvia as a technical support engineer, where she honed her technical problem-solving skills. Prior to Skyvia, Olena held HR and IT roles in global FMCG giants such as AB InBev, Nestlé, and Philip Morris International, where she developed analytical skills, service-oriented thinking, and excellent communication to create engaging and accessible content. From a diverse and inclusive professional background, Olena excels in breaking down complex concepts and delivering clear, impactful writing tailored to varied audiences.
Best 7 SnapLogic Alternatives

By [Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/) | July 7, 2023

[SnapLogic](https://www.snaplogic.com/) is one of the best data integration solutions for cross-functional scenarios, used in the IT industry, finance & accounting, sales management, etc. It's an iPaaS, offering cloud application integration, API management, and data integration and management. Another impressive benefit is the number of connectors: 700+! But despite being quite useful, the solution is complex, especially for companies that need deep customization. So, what about the competitors? Let's compare the top 7, starting with Skyvia as one of the best alternatives to SnapLogic. Let's see why.
Table of Contents

- Skyvia
- MuleSoft
- Boomi
- Informatica
- Workato
- Jitterbit
- Talend
- More alternatives to SnapLogic

Skyvia

[Skyvia](https://skyvia.com/) is a top-rated [data integration solution](https://skyvia.com/blog/data-integration-tools/) for many business areas, including healthcare, e-commerce, sales, and finance. It covers [ETL, ELT, and reverse ETL](https://skyvia.com/etl-tools-comparison/snaplogic-alternative-skyvia); it is cloud-based (so accessible from anywhere), no-code, and easy to implement, set up, and use even for non-technical staff, with no extra training costs. Pricing is well balanced against the feature set, ranging from a [free plan](https://skyvia.com/pricing/) to an Enterprise model, so the platform fits both startups and large enterprises.

Pros

- The interface is user-friendly and easy to set up and navigate.
- The system integrates various cloud applications, CRMs, databases, and data warehouses.
- Robust data mapping, transformation, and synchronization features are available.
- The platform offers automated scheduling and data backup.

Cons

- The set of connectors could be slightly larger.

Pricing

The [pricing](https://skyvia.com/pricing/) model is attractive: you can try the [free plan](https://skyvia.com/pricing/) or choose a paid edition depending on your organization's requirements. See Skyvia in action and compare its functionality with [SnapLogic](https://skyvia.com/etl-tools-comparison/snaplogic-alternative-skyvia).

Skyvia vs. SnapLogic

| Parameter | Skyvia | SnapLogic |
| --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations |
| Sources | 180+ | 700+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management |

MuleSoft

The common areas of usage of [MuleSoft](https://www.mulesoft.com/) are retail and e-commerce. The system ingests data from ERP, CRM, MDM, and other systems, speeding up e-commerce platforms. MuleSoft's healthcare integration solutions enable seamless data transfer between healthcare and non-healthcare systems, both on-premises and in the cloud. Its financial services solutions help unlock core banking systems and digitize the customer experience. The solution focuses on API management and integration and supports 400+ sources.

Pros

- The cloud-based solution reduces integration complexity and simplifies the process without additional on-premises software and hardware.
- The API-based approach to connectivity packages process logic into components, allowing for greater reusability; in other words, it's fast, productive, and cost-effective.
- The platform communicates well with various systems, including legacy ones.

Cons

- The system is complex and may block non-technical users from setting up, configuring, and maintaining the integration process.
- The solution does not support Android and iPhone/iPad.
- The license may be too expensive for small and mid-scale clients.

Pricing

The [pricing](https://www.mulesoft.com/anypoint-pricing) is flexible and based on subscription models depending on data volumes and the features you choose. A free trial is also available. Let's compare Skyvia, MuleSoft, and SnapLogic.

Skyvia vs. MuleSoft vs. SnapLogic

| Parameter | Skyvia | SnapLogic | MuleSoft |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | API management and integration |
| Sources | 180+ | 700+ | 400+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Complex coding solution with incremental database replication that depends upon manually written SELECT statements |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | Runtime Manager REST API, CloudHub API |

Boomi

[Boomi](https://boomi.com/) is a popular data integration platform in the healthcare, retail, e-commerce, finance, and banking industries. It's an iPaaS, ETL, and workflow automation low-code system providing users with 90+ connectors. The solution allows bi-directional real-time integration with the most common data sources, such as databases, applications, warehouses, or lakes.

Pros

- Boomi optimizes the integration of data and apps across your business, saving time and reducing human error.
- It's a good choice for improving data quality and accuracy.
- Boomi's automation features help eliminate data silos.

Cons

- Customization options are limited, so complex integrations may be hard to tailor to your needs.
- The system's dependency on the internet connection may be a blocker in case of network issues, and some data, e.g., the audit log, may be missing.

Pricing

The [pricing](https://boomi.com/pricing/) depends on data volume and integration complexity, but you can start with a 30-day free trial. Let's compare Skyvia, Boomi, and SnapLogic.

Skyvia vs. Boomi vs. SnapLogic

| Parameter | Skyvia | SnapLogic | Boomi |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | Data integration, ETL, workflow automation |
| Sources | 180+ | 700+ | 90+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Low-code solution that supports bi-directional integration |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | The Boomi Flow OpenAPI connector for connecting to and performing actions and operations on RESTful APIs |

Informatica

[Informatica](https://www.informatica.com/) is a good choice for businesses that need data aggregation, cleansing, masking, filtering, parsing, ranking, etc. It offers real-time data transformation and a batch mode. The solution supports Apache Velocity transformation scripts to transform hierarchical input data like JSON or XML into JSON, XML, or text output without data aggregation, and provides 180+ connectors.

Pros

- The platform lets users cleanse and improve data from on-premises, cloud-based, and third-party systems, which helps data accuracy and reliability and, as a result, decision-making.
- By selecting the most cost-effective option, you may save computing time and transfer costs with the Optimization Engine.
- Predictive analysis, actionable recommendations, and integrated monitoring with Operational Insights also help increase operational efficiency.

Cons

- Costs may be high for small and medium-scale businesses.
- The tool is not very user-friendly.

Pricing

Informatica [pricing](https://www.informatica.com/products/cloud-integration/pricing.html) uses the IPU and/or Flex IPU consumption model, so you purchase a set according to your needs. Let's compare Skyvia, Informatica, and SnapLogic.

Skyvia vs. Informatica vs. SnapLogic

| Parameter | Skyvia | SnapLogic | Informatica |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | Data ingestion, ETL |
| Sources | 180+ | 700+ | 180+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Varies from low-code to high-code, depending on the product |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | Informatica Developer Tool, REST API, Connector Toolkit |

Workato

[Workato](https://www.workato.com/) is a powerful cloud or on-premises platform for integrating, automating, and orchestrating the complete set of business operations, providing a one-stop shop for a company's automation needs. It's no-code and offers impressive integration capabilities with 1000+ APIs, databases, SaaS applications, and AI solutions. If a connector is missing, users can build their own with the SDK tools.

Pros

- Automating repetitive tasks and integrating disparate systems reduces daily routine and the risk of human error.
- Using trigger events to initiate specific actions optimizes end-to-end business processes and decreases errors.
- Workato tools help remove data silos and enhance data accuracy and consistency.

Cons

- The flip side of the robust integration capabilities is platform complexity, despite the no-code approach; large and complex enterprise integrations can be difficult to handle.
- Customization options for out-of-the-box connectors and pre-built workflows are limited.
- The support team is often slow to respond to users' needs.

Pricing

The [pricing](https://www.workato.com/pricing) models are flexible, depending on various business requirements, so you can select the one that suits you. Let's compare Skyvia, Workato, and SnapLogic.

Skyvia vs. Workato vs. SnapLogic

| Parameter | Skyvia | SnapLogic | Workato |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | Workflow automation across cloud and on-premises apps |
| Sources | 180+ | 700+ | 1000+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Low-code, cloud-based web solution hosted on AWS and Google Cloud |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | Workato Aegis |

Jitterbit

[Jitterbit](https://www.jitterbit.com/) offers its customers a set of tools and capabilities such as iPaaS, API Gateway, and API management. The solution is low-code and oriented toward application integration, so clients may use 120+ automated pre-built data workflow templates between various data systems or create new apps to optimize business processes according to their needs. Business size doesn't matter here; the system fits anyone.

Pros

- You can reduce daily routine and human error by automating repetitive tasks.
- Data stays consistent and up to date thanks to streamlined data communication and sharing between different systems.
- The drag-and-drop editor and pre-built connectors for popular applications allow easy integration and automation for various apps.

Cons

- The customization options and the number of connectors are quite limited.

Pricing

The [pricing](https://www.jitterbit.com/harmony/pricing/) models vary with the number of sources. Let's compare Skyvia, Jitterbit, and SnapLogic.

Skyvia vs. Jitterbit vs. SnapLogic

| Parameter | Skyvia | SnapLogic | Jitterbit |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | Application integration |
| Sources | 180+ | 700+ | 120+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Low-code solution allowing you to reach major enterprise applications, relational databases, flat files, XML, and SaaS/cloud data |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | Jitterbit's Harmony API for building integrations with data, apps, or devices |

Talend

[Talend](https://ua.talend.com/) is a strong data integration market player offering clients 1000+ sources, components, and pre-built templates. Its areas of usage are wide: healthcare, telecommunications, banking, retail, and even governmental organizations. The platform is low-code and focused on data management and integration. The services range from open-source ones that are free to use to Talend Data Fabric, which provides customers with a set of powerful tools such as API Services, Data Inventory, Talend Big Data, and Talend Studio.

Pros

- Talend's support for a broad pool of data sources and formats allows it to provide data ingestion and integration between and among data sources and applications, supporting batch, real-time, and big data.
- Talend Open Studio is a valuable open-source, free solution for businesses with limited budgets that successfully performs simple [data integration and ETL](https://skyvia.com/blog/data-integration-and-etl/) tasks and builds basic data pipelines.

Cons

- Talend is flexible but complex, and it better fits advanced users who are experienced enough to understand what they really need and do.
- Performance and scalability issues may arise when working with large amounts of data. For instance, the solution uses Java 8 to run Spark Jobs and Standard Jobs or the Metadata Repository, including large data distribution, so the system's response may not be as fast as needed.

Pricing

Talend offers a few [pricing](https://ua.talend.com/pricing/) models depending on the client's request. A free trial is available, and, of course, so is the open-source, free version. Let's compare Skyvia, Talend, and SnapLogic.

Skyvia vs. Talend vs. SnapLogic

| Parameter | Skyvia | SnapLogic | Talend |
| --- | --- | --- | --- |
| Focus | Data extraction, ETL, reverse ETL, ELT, data replication, data synchronization, advanced ETL with Data Flow and Control Flow | Cloud, on-premises, and hybrid data operations | Data integration and management platform |
| Sources | 180+ | 700+ | 1000+ |
| How it works | No-code, cloud-based solution with a visual ETL data pipeline designer | ETL, ELT, and reverse ETL solution | Low-code environment supporting all major data warehouses, data lakes, and databases |
| Tools | REST connector for data sources with REST API | SnapLogic Designer for designing pipelines and SnapLogic API Management | Talend Studio and other Talend Data Fabric tools (API Services, Data Inventory, Talend Big Data) |

More alternatives to SnapLogic

| Alternative to SnapLogic | Comparison | Focus | Usage |
| --- | --- | --- | --- |
| Rivery | [SnapLogic vs. Rivery](https://skyvia.com/etl-tools-comparison/snaplogic-vs-rivery) | ELT, reverse ETL, and data ingestion | Custom data integration, cloud data migration, cloud data lake ETL, marketing data management, CRM data management |
| Integrate.io | [SnapLogic vs. Integrate.io](https://skyvia.com/etl-tools-comparison/snaplogic-vs-integrateio) | ETL, ELT, and reverse ETL | Workflow automation, multiple-source data connection |
| [SSIS](https://skyvia.com/blog/difference-between-etl-and-ssis/) of CData | [SnapLogic vs. SSIS](https://skyvia.com/etl-tools-comparison/snaplogic-vs-ssis) | ETL, ELT, and reverse ETL | BI and analytics, data connectivity, data virtualization, low-code connectivity, Salesforce integration, API development, and Snowflake integration |
| AWS Glue | [SnapLogic vs. AWS Glue](https://skyvia.com/etl-tools-comparison/snaplogic-vs-aws-glue) | ETL, ELT, reverse ETL, streaming | Data backup and recovery, cloud migration, content delivery, database migrations, archiving data, etc. |
| Azure Data Factory | [SnapLogic vs. Azure Data Factory](https://skyvia.com/etl-tools-comparison/snaplogic-vs-azure-data-factory) | ETL, ELT, reverse ETL, streaming | Data integration, transformation, migration and synchronization, analytics, and BI |
| Airbyte | [SnapLogic vs. Airbyte](https://skyvia.com/etl-tools-comparison/snaplogic-vs-airbyte) | ELT | Marketing, sales, customer, engineering, finance & ops, and product analytics; database replication |
| Apache Airflow | [SnapLogic vs. Apache Airflow](https://skyvia.com/etl-tools-comparison/snaplogic-vs-apache-airflow) | ETL, ELT, and reverse ETL | ETL processes, data pipeline building and mapping, workflow management and scheduling |
| Alteryx Designer | [SnapLogic vs. Alteryx Designer](https://skyvia.com/etl-tools-comparison/snaplogic-vs-alteryx-designer) | ETL, data prep | Sales operations, pipeline consolidation and reporting, analytics automation, etc. |
| Pentaho Data Integration | [SnapLogic vs. Pentaho Data Integration](https://skyvia.com/etl-tools-comparison/snaplogic-vs-pentaho-data-integration) | ETL, streaming data | ETL for data warehousing; multiple-source data integration scenarios, including structured, semi-structured, and unstructured data |

[Nata Kuznetsova](https://skyvia.com/blog/author/nata-kuznetsova/)

Nata Kuznetsova is a seasoned writer with nearly two decades of experience in technical documentation and user support. With a strong background in IT, she offers valuable insights into data integration, backup solutions, software, and technology trends.
Boost efficiency with SFTP automation

By [Aveek Das](https://skyvia.com/blog/author/aveekd/) | June 28, 2024

Automation is essential for many organizations to operate efficiently. It helps simplify tasks, allocate resources more effectively, and improve data security. It also gives organizations more time and resources to focus on growth and staying competitive. This article explains how SFTP automation works and why it's so helpful. We will also examine the benefits of automating SFTP file transfers, including optimizing workflows, increasing data security, and keeping businesses running smoothly.

Table of Contents

- What is SFTP automation, and why is it important?
- Challenges of manual SFTP processes
- Benefits of automating SFTP
- Ways to automate SFTP
- Automation with Skyvia
- Automation with scripts
- Choosing the right tools and platforms

SFTP stands for Secure File Transfer Protocol. It's a network protocol that allows encrypted file transfers between a client and a server. Data is transferred over a [secure shell (SSH)](https://www.ssh.com/academy/ssh/protocol) connection. Organizations widely use SFTP to exchange sensitive data securely; it protects against unauthorized access and interception.

Key Features of SFTP

- Security: Encrypts both data and commands over SSH connections.
- Authentication: Offers various methods, such as passwords and public key authentication, to verify users' identities.
- File Management: Allows renaming, deleting, and viewing files and directories.
- Monitoring and Logging: Logs the file transfer process for admin control.
- Automation: Enables workflow automation using scripts or tools for better efficiency and less manual work.
- Resume and Compress: Supports resuming interrupted file transfers and file compression to reduce bandwidth usage.

What is SFTP automation, and why is it important?

SFTP automation uses shell scripts or [data automation tools](https://skyvia.com/blog/data-automation-tools/) to streamline file transfers over the SFTP protocol, minimizing human intervention. These workflows help perform the following functions:

- Automate File Transfers: Securely automate file transfers between systems.
- Schedule Tasks: Run file transfers at regular, specified intervals.
- Execute Commands: Rename or delete files and list directories through command execution.
- Monitor and Alert: Monitor the status of file transfers and receive alerts after successful or failed transfers.

Automating SFTP transfers is essential for organizations that deal with sensitive data. It ensures files are sent securely, protecting them from unauthorized access or interception.
This approach guarantees regulatory compliance and enhances trust with clients and stakeholders who rely on data confidentiality.

- Encryption: SFTP encrypts both files and commands during transfer. Even if someone intercepts the data, they cannot read it without the correct decryption key.
- Authentication: SFTP ensures that only authorized users can access and transfer files, using passwords or digital keys. It also verifies the identity of servers to prevent imposters from intercepting data.
- Integrity Verification: SFTP guarantees files arrive exactly as sent. It uses special codes (hashes) to verify that files have not been altered or corrupted during transmission, providing confidence that received files are unchanged from their original state.

Apart from the security features, automating SFTP file transfers reduces human error. It helps optimize resource allocation, which in turn increases productivity, and it also reduces operational costs. To summarize, SFTP automation enhances efficiency and productivity while safeguarding organizational data integrity and confidentiality.

Challenges of manual SFTP processes

Risk of Human Errors

When handling SFTP transfers manually, mistakes like typing errors, selecting the wrong files, or entering incorrect commands can easily occur. These errors not only waste time but can also lead to data being sent to the wrong destination or lost altogether.

Time-Consuming Nature

File transfers can also be time-consuming for large volumes of data. Without automation, someone always needs to monitor these tasks while they are in progress. This dependency can become a liability if the team is unavailable or if knowledge gaps exist within the team.

Inefficiencies in Handling Large Data Volumes

As the organization grows, manual processes become hard to maintain. Handling increased data volumes or more frequent transfers manually can lead to bottlenecks and inefficiencies, hindering organizational growth.
This manual effort becomes a significant drain on team productivity. While managing SFTP manually may suffice for smaller tasks or occasional transfers, it often becomes challenging as the business grows due to issues with efficiency, security, scalability, and compliance. Adopting automated SFTP solutions resolves these concerns, delivering enhanced reliability, flexibility, security, and operational efficiency.

Benefits of automating SFTP

Increased Efficiency and Productivity

Automating SFTP transfers streamlines tasks that previously needed manual handling, such as executing and monitoring transfers. This saves time and resources, enabling teams to focus on more productive tasks.

Enhanced Data Security

Automated SFTP processes apply robust security measures, including encrypted transmissions and secure authentication methods. By eliminating the need to share credentials, they protect against unauthorized access and potential breaches.

Scalability and Flexibility

Automated SFTP solutions can quickly scale to handle increasing data volumes and transfer frequencies. Scheduling multiple automated processes keeps data flowing as volumes grow, allowing organizations to adjust to their needs seamlessly without significant changes or extra manual work. Furthermore, automated SFTP transfers can be scheduled around the clock, so organizations can run data transfers outside regular business hours. This reduces the risk of network overload and minimizes labor costs associated with manual oversight.

Ways to automate SFTP

There are several ways to automate data transfer to and from SFTP.

- Native — scripting and scheduled tasks: Write scripts using shell scripting (Unix/Linux) or batch scripting (Windows) to automate SFTP commands for file transfers. Use cron jobs (Unix/Linux) or Task Scheduler (Windows) to schedule the execution of these scripts at specified intervals.
- Programming: Develop automation scripts in languages with SFTP libraries (e.g., Paramiko for Python, JSch for Java, SSH.NET for .NET/PowerShell).
- Third-party solutions: Use specialized MFT software that offers comprehensive features for automating and managing file transfers, including SFTP, or platforms like Zapier, Skyvia, Make, WinSCP, FileZilla, or cloud service APIs (e.g., AWS S3, Azure Blob Storage) to integrate and automate SFTP transfers as part of larger workflows.

We'll focus on two of these methods:

- Automation with Skyvia
- Automation with scripts

Automation with Skyvia

One modern solution that offers automated data transfers is [Skyvia](https://skyvia.com/). This universal data platform can send data to an SFTP server or extract files from it. You just design the pipeline and set a schedule to get automated integration. Skyvia offers a comprehensive set of [Data Integration](https://skyvia.com/data-integration) features that suit SFTP automation:

- [Import](https://docs.skyvia.com/data-integration/import/) data from files stored on the SFTP server to [180+ supported connectors](https://skyvia.com/connectors).
- [Export](https://docs.skyvia.com/data-integration/export/) data from the preferred app, database, or data warehouse to the SFTP server.
- Construct compound pipelines and apply complex transformations with [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) and [Control Flow](https://docs.skyvia.com/data-integration/control-flow/).

See how to carry out these operations in the example of the [Salesforce to SFTP integration](https://skyvia.com/blog/how-to-connect-salesforce-to-sftp-server/). As you can see, Skyvia offers plenty of options for building automation pipelines with no coding, all within a web browser. What's more, you can try setting up SFTP automation with Skyvia for free.
Automation with scripts

Create Scripts

Another method is to develop a script that automates the file transfer process. Scripting languages such as Bash and PowerShell are suitable for creating these scripts, and programming languages such as Python can do the trick as well; you just need to download and import the relevant packages for the selected language. Ensure scripts include error handling and logging for troubleshooting. Run the script once to verify that SFTP connectivity, directory permissions, authentication mechanisms, file transfers, etc., are working as expected. Here's a sample Bash script for sending data to a remote server. The script assumes that authentication is already set up between the systems by sharing SSH keys. Click [here](https://www.strongdm.com/blog/ssh-passwordless-login) to learn how to configure SSH and SFTP without passwords.

```shell
#!/bin/bash

# Configuration
REMOTE_HOST=""
REMOTE_USER=""
REMOTE_DIR="/path/to/remote/directory"
LOCAL_FILE="/path/to/local/file.txt"
LOG_FILE="/path/to/log/sftp.log"

# Start of SFTP operation
echo "$(date +'%Y-%m-%d %H:%M:%S') - Starting SFTP operation" >> "$LOG_FILE"
echo "$(date +'%Y-%m-%d %H:%M:%S') - Uploading $LOCAL_FILE to $REMOTE_DIR" >> "$LOG_FILE"

# Feed the batch commands to sftp; logging stays out of sftp's stdin so it
# is not misinterpreted as a batch command.
{
  echo "cd \"$REMOTE_DIR\""
  echo "put \"$LOCAL_FILE\""
  echo "bye"
} | sftp -oBatchMode=no -b - "$REMOTE_USER@$REMOTE_HOST" >> "$LOG_FILE" 2>&1

# Check if the SFTP operation was successful
if [ $? -eq 0 ]; then
  echo "$(date +'%Y-%m-%d %H:%M:%S') - File successfully uploaded to $REMOTE_DIR" >> "$LOG_FILE"
else
  echo "$(date +'%Y-%m-%d %H:%M:%S') - Error uploading file to $REMOTE_DIR" >> "$LOG_FILE"
  exit 1
fi

# End of script
echo "$(date +'%Y-%m-%d %H:%M:%S') - SFTP operation completed" >> "$LOG_FILE"
exit 0
```

Schedule Automated Tasks

After the validation is complete, schedule the automation script.
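The error handling and logging recommended above pay off when paired with a small monitoring check. Below is a hedged sketch that scans a transfer log for failed uploads; it assumes the timestamped "... - Error ..." format used by the sample script, and the alert action is a placeholder:

```python
# Hedged sketch: scan an SFTP transfer log for failures. Assumes the
# timestamped "... - Error uploading ..." log format produced by the sample
# script; the alert step is a placeholder for email, Slack, etc.

def find_failures(log_lines):
    """Return the log lines that indicate a failed transfer."""
    return [line for line in log_lines if " - Error" in line]

log_lines = [
    "2024-06-01 02:00:01 - Starting SFTP operation",
    "2024-06-01 02:00:05 - Error uploading file to /path/to/remote/directory",
    "2024-06-01 02:10:00 - File successfully uploaded to /path/to/remote/directory",
]
failures = find_failures(log_lines)
if failures:
    print(f"ALERT: {len(failures)} failed transfer(s) found")  # placeholder alert
```

A check like this can itself be scheduled, so failed transfers surface without anyone reading the log by hand.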
Operating systems provide tools that help schedule such scripts. Windows users may consider the Task Scheduler application, while cron (configured via crontab) works on Linux-based systems and macOS.

Monitor and Maintain Processes

Set up systems to monitor how automated SFTP tasks are running. Ensure logs of every file transfer are stored. Create alerts that notify admins when transfers succeed, fail, or need attention. Regularly check logs and update scripts as necessary to handle changes in how files are transferred.

Choosing the right tools and platforms

Traditionally, tools such as [WinSCP](https://winscp.net/eng/index.php) and [FileZilla](https://filezilla-project.org/index.php) were used for SFTP file transfer and management. However, their automation capabilities are limited and don't match the workloads modern businesses encounter. Scripting partially addresses automation: tools such as Task Scheduler for Windows and cron for Linux or macOS allow creating and scheduling automation tasks. However, this approach requires a strong knowledge of a scripting language or Python and the installation of additional libraries, which demands considerable technical expertise and time. Skyvia simplifies everything by providing a no-code solution for transferring data between an SFTP server and a range of other data sources. It also allows you to set up an integration schedule for automated data flows and minimize human intervention in the process. Like the other SFTP automation tools mentioned here, Skyvia is available for free, so feel free to try it now for setting up SFTP automation.
Aveek Das, Senior Data Engineer

Using Salesforce API in Skyvia: SOAP API vs Bulk API

By [Anna Tereshchenko](https://skyvia.com/blog/author/annat/), August 4, 2020

In this article, you can find a general description of
Salesforce APIs, which APIs you can use in Skyvia and when. You can also find useful information about Skyvia's support of the Salesforce Professional edition.

Salesforce API Overview

As a leader in SaaS CRM solutions, Salesforce provides powerful APIs that allow flexible data management, integration with other systems, and more. In fact, Salesforce provides a number of different APIs for different use cases:

SOAP API – the first introduced synchronous API for creating, retrieving, updating, or deleting records, performing searches, executing SOQL, etc., using the SOAP web services protocol.

REST API – synchronous API for creating, reading, updating, searching, and deleting records, using the REST protocol.

Bulk API – asynchronous REST-based API for managing large sets of data, with large batches processed in the background.

Chatter API – REST API to work with Chatter feeds, users, groups, followers, etc.

Metadata API – SOAP-based asynchronous API to retrieve or modify metadata customization of a Salesforce org.

Streaming API – asynchronous API for near-real-time streams of data based on changes in Salesforce records, using the Bayeux protocol.

Apex APIs, Tooling API, Analytics API, and others.

SOAP API vs Bulk API

Skyvia uses the Salesforce SOAP and Bulk APIs from the list above, so this article considers these two APIs and the differences between them in more detail. Both SOAP API and Bulk API serve the same purpose: retrieving and modifying Salesforce data, i.e., creating, updating, and deleting records. However, there are big differences between them.

Bulk and SOAP API Overview

SOAP API is based on the SOAP protocol, described by a WSDL. It uses the XML format for loaded data and synchronous processing. This API was the first API introduced in Salesforce, and it offers more flexible functionality. It is optimized for real-time client applications that update a few records at a time.
Bulk API uses the REST protocol and allows XML, JSON, and CSV data formats. This API is more limited (for example, some SOQL features, like subqueries, SUM, ROLLUP, or COUNT, are not supported), and it is optimized for loading large volumes of data. Bulk API is asynchronous: it allows sending batches of records to Salesforce, and these batches are then processed in the background.

Batch Size Limits

Being optimized for large data volumes, Bulk API allows larger record batches. While SOAP API retrieves data in batches of up to 2,000 records (or up to 200 records if two or more custom fields of type long text are selected), Bulk API allows batches of up to 10,000 records. Bulk API batches have the same record count limit for both querying and loading data to Salesforce. SOAP API batches, however, are always limited to 200 records when loading data to Salesforce.

Salesforce API Use in Skyvia

Our performance testing showed that using Bulk API for reading data does not provide a noticeable performance gain in our case, so we do not use Bulk API for reading data. Loading data to Salesforce is, however, more efficient with Bulk API than with SOAP API, especially for large data volumes. So, by default, Skyvia uses SOAP API to get data from Salesforce and Bulk API to load data to Salesforce. Skyvia has its own limits when loading data to Salesforce: it limits Bulk API batches to 5,000 records, and even this theoretical limit is rarely reached. When loading data to Salesforce, batch size depends on the number and types of loaded fields and other factors, so batches are usually smaller.
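The batch limits above are easy to picture with a small chunking helper. This is an illustration only, not Skyvia's implementation, and the record payloads are hypothetical:

```python
# Illustrative only: split a record set into batches that respect the limits
# described above (200 records per SOAP API load batch, up to 10,000 records
# per Bulk API batch). The record contents are hypothetical.

SOAP_LOAD_BATCH_SIZE = 200
BULK_BATCH_SIZE = 10_000

def make_batches(records, batch_size):
    """Split records into consecutive batches of at most batch_size items."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

records = [{"LastName": f"Contact {i}"} for i in range(25_000)]
print(len(make_batches(records, SOAP_LOAD_BATCH_SIZE)))  # 125 SOAP load batches
print(len(make_batches(records, BULK_BATCH_SIZE)))       # 3 Bulk API batches
```

The same 25,000 records require 125 round trips over SOAP API but only 3 Bulk API batches, which is why bulk loading wins for large volumes.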
If you are interested in how Skyvia uses Salesforce APIs in detail, you can find more detailed information in [our documentation.](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections/salesforce_api_and_api_calls.html)

Salesforce API Availability in Different Salesforce Editions

Salesforce provides APIs in the following editions: Enterprise, Unlimited, and Developer. API access can also be enabled for the Professional edition for an additional fee by contacting your Salesforce Account Executive. However, even if APIs are not enabled for a Professional Salesforce edition, a Salesforce partner application can use SOAP API or REST API with such an org. Skyvia is a Salesforce partner application and thus can be used even with a Salesforce Professional org without API access enabled. This, however, applies only to SOAP API; Bulk API cannot be used with such an org. To work around this problem, Skyvia allows completely disabling Bulk API use when configuring a Salesforce connection. Thus, Skyvia fully supports Salesforce Professional edition orgs; you only need to disable Bulk API use in the Salesforce connection settings in Skyvia. We hope this article has been of use. In case you have any questions regarding the Salesforce APIs used by Skyvia, [contact](https://skyvia.com/company/contacts) our technical support team, which is always at hand to help you and provide all the necessary information.
Anna Tereshchenko, Technical Writer

What are the Benefits of Data Integration

By [Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/), April 16, 2025

Picture this: A growing e-commerce company struggles to keep up with rising demand.
Their marketing team runs campaigns on one platform, customer data sits in a CRM, and financial records are locked in another system. The result? Every morning, employees spend hours manually transferring information between systems, a process that is prone to errors, slows down decision-making, and creates confusion across departments. As the business grows, the problem intensifies: data silos become more entrenched, and the manual work piles up. Does this sound familiar? If you've ever worked in a company where systems aren't integrated, you know the toll it takes on productivity. The solution to this problem? Data integration. In this article, we'll dive into the top benefits of data integration and explore how businesses, big or small, can transform their operations and improve their bottom line. If you're looking to make smarter decisions and boost productivity, the key might just be data integration. Let's explore how.

Table of Contents

Why Data Integration Is Crucial for Business Success in 2025
Key Benefits of Data Integration
Best Practices for Implementing Data Integration
Key Factors to Consider When Choosing Data Integration Tools
Popular Data Integration Tools for Businesses in 2025 (Skyvia, Talend, Fivetran, Airbyte, Hevo Data, Informatica)
Real-World Case Studies: Data Integration in Action
Conclusion

Why Data Integration Is Crucial for Business Success in 2025

In 2025, data is not the new oil; it's the new currency. Every business, whether a startup or an established enterprise, relies on vast amounts of data to make informed decisions, optimize operations, and deliver personalized customer experiences. But here's the catch: this information is often siloed across various systems, platforms, and departments, making it difficult to gain a unified view. This is where data integration comes in.
Simply put, [data integration](https://skyvia.com/learn/what-is-data-integration) is the process of connecting disparate sources and systems to create a single, cohesive framework. By breaking down silos and ensuring smooth data flow, integration enables businesses to unlock the full potential of their information. For companies looking to stay competitive, having an efficient data ecosystem is no longer optional; it's a necessity. So, what are the benefits of data integration? Let's dive into how it can drive business success.

Key Benefits of Data Integration

Integrated data is no longer just a nice-to-have; it's a necessity. When your systems are seamlessly connected, it unlocks new opportunities for efficiency and a competitive edge. Let's dive into how integration can drive impactful change in your company.

1. Enhanced Decision-Making

When data is siloed across different departments and platforms, making informed decisions becomes a guessing game. You need a unified, real-time view of all your information, whether it's from sales, marketing, customer service, or finance. This consolidated view empowers decision-makers with accurate, up-to-date insights, helping them make quicker and more informed choices. In fact, businesses with well-integrated data systems report significantly improved processes, resulting in more effective strategies and better business outcomes.

2. Streamlined Operations and Increased Efficiency

Data integration eliminates the tedious, error-prone task of manually transferring information between systems. By [automating these processes](https://skyvia.com/learn/what-is-process-automation), businesses can:

streamline operations
reduce human error
free up valuable time

This means your team spends less time on data wrangling and more time on high-value tasks, like strategic planning or customer engagement. As a result, overall operational efficiency improves, leading to cost savings and faster response times.
Integration is not just about improving how you manage information; it fundamentally transforms how your business operates.

3. Better Data Quality and Accuracy

Automation plays a key role in enhancing consistency and reducing human errors. By streamlining the flow of data and enforcing standardized formats, businesses ensure that their data remains accurate and reliable. This improvement leads to a higher level of trust in business insights.

4. Cost Savings and ROI

Eliminating redundant processes and manual data handling translates directly into operational cost savings. By automating data flows, businesses can:

free up valuable resources
cut down on inefficiencies
reduce the risk of errors that could lead to costly mistakes

Moreover, integrated systems allow quicker data-driven decisions, driving better ROI, as businesses can optimize operations and capitalize on new opportunities more effectively.

5. Scalable Solutions for Future Growth

As businesses grow, so do their needs. The beauty of a robust solution is that it scales effortlessly with your business. With centralized data, you can handle increasing volumes and complexities without worrying about the need for constant upgrades. Whether expanding into new markets or adding new sources, integrated systems ensure that your infrastructure can grow alongside your ambitions.

6. Enhanced Customer Insights and Personalization

Unified data provides a clearer, [360-degree view of your customers](https://skyvia.com/learn/single-source-of-true). By integrating information from various touchpoints, businesses gain deeper insights into customer behavior, preferences, and needs. This allows for more targeted marketing, personalized experiences, and stronger customer relationships. The result? A more engaged and loyal customer base that feels understood and valued.

7.
Improved Collaboration Across Departments

Data integration [bridges the gap](https://skyvia.com/learn/data-consolidation) between departments, fostering seamless communication and collaboration. When everyone has access to the same set of data, teams, from sales to marketing, finance to customer support, can work together more efficiently. This interconnectedness leads to faster responses to market changes and a more cohesive organizational structure. In short, it eliminates the barriers that often slow down cross-functional collaboration.

Best Practices for Implementing Data Integration

Evaluate Your Data Sources

Before jumping into the integration process, step back and evaluate your data sources. Understand where your data is coming from and how it's structured. Whether it's CRM systems, marketing platforms, or financial tools, having a clear picture of your data landscape will help you identify potential challenges and gaps in your workflow. This step is crucial for setting the stage for a smooth integration, ensuring that you bring the correct data into your system.

Choose the Right Tool for Your Business

Choosing the proper integration tool is a game-changer. Different businesses have different needs, so what works for one company might not work for another. Whether you need a no-code solution for ease of use or a more robust, customizable platform for complex data tasks, the tool should align with your team's skills, budget, and future goals. Consider scalability, ease of integration, and how well the platform supports your existing systems.

Ensure Data Quality from the Start

Workflows can't be effective without ensuring data quality from the get-go. Garbage in means garbage out. Prioritize data cleansing and validation processes early in the integration to avoid poor decision-making based on inaccurate or incomplete information.
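The cleansing-and-validation step described above can be sketched in a few lines. This is a hedged example; the field names ("email", "amount") and the rules are hypothetical:

```python
# Hedged sketch: validate records before loading them into an integrated
# system. The field names and validation rules are hypothetical examples.

def validate(record):
    """Return a list of problems found in a record (empty list = clean)."""
    errors = []
    email = record.get("email", "")
    if "@" not in email:
        errors.append("invalid email")
    amount = record.get("amount")
    if amount is None or amount < 0:
        errors.append("missing or negative amount")
    return errors

rows = [
    {"email": "ada@example.com", "amount": 120.0},
    {"email": "no-at-sign", "amount": -5},
]
clean = [r for r in rows if not validate(r)]
rejected = [(r, validate(r)) for r in rows if validate(r)]
print(len(clean), len(rejected))  # 1 1
```

Running checks like these before loading keeps bad records out of downstream systems, where they are far more expensive to find and fix.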
The goal is not just to integrate systems, but to integrate clean, reliable data that adds value across your business operations.

Plan for Scalability

Plan from the start by choosing integration tools and processes that can handle increased data volumes and evolving requirements. A scalable integration setup will not only support your business's current needs but also allow you to adapt as new systems and platforms come into play. The more future-proof your integration, the easier it will be to handle growth without disrupting business operations.

Key Factors to Consider When Choosing Data Integration Tools

1. Integration with Existing Tools

When selecting a new service, the first thing to evaluate is how well it integrates with the systems you're already using. Whether it's CRM software, marketing platforms, or ERP systems, seamless integration ensures that your new tool doesn't disrupt existing workflows. Compatibility is key: make sure the tool can pull data from all your platforms and sync it effortlessly without creating additional silos. A tool that doesn't play well with your tech stack could lead to more manual work and data inconsistencies, which defeats the purpose of integrating in the first place.

2. Ease of Use and Customization

Next, think about usability. A tool might have all the features you need, but if your team struggles with its interface or can't customize it to suit your processes, you'll run into problems. Look for a solution that balances ease of use with flexibility. A no-code platform like Skyvia might be ideal for teams without technical expertise, while still providing features for complex integration needs. The right balance will empower your team to manage integrations without constant external support.

3. Scalability and Support for Big Data

Your tool stack should grow as your business does. As you scale, the volume, variety, and complexity of your data will also increase.
Choose a platform built to handle big data without compromising speed or accuracy. Scalability is particularly crucial if you plan to expand your data ecosystem or integrate additional sources in the future. Ensure the tool supports large-scale operations and seamlessly handles larger workloads.

4. Cost and Licensing Models

Cost is, of course, a major factor. But don't just focus on the upfront costs. Consider the licensing model and any additional expenses down the line. Some tools charge based on data volume, while others charge per user or task. Make sure you understand how pricing scales with your needs and whether the tool's total cost fits within your budget. Additionally, consider whether the value and functionality of the tool align with its price tag. Investing in a solution that offers scalability and flexibility can save you from having to switch tools down the line.

Popular Data Integration Tools for Businesses in 2025

Before diving into the details of the most popular tools, let's take a quick look at a comparison table to get an overview of their features and capabilities.

| Platform | G2 Rating | Best For | Pricing |
| --- | --- | --- | --- |
| Skyvia | 4.8 out of 5 | Businesses of any size and any industry | Free plan available. Paid plans start at $79/month. |
| Talend | 4.0 out of 5 | Telecommunication, Government, and Healthcare companies | The cost is discussed with their Sales team based on the four pricing plans. |
| Fivetran | 4.4 out of 5 | Teams focused on replication and ELT, not custom logic or scripting | Free plan available. Starter plan starts at ~$300/month. |
| Airbyte | 4.4 out of 5 | Companies with cloud infrastructure looking for open-source alternatives | The open-source version is free. The cloud version has volume-based pricing. |
| Hevo Data | 4.6 out of 5 | Startups and growing companies | Event volume-based pricing model. Starter plan begins at $239/month. |
| Informatica | 4.3 out of 5 | Insurance, Healthcare, and Education companies | Volume-based pricing model. |

1.
Skyvia

Skyvia is a no-code, cloud-based data integration platform that simplifies [ETL and ELT](https://skyvia.com/blog/elt-vs-etl/) processes, providing over 200 connectors to cloud apps, databases, and file storage platforms. With its intuitive drag-and-drop interface, it makes automation, data migration, and sync tasks easy for both technical and non-technical users. Whether you're looking to automate data pipelines or unify your tech stack, Skyvia offers scalability and flexibility for any business.

Pros

No-code interface, ideal for non-technical users
Supports a wide variety of integrations
Flexible pricing and robust scheduling options

Cons

Lacks support for real-time data syncing
No on-premise deployment option

Reviews

G2 rating: 4.8 out of 5.0

Best for

Skyvia is an intuitive, no-code platform that fits businesses of all sizes across various industries. With flexible subscription plans, it offers powerful data integration capabilities without requiring complex setups or heavy IT resources.

Key Benefits of Skyvia

Ease of Use: Skyvia's browser-based interface ensures that no extra installations are needed, allowing teams to get started quickly. With helpful tips and in-app guidance, users can easily navigate the platform.

No Coding: Whether you're creating complex integration scenarios or building data pipelines, Skyvia simplifies the process with its user-friendly visual wizard. No programming knowledge is necessary, which makes it accessible to non-technical users while still offering powerful functionality.

Connectivity: Over 200 pre-built connectors enable seamless integration across a wide range of data sources, including databases, cloud applications, storage systems, and data warehouses.

Error Logging: When issues arise during integration, detailed error descriptions help users quickly identify and resolve problems.
Freemium Plan: The platform offers a free, fully featured plan, allowing users to try everything without any upfront costs. While the free version includes unlimited access to connectors, there are limits on scheduled integrations and data processing volumes.

Scalable Pricing: Skyvia's volume- and feature-based pricing ensures that businesses only pay for what they need, with the flexibility to scale as data integration needs grow. Whether you're a startup or a large enterprise, there's a plan to suit your requirements.

2. Talend

Talend is an open-source ETL tool with a strong focus on both cloud and on-premises integration. Built on Java and Eclipse, it supports complex workflows, transforming data with its visual drag-and-drop UI. While it's highly customizable and works well in big data environments, Talend requires technical expertise, particularly when creating custom integrations.

Pros

Open-source and highly customizable
Strong support for [big data tools](https://skyvia.com/blog/top-big-data-analytics-tools/) like Hadoop
Advanced error handling and logging

Cons

Steep learning curve for beginners
Not beginner-friendly
UI performance can be slow for large data sets

Reviews

G2 rating: 4.0 out of 5.0

Best for

Talend Data Integration is particularly valuable for companies in industries like Telecommunications, Government, Healthcare, and Finance. In Healthcare, it helps systematize patient data from clinical trials for accurate analysis. For Government organizations, Talend ensures compliance with GDPR while managing data from various public apps. It's an ideal solution for any industry needing efficient, scalable data integration and regulatory compliance.

3. Fivetran

Fivetran is a fully managed, cloud-native ELT platform designed for high-volume data replication. It automates data extraction from over 500 sources, including SaaS tools and databases, and loads it directly into your data warehouse.
Fivetran's "set-it-and-forget-it" approach makes it ideal for teams focused on scaling with minimal maintenance.

Pros

Zero-maintenance setup with automatic schema updates
Fast setup and high scalability
Optimized for ELT workflows

Cons

Lacks pre-load transformations, only post-load
Expensive at scale
Limited custom pipeline flexibility

Reviews

G2 Rating: 4.4 out of 5

Best for

Ideal for data teams and analysts who need fast, reliable access to raw data for reporting or modeling. It's a great fit for companies using modern data stacks built on tools like dbt and Snowflake. Additionally, it works well for organizations with growing connector needs and limited engineering capacity. Perfect for teams focused on replication and ELT, rather than custom logic or scripting.

4. Airbyte

Airbyte is an open-source ETL tool that focuses on scalability and extensibility, allowing teams to centralize and sync data from various sources to destinations like MySQL, BigQuery, and Redshift. Its modular architecture is ideal for data engineers who prefer building and customizing their own connectors. Airbyte can be deployed locally or in the cloud, offering flexibility without vendor lock-in.

Pros

Open-source with a growing library of pre-built connectors
Easily deployable locally or in your cloud
REST API and CLI support for automation

Cons

Requires some technical knowledge
Some connectors are unstable or still in development
Limited built-in scheduling features

Reviews

G2 Rating: 4.4 out of 5

Best For

Perfect for data teams and engineers who need to build or customize their own connectors. It's also well-suited for companies with cloud infrastructure seeking self-hosted or open-source alternatives. Additionally, it excels in use cases where ELT is preferred over traditional ETL, such as scenarios where transformations are done in the database after the data is loaded.

5.
Hevo Data

Hevo Data is a cloud-native, no-code ETL platform designed to streamline the movement of data in real time from over 150 sources to popular data warehouses like Snowflake, BigQuery, Redshift, and MySQL. The platform prioritizes automation and ease of use, enabling teams to sync and transform data without writing code. With its intuitive UI, real-time streaming capabilities, and built-in data quality checks, Hevo is ideal for operational analytics and business intelligence applications.

Pros

Supports both batch and real-time streaming ETL
Automatic schema mapping and transformation options
Strong monitoring and alerting for pipeline health
Built-in data quality checks for reliable analytics

Cons

Limited flexibility for highly customized transformations
Expensive at scale with increasing data volume
No on-premise deployment; cloud-only

Reviews

G2 Rating: 4.6 out of 5

Best for

Ideal for startups and growing businesses that need fast and reliable data syncing with cloud warehouses. It's also perfect for operations and marketing teams looking to enable self-service analytics with minimal IT involvement. Additionally, data engineers and analysts seeking a robust streaming ETL solution with strong automation and real-time alerting will find it highly beneficial.

6. Informatica

Informatica offers a comprehensive suite of [tools for data management](https://skyvia.com/blog/best-data-management-tools/), focusing on the Intelligent Cloud Data Management platform for building ETL and ELT processes that handle large data volumes. With its pay-as-you-go pricing model, businesses can scale operations efficiently according to their current needs. The platform simplifies the developer's role by automating routine tasks, scheduling integrations, and providing clear visibility into workloads and pipelines.
This solution is well suited for industries like insurance, healthcare, life sciences, and financial services, offering businesses the flexibility to adapt quickly in response to changing demands.

Pros

Scalable for high data volumes
Automated routine tasks and integration scheduling
Strong support for compliance and governance
Advanced encryption for secure data transfer

Cons

High pricing compared to similar tools
Lack of detailed error messages for troubleshooting
Occasional performance issues

Reviews

G2 rating: 4.3 out of 5

Best for

Informatica's Intelligent Cloud Data Management is ideal for industries like Insurance, Healthcare, Life Sciences, and Financial Services. It simplifies digital transformation by unifying the entire value chain in the data cloud, making it particularly valuable for industries that require fast and efficient data integration. For example, in Life Sciences, it enables quick responses to public health emergencies by consolidating research data from multiple sources, providing a complete and actionable overview in real time.

Real-World Case Studies: Data Integration in Action

Let's dive into a practical [use case from NISO](https://skyvia.com/case-studies/niso), which transformed its financial operations by automating data flows with Skyvia. NISO faced a significant challenge: they needed to paint a complete picture of their customers' financial aspects. The process of manually extracting data from MySQL, transferring it to Excel spreadsheets, and then uploading it to QuickBooks Online became increasingly cumbersome as the company grew. This time-consuming process was holding back their ability to scale and automate financial operations effectively. The manual method wasn't cutting it anymore. What NISO needed was a way to streamline and automate the entire data flow. This is where Skyvia came in.
\u201cWe needed a way to consolidate and integrate the information from various data sources and systems, and Skyvia fit that gap very precisely,\u201d said Rodrigo Fritis, CEO at NISO. To dive deeper into how NISO overcame this challenge and achieved its goal of streamlined, automated data integration, watch the full video interview with CEO Rodrigo Fritis. Hear firsthand how Skyvia helped the company to unify its data systems. Conclusion Data integration is no longer just a luxury \u2014 it\u2019s a necessity for staying competitive. By streamlining your data workflows, you can significantly enhance decision-making and improve operational efficiency in your business. The key benefits of data integration \u2014 ranging from better data quality and cost savings to improved collaboration and scalability \u2014 are essential for modern businesses looking to thrive in a data-driven world. In this article, we\u2019ve explored the pros and cons of six different [data integration tools](https://skyvia.com/blog/data-integration-tools/) and platforms, highlighting their strengths and limitations to help you make an informed decision. However, the best way to determine which solution truly meets your needs is by trying it out live. Whether through a demo or a free trial, testing the tool in your environment is the most effective way to see how it fits into the data workflows. If you\u2019re ready to explore data integration further, try Skyvia\u2019s capabilities today! You can start with a free trial and discover how easily it can streamline your data processes. F.A.Q. for the Benefits of Data Integration What is the value of data integration for modern businesses? Data integration streamlines data workflows and increases operational efficiency by providing a unified view of data from multiple sources. This helps businesses make more informed decisions faster and with better accuracy. What is the primary purpose of data integration? 
The primary purpose of data integration is to consolidate data from various sources, making it accessible in a central location for analysis. It eliminates data silos and ensures that data flows seamlessly between systems.
What are the benefits of system integration?
- Improved efficiency: businesses reduce manual work and speed up workflows.
- Better decision-making: integrated data provides a holistic view of business operations.
- Cost savings: redundant data processing is reduced, lowering operational costs.
- Scalability: businesses can easily scale operations as data volumes grow without sacrificing performance.
What are the challenges of data integration?
Common challenges include managing data quality, handling large data volumes, ensuring security and compliance, and integrating data from disparate sources with different formats.
What industries can benefit from data integration?
Data integration is valuable across industries, including finance, healthcare, retail, marketing, and manufacturing. It helps organizations streamline operations, improve data accuracy, and make informed decisions.
[Vlada Maksymiuk](https://skyvia.com/blog/author/vlada/)
With years of experience as a content manager and writer, Vlada leverages expertise in data integration, ETL solutions, and cloud technologies to create content that educates, informs, and engages technical experts and business decision-makers.
" }, { "url": "https://skyvia.com/blog/what-is-ipaas/", "product_name": "Unknown", "content_type": "Blog", "content": "Integration Platform as a Service (IPaaS) By [Sergey Bykov](https://skyvia.com/blog/author/sergeyb/) June 17, 2022 As we wrote [before](https://skyvia.com/blog/what-is-data-integration/), data integration between different sources is a crucial task that most businesses need to solve. Since the task is so widespread, there is a large number of tools for solving it, from coding libraries to ready-to-use tools, both on-premises and cloud-based. This article describes the latter: cloud, or iPaaS, systems. Read it if you want to know what iPaaS is, how iPaaS systems work, what the benefits of using iPaaS are, and who the top iPaaS vendors are. Table of contents iPaaS Meaning and Definition iPaaS vs PaaS vs SaaS How Does iPaaS Work?
iPaaS Features Benefits of Using iPaaS Examples and Use Cases iPaaS Vendors and Providers Skyvia: Best iPaaS for Cloud Integration
iPaaS Meaning and Definition
What is iPaaS? First of all, iPaaS is an acronym: it stands for Integration Platform as a Service. We can define an iPaaS as a platform that automates integration tasks: it loads data between different data sources, or automates workflows in which actions in one application are triggered by events in another. As the *aaS suffix implies, such systems are usually hosted in the cloud, and users control them via a web browser or an API. These integration solutions are usually offered on a subscription basis and often have different pricing plans for different needs.
iPaaS vs PaaS vs SaaS
There are multiple acronyms similar to iPaaS; let's quickly go through them to avoid confusion. PaaS stands for Platform as a Service, without the “integration” word. It usually means a set of cloud services that help developers create, test, and run their applications in the cloud, including deployment, servers, and hosting. It's basically a way to free developers from setting up infrastructure, environments, software stacks, and hosting, and let them focus on development. While PaaS offers services for application creation, deployment, and hosting, iPaaS offers services for cloud integration of different applications and platforms. SaaS, in turn, stands for Software as a Service: ready-to-use applications hosted in the cloud, such as CRMs or accounting tools; these are often the very applications an iPaaS connects.
How Does iPaaS Work?
So how does an iPaaS work, and what does it do? Usually, an iPaaS is a set of services hosted in the cloud. These services run integration tasks, the actual work performed by the iPaaS. The user controls the iPaaS through a web console opened in a web browser and uses this console to define integration tasks.
Most [top iPaaS solutions](https://skyvia.com/blog/best-ipaas-solutions/) work on a no-code basis, allowing users to configure their integrations visually; some require coding and have you upload configuration files with your integration tasks. Most iPaaS solutions provide pre-built connectors to the most widely used applications and databases, so the user only needs to specify the necessary connection parameters or authorize the iPaaS via OAuth. Then the user defines how the iPaaS should connect the corresponding apps or databases. Depending on the integration kind, the user either configures data pipelines that load data from one source to another, or configures the steps of a trigger-action workflow. After the integration is configured, it can be started automatically or manually; it is executed by the iPaaS service and runs in the cloud. Different integration solutions and integration kinds offer different kinds of automation: running integrations periodically, on a more complex schedule, when triggered by some connector, and so on.
iPaaS Features
What distinguishes iPaaS solutions from other integration solutions? Since iPaaS solutions are hosted in the cloud, they place no requirements on local infrastructure: with only a web browser, users can configure and run their integrations. Cloud architecture also ensures that the solution is always available and always on, regardless of whether the user's computer is on or off; all scheduled work gets done, whenever it is scheduled. Finally, cloud architecture allows enterprise iPaaS solutions to work with any data volume.
Benefits of Using iPaaS
Now let's consider the iPaaS benefits for your business. If your business needs data integration (as most modern businesses do), here are some benefits of iPaaS compared to on-premises tools or custom solutions. No up-front costs.
If you choose an iPaaS tool, you don't need to invest in infrastructure, since the iPaaS runs in the cloud and is controlled via a web browser. You also don't need to purchase expensive integration software or fund the development of your own in-house solution. Flexible pricing. Most iPaaS solutions have different pricing plans depending on the user's needs; if you have simple integration tasks with low data volumes, you may even find a tool with a free plan. Scales with your needs. iPaaS tools offer high scalability. They suit all business sizes, from SMB to enterprise, as they can both perform small tasks and load huge enterprise data volumes. As your business grows, you can select pricing plans matching your data volumes and needs. No need for IT experts. Most iPaaS tools don't require developers or IT specialists to configure integration; convenient GUIs allow even business users to perform all the required configuration. Enterprise iPaaS vendors often offer consulting services and configure the integration for their customers.
Examples and Use Cases
Since an iPaaS is an integration solution, iPaaS use cases are basically the same as for other integration solutions. The most popular are: Data archiving. iPaaS solutions can automate exporting data to CSV or loading it from multiple sources to a single database. Data analysis. iPaaS tools can move data from sources not supported by [data analysis tools](https://skyvia.com/blog/top-data-analysis-tools/) directly into supported databases, or even let data analysis tools connect to these sources via the iPaaS. Data migration. iPaaS tools can [migrate your data](https://skyvia.com/blog/data-migration-tools/) from one app or database to another. Automating workflows. Another example of work performed by an iPaaS is app integration.
Some iPaaS solutions can automatically perform actions in one application when certain events happen in another, letting you automate a number of routine workflows.
iPaaS Vendors and Providers
Since iPaaS systems can perform such a wide range of frequently encountered tasks and provide such features and benefits, there are many different iPaaS systems on the market. Some iPaaS companies, like Informatica, Jitterbit, and Boomi, target the enterprise market; others, like Blendo or Lyftron, target SMBs. There are iPaaS tools specialized in ETL and reverse ETL tasks, like Stitch Data, Fivetran, MuleSoft, Grouparoo, and others, and tools better suited to app integration tasks: Zapier, Make (formerly Integromat), IFTTT, and the like. You can read more about [what is ETL](https://skyvia.com/blog/etl-tools/) and [what is reverse ETL](https://skyvia.com/blog/what-is-reverse-etl/) in other articles of our blog. The iPaaS tools list should also include solutions from such well-known companies as Microsoft (Power Automate), Oracle, and IBM. We can also mention the MuleSoft Anypoint Platform, currently owned by Salesforce, and Skyvia, a cloud integration platform from Devart, a company with 24 years of experience in developing data access and data management solutions.
Skyvia: Best iPaaS for Cloud Integration
Skyvia is a cloud-based platform for data integration with different [cloud integration services](https://skyvia.com/) and tools for different integration tasks. It is mostly oriented toward ETL and reverse ETL tasks, but also provides tools for [API-based integration](https://skyvia.com/connect/). Skyvia supports integration of over 80 cloud applications along with the most widely used databases and cloud data warehouses.
It also supports a wide range of data integration scenarios: CSV export/import, data migration, data warehousing, one-way synchronization, bi-directional synchronization, complex integration of more than two sources, seeding a sandbox with production data, and more. Skyvia started as a solution for small and medium businesses, but now suits businesses of any size. It can work with huge data volumes and offers a wide choice of customizable pricing plans, so anyone can find a suitable one. It offers enterprise-grade security and is compliant with GDPR and HIPAA privacy regulations. If you are selecting a [data integration solution](https://skyvia.com/blog/data-integration-tools/) for your business, check out Skyvia; you'll probably find that it suits your needs.
[Sergey Bykov](https://skyvia.com/blog/author/sergeyb/)
Sergey combines years of experience in technical writing with a deep understanding of data integration, cloud platforms, and emerging technologies. Known for making technical subjects approachable, he helps readers navigate complex tools and trends with confidence.
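The execution model described in this article (pre-built connectors on both ends, a pipeline that loads data from source to target, and a schedule that decides when the next run fires) can be sketched in a few lines of Python. This is a conceptual sketch only, not Skyvia's or any vendor's implementation; the `run_pipeline` and `next_run` helpers and the in-memory source and target are hypothetical:

```python
from datetime import datetime, timedelta

def next_run(last_run, interval):
    """Compute when a scheduled integration should fire next."""
    return last_run + interval

def run_pipeline(extract, load):
    """One pipeline run: pull records from the source, push them to the target."""
    records = extract()
    return load(records)

# Hypothetical in-memory stand-ins for a connected source and target.
source = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
target = []

def load_to_target(records):
    target.extend(records)      # write everything to the target store
    return len(records)         # report how many records were loaded

loaded = run_pipeline(lambda: list(source), load_to_target)
```

A real iPaaS layers mapping, retries, and monitoring on top of this loop; the point is only that a scheduled integration reduces to an extract step, a load step, and a next-run computation.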
" }, { "url": "https://skyvia.com/blog/zoho-and-quickbooks-integration/", "product_name": "Unknown", "content_type": "Blog", "content": "Guide to Zoho CRM and QuickBooks Integration By [Liliia Levko](https://skyvia.com/blog/author/liliia-levko/) October 19, 2023 Have you ever asked yourself, “Does Zoho CRM integrate with QuickBooks?” If so, we're here to confirm that it is possible, and in several different ways. In this article, we cover the benefits of Zoho CRM and QuickBooks integration and how to actually implement it via various methods. Table of Contents Method 1: Official Native Zoho CRM and QuickBooks Integration Method 2: Integration using Skyvia Loading Data in Any Direction Mass Data Updates and Deletes Syncing Data Two Ways Comparing the Integration Methods Conclusion The question of Zoho CRM and QuickBooks integration is important because both systems are extremely popular.
Zoho CRM is a top-notch customer relationship management solution, while QuickBooks boasts [80% of the market share](https://www.businessdit.com/how-many-companies-use-quickbooks/) in the accounting software industry. Bringing both systems together helps businesses uncover the hidden power of this mix. Companies can expect financial benefits from this combination, as CRM used by accountants returns [$30 for every dollar](https://www.hubspot.com/products/crm/accounting) spent, according to Salesforce. Integration of Zoho CRM and QuickBooks also helps with the following: Understanding the principal sources of revenue to maximize the company's profits. Tracing how income is associated with sales and marketing activities. Reducing the probability of manual data entry errors, thus eliminating discrepancies between the CRM and accounting software. Improving collaboration between departments. If you're eager to experience all these advantages, check the two methods for Zoho CRM and QuickBooks integration below.
Method 1: Official Native Zoho CRM and QuickBooks Integration
Zoho Corporation offers its own platform, Zoho Flow, for connecting Zoho tools with other SaaS applications. Note that Zoho CRM and QuickBooks Desktop integration might require several preliminary steps in this case: see [migrating QuickBooks Desktop to QuickBooks Online](https://www.youtube.com/watch?v=Lb0mBFBjcmk). To proceed with the native scenario for connecting the CRM and accounting software, take the following steps: Log into the Zoho Flow platform using your Zoho account credentials. Click the + icon to create a new flow. Click Configure in the App section and select Zoho CRM from the list. Select a trigger from the available options to define the conditions for data updates in QuickBooks. Find QuickBooks in the App tab and select the action that best corresponds to the Zoho CRM trigger. Drag the selected action to the builder to place it under the Zoho CRM trigger block. The setup is ready.
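Conceptually, the flow assembled above is event dispatch: the platform watches for a trigger in Zoho CRM and runs the action block placed under it against QuickBooks. A minimal Python sketch of that pattern, assuming hypothetical trigger names and an in-memory list standing in for QuickBooks (an illustration of the idea, not Zoho Flow's API):

```python
# Registry mapping trigger names to the actions that run when they fire.
handlers = {}

def on(trigger):
    """Register an action under a trigger, like placing an action block under it."""
    def register(action):
        handlers.setdefault(trigger, []).append(action)
        return action
    return register

quickbooks_invoices = []   # hypothetical stand-in for the QuickBooks side

@on("zoho_crm.invoice_created")
def create_quickbooks_invoice(event):
    # The QuickBooks action that corresponds to the Zoho CRM trigger.
    quickbooks_invoices.append({"customer": event["customer"], "total": event["total"]})

def fire(trigger, event):
    """Simulate the platform dispatching a change event to its registered actions."""
    for action in handlers.get(trigger, []):
        action(event)

# A change occurs in Zoho CRM; the corresponding change lands in QuickBooks.
fire("zoho_crm.invoice_created", {"customer": "Acme", "total": 120.0})
```

In Zoho Flow itself this wiring is done visually; the registry-and-dispatch shape is simply what the trigger and action blocks amount to.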
Once a change occurs in Zoho CRM, the corresponding change takes place in QuickBooks. Zoho Flow is a great solution for data synchronization between systems; however, it offers a limited number of integration options and doesn't allow exchanging historical data.
Method 2: Integration using Skyvia
To overcome the limitations imposed by the native solution, use Skyvia for [Zoho CRM and QuickBooks Online / Desktop Integration](https://skyvia.com/data-integration/integrate-zoho-crm-quickbooks). [Skyvia](https://skyvia.com/) is a universal cloud-based platform for configuring and executing [ELT, ETL](https://skyvia.com/blog/elt-vs-etl/), and [reverse ETL](https://skyvia.com/learn/what-is-reverse-etl) scenarios. The service meets essential needs of modern businesses such as ease of setup and use, flexibility, and scalability. Skyvia's other advantages include, but are not limited to: User-friendliness. Being a no-code platform, Skyvia requires little to no coding experience. Pre-built connectors. Both Zoho CRM and QuickBooks connectors are already built into Skyvia, so no extra API configuration is required. An extended feature set. The platform provides all the necessary tools for basic scenarios (the Import and Synchronization features) as well as complex ones, such as [Data Flow and Control Flow](https://docs.skyvia.com/data-integration/data-flow/) for building compound [data pipelines](https://skyvia.com/blog/10-best-data-pipeline-tools/). With Skyvia, it's possible to perform various integration operations: Loading data in any direction Mass data updates and deletes Syncing data two ways Let's have a look at each of these scenarios with step-by-step instructions on how to implement them in Skyvia.
Before we proceed, make sure to establish connections to [Zoho CRM](https://docs.skyvia.com/connectors/cloud-sources/zoho_connections.html#establishing-connection) and [QuickBooks](https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html).
Loading Data in Any Direction
Challenge
Businesses often struggle to create a unified system that holds holistic data about everything. That's natural, because different departments use different tools for their daily tasks. There is a way, though, to build a so-called single source of truth on the basis of either Zoho CRM or QuickBooks (it's up to you) by transferring data from one application to the other.
Solution
Making the CRM the single source of truth means sending data from QuickBooks to Zoho CRM. This helps marketing and customer service departments generate invoices instantly and see the financial information of each client. To carry out this procedure with Skyvia, use the [Import component](https://docs.skyvia.com/data-integration/import/) with its data filtering, transformation, and mapping functions. Click +NEW in the top menu. In the Integration column, click Import. Under Source Type, click Data Source and select Zoho CRM from the Connection drop-down list. Under Target, select QuickBooks from the Connection drop-down list. NOTE: In this example, the data is loaded from Zoho CRM to QuickBooks; the same procedure works for loading data from QuickBooks to Zoho CRM. Click Add New on the right to open the Task Editor window. In the Task Editor window, select Invoices (or any other object of your interest) from the Source drop-down list. Click Next Step. In the Target drop-down list, select Invoice (or any other object corresponding to the source). NOTE: To prevent the creation of duplicate records on the target side, use the UPSERT operation. Click Next Step. On the Mapping Definition tab, check whether all required columns are mapped.
Click Schedule to set the timing for the integration. Click Save. Click Create in the tab bar to save the import task.
Mass Data Updates and Deletes
Problem
There are multiple cases where a company's information changes. For instance, when a business's billing address changes, invoices in the CRM need to be updated; this calls for a mass update between QuickBooks and Zoho CRM. Likewise, there might be a need to delete a lot of information at once: if a company decides to stop working with customers from some country X, the corresponding changes should occur in both the CRM and the accounting software.
Solution
Skyvia supports all DML operations, including UPDATE and DELETE. To perform mass update and delete operations, follow the same instructions as for the import scenario above, with one difference at the step where you choose the operation: Select the UPDATE option to update existing records. Select DELETE to delete records matching a certain condition.
Syncing Data Two Ways
Problem
Businesses sometimes need to ensure information consistency across apps.
Solution
When data must be kept aligned between two sources, bi-directional synchronization is the answer, realized with the Synchronization component in Skyvia. NOTE: Bi-directional synchronization works best when one of the apps has no records yet; on the first run, Skyvia copies all data from Zoho CRM to QuickBooks and vice versa, which prevents the creation of unnecessary duplicates. If there are already plenty of records in each tool, apply the import scenario instead by creating two separate import tasks, one in each direction. To set up and execute the synchronization process, proceed with the following steps: Click +NEW in the top menu. In the Integration column, click Synchronization. Under Source, select Zoho CRM. Under Target, select QuickBooks.
Click Add New to open the Task Editor window. In the Task Editor window, select Invoices (or any other object of your interest) from the Source drop-down list and the corresponding object on the target side. Click Next Step. Define the data mapping settings on the Column Definition tab for both directions. Click Save. Click Schedule to set the timing for the integration, and click Save. Click Create in the tab bar to save the synchronization task.
Comparing the Integration Methods
Having explored two principal methods for Zoho CRM integration with QuickBooks, let's recapitulate the essence of each approach.

| | Zoho Flow | Skyvia |
| --- | --- | --- |
| Features | Works great for one-way and two-way synchronization. | Extended feature set for data synchronization, import, replication, and complex integration scenarios. |
| Ease of use | Drag-and-drop UI convenient for interaction. | Visual wizard that requires no coding for building data pipelines and setting up integration parameters. |
| Customization options | Limited customizability. | High customizability. |

Zoho Flow starts integration processes only when a certain trigger is invoked, which makes it perfect for ongoing synchronization but less convenient for loading already existing data from one source to another. Moreover, the platform doesn't offer a monitoring dashboard showing current processes, though it has a history tab where all triggers and actions are displayed. Skyvia has a wider range of integration options and a higher degree of customizability, along with a monitoring tab where integration progress is shown in real time. Consider this service for moving data between sources or to a DWH for analytical purposes and other data-related tasks.
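The UPSERT recommended in the note above, together with the UPDATE and DELETE options from the mass-change scenario, comes down to matching records on a key before writing. A minimal Python sketch of that logic, assuming a hypothetical in-memory invoice list in place of the real QuickBooks target:

```python
def upsert(target, records, key="id"):
    """UPSERT: update the matching target record when the key exists, insert otherwise."""
    index = {row[key]: row for row in target}
    for rec in records:
        if rec[key] in index:
            index[rec[key]].update(rec)   # UPDATE path: no duplicate is created
        else:
            target.append(dict(rec))      # INSERT path: a new record appears
    return target

def delete_where(target, predicate):
    """DELETE: drop records matching a condition (e.g. customers from country X)."""
    return [row for row in target if not predicate(row)]

# Hypothetical in-memory invoices standing in for the QuickBooks side.
invoices = [{"id": 1, "total": 100}]
upsert(invoices, [{"id": 1, "total": 150}, {"id": 2, "total": 80}])
invoices = delete_where(invoices, lambda row: row["total"] < 100)
```

Matching on a key is what prevents duplicates: an incoming record with a known id updates the existing row in place, while an unknown id is inserted as a new record.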
Conclusion
Integrating a CRM with an accounting system generally creates a positive impact on the entire company as well as on its particular departments. Accountants and managers discover the principal sources of revenue, while marketing and sales specialists obtain a comprehensive view of customers' purchasing history, special discounts, and other financial details. Keeping data consistent in both systems via manual data entry is no longer an option; there are effective methods for integrating Zoho CRM and QuickBooks data quickly and without errors. One option is to rely on Zoho Flow for native integration, even though this method imposes certain limitations. Otherwise, consider Skyvia, an easy-to-use cloud-based platform with a rich feature set for integrating data in various ways depending on your needs.
[Liliia Levko](https://skyvia.com/blog/author/liliia-levko/)
With nearly a decade of experience in technical writing, Liliia specializes in ETL/ELT tools and data management and integration. With a keen eye for detail and a passion for simplifying intricate concepts, she excels at translating technical jargon into accessible content for diverse audiences.
" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How can I get my invoice?\n\nAnswer: To find your invoices, please do the following: Go to your Verifone account: [https://secure.2co.com/myaccount/](https://secure.2co.com/myaccount/) Following the link, you will be asked to indicate the email address used while purchasing the subscription. Once the request is submitted, you will receive an access link to your billing account in your Inbox. Follow the access link. In MyAccount, you will see all your invoices under the orders." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How can I delete my Skyvia account?\n\nAnswer: I'm sorry to hear you've decided to delete your Skyvia account. Could you share why you decided to quit? Your feedback will help us create a better service. You can delete your account in the General section of your account settings." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How do I cancel my trial?\n\nAnswer: When you create a Skyvia account, you are automatically placed on a free two-week trial of all our products. This trial is completely free of charge and gives you full access to explore all features without limitations. The trial cannot be manually canceled, but it will automatically expire after 14 days with no charges applied. If you decide to continue using Skyvia after the trial, you're welcome to subscribe to one of our paid plans at any time."
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: What payment methods does your company accept?\n\nAnswer: We accept the following payment methods: Credit Card Wire Transfer Purchase Order Paypal Webmoney ACH" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How to update payment details?\n\nAnswer: To update your payment details, please go to your Verifone account: [https://secure.2co.com/myaccount](https://secure.2co.com/myaccount) Following the link, you will be asked to indicate the email address used while purchasing the subscription. Once the request is submitted, you will receive in your Inbox an access link to your billing account. Log into your Verifone account and go to Payment Methods. The options are: If you wish to link a different credit card, just delete the current one and link the new one. If you'd like to switch to, e.g., Wire Transfer, just delete the current credit card and save the settings. When submitting a new order, you'll be able to choose any available payment method you like." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How can I pay with a Purchase Order?\n\nAnswer: To pay with the Purchase Order method, please follow the steps below: Go to Skyvia → Plans and Subscriptions → Choose the required product → Choose a plan and click Select Plan. Once you are redirected to the order page, fill in all the required information and select the Purchase Order payment method. Please pay attention to the Internal PO field as well. Then click Continue and you'll be redirected to the order verification page to double-check the details provided. If all the information is correct, press the “Place order” button to place the order. What will happen after I place my order?
After placing your order, you will receive an email confirmation and more information regarding the purchased products/services. To finish your order, download the Purchase Order Form (available after placing the order), print it, and sign it. Send it to Verifone by using the upload form or by email at support@2checkout.com. To identify your order, please specify the reference number (available after the order has been made). If you generate your own Purchase Order document, make sure your internal Purchase Order number matches the one on the document provided by Verifone. You can find this number on the invoice Verifone issues." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How can I pay with Wire transfer?\n\nAnswer: To pay with the Wire transfer method, please follow the steps below: Go to Skyvia → Account → Choose the required product → Choose a plan and click Select Plan. Once you are redirected to the order page, fill in all the required information and select the Bank/Wire transfer payment method. Then press the “Place order” button to place the order. After placing your order, you will receive an email confirmation with the bank details for paying for the subscription." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I need to change my billing address, company address, company details.\n\nAnswer: Thank you for your question. A human can provide a better answer to this query. I will forward your question to the Skyvia team, and they will respond as soon as possible." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I need a refund, cancel my order.\n\nAnswer: Thank you for your request. Your Skyvia Customer Success Manager will reach out to you shortly."
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How do I cancel my Skyvia subscription?\n\nAnswer: You can disable auto-renewal on your Account page under the Subscriptions tab by setting the Auto Renewal Status to Off. If auto-renewal is disabled for a product, you will automatically be switched to the Free plan when your subscription for a paid plan expires. Could you shed some light on the reasons for canceling your subscription with Skyvia? Only if it\u2019s not confidential, of course. Was it a pricing concern, missing functionality, or something else? Your input is a top priority, and if it's easier, we can jump on a call for a more detailed chat. I'm here to explore options for you to come back and am all ears for any feedback, requests, or proposals you may have." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: We are tax-exempt. How do I get that removed?\n\nAnswer: Since you are a non-profit organization, there is a possibility to get back money paid as Sales Tax after the purchase is completed. You will need to contact our billing processor (pay@ [2checkout.com](http://2checkout.com) ) and provide them with your Tax Exemption Certificate." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General undefined question\n\nAnswer: Thanks for reaching out! Please provide more details about your request by answering the following short questions: 1. Which Skyvia product are you interested in (Data Integration, Query, Backup, Connect, or Automation)? You can visit [skyvia.com](http://skyvia.com) => Product to find more details about our solutions. 2. What data sources do you plan to use? All supported connectors can be found [here](https://skyvia.com/connectors) . 3. What's your business need, i.e., what goal could Skyvia help you achieve?" 
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Tell me about Connect pricing\n\nAnswer: The pricing model for the Connect product is based on the amount of traffic you load per month and the security settings of endpoints. There are two types of endpoints in terms of security settings: public and private. Public endpoints can be accessed by anyone with the link, while private endpoints can be secured further by IP restrictions or username and password. Please let me know, what business use case do you want to solve with Skyvia Connect?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Tell me about Data Integration pricing\n\nAnswer: Our pricing model for the Data Integration product is mainly based on the number of records you load per month, the frequency of automatic data updates, and the complexity of the mapping and integration scenarios. Automatic updates can run on a particular day of the week, once per day, every hour, or every minute. We do not charge for the number of connectors or users within one subscription. Please let me know, what business use case do you want to solve with Skyvia Data Integration product?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Tell me about Backup pricing\n\nAnswer: The pricing model for the Backup product is based on the storage size you occupy per month (which consists of your backup task's size multiplied by the number of snapshots you plan to keep). The minimum storage is 1GB per month, and the maximum available storage is 1TB. Please note that automatic backup creation is available starting from any paid plan. We do not charge for extra connections or users within one subscription." 
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Do you offer discounts for non-profit organizations?\n\nAnswer: Yes, we offer a 10% discount for non-profit organizations. Please let me know which product you are interested in, and I will help you find the best solution for your needs." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How can I activate a promocode?\n\nAnswer: You need to go to Skyvia \u2192 Plans and Subscriptions \u2192 select the required product \u2192 select plan. Once you are redirected to the payment page, click the checkmark, insert the promo code, and apply the coupon." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How do I activate a trial?\n\nAnswer: To activate a trial, please go to Skyvia \u2192 Plans and Subscriptions \u2192 choose a product \u2192 click Upgrade plan \u2192 select the desired plan and click Try now." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How does your workspace feature work?\n\nAnswer: Below are links to our documentation regarding this feature: \u2022 [Adding and Deleting Workspaces](https://docs.skyvia.com/account-management/adding-and-deleting-workspaces.html) \u2022 [Workspace Roles](https://docs.skyvia.com/account-management/workspace-roles.html) \u2022 [Creating and Deleting Accounts](https://docs.skyvia.com/account-management/creating-and-deleting-accounts.html) \u2022 [Account Management](https://docs.skyvia.com/account-management/index.html) \u2022 [Inviting and Deleting Users from Accounts](https://docs.skyvia.com/account-management/inviting-and-deleting-users-from-accounts.html) Just curious - what business use case do you want to solve with Skyvia?" 
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: What\u2019s your data integration processing time?\n\nAnswer: The processing time for a package depends on multiple factors, most importantly the cloud API\u2019s limits. Additionally, the following factors influence run time: \u2022 type of objects; \u2022 type of fields; \u2022 custom fields; \u2022 type of data; \u2022 relationships between objects. The best way to assess this is through hands-on testing. I suggest creating a Skyvia account and running some tests. Just curious - what business use case do you want to solve with Skyvia?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Oracle Fusion?\n\nAnswer: Currently, we do not have a separate connector for Oracle Fusion. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Oracle Opera PMS?\n\nAnswer: Currently, we do not have a separate connector for Oracle Opera PMS. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General question about data integration\n\nAnswer: It seems you are interested in Skyvia Data Integration as a solution to your business needs. Here is an [overview of Skyvia Data Integration](https://skyvia.com/data-integration/) , including a short demo video [here](https://youtu.be/41JIlV1bP9c) . I hope you find it helpful. To guide you further, please answer the following short questions: 1. What is the approximate number of records you plan to load per month? (1 record = 1 row in a database table) 2. What frequency of data updates do you need? The automatic updates can run once per day, every hour, or every minute. 3. Are you interested in one-way or bi-directional data transfer?" 
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Oracle Hospitality?\n\nAnswer: Currently, we do not have a separate connector for Oracle Hospitality. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Oracle EBS (E-Business Suite)?\n\nAnswer: Currently, we do not have a separate connector for Oracle EBS. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Azure Blob Storage?\n\nAnswer: Currently, we do not have a separate connector for Azure Blob Storage. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Microsoft Dynamics 365 F & O (Finance and Operations)?\n\nAnswer: Currently, we do not have a separate connector for Dynamics 365 Finance and Operations." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to Salesforce Commerce Cloud?\n\nAnswer: Currently, we do not have a separate connector for Salesforce Commerce Cloud. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to NetSuite OpenAir?\n\nAnswer: Currently, we do not have a separate connector for NetSuite OpenAir." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I would like more details on the OData connector for Salesforce.\n\nAnswer: Skyvia Connect ideally suits the Salesforce OData connector (Salesforce Connect). 
With Skyvia Connect, you can create an OData endpoint to a database, data warehouse, or other cloud app, and then paste the OData URL into Salesforce Connect. Afterward, this data in your SFDC org will be shown as \u2018External Objects.\u2019 Is this the use case that brought you to us? If not, please specify what goal you\u2019re trying to achieve." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I connect to SAP ERP?\n\nAnswer: Skyvia does not have a direct connector for SAP ERP. It is on the roadmap, but there is no ETA for its implementation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General question about backup if Skyvia Backup supports the requested application.\n\nAnswer: Here is an [overview of Skyvia Backup](https://docs.skyvia.com/backup/working-with-backups/how-to-create-backup.html) , including a short demo video [here](https://youtu.be/-Q0g4iV4ccQ) ." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General question about Connect/OData/SQL Endpoints\n\nAnswer: Here is an [overview of Skyvia Connect](https://skyvia.com/connect/) . I hope you find it helpful. To guide you further, please answer the following short questions: 1. What volume of data do you plan to expose on a monthly basis? 2. Would you like the option to restrict access to an endpoint by IP address?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I need to have a secured endpoint\n\nAnswer: There are two types of endpoints in terms of security settings: public and private. Public endpoints can be accessed by anyone with the link, while private endpoints are secured with additional measures such as IP restrictions or username and password. Private endpoints are available starting from the Connect Standard plan. The Free and Basic plans allow the creation of public endpoints only. 
Please ensure that the endpoint type you are creating is available within the subscription you are testing. Please let me know, what business use case do you want to solve with Skyvia?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General question about Skyvia Query product\n\nAnswer: It seems you are interested in Skyvia Query as a solution to your business needs. Here is an [overview of Skyvia Query](https://skyvia.com/query/) . I hope you find it helpful. To guide you further, please specify how many queries you plan to run per day." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Connect a cloud platform to a BI tool\n\nAnswer: There are two ways to share data from your cloud platform with your BI tool: - Using the Data Integration product: Your cloud platform data can be replicated to a database or data warehouse, which can then be connected to your BI tool as a source. - Using the Connect product: You can create an endpoint for your data source and connect it to your BI tool. Please let me know, what business use case do you want to solve with this type of data integration?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Compare Data Integration and Connect for BI tools\n\nAnswer: Both solutions can achieve the same result, but they operate differently. The Connect product gives your BI tool access to data through an SQL or OData endpoint, whereas Data Integration's Replication scenario physically writes your cloud app's data to a database or data warehouse. Pricing for both products largely depends on the volume of data you plan to load per month. For Data Integration, data is measured in records (1 record = 1 row in a database table). For the Connect product, data is measured in MB or GB. You can check the pricing details [here](https://skyvia.com/pricing) . 
Please let me know, what business use case do you want to solve with Skyvia?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Compare Data Integration and Backup\n\nAnswer: With the Data Integration product, you can always keep an up-to-date copy of your cloud data stored in a database of your choice. Additionally, it allows data export, import, synchronization between two data sources, and other sophisticated integrations that are useful for data management and building advanced data pipelines. The Backup product allows you to create and store multiple backup snapshots of your cloud data on our Microsoft Azure server. Please let me know, what business use case do you want to solve with Skyvia?" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General info about Skyvia\n\nAnswer: Thank you for your interest in Skyvia. Skyvia is a powerful solution for cloud data integration (ETL) and backup. It is a completely online solution and does not require any local software installed except for a web browser. 
With Skyvia you can: [import](https://docs.skyvia.com/data-integration/import/) data to cloud applications and relational databases; [export](https://docs.skyvia.com/data-integration/export/) your database or cloud data to a CSV file; create a copy of cloud application data in a relational database with [Replication](https://docs.skyvia.com/data-integration/replication/) and maintain this copy in an up-to-date state when necessary; [synchronize](https://docs.skyvia.com/data-integration/synchronization/) your data between cloud sources and relational databases in both directions; [backup](https://docs.skyvia.com/backup/) various cloud data sources daily; use the online [Query](https://docs.skyvia.com/query/index.html) Tool to view and manage your cloud and database data with the power of SQL directly from your web browser; use Skyvia [Connect](https://docs.skyvia.com/connect/) to expose your data from various sources via the OData protocol and make it available in JSON or XML format over the web, or create API endpoints that allow running SQL statements against both databases and cloud applications; use [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) to build integrations with powerful transformations between multiple data sources. In one data flow, you can use multiple data transformations and transfer modified data to multiple targets; use [Automation](https://docs.skyvia.com/automation/) to connect your favorite apps and services and build complex workflows to automate repetitive manual tasks." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: The \"... does not have storage.buckets.get access to bucket default\" error in BigQuery.\n\nAnswer: This issue is related to Google BigQuery limitations. The default bucket is not suitable for operations performed via the BigQuery API. Please try using another bucket for your BigQuery connection." 
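}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can you sketch the staged loading pattern used for MPP data sources?\n\nAnswer: As the MPP answer that follows describes, inserts into warehouses like BigQuery are staged as CSV batches in a bucket and bulk-loaded, while updates and deletes first go through a temporary table and are then applied to the main table with subselects. Below is a minimal local sketch of the update pattern, using sqlite3 as a stand-in for the warehouse; all table and column names are illustrative:

```python
# Staged-update pattern: bulk-load the incoming batch into a temporary table,
# then update the main table with subselects from it. sqlite3 stands in for
# BigQuery here; in the real flow the staging step is a CSV load job from a
# Google Storage bucket rather than executemany.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO main VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

# 1. Stage the incoming batch in a temporary table.
conn.execute("CREATE TEMP TABLE staging (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO staging VALUES (?, ?)", [(1, 11.0), (3, 33.0)])

# 2. Apply the update to the main table with subselects from the staging table.
conn.execute("""
    UPDATE main
    SET amount = (SELECT s.amount FROM staging s WHERE s.id = main.id)
    WHERE id IN (SELECT id FROM staging)
""")
conn.execute("DROP TABLE staging")  # the temporary data is discarded afterward

print(conn.execute("SELECT id, amount FROM main ORDER BY id").fetchall())
# [(1, 11.0), (2, 20.0), (3, 33.0)]
```

Deletes follow the same shape with a DELETE FROM main WHERE id IN (SELECT id FROM staging) statement."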
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How data is inserted into MPP data sources (Amazon Redshift, BigQuery, etc.)?\n\nAnswer: 1. Insert operation. In this case, data is divided into batches and loaded as CSV files to the bucket (Google Storage). Then BigQuery jobs are started. BigQuery has a load job, that allows loading CSV data to BigQuery [cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load](http://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load) . These BigQuery jobs load data from CSV files on Google Storage to a BigQuery table. After this, all the temporary CSV files are deleted. 2. Update or delete operation. Data for these operations are first loaded to a temporary table as described above. Then UPDATE FROM or DELETE FROM commands for the main table are performed (with the subselects from the temporary table)." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: CSV file has invalid content or its structure does not correspond to the mapping defined in the integration\n\nAnswer: This error usually occurs when the structure of the CSV file in the source does not correspond to the structure of the file initially added to the Task, e.g. some columns are missing, added, renamed, reordered, etc. Another possible reason - unexpected hidden characters in the file which may lead to violation of file integrity. Thus, please check your file, for example, via NotePad++ and then reload the file to the Task (re-add from SFTP to refresh columns lists) and check the mapping." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: The \" Connection settings are invalid. Please check your Domain, User, and/or Password values\" error in Jira connection.\n\nAnswer: This is an \"Unauthorized (401)\" error which means that incorrect login or password is used. 
An API token is required for a successful connection. Please see [https://docs.skyvia.com/connectors/cloud-sources/jira_connections.html](https://docs.skyvia.com/connectors/cloud-sources/jira_connections.html) . To connect to your Jira instance, specify your Domain and your email address, generate a token at [https://id.atlassian.com/manage/api-tokens](https://id.atlassian.com/manage/api-tokens) , and use it as the Password." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Access Denied, (403) Forbidden, 501 Server Unavailable error in Mailchimp\n\nAnswer: This error occurs on the MailChimp side when numerous requests are sent to their server. In this case, Akamai CDN is blocking access to MailChimp from particular IPs. The requests from all Skyvia users at a certain point in time are taken into account (not from one particular user). Skyvia uses two static IP addresses. In such cases, it is recommended to change the schedule settings, such as moving the integration's start by 1 or 2 hours, or running the integration manually later. Also, you can try splitting your integration into several ones and running them at different times." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)\n\nAnswer: This can happen because Amazon blocks granting the missing \u2018SUPER\u2019 privilege to any database user except the \u2018rdsadmin\u2019 user owned by Amazon itself. As a workaround, you can set the parameter \u2018log_bin_trust_function_creators\u2019 to 1, which prevents the database from complaining about triggers. Setting this parameter requires the following steps: Open the RDS web console. Open the \u201cParameter Groups\u201d tab. Create a new Parameter Group. 
In the dialog, select the MySQL family compatible with your MySQL database version, give it a name, and confirm. Select the newly created Parameter Group and click \u201cEdit Parameters\u201d. Look for the parameter \u2018log_bin_trust_function_creators\u2019 and set its value to \u20181\u2019. Save the changes. Open the \u201cInstances\u201d tab. Expand your MySQL instance and select the \u201cInstance Action\u201d named \u201cModify\u201d. Select the newly created Parameter Group and enable \u201cApply Immediately\u201d. Click \u201cContinue\u201d and confirm the changes. Finally, open the \u201cInstances\u201d tab again, expand your MySQL instance, and select the \u201cInstance Action\u201d named \u201cReboot\u201d. Also, some useful information regarding this error: [https://dev.mysql.com/doc/refman/5.7/en/stored-programs-logging.html](https://dev.mysql.com/doc/refman/5.7/en/stored-programs-logging.html) [http://wiki.ispirer.com/sqlways/troubleshooting-guide/mysql/import/binary-logging](http://wiki.ispirer.com/sqlways/troubleshooting-guide/mysql/import/binary-logging) [http://stackoverflow.com/questions/11601692/mysql-amazon-rds-error-you-do-not-have-super-priviledges](http://stackoverflow.com/questions/11601692/mysql-amazon-rds-error-you-do-not-have-super-priviledges)" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: \"Invalid private key\" SFTP error\n\nAnswer: Skyvia uses the OpenSSH format for private keys when working with SFTP connections. 
Here are the steps to perform in the PuTTY Key Generator tool ( [https://www.putty.org/](https://www.putty.org/) ) to get the private key file in the correct format: - open PuTTY Key Generator; - open menu File -> Load private key and select your file; - open menu Conversions -> Export OpenSSH key and save the file; - use the newly generated private key file in the Skyvia connection." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: \"No such file\" FTP/SFTP error\n\nAnswer: You may receive the No such file error if you have whitelisted only one IP. In this case, the connection works when tested on the site, but fails at runtime. This happens because we have 3 servers with 3 different IPs. To fix the error, please make sure all of Skyvia's IPs are whitelisted: 40.118.246.204, 13.86.253.112, and 52.190.252.0." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I\u2019m interested in your metadata backup service\n\nAnswer: Skyvia does not support metadata backup. It backs up only the data." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: I'm getting the \"A task was canceled. [Code: RC_03]\" error.\n\nAnswer: This appears to be a temporary server-side error that sometimes occurs and later resolves itself. We are currently investigating it. Please try running the integration again, and if the error persists, share the link to your integration so we can pass it on to our developers for further investigation." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Cached Lookup\n\nAnswer: When the cached lookup is used, the necessary fields from all rows of the lookup object are queried and cached on Skyvia, and the lookup is performed against this cache. Depending on the number of rows in the lookup object and the number of imported rows, this may provide a performance gain." 
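}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can you illustrate how a cached lookup works?\n\nAnswer: Per the Cached Lookup answer above, Skyvia queries the needed fields from all rows of the lookup object once, caches them, and resolves each imported row against the cache instead of querying per row. A minimal sketch of the idea (fetch_all_accounts and all field names are hypothetical, not a Skyvia API):

```python
# Cached lookup sketch: query the lookup object once, build an in-memory map,
# and resolve imported rows against it. fetch_all_accounts is a hypothetical
# stand-in for querying the lookup object (e.g. Salesforce Account).

def fetch_all_accounts():
    return [{"Id": "001A", "Name": "Acme"}, {"Id": "001B", "Name": "Globex"}]

def build_lookup_cache(rows, key_field):
    return {row[key_field]: row for row in rows}

cache = build_lookup_cache(fetch_all_accounts(), "Name")

imported = [{"AccountName": "Acme", "Amount": 100},
            {"AccountName": "Initech", "Amount": 250}]
# .get returns None when no match is found, mirroring the
# "Set null when no match found" lookup option mentioned elsewhere in this FAQ.
resolved = [{**row, "AccountId": (cache.get(row["AccountName"]) or {}).get("Id")}
            for row in imported]
print([r["AccountId"] for r in resolved])  # ['001A', None]
```

With one upfront query and a dictionary lookup per row, the per-row API round-trips are avoided, which is where the performance gain comes from."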
}, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: What support options do you provide? Do you provide phone support?\n\nAnswer: We're always here to help with any technical questions you have! If you need support, please reach out to us by email, live chat, or through our support portal. However, please note that we don't offer support via phone/video calls at this time." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Sync/Import Dynamics CRM and Salesforce Attachments\n\nAnswer: Skyvia currently cannot synchronize attachments between Salesforce and Dynamics. Besides, Skyvia requires objects to have fields storing information when an object was created and last modified. In Dynamics CRM - createdon and modifiedon fields, in Salesforce - CreatedDate and LastModifiedDate fields. So, the objects, not having such fields cannot participate in Synchronization. Additionally, many Dynamics CRM objects have polymorphic relations, which are not supported by Skyvia. In many cases, Skyvia won't be able to synchronize such relations. Salesforce and Dynamics CRM data sources also have system tables, which probably cannot be synchronized or have limitations. Skyvia can guarantee synchronization of general objects, such as Accounts, Contacts, Products, etc. for sure, but system and Dynamics CRM-specific or Salesforce-specific objects may be difficult to synchronize. Please specify what objects exactly you want to synchronize, so that we can provide you with the proper information. 
For more information about synchronization, please refer to [https://docs.skyvia.com/data-integration/synchronization/](https://docs.skyvia.com/data-integration/synchronization/)" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Importing recently added or updated data from database\n\nAnswer: Please refer to the \"Workaround for Relational Databases\" section here: [https://docs.skyvia.com/data-integration/import/how-to-guides/importing-only-recently-added-or-changed-data-from-cloud-sources.html](https://docs.skyvia.com/data-integration/import/how-to-guides/importing-only-recently-added-or-changed-data-from-cloud-sources.html) In short: you can add columns for storing timestamps of creation and/or last modification time to the database table you want to import data from (the columns must be in UTC) and create triggers that assign the current timestamp to these columns whenever a row is inserted or modified. Then you can use source data filters and add filters on these columns that use the LAST_RUN relative constant. If you encounter any issues with this or have any further questions, feel free to contact us." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: General Restore process\n\nAnswer: The restore process in Skyvia works in the following way. When viewing data, perform the following actions to restore it: 1. Select checkboxes for the records or whole objects you want to restore. You can freely navigate between the Backup Details, Data Preview, and Record Details pages using on-page links and breadcrumbs - Skyvia will remember all the selections you made until you leave the current backup. 2. Click the Restore button and select the operation to apply to the selected data - Insert, Update, or Delete. 
Please note that when restoring data while viewing data changes, you don't need to select the operation manually. Skyvia selects the operation that undoes the selected change: inserted records are deleted, deleted records are inserted, and updated records are updated with their previous values. 3. Optionally, you can select another connection to restore data to in the Restore details dialog box. Then simply click Apply, and the data will be restored. Note: when restoring parent and child objects, you should select both of them in order to restore relations correctly. For example, after deleting an Account and related Contacts, you should select Account and Contact on the Record Preview page. Please refer to [https://docs.skyvia.com/backup/working-with-backed-up-data/restoring-data.html](https://docs.skyvia.com/backup/working-with-backed-up-data/restoring-data.html)" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Connecting to Amazon RDS\n\nAnswer: Skyvia supports Amazon RDS databases: [https://skyvia.com/connectors/amazon-rds](https://skyvia.com/connectors/amazon-rds) You can choose and configure the one that is the most suitable for you. For this, open Connections, click New -> Database, and then select MySQL, PostgreSQL, or SQL Server. Some useful information on how to configure your Amazon RDS database to allow connections from Skyvia: 1) Open the Security Group that is specified for your server (you can open it directly from the server by clicking it). 2) For the necessary Security Group, click Actions and select Edit inbound rules. 3) The rules editor will open; click Add Rule. 4) Select the necessary instance type in the Type combo box, e.g., MySQL. 5) Protocol and Port will be assigned automatically. 6) For Source, set 40.118.246.204/32. 7) Save the changes and try your scenario again. 
Please also refer to [http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html)" }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Account locked\n\nAnswer: Customer accounts are normally locked for 20 minutes. Your account should be unlocked now. Please try to log in. Should you have any questions, do not hesitate to contact us." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How does UPSERT work?\n\nAnswer: The UPSERT operation updates a record if it exists or inserts a new record. This allows you to avoid inserting duplicate data. You need to map the target ID/Primary key columns for performing UPSERT. In Skyvia, UPSERT determines what action to perform in the following way: if a Null value is specified for the ID or primary key, the UPSERT operation inserts the record, and if a non-null value is specified, the UPSERT operation tries to update the record with the specified ID or primary key. Skyvia does not actually check if such a record exists, and providing invalid ID/PK values results in failed records. We recommend using Lookup Mapping for ID/PK columns to get the ID or PK values from the target object itself by some other field that uniquely identifies a record, for example, Title and/or Images. Note: When using lookup mapping for ID or PK columns in UPSERT, don't forget to select the Set null when no match found checkbox in Lookup Options. Otherwise, the lookup will produce errors if no such record is found, and there would be failed records instead of newly inserted ones. 
Please look at these links [https://docs.skyvia.com/data-integration/import/how-to-guides/performing-upsert-operation.html](https://docs.skyvia.com/data-integration/import/how-to-guides/performing-upsert-operation.html) [https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html) Please tell us if this helps." }, { "url": "Unknown", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: How to create Agent with alias\n\nAnswer: Here is an example for SQL Server: 1. Create a text file with any name and the \".alias\" extension in the directory of Skyvia Agent. By default, it's C:\\Program Files (x86)\\Skyvia Agent. 2. Paste this content into the file with the correct values:\n\ndataSource: your_data_source_name\ndatabase: your_database_name\nuserId: your_user_id\npassword: your_password\n\n3. For new database connections, select \"Agent with Alias\" as the Connection Mode, choose the Agent, and enter the name of the file without the extension." }, { "url": "https://docs.skyvia.com", "product_name": "Unknown", "content_type": "Documentation", "content": "Overview Skyvia is a versatile no-code cloud data integration platform for ETL, ELT, Reverse ETL, data migration, one-way and bi-directional data sync, workflow automation, real-time connectivity, and other data-related tasks. Skyvia supports 190+ data sources, including cloud applications, databases, data warehouses, and file storage services. A vast connector library enables seamless integration among them. Skyvia's no-code, wizard-based tools are easy to use for IT professionals and business users with no technical skills. How it Works Skyvia connects to the data sources via their APIs. When Skyvia connects to cloud apps or databases, it reads their metadata and data and represents them as tables and relations between them. 
You can connect Skyvia to any of the supported data sources and use the same connections in different Skyvia products. Products Skyvia offers several products tailored for different data-related needs: Data Integration , Automation , Backup , Query , and Connect . Each product has distinct pricing plans, allowing users to pay only for what they use. Data Integration The Data Integration product automates ETL, ELT, and Reverse ETL processes between various cloud applications and databases. In Skyvia, \u201cintegration\u201d refers to a collection of tasks or operations grouped together to automate data-related processes. Each integration can be scheduled to run automatically, helping users automate repetitive tasks without manual intervention. Skyvia offers various types of integrations for different data integration scenarios. Import Import provides one-way data loading from a data source to a cloud app or database, applying data transformations using mapping capabilities. Data sources can include another cloud app, database, or a CSV file on a local computer or in a file storage. Use Import to integrate data sources of different structures. Export Export enables data extraction from a database or cloud app to a CSV file on a local computer or file storage. Advanced export features allow exporting the original or transformed source data. Replication Replication copies cloud data to a database or cloud data warehouse and automatically keeps it up to date. It creates tables in the database or data warehouse from cloud app objects, using available transformation options. Synchronization Synchronization enables two-way data transfer between data sources with different structures by using mapping capabilities. It copies data between both sources and maps the original records to their counterparts in the other source. Data Flow Data Flow enables the implementation of advanced integration scenarios involving more than two data sources. 
It helps you perform complex, multistage transformations, such as getting data from one data source, enriching it with data from another one, and finally loading it into a third one. Control Flow Control Flow enables running data flows or other integrations in a specific order or depending on particular conditions, performing pre- and post-integration actions, and setting up error processing logic within a single integration. Automation [Automation](https://docs.skyvia.com/automation/) helps optimize business workflows by connecting apps and services and automating repetitive tasks. With Automation, you can create complex workflows to handle various conditions and data operations efficiently. Use Triggers, Actions, and Conditional components to build an automation flow with multi-step conditional logic and error processing, and run your automation manually, on a schedule, or on an event. Automation is especially useful for streamlining recurring processes. For example, you might automatically add a task when a new support ticket is received, create a new order after a sale, or schedule cross-platform data transfers. Backup Skyvia Backup is a robust backup and restore tool for cloud application data. It enables performing manual and automatic backups. After you back up your data, you can access it in the web browser or export it to CSV. The Restore feature helps you restore data source objects, separate records, or even fields. Also, you can use your backed-up data as a data source for other integrations. Query Skyvia Query is a cloud SQL client that enables executing SQL statements against cloud applications and relational databases. It supports SELECT and DML statements. Visual Query Builder helps build queries without SQL knowledge. You can export query results to a CSV or PDF file. To save your queries in one place and access them at any time, use the Query Gallery . Query Gallery is a collection of predefined queries for common use cases.
Skyvia Query supports querying data directly in Google Sheets, using the Skyvia Query Google Sheets Add-on , and in MS Excel using the [Skyvia Query Excel Add-in](https://docs.skyvia.com/skyvia-query-excel-add-in/) . Connect Skyvia Connect is a connectivity-as-a-service solution that helps to publish any data as OData endpoints. It makes your cloud and on-premises data available for various OData consumer applications, such as BI tools, office suites, Salesforce Connect, etc., with no coding. Skyvia connects to data sources via their custom interfaces and provides a unified web API for their data. Skyvia supports OData endpoints and SQL endpoints . You can configure endpoints in convenient GUI editors without coding. Skyvia logs all the requests to created endpoints and provides detailed information on activities against them. How to Use This Documentation This documentation is designed to help you get acquainted with Skyvia and find the needed information quickly. Start with the Concepts and User Interface Basics topics to familiarize yourself with the basic terms and UI elements and controls. Look for your apps in the Connectors section, which helps you establish a connection to your application or database and provides additional advanced Connector-related information. Do not hesitate to use the search bar to look for any Skyvia-related information. Below is the list of sections dedicated to the separate Skyvia products. Data Integration Automation Backup Query Connect Google Sheets Add-on Excel Add-in See the latest Skyvia updates and news in the [Recent Releases](https://docs.skyvia.com/recent-releases/) section." }, { "url": "https://docs.skyvia.com/account-management/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management As already mentioned in the Collaboration topic, in Skyvia we separate the concepts of user profile and account.
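As a sketch of how an OData consumer might query a published Connect endpoint like those described above, the snippet below assembles a standard OData query URL. The endpoint address, entity, and field names are made-up placeholders; only the $select/$filter/$top query options are standard OData syntax.

```python
# Build an OData query URL for a published endpoint (sketch; the endpoint
# address below is a placeholder, not a real Skyvia Connect URL).
from urllib.parse import urlencode

ENDPOINT = "https://connect.example.com/odata/my_endpoint"  # placeholder

def odata_query(entity: str, select=None, filt=None, top=None) -> str:
    """Assemble <endpoint>/<entity>?$select=...&$filter=...&$top=..."""
    opts = {}
    if select:
        opts["$select"] = ",".join(select)   # e.g. $select=Name,City
    if filt:
        opts["$filter"] = filt               # e.g. $filter=City eq 'Boston'
    if top is not None:
        opts["$top"] = str(top)
    qs = urlencode(opts)                     # percent-encodes keys and values
    return f"{ENDPOINT}/{entity}" + (f"?{qs}" if qs else "")

url = odata_query("Accounts", select=["Name", "City"],
                  filt="City eq 'Boston'", top=10)
# Any OData consumer (a BI tool, Excel, or an HTTP client) can fetch this URL.
```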
An account contains the user's subscriptions to Skyvia products, the user's payment details, and payment history. Account settings can be managed by the account admin only. Other users invited to the account as members can share subscription resources but cannot manage them or view any payment details. An account also contains one or multiple workspaces with objects (connections, integrations, backups, etc.). Objects belong solely to a workspace, while workspaces belong solely to an account. This allows invited users with assigned workspace roles to work on the same Skyvia objects within one workspace, while having limited or no rights to manage account settings. Account Overview You can view and manage your Skyvia account on your [Account](https://app.skyvia.com/#/account/subscriptions) page. To open this page, click the user icon in the top right corner of the Skyvia page and then select Account . You will be transferred to your Skyvia account, which was automatically created during your registration in the platform. The user account consists of several tabs. If you are an account admin , you have full privileges in the account and can use all the tabs freely. If you have been invited to someone's account as an account member , you can view subscriptions without managing them, but you cannot perform other actions, like viewing other account users, workspace roles, or workspaces, unless you were invited to them. Read more about it below or in the Inviting and Deleting Users from Accounts topic. The Subscriptions tab displays your subscriptions to Skyvia products. When signing up to Skyvia, users automatically receive free subscriptions to all the products. Later, on this tab, you can upgrade to a paid subscription plan if you need more records to be processed by Data Integration tools, more storage to back up your data, unlimited endpoints, etc. The Invoices tab displays all your invoices in Skyvia.
On this tab, you can check such information as the invoice number, date, total sum, and invoice status (pending, refund, complete, etc.). To view more information about an invoice, click its number. The Users tab allows you to view all users invited to the account and their statuses, add new users, and grant or change users' account permissions. You can also delete users from the account and cancel invitations sent to users to join the account. The Workspace Roles tab allows viewing a list of roles. It includes standard workspace roles with a predefined set of permissions and custom roles with a set of individual permissions. The standard workspace roles are preset in Skyvia. Unlike custom roles, they cannot be deleted. Skyvia also allows account admins to create custom workspace roles and set individual permissions. The Workspaces tab displays all workspaces you have created yourself or were invited to. On this tab, you can also create new workspaces, edit existing ones, or delete unnecessary ones. You can see and manage account workspaces only if you are an account admin. If you are an account member, you will not see workspaces on this tab, as your access to this tab will be restricted. You will have access only to the workspace you were invited to, and you have to check your permissions by clicking the workspace in the top menu. The Error Notifications tab allows you to configure email notifications about errors in backups and integrations. See the Email Notifications topic for more information. The Usage Notifications tab allows you to configure email notifications about reaching certain thresholds of your subscription limits for integration records and backup storage space used. See the Email Notifications topic for more information. Usage Summary displays statistics on the records used by [Data Integration](https://docs.skyvia.com/data-integration/) . You can get the data usage statistics for a specific period, grouped by workspace or integration and broken down by day, month, quarter, or year.
API Settings displays Skyvia API access tokens and allows you to create new ones, as well as manage and view details of existing ones. Account Subscriptions Skyvia offers four different products for its users: Data Integration , Backup , Query , Connect . Each of these products has more than one pricing plan available. Each product also includes a free plan and a two-week trial. You can manage all your pricing plans (i.e. subscriptions) on the Subscriptions tab. What is also convenient is that you use Skyvia products completely independently: you need to pay only for the product you use. For example, if you use only Data Integration tools, you don't need to pay for Query, Backup, and Connect. Read more about subscriptions, payments, and trials here or about subscription limits in more detail here . Account Rights and Permissions Account Admin The account admin (account owner) has full permissions in the account. The account admin can select/change subscriptions to Skyvia products and change their auto-renewal status, enable the autocleaning mode for backups (except in a free plan), change any payment information, create the initial account infrastructure (i.e. add workspaces to the account and create objects in the workspaces), invite other users to the account/workspace, give them administrative privileges in the account and revoke them later, assign workspace roles to users, delete users from the account/workspace, etc. The main goal of the account administrator, if the admin represents a company, is to help teammates or colleagues conveniently collaborate in the system by establishing a comfortable collaborative environment, managing product subscriptions and payments, and ensuring overall account security. You automatically receive account admin status when you register in Skyvia.
The account member can view subscriptions to Skyvia products and used resources but cannot manage them, and cannot view or edit invoices or any other payment information. Apart from that, the account member cannot see other account users and account workspaces unless he/she was given access to a certain workspace. Being an account member, you cannot view and manage the account settings, but you can be granted workspace administrator rights, which means you obtain full workspace privileges, from changing its settings to deleting the entire workspace with all its users if needed. You automatically receive account member status when you accept an invitation to join the account of another user. To change your account member status, ask your account admin for extended permissions. Workspaces and Workspace Roles Workspaces are introduced to facilitate collaboration among users. A workspace is a working area that allows a single user or multiple users from one company or team to work simultaneously on the same Skyvia objects. You can allow multiple users access to a workspace by granting them different roles and permission levels. You can either assign standard roles with a predefined set of permissions or create and assign custom roles with individual permissions to users. You can also assign different roles to users in different workspaces. For example, you can assign a workspace role to a user in one workspace and a different workspace role to the same user in another workspace. You can add or remove custom roles if needed or modify existing custom roles to match new requirements. Read more about standard and custom roles in the Workspace Roles topic. Workspaces belong to a certain account, which means that you need to invite a new user to the account first and only then assign him/her a certain workspace role to access the workspace.
Creating and Working with Multiple Workspaces Multiple workspaces can be beneficial for medium-sized companies with numerous departments that are quite isolated from one another, or for large project-oriented companies. Such companies usually develop multiple projects, and each project requires a separate workspace where a project manager can collaborate with other project team members (developers, supporters, etc.), sharing common objects (connections, integrations, etc.) with them. As multiple workspaces belong to the same company's account in Skyvia, workspace users can share the same subscriptions and resource limits; however, payments for these subscriptions are centrally managed by a responsible person (the account admin). This structure helps maintain the security both of the overall account and of each separate workspace. If you have been invited to several workspaces, or if you use a private workspace for your personal needs and a company workspace shared with other team members, only one of your workspaces can be active at a time. You should stop working with one workspace to switch to another one. You can read more on how to create, delete, or switch between workspaces in the Adding and Deleting Workspaces topic." }, { "url": "https://docs.skyvia.com/account-management/adding-and-deleting-workspaces.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Adding and Deleting Workspaces Workspaces have been introduced to facilitate collaboration among users. A workspace is a working area that allows a single user or multiple users from one company or team to work simultaneously on the same Skyvia objects (integrations, backups, queries, etc.). You can allow multiple users access to a workspace by granting them different roles. Users can also have access to many workspaces in one or in several accounts.
In different workspaces, a user can have different permissions and can be assigned different workspace roles, standard or custom ones. A user can be a workspace admin for one workspace and a developer or member for another workspace. What is also convenient is that team members working remotely or located in different places can easily collaborate and work on the same Skyvia objects in one workspace. Read more about effective teamwork in the Collaboration topic. Please note that workspaces can be created by account admins only. When you create a workspace, you become a workspace administrator by default. You can add other users and assign workspace roles to them. You can transfer your workspace admin rights to another user and delete yourself from the workspace, or both of you can act as workspace admins in the same workspace. Adding Workspaces You can create a new workspace in two different ways. You can add a workspace on the Workspaces tab of the Account page, or you can click the workspace drop-down list in the top menu and add a new workspace to a certain account on the Select Your Workspace page. Below we describe each of these ways in more detail. The first way is to switch to your default account by clicking the User icon and then Account . Here, click the Accounts link on the left to switch to the Select Account page. Please note that if you did not rename your default account, it will be displayed as Untitled . On the Select Account page, you can select an account you want to switch to. In our example, we switched to Devart Test Account . Then select the Workspaces tab, and, on this tab, click the button to add a new workspace. After you have clicked the button, the Create new workspace window will open. When creating a new workspace, enter a unique workspace name in the Name field and click Save .
You cannot leave the Name field empty, as your workspace should be easily identified among other workspaces if you have several of them. Later, you can easily rename your workspace if required. The second way is to use the workspace drop-down list. Click it in the top menu and select Change Workspace . On the screenshot, we can see the TEST PROJECT workspace, as the user is signed in to this workspace. If the user is signed in to his/her default workspace and is working with it now, the default workspace will be displayed. After you have clicked Change workspace , you are redirected to the Select Your Workspace page. You can easily switch between existing workspaces by clicking them, or add new workspaces by clicking the button on the right or the Add workspace link under an account that does not contain any workspaces yet. You can also make any of your workspaces the default one simply by clicking the Set default link. The next time you sign in to Skyvia, you will be dropped directly into the workspace you have set as the default one. Deleting Workspaces To delete a workspace, click the User icon and then Account . You will be transferred to the page of your default account. Here, click the Accounts link on the left and switch to the Select Account page. Choose the account whose workspace you want to delete. On the Account page, select the Workspaces tab, click the More Options icon next to the workspace name, and select Delete workspace . You can delete a workspace only if you are a workspace admin." }, { "url": "https://docs.skyvia.com/account-management/api-settings.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management API Settings On this tab, you can create and manage API tokens for the Skyvia API . It lists existing tokens, shows their expiration dates and when they were last used, and allows you to view the selected scopes.
Creating Token To create an API token, perform the following steps: On the API Settings tab of Account Settings, click + API Token . Enter the Token name . Specify the Expiration date for the token and click Next . Note that you cannot create a token lasting longer than a year. Select whether the token provides access to the whole account or only to one of its workspaces. In the case of a workspace, also select the workspace to provide access to. Click Next . Select the scopes to grant access to via the token and click Next . Copy the token and store it somewhere safe. Note that this is the only time the token is displayed. Finally, click Finish . Deleting Token At any time, you can revoke existing API tokens by deleting them. For this, point to the token and click Remove API Token in the last column of the grid. Scopes You can grant API tokens access to the following scopes: Account Read Allows you to read account users and invitations. Provides access to the following endpoints: GET /v1/account/users\nGET /v1/account/invitations Modify Allows you to delete users from the account, invite users to the account, and resend and delete invitations. Provides access to the following endpoints: POST /v1/account/invitations\nPOST /v1/account/invitations/{invitationId}/resend\nDELETE /v1/account/users\nDELETE /v1/account/invitations/{invitationId} Agent Read Allows you to read the list of agents and information on specific agents. Provides access to the following endpoints: GET /v1/workspaces/{workspaceId}/agents\nGET /v1/workspaces/{workspaceId}/agents/{agentId} Modify Allows you to test whether the connection with an agent works. Provides access to the following endpoint: POST /v1/workspaces/{workspaceId}/agents/{agentId}/test Automation Read Allows you to read the list of automations and information on specific automations. Execute Allows you to execute automations. Enable/Disable Allows you to enable and disable automations.
Provides access to the following endpoints: POST /v1/workspaces/{workspaceId}/automations/{automationId}/enable\nPOST /v1/workspaces/{workspaceId}/automations/{automationId}/disable Connection Read Allows you to read the list of connections and information on specific connections. Provides access to the following endpoints: GET /v1/workspaces/{workspaceId}/connections\nGET /v1/workspaces/{workspaceId}/connections/{connectionId} Test Allows you to test a connection. Provides access to the following endpoint: POST /v1/workspaces/{workspaceId}/connections/{connectionId}/test Endpoint Read Allows you to read the list of Connect endpoints and information on specific Connect endpoints. Provides access to the following endpoints: GET /v1/endpoints/types\nGET /v1/workspaces/{workspaceId}/endpoints\nGET /v1/workspaces/{workspaceId}/endpoints/{endpointId} Enable/Disable Allows you to enable and disable Connect endpoints. Provides access to the following endpoints: POST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/enable\nPOST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/disable Integration Read Allows you to read the list of integrations, information about specific integrations and their executions. Provides access to the following endpoints: GET /v1/workspaces/{workspaceId}/integrations\nGET /v1/workspaces/{workspaceId}/integrations/{integrationId}\nGET /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions\nGET /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/active Execute Allows you to run integrations, stop and cancel running integrations.
Provides access to the following endpoints: POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions\nPOST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/cancel\nPOST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/kill Read Schedule Allows you to read schedule information for integrations. Provides access to the following endpoint: GET /v1/workspaces/{workspaceId}/integrations/{integrationId}/schedule Enable/Disable Schedule Allows you to enable and disable the schedule for scheduled integrations. Provides access to the following endpoints: POST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/enable\nPOST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/disable Workspace Read Allows you to read the list of account workspaces and information about specific workspaces. Provides access to the following endpoints: GET /v1/workspaces\nGET /v1/workspaces/{workspaceId} Read Users Allows you to read the list of workspace users. Provides access to the following endpoint: GET /v1/workspaces/{workspaceId}/users Modify Users Allows you to grant and revoke users' access to a workspace. Provides access to the following endpoints: POST /v1/workspaces/{workspaceId}/users\nDELETE /v1/workspaces/{workspaceId}/users/{userId}" }, { "url": "https://docs.skyvia.com/account-management/creating-and-deleting-accounts.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Creating and Deleting Accounts When users register in Skyvia, they automatically receive a default account with a default workspace. When they later sign in to Skyvia, they sign in directly to their default workspace, which belongs to the default account. Optionally, users can create additional accounts, or they can be invited to accounts of other users or companies with full or limited permissions.
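Putting the scope tables above into practice, a client would combine one of the listed endpoint paths with an API token created on the API Settings tab. The sketch below only builds the request: the base URL and the bearer-token Authorization header are assumptions for illustration, so check the Skyvia API reference for the actual values.

```python
# Sketch: assembling a call to one of the workspace-scoped endpoints listed
# above. API_BASE and the Authorization scheme are assumed, not documented here.
API_BASE = "https://api.skyvia.com"  # assumption

def build_request(token: str, workspace_id: str, resource: str):
    """Return (url, headers) for GET /v1/workspaces/{workspaceId}/{resource}."""
    url = f"{API_BASE}/v1/workspaces/{workspace_id}/{resource}"
    headers = {"Authorization": f"Bearer {token}"}  # assumed auth scheme
    return url, headers

url, headers = build_request("my-api-token", "ws-123", "integrations")
print(url)
# The token must carry the matching scope (here, Integration Read); an HTTP
# client would then issue the call with these headers.
```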
If required, users can delete their accounts, including their default account, and create a completely new account infrastructure in Skyvia again. Creating Accounts To create a new account, click the User icon and then Account . You will be transferred to the page of your default account. Skyvia automatically assigns the Untitled name to users' default accounts. However, users can rename their default accounts any time they want. As an example, we rename our Untitled default account to Skyvia Personal Account and click the Accounts link on the left to switch to the Select Account page. After that, click the button on the right to create a new account. The Create new account window will open. When creating a new account, enter a unique account name in the Name field and click Save . You cannot leave the Name field empty, as your account should be easily identified among other accounts (if you have more than one). Later, you can easily rename your account if required. For this, switch to the account whose name you want to change. Click the account name field and rename it. Switching Between Accounts When you have several accounts and want to switch from one account to another, click the User icon and then Account . You will be transferred to the page of your default account. Here, click the Accounts link on the left and switch to the Select Account page. Choose the account you want to go to and click it. On this page, Skyvia displays all accounts available for you: your default account, additional accounts you have created, and accounts of companies/users you've been invited to, no matter whether you were invited to the account as an account member or account admin . As an account member, you have limited rights, but you can see the account in the list and use the workspace you were invited to (to the extent allowed by the workspace admin).
Another way to switch from one account to another is to click the workspace drop-down list in the top menu and select another workspace from the list. On the screenshot, we can see Default Workspace , as the user is signed in to his/her default workspace. If the user is signed in to another workspace and is working with it now, the name of this workspace will be displayed. In the opened menu, you can see a list of accounts you created in Skyvia and accounts you were invited to by other users. Under each account, you will see the workspaces that belong to this particular account. To switch to a certain account, simply click one of its workspaces. Deleting Accounts To delete an account, click the User icon and then Account . You will be transferred to the page of your default account. To delete this account, click the icon next to its name. To select other accounts for deletion, switch to the Select Account page by clicking the Accounts link on the left. You can also delete an account by clicking Workspaces in the top menu and switching to the account you want to delete. Please note that you can delete an account only if you are an account admin. When you delete an account with workspaces, the workspaces and their objects (integrations, backups, etc.) are deleted as well." }, { "url": "https://docs.skyvia.com/account-management/email-notifications.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Email Notifications Skyvia has an email notifications feature that notifies you about errors in integrations or backups, or about reaching certain thresholds of your subscription limits. The notifications are configured per account, on the corresponding tabs in the account settings. The users can, however, override these settings in their profile. Note that email notifications are disabled by default.
Error Notifications Switch to the Notifications tab and select a notification option that best meets your needs. You can choose whether to send notifications to the emails of workspace admins or to some other emails. You may enter several emails separated by semicolons. To disable email notifications, select Don't send notifications . Notification Events Currently, Skyvia sends error notification emails in the following cases: When an integration fails to execute completely (no records are loaded successfully). When a task of an integration fails completely. When a backup fails to back up data from one or more objects or fails completely. Notification Frequency In order not to spam your email address with a large number of notifications, Skyvia sends an email notification at most once per hour. Even if several integrations fail within an hour, Skyvia will send a notification only for the first one. Besides, if an integration is scheduled to run more often than once per day and fails multiple times, Skyvia will send at most one notification per day for that integration. You can reset these limits in the Error Notifications settings by clicking the Reset limits button. After this, both the one-email-per-hour and the one-email-per-integration-per-day limits are reset. Usage Notifications In addition to notifications about errors in your integrations and backups, you can also receive notifications when you are getting closer to your subscription limits for data integration records or backup storage space. These notifications are turned off by default too. You can enable them on the Usage Notifications tab of the account settings: Select whether to send the notifications to the account admin or to custom emails. Click the Integration Records Threshold and Backup Storage Threshold toggles, depending on the products you use. Select the checkboxes for the percentages you want to receive notifications for.
Testing Email Notifications To make sure that you can receive email notifications and that they don't go to spam, you can send a test notification email by clicking the send test email button. You can test both Error and Usage notifications. Profile Notification Settings Users can override these error and usage notification settings in their profile. They can either turn off sending notifications to their email or specify a different email or multiple emails. Settings in the user profile are applied for accounts where: the user is a workspace administrator, and the “send to workspace administrators” option is selected in the Error Notifications settings. the user is an account administrator, and the “send to account administrators” option is selected in the Usage Notifications settings." }, { "url": "https://docs.skyvia.com/account-management/inviting-and-deleting-users-from-accounts.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Inviting and Deleting Users from Accounts Inviting Users Invite users to your account and workspaces from the Users page in the Account settings. You can invite both existing Skyvia users and those who do not have a Skyvia account yet. To invite new users: Click + User on the right. Enter the user's email address. Select the account role, workspaces to invite to, and workspace roles to assign. To invite more users, click + User below, and fill out the new invitation details. Accepting Invitation Skyvia sends an email with an invitation to the user's email address. To accept an invitation, open it and click the here link. If you are logged out from Skyvia, you will be transferred to the Skyvia Sign In window. Sign up or enter your Skyvia email and password and click Join Account .
If you are logged in to Skyvia under an email address that differs from the one specified in the invitation, you will receive an Invitation email mismatch notification. Log out from Skyvia and log back in with the proper email to accept the invitation. It can happen that by the time you accept an invitation, it has been revoked for some reason. In this case, you will receive a This invitation no longer exists notification. To find out the details, contact the person responsible for the invitation. There is always an option to resend it. The same notification appears if you try to accept an invitation that you have already accepted earlier.

Monitoring Invitations

Monitor current workspace users and users with pending invitations by switching between the Current Users and Invited Users tabs. On the Invited Users tab you can resend the invitation email, copy the invitation link so you can quickly share it, for example, in a messaging app, and revoke the user invitation. To do so, hover over the invited user in the list and select the corresponding icon on the bottom right.

Deleting Users from Accounts

To delete invited users from your account, click the User icon in the top right corner of Skyvia, and then click Accounts. Switch to the account you want to delete users from and select the Users tab. On this tab, click the icon next to the users you want to delete. The corresponding users will be removed from your account. Next time these users sign in, they sign in to their own accounts.

User Statuses in the Account

The users listed in your account(s) may have one of the following statuses:
- Invited - you have invited this user to your account, but the user hasn't yet signed in to Skyvia and accepted the invitation.
- Member - the user has accepted the invitation and now uses your account.
- Administrator - the user has administrative privileges in this account (see below).
When you sign up for Skyvia, you automatically receive administrative privileges for your default account and workspace. When you send an invitation to another user to join your account, this user's status automatically becomes Invited. When the user accepts your invitation, his/her status automatically changes to Member. That is, briefly, how the sequence works.

Rights of Account Administrator

Account admins have full administrative privileges in the account. They can:
- select/edit subscriptions for products and change their auto-renewal status;
- enable the autocleaning mode for backups (except in a free plan);
- change payment information;
- create the initial account infrastructure, i.e. add workspaces to the account and create objects in the workspaces;
- invite other users to the account, assign workspace roles to users, delete users from the account/workspace, and give other users administrative privileges or revoke them.

The main tasks of the account administrator are to manage subscription plans, payments, and overall account security, and to let users collaborate conveniently in the system by establishing a comfortable collaborative environment on a centralized basis.

Rights of Account Member

Account members have limited permissions in the account they were invited to. They can view the selected subscriptions to Skyvia products and their resource limitations, but cannot manage subscriptions and cannot view or edit invoices or any other payment information. Apart from that, account members cannot see other account users and account workspaces unless they were given access to such workspaces. When an account member is assigned a certain workspace role, he/she can perform tasks strictly according to this role. To learn which workspace roles exist in Skyvia and how to assign them, see the Workspace Roles topic."
}, { "url": "https://docs.skyvia.com/account-management/subscription-limits-and-plans-in-more-details.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Subscription Limits and Plans in More Details

Skyvia includes four products: Data Integration, Backup, Query, and Connect. More information about pricing plans can be found on the [Pricing](https://skyvia.com/pricing/) page. Skyvia products are used completely independently, and you need to pay only for the products that you use, if the free plan for a product is not enough for you. For example, if you use only Data Integration tools, you do not need to pay for Query, Backup, Automation, or Connect.

Data Integration

Current Plans

Current pricing plans differ in the features and integration kinds available. They also differ in the number of simultaneously scheduled packages and the schedule frequency. The Free pricing plan offers a fixed limit of 10 000 records per month. Paid plans allow you to select the number of records you plan to load per month, up to 200 million records, and the price of the plan is calculated based on this number of records.

Available Features

You can see the differences in available features in the Detailed comparison table on the [Pricing](https://skyvia.com/pricing/) page. In brief, the Free and Basic plans allow only basic integration scenarios: import, export, synchronization, and replication. They do not allow features such as the Advanced mode in import and export, which lets you import/export the results of a custom query, SQL command, or some other custom action. Import and Synchronization are also limited in the mapping kinds available on these pricing plans. They don't allow source lookup, expression, or relation mapping. Note that target lookup mapping is allowed.
Import also does not allow loading data into multiple related tables (data splitting), and synchronization does not allow one-to-many/many-to-one synchronization. Besides, when importing CSV files from file storages, you cannot use file masks. When importing data from a cloud app/database, you cannot use the Returning feature, which loads data back to the source.

The Standard pricing plan allows more advanced integration scenarios. It allows using all import, export, synchronization, and replication features, and additionally it allows using Data Flow for more complex integration cases. The Professional plan offers all features of the previous plan, plus the Control Flow tool to orchestrate all your integrations.

Number of Records

Here you can read how the number of processed records is counted. This number includes all the records which have been successfully processed by Data Integration tools, including:
- records successfully created, deleted, or updated in the target by import and replication integrations (even if the update operation updates the target record with the same values it already has);
- rows in the CSV files produced by export integrations;
- records successfully created, deleted, or updated in both source and target by synchronization integrations;
- billed records in data flows and control flows - all success rows for all the Target components that use any connections other than cache or log.

Records loaded by external integrations called in control flows via the Execute Integration component are displayed in the logs of the corresponding integrations and are counted according to the integration type rules. They do not appear in the billing records of the parent control flow. Failed records are not counted. If you cancel integration execution, the records processed by the integration are also not counted.
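The counting rules above reduce to a simple aggregation: successful records count, failed records never count, and cancelled runs contribute nothing. The following sketch is an illustrative model only, not Skyvia's billing code, and the run structure is an assumption:

```python
# Illustrative model of the documented record-counting rules (not Skyvia's code).
# Each run is a dict with 'succeeded' and 'failed' counts and an optional
# 'cancelled' flag.
def billed_records(runs):
    total = 0
    for run in runs:
        if run.get("cancelled"):
            continue                 # records of a cancelled run are not counted
        total += run["succeeded"]    # failed records are never counted
    return total
```

For example, a run with 100 successful and 5 failed records bills 100, and a cancelled run bills 0 regardless of how many records it managed to process.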
Even though canceling an integration can take time, and it may process more records after you cancel it, none of the processed records are counted for a cancelled integration. On your Account page you can see the number of used records. As for the number of records processed by each integration, you can find the corresponding numbers in the run history of your integrations for each run.

If the number of processed records exceeds the number of records in your subscription, the currently running integrations will continue execution till they finish and can process more records than included in your subscription. However, after the limit is exceeded, you won't be able to start any more integrations, either automatically or manually, unless your pricing plan allows processing additional records over the plan's monthly limit. The Standard, Professional, and Enterprise plans allow processing more records than included in the subscription for additional costs. Note that this feature is not enabled by default, in order to avoid unexpected additional payments. To load records over the subscription limits, you need to explicitly enable this feature on your [Account](https://app.skyvia.com/#/account/subscriptions) page. For this, on the Subscriptions tab, turn on the toggle for Data Integration in the Paid Records line.

Schedule Settings

You can define schedules for your integrations to run automatically. Lower pricing plans have certain limits on scheduling integrations for automatic execution. The number of scheduled integrations means the number of integrations that can have a schedule enabled simultaneously, and the schedule frequency determines how often each of your scheduled integrations may run on schedule. If you define a schedule that runs an integration more often than your subscription allows, you won't be able to save this integration till you modify or disable the schedule.
The same occurs if you have already reached the maximum number of scheduled integrations and try to set a schedule for another integration. Note that on any pricing plan you can have as many integrations as you need and run them manually as often as you need, till you reach the record limit. The Professional and Enterprise pricing plans don't have limits on the number of scheduled integrations and allow scheduling them up to once per minute.

Legacy Plans

Legacy plans are active only for users that registered and selected these plans before the current plans were put into action. These plans mostly depend on the number of records per month included in the subscription and offer a similar set of features. They also have the same schedule settings limitations as the current plans. Besides, for the Free and Basic legacy plans, records processed by export packages or by import packages importing CSV files are counted separately and have their own separate limit. For example, if you have the Basic pricing plan and have one (or more) package importing CSV files, and an import package loading data from a cloud app or from a database/data warehouse, the records processed by the first package are counted toward the 200k record limit. Records processed by the second package are counted toward the 25k record limit. Records processed by export packages are always counted toward the 200k record limit in this case (toward the 100k record limit in case of the Free plan), and records processed by replication and synchronization packages are always counted toward the 25k record limit (the 5k record limit in case of the Free plan).

Backup

Backup pricing plans mostly differ in the storage space available for your backups. You can back up data from any number of connections/cloud app accounts; only the total size of your backups matters. The Free pricing plan has an additional limitation: it does not allow scheduling backups for automatic execution. You can only run backups manually.
Snapshots are stored for 3 months in the free plan; after three months they are automatically deleted. Besides, snapshots are deleted automatically for free users if they exceed their space limits for two weeks. Note that every time a backup runs, a full backup is performed. To determine a pricing plan that suits you, multiply the size of the data you plan to back up by the number of snapshots you plan to keep. By default, old snapshots are not deleted automatically on paid plans. On paid pricing plans you can configure deleting unneeded snapshots automatically or delete them manually. See Managing Storage Space and Deleting Old Snapshots for more information. After you reach the limit of your subscription, backups are not performed anymore till you free up space.

Query

The free Query pricing plan limits the user to 5 executed queries per day. This includes all successfully executed SELECT queries and other SQL statements. It also includes refreshing data retrieved with the Skyvia Query Google Sheets Add-on or the Skyvia Query Excel Add-in, as well as any queries executed via these add-ons. Failed queries and browsing connection metadata in the objects list are not counted. Besides, we reserve the right to introduce traffic limitations for the free plan in case it is used to load huge data volumes.

Connect

Connect pricing plans differ in the volume of traffic that goes via your endpoints and in the availability of endpoint security features. The free pricing plan is designed for testing Skyvia Connect. It allows enabling only one endpoint to a cloud source (not to a database) and provides only 100 KB of traffic per month. You can create any endpoints in Skyvia, but you cannot activate endpoints that use features not provided in your subscription. They will be inactive and won't work. For example, on the Free pricing plan, you can activate only one endpoint, and only to a cloud source.
You cannot activate an endpoint to a database or cloud data warehouse. On the Free and Basic plans you cannot activate any endpoints that use authentication or access restrictions by IP. If you exceed the traffic limits, currently running requests to your endpoint will finish successfully, but subsequent requests will return an error: "You have reached the limit of bytes transferred. The endpoint is blocked. You may upgrade your pricing plan to activate your endpoints".

Automation

The free Automation pricing plan is a limited plan that allows up to two active automations. It allows a schedule frequency for Run on Schedule triggers and a polling frequency for Connection triggers of up to once per day. It limits the log retention period to 7 days and does not provide advanced features like [Managing Automation Versions](https://docs.skyvia.com/automation/operating-automation/managing-automation-versions.html) and detailed log information. The free plan is also limited to 1000 tasks per month. The Standard plan allows unlimited active automations with schedule and polling frequencies of up to once a minute. Automation logs are stored for 60 days, and all the above-mentioned features are available. This plan allows you to select the desired number of tasks, starting from 50 000 per month. If you use up all the tasks in your monthly subscription, subsequent executions will produce errors with a message about the exceeded limit.

Tasks

Tasks counted toward the limit are successful executions of Action components in the automation. Note that the number of tasks used per automation execution depends on the automation structure and can vary from execution to execution. Actions on the branches of the If component may or may not be executed depending on whether the condition is met, and Actions in the Foreach component loop can be executed multiple times.
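The task-counting behavior described above can be sketched as follows. This is an assumed model for illustration only, not Skyvia's implementation; each executed component is represented as a tuple of its kind, whether it succeeded, and how many times it actually ran (for example, inside a Foreach loop):

```python
# Illustrative model (not Skyvia's code): only successful executions of
# Action components count as tasks; an Action inside a Foreach loop counts
# once per iteration, and an Action on a skipped If branch runs zero times.
def count_tasks(executions):
    total = 0
    for kind, succeeded, times_run in executions:
        if kind == "action" and succeeded:
            total += times_run
    return total
```

So an automation with one plain Action and one Action that ran three times in a Foreach loop uses four tasks, while If/Foreach components themselves consume none.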
You can use the Debug Viewer for any execution to see which Action components executed, but it does not indicate whether an Action executed more than once in a Foreach loop." }, { "url": "https://docs.skyvia.com/account-management/subscriptions-payments-and-trials.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Subscriptions, Payments and Trials

Skyvia offers monthly and yearly subscriptions for Skyvia products: Data Integration, Automation, Backup, Query, and Connect. More information about pricing plans can be found on the [Pricing](https://skyvia.com/pricing/) page. By default, a new account is subscribed to a free plan for each product. Skyvia products are used completely independently, and you need to pay only for the products that you use, if the free plan for a product is not enough for you. For example, if you only use Data Integration, you don't need to pay for Query, Backup, Automation, or Connect. Only users with the account admin status can manage subscriptions, payments, and invoices, and enable/disable auto-renewal of subscriptions. If, besides your default account, you were invited to accounts of other users, subscriptions to products may differ in each of these accounts. For example, in one account you can have a Data Integration Professional subscription, in another - Standard Query and Connect subscriptions.

Providing Trials

Skyvia provides trial subscriptions for all its products. 14-day trials start for all products when you register on Skyvia:
- Data Integration: Professional Trial (5M records)
- Automation: Standard Trial (50K tasks)
- Backup: Professional Trial
- Query: Standard Trial
- Connect: Professional Trial

You can select another (or the current) paid pricing plan and purchase a subscription while doing the trial. The Free pricing plans can be requested after the initial trial expires.
For this, in the Account Settings, click Subscriptions and then click Request Free for the required product. For some of the products you will need to provide additional information. After your request is approved, the Free plan is activated. After your initial trial period expires, you can request a second trial once.

Subscription Expiration

A subscription expires if your trial period ends or if you don't pay for your paid pricing plan. When a subscription expires, you can still create, edit, and delete your integrations, backups, automations, queries, etc., but you cannot run them till you pay for the subscription or get the Free pricing plan. You can see the limitations in more detail below:
- Data Integration: The schedule for all integrations gets disabled, and you cannot enable a schedule for old or new integrations. You cannot run integrations manually either. Note that you will need to manually enable schedules for integrations after purchasing a subscription or activating the Free plan.
- Automation: All automations become disabled and cannot be enabled. You cannot execute manual automations or use test mode. Note that you will need to manually enable automations after purchasing a subscription or activating the Free plan.
- Backup: The schedule for all backups gets disabled, and you cannot enable a schedule for old or new backups. You cannot create new snapshots or perform a restore. Note that you will need to manually enable schedules for backups after purchasing a subscription or activating the Free plan.
- Query: You cannot execute queries.
- Connect: All endpoints become disabled and cannot be enabled. Note that you will need to manually enable endpoints after purchasing a subscription or activating the Free plan.

Changing Pricing Plan

You can access your subscription for a product on your [Account](https://app.skyvia.com/#/account/subscriptions) page. To open this page, click the User icon in the top right corner of the page and then click Account.
The Subscriptions tab displays active subscriptions for all your products. It also displays the resource limits of the products' pricing plans and how much of the subscription limit you have used. If a product has more than one pricing plan available, you can subscribe to another plan by clicking Upgrade plan (if you are subscribed to the free plan) or Change plan (if you are subscribed to a non-free plan) and then selecting a pricing plan. If you choose a more expensive plan, you will need to fill in the payment form and pay for the plan. When you upgrade to a more expensive plan, Skyvia calculates the price taking into consideration the current plan price and the number of days left to use the current plan. However, if you downgrade to a less expensive plan, we do not refund any costs.

Auto Renewal

When you purchase a subscription to a paid pricing plan, auto-renewal for the subscription is enabled by default. The subscription is provided on a monthly basis, so, if you keep auto-renewal enabled, you will be automatically charged every month for this subscription. You can see the Auto Renewal status on your Account page on the Subscriptions tab in each subscription for a paid plan, together with the date and sum of the next charge. These parameters are not available for free plans. To disable auto-renewal when performing the payment, clear the Enable auto-renewal for this order checkbox in the Payment Options pane. You can also enable or disable auto-renewal later on your Account page on the Subscriptions tab by setting the Auto Renewal Status to Off. If you have auto-renewal disabled for a product, you are automatically subscribed to the free plan when your subscription for a paid plan expires.

Payments and Invoices

Skyvia charges you in the following cases:
- When you upgrade from a less expensive or a free plan to a more expensive one.
- If you have auto-renewal set to On, when your subscription to a paid plan expires.
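The upgrade pricing described under Changing Pricing Plan above (the new plan's price credited with the unused days of the current plan) can be sketched as a proration. Skyvia does not publish the exact formula, so this linear day-based credit is purely an assumption, and the function and parameter names are illustrative:

```python
# Hypothetical proration sketch (the docs only say the current plan price and
# the days left are taken into consideration; the exact formula is not published).
def upgrade_charge(new_plan_price, old_plan_price, days_left, days_in_period=30):
    # Credit the unused fraction of the current plan against the new plan's price.
    credit = old_plan_price * days_left / days_in_period
    return max(new_plan_price - credit, 0.0)
```

Under this assumed model, upgrading halfway through a period from a $30 plan to a $100 plan would cost $85, and a downgrade is never refunded (the charge never goes below zero).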
Some Data Integration plans (Standard, Professional, and Enterprise) allow processing additional records over the plan's monthly limit. You are charged for these records at the end of the subscription month. You must explicitly enable this feature on your account page to load extra records. For example, if you paid for a subscription on January 8th and used Skyvia to process more records than the pricing plan includes till February 8th, then on February 8th you are charged for these additional records. When you have auto-renewal enabled, the price of these records is added to the subscription renewal payment. If you have auto-renewal disabled, or if you have paid for the subscription several months ahead, you are automatically charged the price of these records as a separate payment (still on February 8th in our example). For every payment, an invoice is created. The Invoices tab displays your invoices for the last 30 days. You can click an invoice to open its details or print it out. You can set the time period for displaying your invoices by clicking the Last 30 Days list, which allows selecting another period from the available options, or by clicking Calendar. In Calendar, you can either select certain dates or change the time period manually by entering the required dates in the box. We do not limit you; you are free to change any parameter in the box - year, month, day, even minutes or seconds." }, { "url": "https://docs.skyvia.com/account-management/usage-summary.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Usage Summary

Each account has a [Subscription Limit](https://docs.skyvia.com/account-management/subscription-limits-and-plans-in-more-details.html) that defines the number of records available for processing in Data Integration, tasks in Automation, and traffic in Connect.
While using the Data Integration, Automation, and Connect products, you consume your subscription limits. Usage Summary shows how many resources you have already used and how many you still have left. It provides statistics for a specific period, groups them by workspace or integration, and breaks them down by days or months. Note that usage data is available starting from September 6, 2022. To view the usage statistics, do the following:
- In Skyvia, click your user icon in the top right corner of the page.
- Select Account.
- Select Usage Summary in the left menu.

Get Statistics for a Period

Usage Summary allows getting statistics for a specific period. By default, it displays the records used during the current billing period. Click the selected period to change the preset type and select the date range. There are three preset types available: Billing, Relative, and Calendar.
- Billing - gets statistics for your billing period. The billing period is individual and not related to calendar periods. For example, if you purchased or upgraded the subscription on the 7th of December, your monthly billing period starts on the 7th day of every month.
- Relative - gets the usage statistics starting from 7, 30, 90, or 365 days back from the current date.
- Calendar - displays the usage statistics for a calendar week, month, or year. You can select the current or last year's calendar periods.

Use Custom Date Range at the bottom to specify custom dates for the usage report.

Details by Integration or Workspace

The Group by feature allows getting the total statistics by account, workspace, or integration. Use Group by to see which workspaces and integrations consume your resources. By default, the Group by value is set to None, displaying the total number of resources used by the account. You can open an integration directly from the usage summary page; just click the integration Id.
The statistics are available even for deleted objects. In this case, the report displays only the integration Id without its title.

Breakdown by Interval

With Interval you can split the statistics by the selected value. Use Interval to get the used resources per specific day, month, or year. By default, the Interval value is set to None, and Skyvia displays the total value." }, { "url": "https://docs.skyvia.com/account-management/workspace-roles.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Account Management Workspace Roles

A workspace role defines permissions for a user to perform certain tasks in the workspace. Skyvia offers the possibility either to assign standard roles with a predefined set of permissions or to create and assign custom roles with individual permissions. As roles allow you to manage who can do what in the workspace, teams can collaborate more easily and effectively. However, you also need to consider carefully which role you assign to other users in the workspaces, since workspace administrators, for example, are able to remove other users (including you) and manage or delete workspace objects or the entire workspace itself. Please note that workspaces can be created by account admins only. When you create a workspace, you become a workspace administrator by default. You can invite another user to the workspace and transfer your workspace admin rights to this user. The workspace admin can later assign standard or custom roles to other users to control their work in the workspace, depending on the tasks the workspace admin wants each user to be responsible for.

Standard Roles

Skyvia offers 4 standard workspace roles: Administrator, Developer, Member, and Supporter. Each of these roles is preset and cannot be deleted. You can view them on the Workspace Roles tab.
- Administrator can manage workspace settings and workspace objects (create, update, delete), invite or delete users from workspaces, and assign workspace roles to them.
- Developer has the same rights as Administrator, except the rights to manage workspace settings, invite or delete users from workspaces, or assign workspace roles to them.
- Member cannot manage the workspace or workspace objects (create, edit, delete) but can execute already created integrations, backups, and queries; a member can also view or download integration logs and view backed-up data.
- Supporter can view or download integration logs to help customers resolve their problems. The checkboxes next to other permissions are disabled for this role.

Click the More Options icon next to each standard role to view its permissions or to create a new (custom) role based on the current (standard) one.

Custom Roles

Skyvia also allows you to create and assign custom roles to invited users, granting them individual permissions depending on the tasks they are supposed to fulfill. To create a custom role, click the corresponding button on the right. In the opened window, enter a role name, turn on the toggles next to the permissions you want to grant, and click Save. When a new custom role is created, it is added to the bottom of the role list. Later, you can edit the role name or permissions anytime you want, or delete the custom role completely. If the custom role is deleted for any reason, the user is not. The user stays in the workspace and retains the permissions given to him/her with this role. Instead of the deleted custom role, the user will see custom permissions in the workspace role column.

Adding Users to Workspaces

To assign a workspace role to a user, first you need to switch to the workspace you want to add the user to. You can do it in 2 ways:
- By clicking the workspace you need on the Select Your Workspace page. You will be transferred to the workspace page. The workspace can be the default one or can have a specific name you have chosen before.
Click the workspace name in the top menu and click Settings in the drop-down list.
- By going to a certain account. On the Account page, switch to the Workspaces tab. Here, click the More Options icon next to the corresponding workspace, select Change settings in the drop-down list, and click Users.

You can add users to the workspace and assign their workspace roles by clicking + Add User. You can assign either standard or custom roles. Custom roles should be created beforehand on the Workspace Roles tab. Assigning user roles allows delegating responsibilities, managing workflows, and providing overall workspace security and proper collaboration among users.

Deleting Users from Workspaces

To delete a user from a certain workspace, you need to switch to this workspace. On the workspace settings page, click Users and click the icon next to the users you want to delete from the workspace. Later you can easily add users back to the workspace and assign the same or another role to them. Please note that if you are a workspace admin, you can remove yourself from the workspace as well and later add yourself back by clicking the + Add User button." }, { "url": "https://docs.skyvia.com/agents.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Agents

For connecting to on-premise databases located in local networks or on users' computers behind a firewall, Skyvia offers the Agent application. You install this application on your computer and start it. The application connects to Skyvia and serves as a secure gateway between Skyvia and a local database server. The agent application connects to Skyvia and exchanges data via the HTTPS protocol. To use the agent, outbound connections to https://agents.skyvia.com via the HTTPS port 443 must be allowed. This port is usually open by default in most systems, so in most cases you don't need to do anything.
If you have already installed the Agent application and want to create a connection, see Agents Connections.

Managing Agents

To use the Agent application, you first need to create an Agent in Skyvia. The Agent represents an instance of the Agent application running on a specific computer. It has a unique Key that is used to identify a specific agent when connecting. If you need to install the agent application on multiple computers to connect to databases in different local networks, you need to create multiple Agents. Like other Skyvia objects, agents are available in the object list, and you can manage them just like other Skyvia objects: organize them into folders, edit or delete them, view their dependencies, etc. By clicking a certain agent, you are transferred to its details. Here you can download the Agent application and key, test the agent, edit its name, or delete it.

Creating and Using Agents

To create an Agent, click + NEW in the top menu and select Agent (on the left). The Agent is immediately created, and its details are opened. Here you can provide a meaningful name instead of the default 'Untitled' (just click the name to edit it), download the Agent application installer, and obtain its Key. Download the Agent application installer and install it. Then you can either download the agent key as a file and place it in the folder where the Agent application is installed, or copy it and specify it in the Agent application with a GUI. The key file must be named skyvia_security_agent.key. If you download keys multiple times, the browser may rename the files to avoid overwriting already existing ones. In such a case, rename the file back in the Agent application folder.

Agent Applications

On a 64-bit operating system, the Agent application is installed to the %programfiles(x86)% directory (by default, C:\Program Files (x86)).
When you install the Agent application, four executable files are installed:

| File Name | Description |
| --- | --- |
| Skyvia.Agent.Client.x64.exe | Console application, 64-bit |
| Skyvia.Agent.Client.UI.x64.exe | Application with GUI, 64-bit |
| Skyvia.Agent.Client.exe | Console application, 32-bit |
| Skyvia.Agent.Client.UI.exe | Application with GUI, 32-bit |

Applications with GUI allow you to manually enter the key instead of loading it from the downloaded file. The entered key is saved to the skyvia_security_agent.key file in the folder where the Agent application is installed. It is saved when the agent connects to Skyvia, and if the file already exists, it is overwritten. Console apps can be installed as a service, as described below. Both console and GUI applications come in 32-bit and 64-bit versions. Usually, you can run any of them and won't notice the difference, but when using ODBC connections, it is important to use the application of the same bitness as the installed ODBC driver that you want to use.

Running Agent as a Service

Console Skyvia Agent apps can be installed as a service. To do this, run the agent with the -i command line argument:

```shell
Skyvia.Agent.Client.exe -i
```

Note that the command should be run from the agent application folder. Then you can start the service like this:

```shell
net start Skyvia.Agents.Client
```

To unregister the service, run the agent application with the -u command line argument:

```shell
Skyvia.Agent.Client.exe -u
```

How Agent Works

The Agent application is installed on a computer in the same local network in which your database server is running. The Agent application must be running and have access to the Internet in order to work. After the Agent is started, it connects to the Skyvia service and keeps this connection. Since it initiates the connection itself, it does not need a dedicated external IP.
When Skyvia connects to a local database via Agent, it sends the corresponding request to the Agent, which connects to the database and performs all the database interaction inside the local network where it is installed. This is why you should specify database connection parameters when creating an agent connection as if you were connecting from the computer where the Agent is installed. The Agent sends the requested data to Skyvia and receives the data to load into the database encrypted, over the HTTPS protocol. This protocol is usually allowed in firewalls, so the Agent usually requires no additional firewall settings. Besides, the Agent's protocol is often better optimized for exchanging data over the Internet than database protocols: it uses coarser-grained operations and performs fewer roundtrips between the Agent and Skyvia.

Proxy Settings for Agent

If the computer you install the Agent on is connected to the Internet via a proxy server, you need to configure your agent and set up proxy parameters. This can be done by editing the Skyvia.Agent.Client.exe.config file in the agent installation directory and providing the necessary values for parameters in the appSettings section:

```xml
<appSettings>
  <add key="proxyHost" value="" />
  <add key="proxyPort" value="" />
  <add key="proxyUsername" value="" />
  <add key="proxyPassword" value="" />
</appSettings>
```

You need to specify the following values:

- proxyHost - the domain name or IP address of your proxy server, by which it can be accessed from the computer where the Agent is installed.
- proxyPort - the proxy port to use.
- proxyUsername - if your proxy server requires authentication, the username to use for authentication on the proxy server.
- proxyPassword - if your proxy server requires authentication, the password to use for authentication on the proxy server.
For example (with placeholder sample values):

```xml
<appSettings>
  <add key="proxyHost" value="proxy.example.com" />
  <add key="proxyPort" value="3128" />
  <add key="proxyUsername" value="jdoe" />
  <add key="proxyPassword" value="secret" />
</appSettings>
```

API Reference

The Skyvia API allows users to programmatically create and manage Skyvia resources such as accounts, agents, automations, backups, connections, endpoints, integrations, and workspaces. All the endpoints available in the Skyvia API require an API token, which must be passed in the Authorization header of each request. To generate an API token, go to API Settings in your Account Settings. Check the API settings documentation to learn how to generate an API token and get acquainted with the available scopes. You can also find a Swagger version of the API documentation at https://api.skyvia.com/swagger/index.html

Resources

Below are the route descriptions, grouped by resource name: Account, Agents, Automations, Backups, Connections, Endpoints, Integrations, Workspaces.

Account

List Account Users

GET /v1/account/users

Lists all users in the account. This endpoint supports filtering and pagination to retrieve a subset of users based on the query parameters provided.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| searchMask | string | query | A filter by name or email. | None |
| take | int32 | query | The number of users to return in the response. | 20 |
| skip | int32 | query | The number of users to skip before returning results. Used for pagination. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of users in the account. |

Response Example:

```json
{
  "data": [
    {
      "id": 0,
      "email": "string",
      "fullName": "string",
      "type": "Administrator",
      "workspaces": [
        {
          "workspaceId": 0,
          "roleName": "string",
          "roleId": 0
        }
      ]
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/account/users?take=20&skip=0" \
  -H "Authorization: "
```

Delete Account User

DELETE /v1/account/users

Deletes a user from the account based on the provided email. The email must be provided in the request body to identify which user to delete.

Request Body

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| email | string | body | The email of the user to delete. |

Request Body Example:

```json
{
  "email": "string"
}
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the user was deleted. |

Request Example

```shell
curl -X DELETE "https://api.skyvia.com/v1/account/users" \
  -H "Authorization: " \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com"
  }'
```

List Account Invitations

GET /v1/account/invitations

Lists all pending invitations in the account. This endpoint supports pagination to retrieve a subset of invitations based on the query parameters provided.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| take | int32 | query | The number of invitations to return in the response. | 20 |
| skip | int32 | query | The number of invitations to skip before returning results. Used for pagination. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of pending invitations in the account. |
Response Example:

```json
{
  "data": [
    {
      "id": 0,
      "email": "string",
      "type": "Administrator",
      "workspaces": [
        {
          "workspaceId": 0,
          "roleName": "string",
          "roleId": 0
        }
      ],
      "invitationDate": "2024-10-02T14:15:30Z",
      "userId": 0,
      "fullName": "string"
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/account/invitations?take=20&skip=0" \
  -H "Authorization: "
```

Create Account Invitation

POST /v1/account/invitations

Sends an invitation to a new user to join the account. The request body must include the user's email, user type, and the workspace details.

Request Body

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| email | string | body | The email address of the user to invite. |
| userType | string | body | The type of the user (Administrator, Member). |
| workspaces | array | body | A list of workspaces the user is invited to. |
| workspaces[].workspaceId | int32 | body | The workspace ID where the user is invited. |
| workspaces[].roleId | int32 | body | The role ID assigned to the user in the workspace. |

Request Body Example:

```json
{
  "email": "johnDoe@email.org",
  "userType": "Member",
  "workspaces": [
    {
      "workspaceId": 123,
      "roleId": 3
    }
  ]
}
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the invitation was successfully sent. |

Response Example:

```json
{
  "email": "string",
  "status": "string",
  "invitationId": 0
}
```

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/account/invitations" \
  -H "Authorization: " \
  -H "Content-Type: application/json" \
  -d '{
    "email": "johnDoe@email.org",
    "userType": "Member",
    "workspaces": [
      {
        "workspaceId": 123,
        "roleId": 3
      }
    ]
  }'
```

Resend Account Invitation

POST /v1/account/invitations/{invitationId}/resend

Resends an invitation to the specified user. The invitation is identified by the invitationId provided in the URL path.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| invitationId | int32 | path | The ID of the invitation to resend. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the invitation has been successfully resent. |

Response Example:

```json
{
  "email": "string",
  "status": "string",
  "invitationId": 0
}
```

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/account/invitations/123/resend" \
  -H "Authorization: "
```

Delete Account Invitation

DELETE /v1/account/invitations/{invitationId}

Deletes a pending invitation from the account. The invitation is identified by the invitationId provided in the URL path.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| invitationId | int32 | path | The ID of the invitation to delete. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the invitation was deleted. |

Request Example

```shell
curl -X DELETE "https://api.skyvia.com/v1/account/invitations/123" \
  -H "Authorization: "
```

Agents

List Agents

GET /v1/workspaces/{workspaceId}/agents

Lists all agents within a specified workspace. This endpoint supports pagination to retrieve a subset of agents based on the query parameters provided.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| take | int32 | query | The number of agents to return in the response. | 20 |
| skip | int32 | query | The number of agents to skip before returning results. Used for pagination. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of agents in the workspace. |
Response Example:

```json
{
  "data": [
    {
      "id": 0,
      "name": "string",
      "key": "string"
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/agents?take=20&skip=0" \
  -H "Authorization: "
```

Get Agent Details

GET /v1/workspaces/{workspaceId}/agents/{agentId}

Retrieves the details of a specific agent in a workspace, identified by agentId.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| agentId | int32 | path | The agent ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified agent. |

Response Example:

```json
{
  "id": 0,
  "name": "string",
  "key": "string"
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/agents/456" \
  -H "Authorization: "
```

Test Agent

POST /v1/workspaces/{workspaceId}/agents/{agentId}/test

Tests the specified agent in a workspace to ensure it's functioning.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| agentId | int32 | path | The agent ID to test. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the result of the agent test. |

Response Example:

```json
{
  "message": "string",
  "refresh": true
}
```

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/agents/456/test" \
  -H "Authorization: "
```

Automations

List Automations

GET /v1/workspaces/{workspaceId}/automations

Lists all automations within the specified workspace. This endpoint supports pagination to retrieve a subset of automations based on the query parameters provided.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| take | int32 | query | The number of automations to return in the response. | 20 |
| skip | int32 | query | The number of automations to skip before returning results. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of automations in the workspace. |

Response Example:

```json
{
  "data": [
    {
      "id": 0,
      "name": "string",
      "triggerType": "Manual",
      "created": "2023-10-02T14:15:15.307Z",
      "modified": "2023-10-02T14:15:15.307Z"
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations?take=20&skip=0" \
  -H "Authorization: "
```

Get Automation Details

GET /v1/workspaces/{workspaceId}/automations/{automationId}

Retrieves the details of a specific automation in a workspace, identified by automationId.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified automation. |

Response Example:

```json
{
  "id": 0,
  "name": "string",
  "triggerType": "Manual",
  "created": "2023-10-02T14:15:15.307Z",
  "modified": "2023-10-02T14:15:15.307Z"
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations/456" \
  -H "Authorization: "
```

Enable Automation

POST /v1/workspaces/{workspaceId}/automations/{automationId}/enable

Enables the specified automation in a workspace, making it active.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID to enable. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the automation was enabled. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/automations/456/enable" \
  -H "Authorization: "
```

Disable Automation

POST /v1/workspaces/{workspaceId}/automations/{automationId}/disable

Disables the specified automation in a workspace, making it inactive.
Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID to disable. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the automation was disabled. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/automations/456/disable" \
  -H "Authorization: "
```

List Automation Executions

GET /v1/workspaces/{workspaceId}/automations/{automationId}/executions

Lists finished executions of a specified automation. Supports filtering by date range and execution status, as well as sorting.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| automationId | int32 | path | The automation ID. | None |
| startDate | string | query | The start date for filtering executions (ISO 8601 format). | None |
| endDate | string | query | The end date for filtering executions (ISO 8601 format). | None |
| failed | boolean | query | Whether to filter by failed executions only. | None |
| take | int32 | query | The number of executions to return in the response. | 20 |
| skip | int32 | query | The number of executions to skip before returning results. | 0 |
| sortOrder | string | query | Specifies how the result records are sorted. Allowed values are 'asc' and 'desc' (without quotes). | asc |
| sortBy | string | query | The field by which the result records are sorted. Allowed values are 'date' and 'executionId' (without quotes). | date |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of execution logs for the automation. |

Response Example:

```json
{
  "data": [
    {
      "executionId": 123456,
      "state": "Succeeded",
      "date": "2023-10-02T14:15:15.307Z",
      "billedTasks": 100,
      "testMode": false
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations/456/executions?take=20&skip=0" \
  -H "Authorization: "
```

Get Active Automation Execution

GET /v1/workspaces/{workspaceId}/automations/{automationId}/active

Retrieves the currently active execution of a specific automation in a workspace, identified by automationId.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the active automation execution. |

Response Example:

```json
{
  "executionId": 9528182,
  "date": "2025-04-02T14:15:15.307Z",
  "state": "Executing",
  "testMode": false
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations/456/active" \
  -H "Authorization: "
```

Get Automation Execution Details

GET /v1/workspaces/{workspaceId}/automations/{automationId}/executions/{executionId}

Retrieves detailed information about a specific automation execution.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID. |
| executionId | int64 | path | The execution ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified execution. |

Response Example:

```json
{
  "executionId": 123456,
  "state": "Succeeded",
  "version": 1,
  "testMode": false,
  "comment": "Execution completed successfully.",
  "started": "2023-10-02T14:15:15.307Z",
  "executed": "2023-10-02T14:30:15.307Z",
  "billedTasks": 100,
  "result": "Data processed successfully."
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations/456/executions/123456" \
  -H "Authorization: "
```

Get Automation State

GET /v1/workspaces/{workspaceId}/automations/{automationId}/state

Retrieves the current state of the specified automation, including the execution status and queue information.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| automationId | int32 | path | The automation ID to retrieve state for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the current state of the automation. |

Response Example:

```json
{
  "trigger": {
    "enabled": true
  },
  "queue": {
    "queuedCount": 5
  },
  "executing": {
    "executionId": 789,
    "date": "2023-10-02T14:15:15.307Z",
    "state": "Executing",
    "testMode": false
  },
  "testMode": false
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/automations/456/state" \
  -H "Authorization: "
```

Backups

List Backups

GET /v1/workspaces/{workspaceId}/backups

Lists all backups for the specified workspace. Supports pagination for large data sets.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| take | int32 | query | The number of backups to return in the response. | 20 |
| skip | int32 | query | The number of backups to skip before returning results. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of backup records in the workspace. |

Response Example:

```json
{
  "data": [
    {
      "id": 123,
      "name": "Workspace Backup 1",
      "created": "2023-10-02T14:15:15.307Z",
      "modified": "2023-10-02T15:15:15.307Z",
      "scheduled": true
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups?take=20&skip=0" \
  -H "Authorization: "
```

Get Backup Details

GET /v1/workspaces/{workspaceId}/backups/{backupId}

Retrieves details of a specific backup, identified by the backupId.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified backup. |

Response Example:

```json
{
  "id": 123,
  "name": "Workspace Backup 1",
  "created": "2023-10-02T14:15:15.307Z",
  "modified": "2023-10-02T15:15:15.307Z",
  "scheduled": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups/456" \
  -H "Authorization: "
```

List Snapshots

GET /v1/workspaces/{workspaceId}/backups/{backupId}/snapshots

Lists all created snapshots for the specified backup in a workspace. Supports filtering by date range and status.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| backupId | int32 | path | The backup ID to list snapshots for. | None |
| startDate | string | query | The start date for filtering snapshots (ISO 8601 format). | None |
| endDate | string | query | The end date for filtering snapshots (ISO 8601 format). | None |
| failed | boolean | query | Whether to filter by failed snapshots only. | None |
| take | int32 | query | The number of snapshots to return in the response. | 20 |
| skip | int32 | query | The number of snapshots to skip before returning results. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of snapshots for the backup. |
Response Example:

```json
{
  "data": [
    {
      "snapshotId": 789,
      "queueTime": "2023-10-02T14:15:15.307Z",
      "startTime": "2023-10-02T14:20:15.307Z",
      "endTime": "2023-10-02T15:15:15.307Z",
      "state": "Succeeded",
      "runBySchedule": true
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups/456/snapshots?take=20&skip=0" \
  -H "Authorization: "
```

Create Snapshot

POST /v1/workspaces/{workspaceId}/backups/{backupId}/snapshots

Creates a snapshot for the specified backup in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID to run the snapshot for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the snapshot creation status. |

Response Example:

```json
{
  "runId": 123,
  "date": "2023-10-02T14:15:15.307Z",
  "duration": 300,
  "state": "Running",
  "successRows": 1000,
  "errorRows": 0
}
```

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/backups/456/snapshots" \
  -H "Authorization: " \
  -H "Content-Type: application/json"
```

Get Snapshot Details

GET /v1/workspaces/{workspaceId}/backups/{backupId}/snapshots/{snapshotId}

Retrieves details of a specific snapshot for the backup in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID for which the snapshot details are being requested. |
| snapshotId | int32 | path | The snapshot ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified snapshot. |

Response Example:

```json
{
  "snapshotId": 789,
  "queueTime": "2023-10-02T14:15:15.307Z",
  "startTime": "2023-10-02T14:20:15.307Z",
  "endTime": "2023-10-02T15:15:15.307Z",
  "state": "Succeeded",
  "log": "Backup completed successfully.",
  "runBySchedule": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups/456/snapshots/789" \
  -H "Authorization: "
```

Get Active Snapshot Details

GET /v1/workspaces/{workspaceId}/backups/{backupId}/snapshots/active

Retrieves the details of the active snapshot for the specified backup.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID for which to retrieve the active snapshot execution. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the active snapshot execution. |

Response Example:

```json
{
  "runId": 123,
  "date": "2023-10-02T14:15:15.307Z",
  "duration": 600,
  "state": "Running",
  "successRows": 1000,
  "errorRows": 0
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups/456/snapshots/active" \
  -H "Authorization: "
```

Get Backup Schedule Status

GET /v1/workspaces/{workspaceId}/backups/{backupId}/schedule

Retrieves the schedule status of the specified backup in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID to retrieve the schedule status for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the schedule for the specified backup. |

Response Example:

```json
{
  "active": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/backups/456/schedule" \
  -H "Authorization: "
```

Enable Backup Schedule

POST /v1/workspaces/{workspaceId}/backups/{backupId}/schedule/enable

Enables the backup schedule for the specified backup in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID to enable the schedule for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the backup schedule is enabled. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/backups/456/schedule/enable" \
  -H "Authorization: "
```

Disable Backup Schedule

POST /v1/workspaces/{workspaceId}/backups/{backupId}/schedule/disable

Disables the backup schedule for the specified backup in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| backupId | int32 | path | The backup ID to disable the schedule for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Confirms that the backup schedule is disabled. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/backups/456/schedule/disable" \
  -H "Authorization: "
```

Connections

List Workspace Connections

GET /v1/workspaces/{workspaceId}/connections

Lists all connections in the specified workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
| --- | --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. | None |
| take | int32 | query | The number of connections to return in the response. | 20 |
| skip | int32 | query | The number of connections to skip before returning results. | 0 |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns a list of connections in the workspace. |

Response Example:

```json
{
  "data": [
    {
      "id": 123,
      "name": "Connection 1",
      "connector": "SQL Server"
    }
  ],
  "hasMore": true
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/connections?take=20&skip=0" \
  -H "Authorization: "
```

Get Connection Details

GET /v1/workspaces/{workspaceId}/connections/{connectionId}

Retrieves details of a specific connection in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| connectionId | int32 | path | The connection ID to retrieve details for. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the details of the specified connection. |

Response Example:

```json
{
  "id": 123,
  "name": "Connection 1",
  "connector": "SQL Server",
  "type": "Direct"
}
```

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/connections/456" \
  -H "Authorization: "
```

Test Connection

POST /v1/workspaces/{workspaceId}/connections/{connectionId}/test

Tests the specified connection in the workspace to verify that it is functioning.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| connectionId | int32 | path | The connection ID to test. |

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the result of the connection test. |

Response Example:

```json
{
  "message": "Connection successful",
  "refresh": false
}
```

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/connections/456/test" \
  -H "Authorization: "
```

Endpoints

List Endpoint Types

GET /v1/endpoints/types

Lists all available endpoint types.

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/endpoints/types" \
  -H "Authorization: "
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the list of endpoint types. |

Response Example:

```json
{
  "OData": 1,
  "Sql": 2
}
```

List Workspace Endpoints

GET /v1/workspaces/{workspaceId}/endpoints

Lists all endpoints in the specified workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| skip | int32 | query | Number of items to skip. |
| take | int32 | query | Number of items to return. |

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/endpoints?skip=0&take=20" \
  -H "Authorization: "
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the list of endpoints. |

Response Example:

```json
{
  "data": [
    {
      "id": 1,
      "name": "OData Endpoint",
      "token": "abcdefg",
      "active": true,
      "type": "OData",
      "created": "2024-10-03T12:34:56Z",
      "modified": "2024-10-04T09:23:12Z"
    }
  ],
  "hasMore": true
}
```

Get Endpoint Details

GET /v1/workspaces/{workspaceId}/endpoints/{endpointId}

Retrieves the details of a specific endpoint in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| endpointId | int32 | path | The endpoint ID. |

Request Example

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/endpoints/456" \
  -H "Authorization: "
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Returns the endpoint details. |

Response Example:

```json
{
  "id": 456,
  "name": "OData Endpoint",
  "token": "abcdefg",
  "active": true,
  "type": "OData",
  "created": "2024-10-03T12:34:56Z",
  "modified": "2024-10-04T09:23:12Z"
}
```

Enable Endpoint

POST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/enable

Enables a specific endpoint in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| endpointId | int32 | path | The endpoint ID. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/endpoints/456/enable" \
  -H "Authorization: " \
  -H "Content-Type: application/json"
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Endpoint enabled successfully. |

Disable Endpoint

POST /v1/workspaces/{workspaceId}/endpoints/{endpointId}/disable

Disables a specific endpoint in the workspace.

Parameters

| NAME | TYPE | IN | DESCRIPTION |
| --- | --- | --- | --- |
| workspaceId | int32 | path | The workspace ID. |
| endpointId | int32 | path | The endpoint ID. |

Request Example

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/endpoints/456/disable" \
  -H "Authorization: " \
  -H "Content-Type: application/json"
```

Responses

| Code | Message | Action |
| --- | --- | --- |
| 200 | OK | Endpoint disabled successfully. |
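All of these routes share one calling convention: an HTTPS request to api.skyvia.com with the API token in the Authorization header. As a sketch, the enable/disable toggle can be scripted with Python's standard library; the IDs below are example values, and the token is a placeholder to be replaced with a real API token:

```python
import urllib.request

API_BASE = "https://api.skyvia.com"

def build_endpoint_toggle(workspace_id: int, endpoint_id: int,
                          enable: bool, token: str) -> urllib.request.Request:
    """Build the POST request that enables or disables an endpoint."""
    action = "enable" if enable else "disable"
    url = (f"{API_BASE}/v1/workspaces/{workspace_id}"
           f"/endpoints/{endpoint_id}/{action}")
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": token, "Content-Type": "application/json"},
    )

# req = build_endpoint_toggle(123, 456, enable=False, token="<your API token>")
# urllib.request.urlopen(req)  # sends the request
```

The sketch only builds the request object; calling urllib.request.urlopen(req) would actually send it.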
### List Endpoint Executions

`GET /v1/workspaces/{workspaceId}/endpoints/{endpointId}/executions`

Lists all executions of the specified endpoint in the workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
|---|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. | |
| endpointId | int32 | path | The endpoint ID. | |
| startDate | date-time | query | Start date for filtering executions. | |
| endDate | date-time | query | End date for filtering executions. | |
| failed | boolean | query | Whether to filter executions by failed status. | |
| skip | int32 | query | Number of items to skip. | |
| take | int32 | query | Number of items to return. | |
| sortOrder | string | query | Specifies how the result records are sorted. Allowed values are 'asc' and 'desc' (without quotes). | asc |
| sortBy | string | query | The field by which the result records are sorted. Allowed values are 'date' and 'executionId' (without quotes). | date |

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/endpoints/456/executions?skip=0&take=20" \
  -H "Authorization: "
```

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the list of executions. |

Response Example:

```json
{
  "data": [
    {
      "executionId": "abc123",
      "date": "2024-10-04T09:00:00Z",
      "method": "GET",
      "failed": false,
      "bytes": 1024,
      "remoteIP": "192.168.0.1",
      "url": "https://api.skyvia.com/v1/endpoints/456/data"
    }
  ],
  "hasMore": true
}
```

### Get Endpoint Execution Details

`GET /v1/workspaces/{workspaceId}/endpoints/{endpointId}/executions/{executionId}`

Retrieves the details of the specified endpoint execution.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| endpointId | int32 | path | The endpoint ID. |
| executionId | string | path | The execution record ID. |

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/endpoints/456/executions/abc123" \
  -H "Authorization: "
```

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the execution details. |

Response Example:

```json
{
  "endpointId": 456,
  "workspaceId": 123,
  "date": "2024-10-04T09:00:00Z",
  "url": "https://api.skyvia.com/v1/endpoints/456/data",
  "method": "GET",
  "remoteIP": "192.168.0.1",
  "userAgent": "Mozilla/5.0",
  "user": "admin",
  "error": null,
  "errorDetails": null,
  "log": ["Step 1: OK", "Step 2: OK"],
  "bytes": 1024,
  "rows": 50
}
```

## Integrations

### List Workspace Integrations

`GET /v1/workspaces/{workspaceId}/integrations`

Lists all integrations in the specified workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
|---|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. | None |
| take | int32 | query | The number of integrations to return in the response. | 20 |
| skip | int32 | query | The number of integrations to skip before returning results. | 0 |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns a list of integrations in the workspace. |

Response Example:

```json
{
  "data": [
    {
      "id": 123,
      "name": "Data Integration 1",
      "type": "import",
      "created": "2023-10-02T14:15:15.307Z",
      "modified": "2023-10-03T14:15:15.307Z",
      "scheduled": true
    }
  ],
  "hasMore": true
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/integrations?take=20&skip=0" \
  -H "Authorization: "
```

### Get Integration Details

`GET /v1/workspaces/{workspaceId}/integrations/{integrationId}`

Retrieves details of a specific integration in the workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID to retrieve details for. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the details of the specified integration. |
Response Example:

```json
{
  "id": 123,
  "name": "Data Integration 1",
  "type": "import",
  "created": "2023-10-02T14:15:15.307Z",
  "modified": "2023-10-03T14:15:15.307Z",
  "scheduled": true
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/integrations/456" \
  -H "Authorization: "
```

### List Integration Executions

`GET /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions`

Lists all finished executions for the specified integration.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION | DEFAULT VALUE |
|---|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. | None |
| integrationId | int32 | path | The integration ID to retrieve executions for. | None |
| startDate | date-time | query | Start date for filtering executions. | None |
| endDate | date-time | query | End date for filtering executions. | None |
| failed | boolean | query | Whether to filter executions by failed status. | None |
| take | int32 | query | The number of executions to return. | 20 |
| skip | int32 | query | The number of executions to skip. | 0 |
| sortOrder | string | query | Specifies how the result records are sorted. Allowed values are 'asc' and 'desc' (without quotes). | asc |
| sortBy | string | query | The field by which the result records are sorted. Allowed values are 'startDate' and 'executionId' (without quotes). | startDate |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns a list of integration executions. |

Response Example:

```json
{
  "data": [
    {
      "runId": 123,
      "date": "2023-10-02T14:15:15.307Z",
      "state": "Succeeded",
      "successRows": 100,
      "errorRows": 0
    }
  ],
  "hasMore": true
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/integrations/456/executions?take=20&skip=0" \
  -H "Authorization: "
```

### Run Integration

`POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions`

Runs the specified integration in the workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the details of the execution. |

Response Example:

```json
{
  "runId": 0,
  "date": "2024-10-04T09:53:28.371Z",
  "duration": 0,
  "state": "New",
  "successRows": 0,
  "errorRows": 0
}
```

**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/integrations/456/executions" \
  -H "Authorization: " \
  -H "Content-Type: application/json"
```

### Get Active Integration Executions

`GET /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/active`

Lists active integration executions in the specified integration (having the status Queued, Running, or Canceling).

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID to retrieve active executions for. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns a list of active integration executions. |

Response Example:

```json
{
  "runId": 123,
  "date": "2023-10-02T14:15:15.307Z",
  "state": "Executing",
  "successRows": 100,
  "errorRows": 0
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/integrations/456/executions/active" \
  -H "Authorization: "
```

### Cancel Integration Execution

`POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/cancel`

Cancels the active execution of the specified integration.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID for which the active execution is being canceled. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The active execution is successfully canceled. |
**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/integrations/456/executions/cancel" \
  -H "Authorization: "
```

### Kill Integration Execution

`POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/executions/kill`

Forces termination of the active execution of the specified integration.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID for which the active execution is being killed. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The active execution is successfully killed. |

**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/integrations/456/executions/kill" \
  -H "Authorization: "
```

### Get Integration Schedule Status

`GET /v1/workspaces/{workspaceId}/integrations/{integrationId}/schedule`

Retrieves the schedule status of the specified integration. Note that the request executes successfully only if the integration has a schedule defined, whether enabled or disabled.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID to retrieve the schedule for. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the schedule details of the integration. |
| 204 | No Content | Schedule is not defined for the integration. |

Response Example:

```json
{
  "active": true
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/integrations/456/schedule" \
  -H "Authorization: "
```

### Enable Integration Schedule

`POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/schedule/enable`

Enables the schedule for the specified integration. A schedule must be defined for the integration; otherwise, the call results in a 404 error.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID for which the schedule is being enabled. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The schedule for the integration is enabled. |
| 404 | Not Found | The schedule for the integration is not defined. |

**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/integrations/456/schedule/enable" \
  -H "Authorization: "
```

### Disable Integration Schedule

`POST /v1/workspaces/{workspaceId}/integrations/{integrationId}/schedule/disable`

Disables the schedule for the specified integration.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| integrationId | int32 | path | The integration ID for which the schedule is being disabled. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The schedule for the integration is disabled. |

**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/integrations/456/schedule/disable" \
  -H "Authorization: "
```

## Workspaces

### List Workspaces

`GET /v1/workspaces`

Lists all available workspaces.

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns a list of available workspaces. |

Response Example:

```json
[
  {
    "id": 123,
    "name": "Workspace 1",
    "isPersonal": false
  }
]
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces" \
  -H "Authorization: "
```

### Get Workspace Details

`GET /v1/workspaces/{workspaceId}`

Retrieves the details of a specific workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID to retrieve details for. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns the details of the specified workspace. |

Response Example:

```json
{
  "id": 123,
  "name": "Workspace 1",
  "isPersonal": false
}
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123" \
  -H "Authorization: "
```

### List Workspace Users

`GET /v1/workspaces/{workspaceId}/users`

Lists all users in the specified workspace. This endpoint supports filtering based on a search mask.
**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID to retrieve users for. |
| searchMask | string | query | Filter users by part of their name or email. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | Returns a list of users in the specified workspace. |

Response Example:

```json
[
  {
    "id": 123,
    "email": "user@example.com",
    "fullName": "User Name",
    "roleId": 3,
    "roleName": "Administrator"
  }
]
```

**Request Example**

```shell
curl -X GET "https://api.skyvia.com/v1/workspaces/123/users" \
  -H "Authorization: "
```

### Add Workspace User

`POST /v1/workspaces/{workspaceId}/users`

Adds a user to the specified workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |

**Request Body**

| NAME | TYPE | DESCRIPTION |
|---|---|---|
| userId | int32 | The ID of the user to add to the workspace. |
| roleId | int32 | The role ID to assign to the user in the workspace. |

Request Body Example:

```json
{
  "userId": 456,
  "roleId": 3
}
```

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The user was successfully added to the workspace. |

**Request Example**

```shell
curl -X POST "https://api.skyvia.com/v1/workspaces/123/users" \
  -H "Authorization: " \
  -H "Content-Type: application/json" \
  -d '{"userId": 456, "roleId": 3}'
```

### Remove Workspace User

`DELETE /v1/workspaces/{workspaceId}/users/{userId}`

Removes a user from the specified workspace.

**Parameters**

| NAME | TYPE | IN | DESCRIPTION |
|---|---|---|---|
| workspaceId | int32 | path | The workspace ID. |
| userId | int32 | path | The ID of the user to be removed from the workspace. |

**Responses**

| Code | Message | Action |
|---|---|---|
| 200 | OK | The user was successfully removed from the workspace. |

**Request Example**

```shell
curl -X DELETE "https://api.skyvia.com/v1/workspaces/123/users/456" \
  -H "Authorization: "
```
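The integration endpoints earlier in this reference (Run Integration and Get Active Integration Executions) naturally combine into a run-and-wait pattern: start a run, then poll the active-executions endpoint until the run leaves the Queued/Running/Canceling states. The sketch below illustrates this under stated assumptions: `run_and_wait` and `SkyviaStub` are invented names, and the stub replaces the real HTTP layer, which would send the documented requests with your Authorization header.

```python
import time

def run_and_wait(client, workspace_id, integration_id, poll_seconds=5, sleep=time.sleep):
    """Start an integration run (POST .../executions) and poll
    .../executions/active until the run is no longer active."""
    run = client.post(f"/v1/workspaces/{workspace_id}/integrations/{integration_id}/executions")
    run_id = run["runId"]
    while True:
        active = client.get(f"/v1/workspaces/{workspace_id}/integrations/{integration_id}/executions/active")
        if not active or active.get("runId") != run_id:
            return run_id  # run left the Queued/Running/Canceling states
        sleep(poll_seconds)

# Stub standing in for real HTTP calls, for illustration only.
class SkyviaStub:
    def __init__(self):
        self.polls = 0
    def post(self, path):
        return {"runId": 7, "state": "New"}  # shape from the Run Integration example
    def get(self, path):
        self.polls += 1
        # report the run as active for the first two polls, then finished
        return {"runId": 7, "state": "Executing"} if self.polls < 3 else {}

print(run_and_wait(SkyviaStub(), 123, 456, sleep=lambda s: None))  # 7
```

After the wait returns, List Integration Executions (filtered by date or `failed`) can be used to fetch the final state and row counts of the finished run.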
## Automation

Automation helps you connect your favorite apps and services and build complex workflows to automate repetitive manual tasks. Its purpose is to simplify your work and let you focus on the important things, so you can achieve more in less time.

### What to Expect From Automation?

Here are a few examples of what you can do with automation:

- Create a task in your project management tool from a new support ticket or email.
- Create new orders in your accounting or inventory management system from recent sales in your e-commerce platform.
- Move business data between systems on a schedule.

### How It Works

Skyvia offers a wide range of connectors to applications and data sources. Automation uses connectors to access data and apply actions.

The starting point of each automation is a trigger. It defines when to start the process. There are manual, on-schedule, and event-based triggers. Besides triggers and actions, automation provides other components to build complex logic behind each action and form a multi-step automation flow.

### Learn More About Automation

- How to build your first automation.
- What a trigger is and what trigger types are available.
- A detailed look at actions and other components used to build automation.
- How to run your automation manually, on a schedule, or based on an event.
- How to work with data in automation.

## Advanced Features

This part of the automation documentation is dedicated to features that are not required for building or using automation but can extend its possibilities and improve your automation usage experience. Coming Soon…

## Building Automation

The building automation part of the documentation describes the building blocks of the automation flow, the ways to interact with them, guides you through working with data, and more. Here is what you will learn if you spend a few minutes exploring:

- What triggers are and what trigger types exist.
- What components are available in automation, and how to use them.
- How to work with data, what scope and output are, and more.

## Components

To build automation, you need to create an automation flow. Each automation flow consists of building blocks called components. It always starts with Trigger, followed by Action, and ends with Stop. You can create complex multi-step conditional logic by adding more components to the automation flow. Below is a description of each automation component available in Skyvia.

### Action

The Action component is what makes the automation happen. It executes an operation over data from the chosen connection. Each connection has its own list of available actions. Use actions to get data, transform it, and transfer it to the destination point. To learn more about actions and their settings, visit the Actions section.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You need to create an opportunity in Salesforce each time a new deal is added to HubSpot. | Add a connection type trigger that checks for new records in the Deals object in HubSpot. Add an action that grabs the data from HubSpot and an action that imports data from HubSpot to Salesforce. |

### If

The If component sets a condition and splits the automation flow into two branches. If the input data meets the condition, the True branch is executed; otherwise, the automation flow goes to the False branch.
To create a complex condition, open Expression Editor by selecting the If component and clicking the wand icon. The result of the condition should always be a boolean value.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You need to check if a customer has made a purchase in the last 30 days. The name of the field that stores the required information is PurchaseDaysPassed. | Set a condition: `PurchaseDaysPassed<=30`. If a customer made a purchase during the last 30 days, the True branch is executed; otherwise, the False branch. |

### Foreach

The Foreach component is used to iterate over a list of items. It allows you to apply the same actions to each item in the list. To choose a list of items to work with, select it from the List dropdown in the Foreach settings. Use the As box to set an alias name for the item.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a list of customers and a mailing list. You need to add customers to the mailing list only if today is their birthday. | Add a Foreach component that accepts the list of customers. Add an If component inside the Foreach loop that checks if today is the customer's birthday. Assign an action that adds a customer to the mailing list to the True branch of the If component. |

### Break Loop

The Break Loop component allows you to stop a loop (for example, Foreach) at a specific point or based on a specific condition.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a list of customers. You need to know if there is a customer from Germany among them. | Add a Foreach component that accepts the list of customers. Add an If component inside the Foreach loop that checks if Germany is the customer's location. Add the Break Loop component to the True branch. In this case, the loop stops once the first German customer is found, as there is no further need to iterate over the remaining items in the list. |

### Parallel

The Parallel component allows you to perform two or more actions simultaneously.
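The Foreach + If + Break Loop combination described above behaves like an ordinary loop with an early exit. A minimal Python sketch of the Break Loop scenario (the customer data is invented for illustration):

```python
# Customers list standing in for the Foreach input (invented sample data).
customers = [
    {"name": "Ann", "location": "France"},
    {"name": "Bob", "location": "Germany"},
    {"name": "Eve", "location": "Germany"},
]

found = None
for customer in customers:                  # Foreach over the list
    if customer["location"] == "Germany":   # If component (True branch)
        found = customer["name"]
        break                               # Break Loop: remaining items are skipped

print(found)  # Bob
```

As in the use case, only the first matching customer is processed; Eve is never visited because the loop has already stopped.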
Use it when you need to execute several independent branches at the same time.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a new customer. You need to send a welcome email, create a new account for that customer, and set yourself a reminder to make a call on Monday. | Add a Parallel component and create three branches inside it. Assign the email action to the first branch, the account action to the second branch, and the reminder to the third one. As these actions are not interconnected, they can run in parallel to speed up the execution. |

### Try Catch

The Try Catch component allows you to handle errors. It consists of two branches: the Try branch, which tries to execute an action, and the Catch branch, which catches an error if the Try branch fails. The Catch branch can also contain actions that are executed if the Try branch fails.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You need to import a customer from one system to another. If the import fails, a notification email should be sent. | Add a Try Catch component. Assign an import action to the Try branch and the send email action to the Catch branch. If the import fails, you will receive an error message, and a notification email will be sent. |

### Exception

The Exception component can be used inside and outside the Try Catch component. When used outside Try Catch, Exception triggers an error, and the automation flow stops. When used inside the Try Catch component, it switches the execution to the Catch branch. You can set an error message in the Exception settings. It will be displayed in the Execution Details in case of automation failure.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a customer. You need to create an account for them only if their age is 18 or higher and send them an email only if the account was created. Otherwise, the automation should be treated as failed. | Add an If component that checks if the customer's age is >= 18. Assign the create account action to the True branch and Exception to the False branch. Optionally, set an error message in the Exception settings. Add the send email action after the If component. In this case, an account will only be created if the customer's age is >= 18, and an email will be sent only if the account is created. If the age is under 18, the automation will fail, and you will receive an error message in the Execution Details. |

### Stop

Each automation ends with a Stop component. Additionally, you can use Stop to end the automation execution at any point. Note that if you use Stop in one of the branches of the Parallel component, all parallel branches will finish their execution before the automation stops.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a customer. You need to create an account for them only if their age is 18 or higher and send them an email only if the account was created. | Add an If component that checks if the customer's age is >= 18. Assign the create account action to the True branch and Stop to the False branch. Add the send email action after the If component. In this case, an account will only be created if the customer's age is >= 18, and an email will be sent only if the account is created. Unlike the same situation with an Exception component, the automation will be treated as successful. |

### Set Variable

The Set Variable component allows you to set values of the declared variables. It can be useful for performing calculations and for accessing the inner scope of an array from the outside.

**Use Case Scenario**

| Task | Solution |
|---|---|
| You have a list of potential leads in your CRM. You need to calculate a male/female ratio and add it to your database. | Add a Foreach component that iterates over the list of customers. Add an If component inside Foreach that checks whether the potential lead is male or female. Set a variable for each case that counts the number of leads. After Foreach, set a male/female ratio variable that checks the values of the count male and count female variables and calculates the ratio. Add an action to upload this data to your database. |

## Managing Components

When building your automation, you operate with components on the automation flow diagram. Let's explore how to edit, copy, paste, delete, and skip them.

### Editing

To edit a component, click on it on the diagram. This opens a settings pane. Each automation component has a name and a unique set of settings for you to configure.

### Copying

Use copy to replicate an existing component within the same automation flow. To copy a component:

1. Hover your mouse over a component and click the button to open the context menu.
2. Choose Copy from the list of options.

### Pasting

After copying a component, you can paste it below or above any component in the automation flow (except for places where the copied component is not applicable). To do so, choose the Paste Above or Paste Below option from the context menu.

### Deleting

Deleting a component removes it from the automation flow and cannot be undone. To delete a component, select Delete from the component's context menu.

### Skipping

When you do not want a specific component to remain part of your automation flow, but you do not want to delete it either, you can skip it. In this case, the automation flow acts as if the component doesn't exist. To skip a component, choose Skip Component from the context menu. You can unskip it at any point.

### Using Hotkeys

You can use the following hotkeys on your keyboard instead of the context menu:

| Hotkey | Operation |
|---|---|
| Ctrl + C | Copy |
| Ctrl + V | Paste Below |
| Ctrl + Alt + V | Paste Above |
| Del | Delete |
## Triggers

Trigger is the starting point of every automation. It looks for a specific event or condition to happen and launches the automation flow when it occurs. The automation can be run manually, on a schedule, or based on an event. There is a trigger type for each case.

### Trigger Types

Skyvia provides Manual, Connection, Run on Schedule, and Webhook trigger types. Choose one of them to define what type of event triggers the automation.

### Manual

Choose Manual to start automation on demand by pressing Run on the automation management pane.

### Run on Schedule

Choose Run on Schedule to set a schedule for the automation runs or create a delayed run.

**Schedule Settings**

| Setting | Description |
|---|---|
| Recurrence | Includes two options, One-Time and Recurring, to either set a delayed start or create a schedule. |
| Run Every | Defines the frequency of the automation runs. |
| Time | Defines the time of the day to run the automation. |
| Starting | Defines the starting date of the schedule. |
| Time Zone | Required parameter. Defines the time zone to line up with a schedule. |

### Connection Triggers

A Connection trigger is set up to monitor a specific event or condition in the chosen data source. When the event occurs, the trigger detects it and creates a queued execution. If there is no active execution, it starts the automation. You can check and modify the queue and status of executions on the monitor tab. Connection triggers are connector-specific; trigger availability depends on the chosen connection.

**Connection Trigger Types**

Currently, only the Polling Trigger type is available in Connection triggers. Polling triggers check the data source for events according to the time interval specified in the Polling Interval setting. For example, the New Record trigger below checks the Account object for new records every 30 minutes and compares the start date of the check with the record's CreatedDate value.

If it finds a record that appeared since the last check, Skyvia creates a queued execution for each new record and adds it to the end of the queue. If there are no active executions, Skyvia runs the automation. If there is an active execution, newly added executions run according to their position in the queue. You will get the values of the selected columns in the Trigger payload.

When the check starts, Skyvia creates a current check timestamp, looks for the updates since the last check, and makes the current check timestamp the starting point for future checks. When you disable the automation, the polling trigger stops checking for the events. Next time you enable the automation, it starts checking from either the last check date or the last event found, depending on the connector used in your automation.

**Connection Trigger Settings**

Trigger settings differ from trigger to trigger and also depend on the chosen connector. For example, the New Record trigger will have a different set of settings for SQL Server and Salesforce connections.

**Poll Interval**

Each polling trigger has a poll interval setting that determines the time interval between the event checks. Note that the shorter the interval, the more API calls are involved.

**Trigger Condition**

Each connection trigger has a Trigger condition setting. Use it to filter and validate your events before the automation execution. To set up a trigger condition, select Expression and use Expression Editor. Columns that you choose in the trigger settings will be available as properties in the Expression Editor.

If the expression is not valid, you will receive an error message notification. For example, if you forget to enclose your string value in quotes, you will receive an error, as Skyvia treats Hire as an object in this case.

You may also face a case when the expression is valid, but an error appears while the expression processes the data. For example, the `MyAge == int32(Age)` expression is valid, but if Age returns an invalid string, you will receive an error during the automation execution, and it will be completed with a failed status. To track the failure reason, use the Debug page. In the case described above, you will see a trigger-related error.

### Webhook Triggers

The Webhook trigger allows you to create a webhook URL and send data to it from any external app or service. Once Skyvia receives your request, it triggers the automation. This approach enables real-time workflow automation.

**Setting Up a Webhook Trigger**

To start configuring a Webhook trigger, select it and click Start setup. The setup process consists of three steps: generating a webhook URL, receiving a webhook, and checking the results.

First, generate a webhook URL that is used to receive requests from your apps. A webhook URL consists of two parts: a Base URL and an event name. The Base URL is created automatically by Skyvia. You can recreate the URL manually at any time by clicking Generate new URL. The second part is an Event name, and it is set by you manually. Make sure you do not use special symbols or spaces in the event's name, as it becomes part of the URL.

Now that the webhook URL is generated, you can start receiving requests at that address. Click Copy to copy the webhook URL and paste it into your app. Once Skyvia receives a webhook at the specified URL, it automatically detects the data schema, and you are transferred to the third step. Here you can check the data schema. Note that if one of the parameters is null, Skyvia determines its data type as string. Once ready, click Finish.

Once the setup is done, you can check and configure your trigger settings.

**Webhook Trigger Settings**

You can edit your request's headers, query parameters, and payload in the webhook trigger settings. After editing, check the results on the Output tab. You will be able to access all parameters displayed on the Output tab while building your automation flow.

**Trigger Payload**

Each trigger can have its payload generated by the event. To return field values to the automation scope, select them from the columns dropdown. In the example below, the ID, Name, and Rating values will be added to the automation scope. You can check the payload in the Trigger's output.

## Working With Data

In this topic, we describe how components operate with data inside the automation flow. Before you start exploring, get familiar with the following concepts:

- Actions and triggers are the only components that add data to the automation flow. Other components use this data or transform it.
- The data is added to the scope through the component's output.

### Component's Output

Triggers and Actions can return data. To check this data, click Output in the component's settings. There are two possible output data types: array and object. The output data type depends on the trigger or action you use. For example, the Insert action always returns an object, while Execute Command always returns an array.

Let's check an example of a simple automation that gets Products data from SQL Server, inserts it into the Salesforce Contact object, and returns the ids of the inserted records. To create this automation, we use a trigger, two actions, and Foreach. For demonstration purposes, we use a manual trigger that starts the automation whenever we press the Run button.

The first action is Execute Command. It executes the `Select * from Products` query. To check what fields are returned by the action, we go to the Output tab.
All data from the component's output is added to the scope and shared with subsequent components via the pipeline, so the second action can operate with this data. Our second action is Insert. It uses the data from the first action's output and inserts it into the Contact object in Salesforce. We select the Returning option to receive the ids of the inserted products in the output. We can later use these ids to insert them into our SQL Server or for any other purposes. The returning output is also displayed on the Output tab.

### Disabling Component's Output

If you disable a component in the automation flow, you also disable access to its output.

### Scope

Scope represents all the data that can be accessed by a specific component and is shared between components via the pipeline. Here is an example. In the automation flow shown above:

- Action_3 has access to the data added to the scope by all other actions.
- Action_6 has access to the data added to the scope by Action_2 and Action_1.
- Action_6 doesn't have access to the data added to the scope by Action_3, Action_4, and Action_5.

Only components that come before Action_6 in the automation flow can share their scope with it.

### Inner Scope

Foreach and Try/Catch components use inner scope. The data that is available inside the Foreach loop and the Catch branch of the Try/Catch component is not available to components that follow them in the automation flow. Here is an example. In the automation flow shown above:

- Action3 has access to the data added to the scope by Action2, Action1, and Trigger.
- Action4 has access to the data added to the scope by Action1 and Trigger.
- Action4 doesn't have access to Action2 or Action3, as they return data to the inner scope of Foreach.

To get access to the inner scope data from outside, create a variable. Use Set Variable inside the inner scope to assign a value to the variable. Variables are visible to all components throughout the automation flow.
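The Set Variable pattern for reaching inner-scope data can be pictured with the male/female ratio use case from the Components topic. In the Python sketch below (lead data invented for illustration), the counters assigned inside the loop remain visible after it ends, just as variables set inside Foreach are visible to the rest of the automation flow, while the loop item itself is not.

```python
# Invented sample leads standing in for the Foreach input.
leads = [{"gender": "male"}, {"gender": "female"},
         {"gender": "male"}, {"gender": "male"}]

count_male = 0     # declared variables: visible throughout the flow
count_female = 0
for lead in leads:                  # Foreach: `lead` lives in the inner scope
    if lead["gender"] == "male":    # If component
        count_male += 1             # Set Variable on the True branch
    else:
        count_female += 1           # Set Variable on the False branch

ratio = count_male / count_female   # computed after Foreach, outside the inner scope
print(ratio)  # 3.0
```

The variables carry the per-item information out of the loop; without them, nothing computed on individual `lead` items would be reachable after Foreach completes.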
Using Data In automation, you can use data to map fields, assign it to variables, and set conditions. Using Data to Map Fields Transferring data from one data source to another is a common case in Automation. To ensure that the data is transferred consistently, you need to map fields between objects in data sources using the Mapping Editor and Expression Syntax . Here is an example: In the automation flow shown above, we get the list of customers from the Customers object in one data source and insert each customer\u2019s shipping address and company name to the Contact object in another data source. For the sake of simplicity, let\u2019s call the initial data source Source, and the destination data source Target. We use the Mapping Editor to map fields because the company\u2019s name is stored in a CompanyName field in Source, and in a Company field in Target. Moreover, the shipping address in Source is stored in two different fields: City and Address . We use an expression to concatenate those two fields and map the expression\u2019s result to the ShippingAddress in Target. Note: When mapping fields, pay attention to the field types. Skyvia will try to convert the types if possible, but do not expect to insert a boolean value into a Date/Time field. Using Data to Set Conditions When you build an automation, you can navigate the automation flow execution based on conditions with the help of components such as If. The Expression Editor helps you create expressions that will be evaluated to a boolean value. Let\u2019s check a data-driven condition example: In the automation flow shown above, we get a list of customers and iterate over this list with the help of the Foreach component. We set a condition to insert customer records to Contact only if the customer\u2019s city is London. Assign Data to Variables With the help of the Set Variable component, you can assign values to variables to store data during the Automation execution, perform calculations, and get access to the inner scope. 
For example, you can count the number of elements in the array. Accessing Array\u2019s Data Execute Command always returns an array. The default way to operate with arrays is to use the Foreach component. There are cases when you know that Execute Command will return a single element in the array. We implemented a way to access this element without overwhelming the automation flow with Foreach components. You can access the properties of the first element of the array the same way you access an object\u2019s properties. For example, if you have an array of product objects called Products, and it consists of a single product, you can access the name of the product by using Products.Name." }, { "url": "https://docs.skyvia.com/automation/getting-started/", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Getting Started The getting started part of the documentation is here to make you familiar with the automation basics, so you can start using automation as soon as possible. It will equip you with the knowledge of: The key terms and concepts used in automation. The basics of the automation user interface . How to create your first automation ." }, { "url": "https://docs.skyvia.com/automation/getting-started/creating-your-first-automation.html", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Getting Started Creating Your First Automation In this chapter, we provide a step-by-step guide to a simple automation example \u2014 adding Mailchimp subscribers as leads from a certain Mailchimp audience to a Salesforce campaign. Establishing Connections To create this automation, you need the connections to Salesforce and Mailchimp. To create a Salesforce connection, click Create New , select Connection , and look for Salesforce. Then select the environment and sign in to Salesforce. 
For more detailed instructions, see the Salesforce connector topic. To create a Mailchimp connection, click Create New , select Connection , and look for Mailchimp, then sign in with your Mailchimp credentials. For more detailed instructions, see the Mailchimp connector topic. Configuring Automation Settings To create a new automation, click Create New and select Automation . First, you need to specify the automation Name and Location in your workspace and select the automation Trigger type . Trigger type determines how the automation is triggered. We select the Connection trigger, which fires when Skyvia detects a new or updated record in a data source. Setting Up a Trigger Each automation starts with Trigger and ends with Stop. They appear on the diagram once you create a new automation. Let\u2019s configure our trigger to get new Mailchimp subscribers. Select the Trigger on the diagram. In the Connection list, select your Mailchimp connection. In the Trigger list, select New Record . In the Table list, select the ListMembers object. In the Columns list, we need to select the fields to obtain for new subscribers. This list allows you to select multiple fields one by one. For the purpose of our tutorial, select the following fields: Id , Email , First Name , Last Name , Phone Number , ListId . Checking If Lead Exists Let\u2019s set up this automation to add new Mailchimp subscribers to Salesforce leads. The first step is to check if a lead with such an email already exists in Salesforce. Add the first Action component by dragging it to the plus sign between Trigger and Stop. To configure this Action, perform the following steps: Click the Action on the diagram. In the Connection list, select your Salesforce connection. Set Action to Execute Command . Enter this statement into the Command Text field: SELECT Id FROM Lead WHERE Email = :Email . Click the Email parameter in the Parameters list. 
Under Properties expand { } trigger and click Id to map the parameter to the input Email property. You can also specify the action name to make the automation diagram more understandable. Here is the result: Adding Condition Now let\u2019s configure a condition to only process subscribers who are not already present in Salesforce. Add an If component after the Action. To configure it, enter isnull(action_1.Id) in its Condition box. You can also rename the If component. Inserting Leads to Salesforce Add another Action component to the True branch of the If component. This Action will insert leads to Salesforce. Perform the following steps to configure it: In the Connection list, select your Salesforce connection. Set Action to Insert . In the Table list, select Lead . In the Returning list, select Id . Click any parameter in the Parameters list and map the parameters like this: Parameter Mapped Value FirstName trigger.\u201dFirst Name\u201d LastName trigger.\u201dLast Name\u201d Email trigger.\u201dEmail\u201d Phone trigger.\u201dPhone Number\u201d Salesforce requires the Company field to be filled for a lead. However, depending on Mailchimp settings, the subscribers object may not have the corresponding data. For the Company parameter, you can specify some constant value, for example, \u2018Mailchimp import\u2019 . Finally, let\u2019s name the Action component. Here is the result: Adding Leads to Campaign Now let\u2019s add components that add subscribers from a specific Mailchimp list to a Campaign. For this, we need to add another If component to check whether a subscriber belongs to the selected list and an Action component to insert the corresponding records to the CampaignMember Salesforce object. Specify the following condition in the If component: trigger.ListId==\u2019\u2019, where you should replace _ with the actual ID of your Mailchimp list. You can get an ID of your Mailchimp audience in the audience settings in Mailchimp or using our Query tool . 
To configure the newly added Action component, perform the following steps: In the Connection list, select your Salesforce connection. Set Action to Insert . In the Table list, select CampaignMember . Click any parameter in the Parameters list and map the parameters like this: CampaignId to the ID of your Salesforce campaign to add subscribers to. Quote the value with single quotation marks, for example, \u20187012G000001mDZzQAM\u2019 LeadId to trigger.Id Here is the result: Click Save to save the automation. Testing Automation Now you can check how the new automation works. Click Test Mode . Then, in the Execution pane on the left, click Start test . Skyvia will wait for the connection event to trigger execution. Add a subscriber to Mailchimp to trigger the automation in the test mode and wait for the automation to check for the events (new Mailchimp subscribers) or click Check now . Let\u2019s assume our test failed. You can immediately see which components executed successfully, where the error occurred, and which components didn\u2019t trigger at all on the diagram. Click Show details in the Execution pane to see the details of the error. In this case, the first Action component failed, and the error is caused by an expired token in the Salesforce connection. After fixing errors, you can immediately rerun the test with the same data by clicking Repeat execution . This time we can see that the automation succeeded. 
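The decision logic built in this tutorial can be summarized in a short Python sketch. This is a simplified model only: the `existing_leads` dictionary and helper names are hypothetical stand-ins for the Execute Command, If, and Insert components, not Skyvia API calls, and the audience ID is an assumed placeholder.

```python
# Simplified model of the tutorial's flow (hypothetical helpers, not Skyvia's API).
TARGET_LIST_ID = "my-audience-id"     # assumed Mailchimp audience ID
CAMPAIGN_ID = "7012G000001mDZzQAM"    # your Salesforce campaign ID

def process_subscriber(subscriber, existing_leads, campaign_members):
    # Execute Command: SELECT Id FROM Lead WHERE Email = :Email
    lead_id = existing_leads.get(subscriber["Email"])
    # If: isnull(action_1.Id) -- only process subscribers not yet in Salesforce
    if lead_id is None:
        # Insert into Lead with Returning Id; Company gets a constant value
        lead_id = f"lead-{len(existing_leads) + 1}"
        existing_leads[subscriber["Email"]] = lead_id
        # If: trigger.ListId matches the selected Mailchimp audience
        if subscriber["ListId"] == TARGET_LIST_ID:
            # Insert into CampaignMember
            campaign_members.append({"CampaignId": CAMPAIGN_ID, "LeadId": lead_id})

leads, members = {"old@example.com": "lead-0"}, []
process_subscriber(
    {"Email": "new@example.com", "ListId": "my-audience-id"}, leads, members
)
```

A subscriber with a new email is inserted as a lead and, if it belongs to the target audience, added to the campaign; a subscriber whose email already exists is skipped entirely, which is exactly what the `isnull(action_1.Id)` condition achieves in the flow.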
It consists of a trigger that is followed by the chain of automation components, such as actions and conditions, that allow you to build a complex multi-step execution logic, handle errors, and more. Triggers Every automation flow starts with a trigger. The trigger defines when to run the automation. There are several types of triggers that allow you to start the automation manually, on schedule, or based on a specific event. To learn more about triggers, visit the Triggers topic. Components Components are the building blocks of the automation flow. They are used to execute actions, set conditions, create branches, handle errors, and navigate the automation flow execution. To learn more about components, visit the Components topic. Actions Actions either get the data or send it to the data source. By chaining actions and mixing them with the other automation components, you choose what happens after the trigger sends a signal to start the automation. Actions are connection-specific, so different data sources may contain a different list of available actions. To learn more about actions, visit the Actions topic. Connections Automation accesses data in data sources through connections. Before building an automation flow, make sure to establish a connection to each data source involved. Skyvia offers a wide selection of pre-built connectors to provide access to all your favorite apps and data storages. To learn more about connections, visit the Connections section. Scope Components inside the automation flow operate with data. Scope defines which data can be accessed by a component at a specific point of the automation flow. To learn more about scope, visit the Scope chapter. Output The component\u2019s output is the data returned by the component that can later be accessed by other components. To learn more about output, visit the Component\u2019s Output chapter." 
}, { "url": "https://docs.skyvia.com/automation/getting-started/ui-basics.html", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Getting Started UI basics In this topic, we showcase the automation product\u2019s main User Interface elements and describe how to interact with them. New Automation Window The New Automation window is the first thing you see when creating your automation. Here you set the automation\u2019s name, trigger type, and the workspace and folder to store it. Once those are set, click Start Building to enter the Edit Mode . Edit Mode Edit mode is designed for building and editing your automation flow. Here you can access the tools and elements such as the Components Pane, Automation Flow Diagram, Component Settings, Flow Inspector, and more. All of them are described in detail below. Components Pane The Components pane is located on the left. By default, it shows you only the component icons to free up space for the Automation Flow diagram. To check the component\u2019s details, hover your mouse over the Components pane. The Components pane contains all the building blocks for you to create an automation flow. To add components to the automation flow, drag them from the components pane to your flow on the diagram. If you want to keep the detailed view of the components, click on the pin icon on the upper right. Automation Flow Diagram In the middle of the screen, you see a diagram with the automation flow. You can move around the diagram by clicking on any free spot and moving your mouse. You can zoom in and out using the + and - buttons. When you hover your mouse over a component, the \u2026 menu appears. Here you can choose to copy, paste, delete, or skip your component. Component Settings To access the component\u2019s settings, click on it on the automation flow diagram. Component Settings will appear on the right side of the screen. 
This pane is used to customize your components and can be used to select connections, choose actions, set conditions, and more. To learn about the settings of a specific component, visit the Components chapter. Flow Inspector The Flow Inspector is located on the bottom left and is designed to assist you with real-time error tracking. If an error is found, it is highlighted in the Flow Inspector alongside the name of the component causing it. Click on the component\u2019s name in the Flow Inspector to open its settings. Top Menu Bar The Top Menu Bar allows you to save the changes you\u2019ve made in the Edit Mode, access the automation Overview page, and manage Variables and Connections in your automation flow. Overview On the Overview page, you can change the name of your automation, create a description for it, and access the [Monitor and Log](https://docs.skyvia.com/automation/operating-automation/monitoring.html) pages." }, { "url": "https://docs.skyvia.com/automation/operating-automation/", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Operating Automation The operating automation part of the documentation guides you through the possible interactions with automation after you create one." }, { "url": "https://docs.skyvia.com/automation/operating-automation/managing-automation-versions.html", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Operating Automation Managing Automation Versions The indicator on the Edit Mode toggle shows if there are unsaved changes in your automation. Each time you save the automation, Skyvia creates and stores a new version of it. To check available versions, select your automation and click Versions . Here you can check details of each version, filter versions by date, leave comments, and restore a chosen version. 
Checking Version Details In the versions list, each version shows the user who made the changes, a timestamp, and the change type associated with that version. To check the version details, select it from the versions list. A version viewer will open where you can analyze the automation flow and check each component\u2019s settings. Restoring a Version To restore a previous version of the automation, select it from the versions list and click Restore this version . This will add one more version to the versions list and assign a current flag to it. Commenting a Version You can add comments to the automation versions that act as descriptions of the changes made to the automation. It is useful to leave comments so you and your teammates can find a version with specific changes easily. Note that you can add comments to a version only if you have rights to manage workspace objects. To add a comment, click on the comment box, enter your comment, and click \u2714. To update the comment, click the Reset button first. Filtering Versions To quickly find a specific version, apply filters by date range by clicking the date on the upper left and selecting one of the proposed options." }, { "url": "https://docs.skyvia.com/automation/operating-automation/monitoring.html", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Operating Automation Monitoring Automation Executions Once you have created an automation, it can be run manually or based on a specific event, depending on the trigger type used. Manual triggers require you to click Run to execute the automation. Connection, Schedule, and Webhook triggers execute the automation when a specific event occurs. Use the enable/disable toggler to start or stop checking for the trigger events. To view the results and analyze details of the automation executions, use the Monitor and Log tabs. 
Monitor You can check the results of the last five automation executions and the status of the current automation execution on the Monitor tab. Monitoring Connection Triggers If there are no active executions, you will see the next event check timer. You can force Skyvia to check for events by clicking Check for events now . Note that if your Automation is enabled but the connection is no longer valid, you will receive an error message and an additional email notification. If no events were detected after the check, the timer resets according to the trigger\u2019s polling interval value. If events were detected, Skyvia creates a queued execution for each detected event and runs them one by one. The Monitor tab provides you with the Execution ID, State, and the number of the queued executions. Here you can cancel the current execution and discard the queued ones. When you disable the automation, it saves the executions queue. When you restart the automation, it starts with the queued execution on which you stopped before. The lifespan of the queued executions is 60 days. You can discard queued executions at any point by clicking discard queued. Log To check all the completed automation executions, use the Log tab. Here you can apply filters by date and state. Execution Details To check the automation execution details, select the preferred execution from the list on either the Monitor or Log tab. The Details pane provides access to the following information: Detail Description Execution ID Unique identifier of the automation execution in Skyvia. Date Date and time of the automation execution. State State of the automation. Possible values: Executing, Succeeded, Failed, Cancelling, Canceled. Version The version of the automation flow. Goes up in number each time you save changes to your automation. Billed Tasks The number of tasks used during the automation execution. Error If an error occurred during the automation execution, the error message will be displayed here. 
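The event-queueing behavior described in the Monitor section can be modeled with a small Python sketch. This is an illustration under assumed semantics, not Skyvia code: each detected event becomes a queued execution, queued events are no longer treated as new, and disabling the automation preserves the queue.

```python
# Toy model of connection-trigger event queueing (assumed semantics).
class AutomationQueue:
    def __init__(self):
        self.queue = []      # queued executions, run one by one
        self.seen = set()    # events already turned into executions
        self.enabled = True

    def check_for_events(self, events):
        for event in events:
            if event not in self.seen:   # a queued event is not "new" anymore
                self.seen.add(event)
                self.queue.append(event)

    def run_next(self):
        if self.enabled and self.queue:
            return self.queue.pop(0)
        return None

q = AutomationQueue()
q.check_for_events(["e1", "e2", "e3"])  # three events -> three queued executions
q.run_next()                            # processes "e1"
q.enabled = False                       # disabling keeps the remaining queue
q.enabled = True                        # re-enabling resumes where it stopped
```

After re-enabling, the model resumes with the executions that were still queued, and re-detecting an event that was already queued does not create a duplicate execution.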
Each execution has its state. You can check it on the Details pane or in the list of executions on the Log and Monitor tabs. Debug Viewer Debug Viewer enables you to check the automation execution at any point of the automation flow. Use it to check for errors in a specific component and to check the component\u2019s Input and Output data values. Components with errors are highlighted in red , components without errors are highlighted in green , disabled and skipped components are highlighted in grey . Errors If there is an error in the component, you can check its details by selecting the component and going to the Error tab. If the error appeared inside the Try/Catch component, and you have an Exception with an error message inside the Catch branch, this error message will be displayed in the execution details. To check the original error message, go to the Error tab of the component that caused the error. Input and Output Data Values You can check the Input and Output data values of the Trigger, Action, and Set Variable components. Note: you cannot access the Input and Output of the disabled or skipped components as they do not interact with data. Debugging Foreach Loop When debugging a Foreach loop, the values you see in Input and Output of the Action component are values from the last iteration of the loop. If you have multiple conditional branches within a Foreach loop, you will receive different values in Input and Output of the components that are located in different branches. Those are the values from the last iteration over the array where the condition for the branch was met." }, { "url": "https://docs.skyvia.com/automation/operating-automation/testing.html", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Operating Automation Testing Automation To test your automation while building it or when you face any errors, use Test Mode . 
It enables you to test automation without using billed tasks or navigating to the automation overview. For example, if automation fails on a specific payload, you can disable it, enter the Test Mode , locate the failed execution, and debug it to examine what caused the failure. Test Mode Overview To enter Test Mode , select it at the top while building your automation. If there are no previous test executions found, Skyvia will offer to run your first test execution. If you open Test Mode and make changes to your automation flow before running the test, you will need to save the changes first. The Test Mode behavior depends on the trigger type. If you are using a Manual or Schedule trigger type, starting the test will execute your automation in Test Mode immediately. If you are using Connection or Webhook trigger types, after you run the test, Skyvia begins to wait for an event and starts the test only when the event is detected. After the initial test run is complete, you have two options: Repeat execution and Start new test . For the Manual and Schedule trigger types, both options will run the current automation flow. There is a difference for the Connection and Webhook triggers: if you choose Repeat execution , Skyvia will run a test using data from the previous execution; if you choose Start new test , Skyvia will begin waiting for a new event. History of Executions Click Show History to view the history of the executions of your automation. Use the Test and All buttons to either show all executions or only the test ones. You can re-run the executions by selecting them from the list on the History panel and clicking Repeat Execution . As you can repeat the test execution any number of times, Skyvia captures each repetition and stores it in the Execution History. Click See repeats to view all the repetitions of a specific execution. 
Note that when you run a test using a flow version that differs from the current flow version, Skyvia does not load that flow version automatically. Click Load flow version to load the proper flow version and display status indicators and input/output data. Queued Executions Found When you test the automation and Skyvia finds queued executions, you can choose to either start the test on the first one or clear the queue and start waiting for a new event. You may face this situation, for example, if there were 20 queued executions and you disabled your automation after it processed only 10 of them. After an execution is placed in the queue, Skyvia stops treating the associated event as a new event. The Clear queue button discards all pending executions. When you re-enable your automation after clearing the queue, Skyvia will not detect events associated with the discarded executions. Stop Test You cannot enable your automation while the test is in progress. To enable the automation, stop the test by pressing Stop test while in the test mode or on the automation overview page." }, { "url": "https://docs.skyvia.com/automation/tutorials/", "product_name": "Automation", "content_type": "Documentation", "content": "Product: Automation. Documentation Automation Tutorials We\u2019ve gathered the list of the most common real-case scenarios and created a step-by-step guide for each of them, so you can look here for some help or check how others use automation. If you are looking for something specific and cannot find it in the list, feel free to email us your tutorial ideas . Coming Soon\u2026" }, { "url": "https://docs.skyvia.com/backup/", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Skyvia Backup allows you to back up data from supported cloud applications automatically on a schedule or anytime manually. 
Backed up data can be viewed, exported, or restored directly to a data source via a web browser, from the Skyvia interface. Base Terms In Skyvia, a backup is a configuration determining what and when to back up. It specifies the data source to back up data from, the set of data source objects to back up with additional settings, such as a set of fields to back up and filter conditions, and the backup schedule. A backup \u201cruns\u201d on schedule or is started manually, creating snapshots . A snapshot contains all the backed up data for the specified date and time, according to the settings specified in your backup. Scheduled backup is available for paid subscriptions only. Users on free subscriptions can run backup manually. Backing Up Data With Skyvia, you can set up automatic backup that runs up to once per day. Whenever necessary, you can also back up data manually. Skyvia always performs a full backup , so each snapshot contains all the data selected in the backup. When you set up a backup, you can select what data to back up. You can select cloud objects to back up data from, exclude some of their fields from backup, if necessary, and even configure filters to back up only data matching certain conditions. Backed up data is stored securely, in an encrypted form, in the Microsoft Azure cloud. For more details, please read Backup Security . Viewing and Exporting Data Backed up data can be easily viewed and exported from a web browser. After opening your backup, you can navigate its snapshots, open an object in any of the snapshots, and view its data in the browser. Whenever necessary, you can also export backed up data to a CSV file to store it locally. For more details, please read Working with Backed Up Data . Restoring Data You can restore data simply by opening a snapshot, selecting records or whole objects, and then clicking the Restore button. 
Skyvia then provides you with a choice of operations you can do with the selected data \u2014 insert it into the data source, update the corresponding records in the data source to the values from the snapshot, or delete them from the data source (NOT from the snapshot). You can restore separate records, whole cloud objects, or all the data. Skyvia Backup is able to restore records with their relations. You just need to select all the necessary related records and restore them together in one restore operation. For more details, please read Restoring Data . Alternatively, you can switch to the Changed view and compare the current snapshot with the previous ones. The displayed changes can be simply selected and undone with the same Restore button. For more details, please read Comparing Snapshots and Viewing Data Differences . Backup Limitations Skyvia performs backup and restore operations via the API provided by data sources and thus is sometimes limited to the corresponding API functionality. If a data source API doesn\u2019t allow querying all the data from a specific object, Skyvia cannot back up data from such objects. If a data source API doesn\u2019t allow writing to a cloud object, Skyvia cannot restore data of such an object. Some data source objects don\u2019t allow specific operations \u2014 either adding records or modifying them or deleting them. In such cases, Skyvia Backup also cannot perform the corresponding operation when restoring data. For more details, please read Backup Limitations and Troubleshooting . Backup Connections Skyvia Backup also allows using backed up data in other Skyvia products \u2014 Data Integration and Query . You can create connections to your backed up data and use them in Data Integration and Query. For example, you may use Query to analyze your backed up data or export only some specific backed up records. 
Data Integration can be used to integrate data with other data sources without adding additional load to the backed up source itself or to perform more custom restore operations. Backup connections are read-only. You cannot modify data in your snapshots. They can be configured to point to a specific snapshot or always to the latest one. Currently, backup connections cannot be used in the Connect product." }, { "url": "https://docs.skyvia.com/backup/backup-limitations.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Backup Limitations Skyvia performs backup and restore operations via the API provided by data sources and thus is sometimes limited to the corresponding API functionality. Here are Skyvia Backup limitations you need to consider: Read-only Tables In some of the supported data sources, some tables are read-only. For example, in Salesforce, the objects with names ending with \u201cFeed\u201d or \u201cHistory\u201d are read-only. Skyvia can back up such tables, but it cannot restore data to such tables, since they are read-only. Tables That Don\u2019t Allow Selecting All Data In some of the supported data sources, some tables don\u2019t support selecting all the records from them. They allow selecting records only by their IDs. There are a number of such tables in Salesforce - ContentFolderItem, ContentFolderMember, IdeaComment, Vote, ListViewChartInstance, etc. Skyvia cannot normally back up records from such tables, unless you specify the IDs of the records to back up in the filter settings . If a backup includes such objects, it will show the corresponding errors for them in your snapshots. Tables with Limited DML Support In some of the supported data sources, some tables do not support all DML operations. For example, a table may support updating and deleting existing records, but not creating new records. The corresponding data source limitations apply to Skyvia Backup too. 
If a table does not support the INSERT operation, Skyvia is not able to restore deleted records, or if a table does not support UPDATE, Skyvia cannot undo record updates, etc. Polymorphic Relations Some cloud applications have polymorphic relations \u2014 foreign keys where the same foreign key field can reference different kinds of objects. For example, in Salesforce the OwnerId field of the Case object can reference the User or Contact object. Skyvia does not allow navigating such relations when viewing data and does not display the related records by such relations. Additionally, in some cases Skyvia cannot restore such relations in a data source when performing a restore operation. It can restore such relations only if the corresponding foreign key field is not required (can have NULL values) and is updatable. For example, Skyvia cannot restore relations of the Dynamics 365 Connections table, which has polymorphic relations on the record1id and record2id fields, which are required. Relation Cycles Skyvia does not fully support restoring relations in case the relations between backed up objects are cyclic. For example, you back up objects A, B, and C. The A object has a reference (foreign key) to B, B has a reference to C, and C has a reference back to A. When Skyvia restores data with such cyclic foreign keys, it omits one of the keys to break the cycle. If there are multiple cycles, Skyvia may omit multiple foreign keys when restoring data. This happens regardless of the number of objects in a cycle. The only exception is a cycle of one object, when the object references itself (for example, an account references a parent account). Skyvia supports restoring such relations. The foreign key to break is selected automatically, based on the structure of objects and foreign keys. The only way to mitigate this limitation is to manually exclude fields of foreign keys that you prefer to break from the backup. 
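The cycle-breaking step can be illustrated with a short Python sketch. This uses a simple depth-first traversal that omits a back edge per cycle and keeps self-references; it is illustrative only, since Skyvia selects the foreign key to omit using its own internal logic.

```python
# Sketch of breaking foreign-key cycles before a restore (illustrative only;
# Skyvia picks the key to omit based on its own heuristics).
def break_cycles(fks):
    """fks: dict mapping object -> set of referenced objects.
    Returns the set of foreign-key edges to omit during restore."""
    omitted, visiting, done = set(), set(), set()

    def dfs(node):
        visiting.add(node)
        for ref in sorted(fks.get(node, ())):
            if ref == node:
                continue                  # self-references are restorable
            if ref in visiting:
                omitted.add((node, ref))  # back edge closes a cycle: omit it
            elif ref not in done:
                dfs(ref)
        visiting.discard(node)
        done.add(node)

    for node in sorted(fks):
        if node not in done:
            dfs(node)
    return omitted

# A -> B -> C -> A forms a cycle; Account references itself (a parent account).
fks = {"A": {"B"}, "B": {"C"}, "C": {"A"}, "Account": {"Account"}}
```

In this sketch, one edge of the A-B-C cycle is dropped so the remaining references form a restorable order, while the Account self-reference is left intact, mirroring the single-object exception described above.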
By manually excluding a foreign key field, you can choose which relation from the cycle not to restore instead of relying on Skyvia\u2019s automatic choice. For this, edit the integration, click the edit task link for the corresponding object, and clear the checkbox for the corresponding foreign key field." }, { "url": "https://docs.skyvia.com/backup/backup-security.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Backup Security Skyvia stores backed up data in secure Azure GRS storage. All your data is stored in an encrypted form, using AES 256-bit encryption, one of the strongest ciphers available. Skyvia uses unique encryption keys for each user, and our engineers don\u2019t have access to them. They only have access to your backup metadata. All interactions between our UI and API are encrypted with SSL, so you don\u2019t need to worry about the security of your data. Since Skyvia is hosted in the Windows Azure cloud, all the latest security updates are always applied. Microsoft works hard to provide customers with the best security and protection for their data, and security is built right into the Azure platform. Geographically, backed up data is stored in West US data centers. For more information, please visit our [Security page](https://skyvia.com/security) and [Privacy Policy](https://skyvia.com/privacy-policy) ." }, { "url": "https://docs.skyvia.com/backup/troubleshooting.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Troubleshooting If any error occurs when taking a snapshot of data source data or performing a restore operation, it is displayed on the Activities tab of the corresponding backup. In addition, you can enable email notifications for data integration and backup errors in your account settings . Finding Information about Errors To see whether your backup is running successfully, click its name in the list of objects to open it.
The status of the most recent snapshot is displayed immediately on the Overview tab. Please note that taking a snapshot is considered successful if at least one object selected in this backup was backed up successfully. Even if getting data from some of the objects failed, the snapshot has the Succeeded status. However, Skyvia also displays the number of errors that occurred, if any. On the Snapshots tab, you can also find information about whether the snapshots were made successfully, and you can click any snapshot to see more details. Activities Information about restore operations and older snapshots can be found on the Activities tab. This tab displays all the snapshot and restore operations of this backup. It also displays the number of errors, if any, to the right of the timeline. The color of the circle on the timeline indicates the status of the operation. Successful operations are displayed as green circles, and failed operations are displayed as red circles. If an operation is currently running, the circle is orange, and an orange dot is displayed to the left of the Activities tab in the tab bar. Operations canceled by the user are displayed as grey circles. You can click any operation in the timeline to see its additional details. For a snapshot, any error messages are displayed in its details. For a restore operation, you can download the detailed error logs from the Details pane too. See Monitoring Backup Activity for more information. Common Error Reasons Connection Problems If a snapshot or restore fails completely, the most common reason is that the connection of your backup became invalid. For example, authentication data in your connection can expire. In case of connection issues, try testing your connection. If it became invalid, edit it and re-enter the necessary parameters or perform web login again.
Insufficient Privileges Some data sources, like Salesforce, allow permitting or forbidding access for different users at the object or even field level. Make sure that you have enough privileges to access the required objects and to edit their data (for restoring). For some data sources, you need to grant the corresponding privileges when creating a connection. API Limitation Skyvia works with cloud apps via their API. For some cloud apps, the API provides limited access to their data. For example, data cannot be read from certain Salesforce objects (for example, ContentFolderItem) without specifying the IDs of the records you want to read. Thus, Skyvia simply cannot retrieve all the records from such objects, and if you add such an object to a backup, there will be an error for this object every time you make a snapshot. Some cloud objects don\u2019t allow writing via API, or allow only some of the data management operations. For such objects, using an unsupported operation when performing data restore causes an error. Since these are data source API limitations, usually there is no workaround. For more details about data-source-specific limitations, see the documentation topics for the corresponding data sources . Also check Backup Limitations for more details. Metadata Changes Data source metadata changes may also cause errors when creating a snapshot. Skyvia does not detect metadata changes automatically, and if you delete an object or a field that is added to a backup, the backup will start failing when creating a snapshot because it will try to get data from an object or field that no longer exists. In this case, you will need to manually edit your backup and exclude such objects/fields from it. See Editing Backup for more information." }, { "url": "https://docs.skyvia.com/backup/working-with-backed-up-data/", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup.
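Since Skyvia does not detect metadata changes automatically, you could run a pre-snapshot check of your own. The sketch below compares the field lists selected in a backup against the current source metadata and flags fields that no longer exist. All names and the shape of the two inputs are hypothetical; Skyvia itself does not perform this check.

```python
# Sketch: flag backed-up fields that no longer exist in the source,
# the situation that makes a snapshot start failing. The inputs are
# hypothetical dicts of object -> set of field names.

def stale_fields(backup_config, source_metadata):
    """Return {object: missing_fields} for fields that were deleted
    in the data source but are still selected in the backup."""
    stale = {}
    for obj, fields in backup_config.items():
        current = source_metadata.get(obj, set())
        missing = set(fields) - current
        if missing or obj not in source_metadata:
            stale[obj] = missing or set(fields)
    return stale

backup = {"Account": {"Id", "Name", "OldField"}, "Case": {"Id", "Subject"}}
source = {"Account": {"Id", "Name"}, "Case": {"Id", "Subject"}}
print(stale_fields(backup, source))  # Account's OldField was deleted
```

Any object or field this check reports would need to be excluded from the backup manually, as described above.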
Documentation Backup Working with Backed Up Data After you have created a backup , you can access all its backed up data via the Snapshots tab. This tab displays the list of all backup snapshots and allows browsing, viewing, and restoring backed up data. Snapshots Period By default, the Snapshots tab displays all the snapshots made by this backup. Just scroll the list down to see older snapshots. Whenever necessary, you can select one of the predefined periods or a custom period in the list above the snapshots. Use Custom range to specify any period you want. You can manually enter start and end dates in the box or click the start date and then the end date in the calendar. To specify a period of several months, use the arrows at the top of the calendar to scroll through the months, navigate to the start date of the period you want to select, and click it. Then, in the same way, navigate to the end date of the necessary period and click it. Viewing and Exporting Data To view or export data, click the required snapshot in the list to open its details. Snapshot Details and Data Export Skyvia displays the snapshot start date, end date, time of the backup run, the total number of backed up records, the number of records modified since the previous snapshot, and the number of errors, if any. It also displays the list of backed up objects and enables snapshot records to be exported to a CSV file. To export data from a backed up object , hover over the needed object and click the Export button to the right of it. Viewing Records To view your backed up data, click an object in the list. Backup displays data in a sortable grid. By default, the data is displayed in the order the data source returned it when creating the snapshot. To sort the data by a column, click the header of this column in the grid. Click it again to change the sorting direction.
The sorting direction indicator can be seen on the right of the column header, while the column data type indicator is displayed on the left. Skyvia displays records in pages, by default \u2014 10 records per page. You can switch between pages and change the number of records per page below the grid. For any record, you can click the corresponding button to view its details. Record Details The Record Details view displays all the fields of a record and all the records referencing the current record (child records). It allows you to quickly navigate relations and select related data in order to restore them. By default, only the first ten fields of the record are displayed. You can display all the fields by clicking the Show More link. Click Show Less to display only ten fields again. If there are records in the current snapshot that reference the current record, they are displayed in the grids below, grouped by object. The grids with related records are titled with the name of the object and the name of the corresponding foreign key field that references the current record. To open record details, click the corresponding button. \nIf a field is a foreign key that references another backed up record, the value of this field is displayed as a link that allows you to navigate to the details of this record. Polymorphic relations (when the same foreign key can reference different objects) are not supported. For example, suppose we have backed up the Salesforce Account, Contact, Case, and User objects. The Contact and Case objects reference Account by the AccountId foreign key field, and the three objects, Account, Contact, and Case, have multiple foreign keys to the User object (the OwnerId, CreatedById, and LastModifiedById fields). In this case, when viewing record details for an Account record, the Record Details page displays all the cases and contacts that reference this account, if any.
The grids with related records are titled with the name of the object and the name of the corresponding foreign key field. In the list of the Account fields above the grids with related records, the OwnerId, CreatedById, and LastModifiedById foreign key field values are links to the details of the corresponding User record. Comparing Snapshots Skyvia supports comparing different snapshots and viewing data differences between them. To compare a snapshot with the previous one, click the Changed tab when viewing the snapshot details or object data. After this, you can also compare the current snapshot with an even older one, view field changes between snapshots in a single record, etc. For more details, read Comparing Snapshots and Viewing Data Differences . Data Search Skyvia offers powerful data search functionality for your backups. You can search your backed up data by a fragment of one of its fields. For example, if you want to find a specific account in your backups, you can find it by a part of its name, address, phone, etc. You can search for data in a specific snapshot or in multiple snapshots, from more recent ones down to older ones. To search for a record, open a snapshot and open the object in which you want to search for the record. Click the search button and enter the search string into the Search box above the grid. Backup displays the records containing the searched string in any field of the searched object. To continue the search in older or newer backups, click the corresponding buttons on the right in the search box. Alternatively, you can search data in snapshots made within a specific date range. Select the specific period in the Date Range box. Skyvia will search for records containing the searched string in any of their fields, from the most recent displayed snapshot down to older ones. It will open the most recent snapshot object data containing the matching records, filtered by the search string.
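The search behavior described above, matching a fragment against any field and walking from newer snapshots toward older ones, can be sketched as follows. The snapshot layout (a list of date/records pairs) is an assumption for illustration only.

```python
# Sketch: search snapshots (newest first) for records containing a
# fragment in any field, stopping at the first snapshot with matches.
# The snapshot layout is hypothetical, not Skyvia's storage format.

def search_snapshots(snapshots, fragment):
    """snapshots: list of (date, records) ordered newest -> oldest;
    each record is a dict of field -> value. Returns the date of the
    most recent snapshot with matches and the matching records."""
    needle = fragment.lower()
    for date, records in snapshots:
        hits = [r for r in records
                if any(needle in str(v).lower() for v in r.values())]
        if hits:
            return date, hits  # most recent snapshot containing matches
    return None, []

snaps = [
    ("2024-03-01", [{"Id": "1", "Name": "Acme Corp", "Phone": "555-0100"}]),
    ("2024-02-01", [{"Id": "1", "Name": "Acme Inc", "Phone": "555-0100"}]),
]
date, found = search_snapshots(snaps, "acme corp")
print(date, found)  # the newest snapshot whose record matches the fragment
```

Continuing the search in an older or newer snapshot, as the buttons in the UI do, amounts to resuming this loop from the next snapshot in either direction.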
Restoring Data You can restore data by selecting records or whole objects and clicking the Restore button. Skyvia then provides you with a choice of operations to perform on the selected data \u2014 insert it into the data source, update the corresponding records in the data source to the values from the snapshots, or delete them from the data source (NOT from Skyvia\u2019s snapshot). You can restore separate records, whole cloud objects, or all the data. Skyvia Backup is able to restore records with their relations. To do that, select all the necessary related records and restore them together in one restore operation. As described above, you can easily browse record relations using the Record Details view. For more details, please read Restoring Data . Deleting Snapshots You can free up storage space by deleting snapshots in several ways. See Managing Storage Space and Deleting Old Snapshots for more information. You can only delete a whole snapshot. You cannot delete specific records within a snapshot." }, { "url": "https://docs.skyvia.com/backup/working-with-backed-up-data/comparing-snapshots-and-viewing-data-differences.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backed Up Data Comparing Snapshots and Viewing Data Differences Skyvia supports comparing snapshots and viewing and undoing the data differences between them. If a backup contains more than one snapshot, when you view the details of a snapshot or view the data of one of the backed up objects, you can see the All and Changed buttons above the grid. The snapshot must not be the oldest snapshot of this backup. The All view displays the backed up data. The Changed view allows you to compare data in different snapshots and see the changes between them.
The Changed view displays the numbers of changed (inserted, updated, and deleted) rows per backed up object when viewing snapshot details , and it displays the data changes between the compared snapshots in the current object when viewing object data. Selecting Snapshots to Compare By default, the Changed view displays data changes from the previous snapshot. Next to the Changed button, you can see the date and time of the snapshot with which the current snapshot is compared. Click the down arrow to the right of this date and time to select another, older snapshot if necessary. This list only displays the snapshots that were displayed on the Snapshots tab before switching to the Changed view. If you want to compare a snapshot with a very old one, ensure both snapshots are present in the selected period on the Snapshots tab. Then scroll down until you see the older snapshot, open the newer snapshot, and click the Changed button. If you have never compared this snapshot pair, the corresponding message will be displayed. Click the Compare Now button to compare the snapshots. It can take some time if a lot of data is backed up. If source column names have changed between snapshots, the Compare operation will not run. You will see a \u2018No changed objects\u2019 message. Record States When you view the changed records since the second compared snapshot, the State column displays the record\u2019s status \u2014 whether it was added, deleted, or updated between snapshots. It can have the following states: Added \u2014 the record is not present in the older snapshot but is present in the newer one. Deleted \u2014 the record is present in the older snapshot and not present in the newer one. Updated \u2014 the record is present in both snapshots, but some of its fields were changed. For updated records, you can see the changed fields on the Record Preview page. To do this, open the record details.
Undoing Data Changes When comparing snapshots, you can select checkboxes for the data changes and then click Restore to undo them. Thus, you can compare two snapshots and then select and undo the changes between them. The operation to apply is selected automatically. For added rows, restore (undoing the addition of the row) means that the row will be deleted from the data source. For deleted rows, restore (undoing the deletion of the row) means that the row will be inserted into the data source. For updated rows, restore means undoing the row update. The values from the older snapshot will be applied. For updated records, you can also select and undo separate field changes . To do this, click the Select Fields link in the Action column and select the fields to restore. Thus, for example, to undo recent changes, you can create a fresh snapshot, compare it with one of the previous snapshots, select the changes you want to undo, and click Restore . This can be useful when you want to revert data in some or all of your cloud objects to a previous state. To do it, you will need to make a new snapshot of the current state of the data, compare this snapshot with the snapshot for the point in time you want to restore the cloud data to, select the changes you want to undo (or all the changes, if you want to restore the cloud application to the previous state completely), and click the Restore button. More details are available in the Restoring Data topic. You can also use this functionality when you need to perform a mass data update or import to your cloud application and want to ensure that this operation won\u2019t break anything. In this case, create snapshots before and after the operation, compare them, and analyze the data changes. If you don\u2019t like them, you can select and undo them."
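The record states and the automatic choice of an undo operation can be sketched as a diff of two snapshots keyed by record Id. The record shapes are hypothetical; the mapping of each state to its inverse operation follows the description above (added becomes Delete, deleted becomes Insert, updated becomes Update with the older values).

```python
# Sketch: diff two snapshots keyed by Id and derive the inverse
# operation that undoing each change would apply. Record shapes
# are hypothetical, not Skyvia's internal representation.

def diff_snapshots(older, newer):
    """older/newer: dict of Id -> record dict. Returns a list of
    (state, id, undo_operation) tuples."""
    changes = []
    for rid in newer.keys() - older.keys():
        changes.append(("added", rid, ("Delete", rid)))
    for rid in older.keys() - newer.keys():
        changes.append(("deleted", rid, ("Insert", older[rid])))
    for rid in older.keys() & newer.keys():
        if older[rid] != newer[rid]:
            # restore re-applies the values from the older snapshot
            changes.append(("updated", rid, ("Update", older[rid])))
    return changes

old = {"1": {"Name": "Acme"}, "2": {"Name": "Globex"}}
new = {"1": {"Name": "Acme Corp"}, "3": {"Name": "Initech"}}
for state, rid, undo in diff_snapshots(old, new):
    print(state, rid, undo)
```

Records present and identical in both snapshots produce no change entry, which matches the Changed view showing only inserted, updated, and deleted rows.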
}, { "url": "https://docs.skyvia.com/backup/working-with-backed-up-data/restoring-data.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backed Up Data Restoring Data When you need to restore data from a backup, open your backup and switch to its Snapshots tab. Then perform the following steps: Open the snapshot with the necessary data, browse its data, and select checkboxes for data you want to restore. You can select checkboxes for separate records or whole objects. Click the Restore button and select the operation to apply for the selected data \u2014 Insert , Update , or Delete . When restoring data while viewing data changes , you don\u2019t need to select the operation manually. Skyvia selects the operation to apply that undoes the selected change: Inserted records are deleted. Deleted records are inserted. Updated records are updated with their previous values. For the Insert operation, you can optionally select another connection to restore data to in the Restore details dialog box. Then simply click Apply , and the data will be restored. Restoring Related Data Suppose you want to restore records with their relations, for example, the Salesforce account and all its contacts. You must select and restore all the required related records in one restore operation . If you restore them separately, the relations may be lost. When a record is inserted in a cloud application, it gets a new Id value. When you restore the related records that reference it with an old backed up Id value, the relation cannot be restored using this old value. When you restore the related records in one operation, Skyvia retrieves the new IDs and builds relations between restored records. Skyvia cannot build relations between the records that have been restored separately. Skyvia allows you to easily navigate by relations when browsing data and quickly select all the related records. 
Hover over the related record and click the corresponding button to open the related record details. For an Account record, this page displays all its related Contacts and other related records. You can navigate to the details of these related contacts and see and select their related records. The foreign keys referencing parent records are displayed as links to their details, so you can easily navigate to them and select parent records, too. You can find more details here . After you select all the necessary records, click Restore . Then select the operation to apply to the selected data - Insert, Update, or Delete - and click Apply . Finding and Restoring a Specific Record If you need to find and restore a specific record, you can use backup search. Skyvia offers powerful data search functionality for your backups. You can search your backed up data by a fragment of one of its fields. For example, if you want to find a specific account in your backups, you can find it by a part of its name, address, phone, etc. You can find more information about backed up data search here . Thus, you can easily find the necessary record and then use the Search in older backup/Search in newer backup buttons near the Search box to navigate between its older and newer versions to find the necessary version of the record. After you find it, select it and click Restore . Then choose the operation to apply to the selected data - Insert, Update, or Delete - and click Apply . Restoring Separate Field Values Skyvia allows restoring even separate field values. If you made some undesired changes to a field and want to restore the previous value from the backup, you can do it in the following way: Find the necessary record using search as described here . Continue Search in older backup until you find the version of the record with the necessary value of the field you want to restore. Click the Search in newer backup button once to open a snapshot that was made after the change you want to revert.
Click the Changed button to see the data changes. It should display the record as updated. Select the checkbox for the changed record. Click Restore to revert the change. Use this procedure when an undesired change was made some time ago, several snapshots have already been made since the change, and you don\u2019t know which snapshot contains the necessary version of the record. If you know exactly when the change was made, open the data of the necessary object in the first snapshot made after the change, find the necessary record using the Search button, and proceed directly to step 4. If no snapshots were made after the change, you will need to create one first. Bulk Restore You can restore all the records in a specific object, or all backed up objects from your backup. When viewing snapshot details , you can select checkboxes for the backed up objects. Then, you can click Restore and select the operation to apply to the records. To select all the objects, select the checkbox in the header of the grid. Skyvia only performs the selected actions for the selected backed up data. It does not restore the data in the source to its original form completely. For example, it does nothing to newly created records that haven\u2019t been backed up yet. If you want to restore all the data or a specific object to a point in time when one of your snapshots was made, you can do it in the following way. Create a new snapshot. Restore in Skyvia won\u2019t affect records that are not backed up yet, so if you want to restore data to a previous point in time, you need to have all the data backed up. Make sure that the snapshot for the date you want to restore data to is displayed in the list of snapshots. If it is not displayed, make sure that the All time period is selected on the Snapshots tab, and scroll down until you see the required snapshot. After you see the required snapshot, scroll up to the most recent snapshot and click it.
Click the Changed button. From the drop-down list on the right, select the date/time of the snapshot you want to restore data to. If this snapshot pair was never compared, the corresponding message is displayed. Click the Compare Now button to compare the snapshots. Note that it can take some time if there is a lot of data backed up. Select the checkbox for the corresponding object or for all the objects and click Restore . Then click Apply . Restoring Data to Other Connections Skyvia allows you to restore data from a backup to another connection. It must be another connection of the same cloud application. For example, if you have backed up data from Salesforce, you can restore it to another Salesforce organization, but not to other cloud applications. To restore data to another connection, select the necessary data and click Restore . Then click Insert Records and select a connection to restore data to. Selecting a connection is not available for the Update and Delete operations or when viewing and undoing data changes (using the Changed view). More Complex Scenarios If you need to perform even more complex restore operations, for example, use Upsert or restore only records that meet some criteria, you can use Skyvia\u2019s Data Integration functionality with your backed up data. You can create a connection to your backed up data and then configure an Import that loads data from your backup to your data source. Import provides advanced functionality such as source data filters , expressions , all DML operations support including UPSERT , and many others." }, { "url": "https://docs.skyvia.com/backup/working-with-backups/", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backups Managing Backups Skyvia backups are available in the object list. You can manage them as well as other Skyvia objects. You can organize them into folders, edit or delete them, filter them by name or data source, etc.
You can create a new backup from the + Create New menu, following simple instructions . Backup Details Page Click a backup to open the backup details page. It consists of a title bar and three tabs. To create a snapshot manually, click the Backup now button on the title bar. The Edit settings button allows you to edit this backup . Click it to select what data from the connection to back up. Using the corresponding button, you can delete your backup with all its snapshots to the Trash bin. Deleted backups with their snapshots are stored in the Trash bin for two weeks, and you can restore them during this time. After two weeks, they are deleted completely. Whenever necessary, you can delete objects from the Trash bin. In this case, they are deleted completely and are unrecoverable. Overview Tab This tab displays general information about the backup and consists of several blocks. The Connection block displays the source connection details. To edit the connection, click Connection settings . The Storage Information block displays the number of snapshots, how much space they use, and how much space is used by all your backups in total. To free up the storage space, click Clean up . To acquire more space, click Expand storage . The Last Snapshot block displays the details of the last backup run. It shows the last snapshot\u2019s starting and finishing time, the number of backed up records, the number of records modified since the previous snapshot, and the number of errors, if any. The Schedule block displays the current schedule settings. To configure the schedule, click the corresponding button. Here, you can also add a general description to your backup and track who modified this backup and when. Snapshots Tab This tab contains the list of all the snapshots made by this backup. It allows you to work with backed up data \u2014 restore it to the data source or export it to CSV. Activities Tab This tab displays the list of all operations performed with this backup and their details.
Here, you can view all the backup activity ." }, { "url": "https://docs.skyvia.com/backup/working-with-backups/editing-backup.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backups Editing Backup To edit a backup, click Edit settings on the title bar of the Backup details page or hover the cursor over the backup and click its Settings button. Select the objects to back up and set filters if needed. Backup displays the previously selected data source objects by default. Disable the Show selected for backup only toggle to display all available objects. Changes that you make to the backup do not affect already backed up data. For example, if you exclude an object from the backup, you can still see its data in the snapshots made before this change. Metadata Changes Skyvia does not support automatic metadata change detection. If objects or fields were added, deleted, or modified in the data source, you have to manually apply the corresponding changes to your backup. For sources with the metadata cache enabled, clear or disable the metadata cache first. To add a new object to your backup, disable the Show selected for backup only toggle and select the needed object. To update the list of backed up fields, click Edit near the needed object and click Refresh in the bottom left corner of the object editor. Select the checkboxes for new objects and clear the checkboxes for deleted objects. Editing Backup and Snapshots Comparison Note that modifying a backup also affects snapshot comparison . If you added or removed fields in a backup or renamed fields between snapshots, the compare operation won\u2019t run. The \u201cNo changed objects\u201d error will occur. If you add, delete, or modify filters for an object, and the next snapshot after this change contains a different set of records than the ones before the change, Backup will display all these changes when comparing snapshots.
It does not check whether these differences are caused by filter settings modification or by actual data changes." }, { "url": "https://docs.skyvia.com/backup/working-with-backups/how-to-create-backup.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backups How to Create Backup You can create backups in the convenient Backup Wizard through a series of well-defined steps. The Backup Wizard contains a progress bar, which shows your progress in creating the backup. To create a new backup, perform the following simple steps: Click the + Create New button in the top menu and select Backup . Select a connection or create a new one by clicking + Add new in the Backup Wizard. Select the checkboxes next to the objects you want to back up and click Next step to proceed. Enable the backup to run on a certain schedule or skip this step to run it manually. Scheduled backup is available for paid subscriptions only. Users on free subscriptions can run backups manually. Name your backup and save it. After completing all the steps and creating a backup, you can start backing up your data by clicking Backup now at the top right of the page." }, { "url": "https://docs.skyvia.com/backup/working-with-backups/managing-storage-space-and-deleting-old-snapshots.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backups Managing Storage Space and Deleting Old Snapshots Skyvia offers free and paid subscription plans for backups. Backup pricing plans differ by the storage space provided and the snapshot lifetime duration. To choose a pricing plan that meets your storage needs, multiply your data volume by the number of snapshots you plan to store. If you have a free Backup subscription, Skyvia automatically cleans up snapshots older than 3 months.
If you exceed the storage limit, but your snapshots were created less than 3 months ago, Backup won\u2019t create new snapshots until you free up some space. If you have a paid Backup subscription and exceed the storage limit, Backup will preserve the old snapshots but won\u2019t create new ones until you free up storage space. However, you can enable automatic snapshot deletion, and Backup will clean up storage automatically every time the limit is exceeded. A paid Backup subscription suits best if you plan to back up large data volumes and clean up storage automatically. Monitoring Storage Space Used by Backups You can find information on how much storage space is used by your backups on the Subscriptions tab of your Account page and on the Overview tab of any backup. The Account page displays how much space is used and how much space is provided by the current pricing plan. A backup overview includes a visual diagram displaying how much space is taken by the current backup, how much by all of your backups, and how much space your current pricing plan provides. How to Free Up Storage Space Deleting Snapshots Automatically with the Autoclean Toggle To enable automatic deletion of snapshots, on the Subscriptions tab of your Account page, click the Storage details link in the Backup subscription plan and turn on the Autoclean toggle in the opened Storage Manager window. When you activate the autocleaning mode for the first time, Skyvia selects all backups from the list automatically (the checkboxes next to all backups will be selected). If you want to disable the autocleaning mode for specific backups, clear the checkboxes next to them. The Storage Manager window also shows information on the total size of free and used storage space of the subscription and how much space each backup consumes. How the Snapshot Deleting Algorithm Works When the autocleaning mode is on, once every 24 hours Skyvia launches automatic deletion of snapshots starting from the oldest ones (i.e.
cleaning of snapshots older than 2 weeks). The autocleaning lasts until enough storage space is cleared not to exceed the subscription limit. When deleting snapshots, Skyvia keeps at least one successful snapshot of a backup. For example, if your last snapshot failed or was canceled, the previous successful snapshot will be preserved. If you use a paid subscription plan but want to downgrade to a less expensive one, Skyvia does not disable the autocleaning mode. However, the next time the autocleaning of snapshots runs, Skyvia deletes a larger number of old snapshots to meet the storage limit of the new subscription plan. Deleting Multiple Snapshots Manually If you want to delete multiple snapshots manually, you can do it on the Overview tab of a backup. For this, click the Clean up link in the right part of the pane with the storage space information. Then, in the Clear Backup History window, select which snapshots you want to delete: All — this option deletes all the snapshots except the latest one, and also all the backup activity records , except the latest ones. Please note that if the latest snapshot failed, this option will only keep this snapshot, and thus, it won’t keep any backed up data at all. As for restore operations, it only keeps the latest restore record in the backup log, and only if it happened later than the latest snapshot. Older Than — it specifies a date, and all the snapshots and backup activity records prior to this date (not including this date) will be deleted. Between — it specifies two dates, and snapshots and backup activity records between these dates (including these dates) will be deleted. Deleting old snapshots with the Clean up functionality is irreversible. Deleting Single Snapshots Manually If you want to delete separate snapshots in your backup, you can do it on the Snapshots tab.
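The snapshot deleting algorithm described above (delete oldest first, stop once usage fits the limit, never drop a backup's last successful snapshot) can be sketched roughly as follows; the type and function names are illustrative assumptions, not Skyvia's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    backup_id: str
    created_at: int      # epoch seconds; smaller means older
    size_bytes: int
    succeeded: bool

def autoclean(snapshots, storage_limit):
    """Delete oldest snapshots until the total size fits the limit,
    keeping at least one successful snapshot per backup."""
    snaps = sorted(snapshots, key=lambda s: s.created_at)  # oldest first
    total = sum(s.size_bytes for s in snaps)
    kept, deleted = list(snaps), []
    for snap in snaps:
        if total <= storage_limit:
            break
        successful_left = sum(
            1 for s in kept if s.backup_id == snap.backup_id and s.succeeded
        )
        # Never delete the last successful snapshot of a backup.
        if snap.succeeded and successful_left <= 1:
            continue
        kept.remove(snap)
        deleted.append(snap)
        total -= snap.size_bytes
    return kept, deleted
```

Note how, as in the documented behavior, a failed snapshot may be removed freely, but the only remaining successful snapshot of a backup is always preserved even if the limit is still exceeded.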
For this, go to the required backup, switch to the Snapshots tab, click the Clean up button, and select the checkboxes next to the snapshots you want to delete. Only the selected snapshots will be deleted. Deleting the Entire Backup You can delete a backup either by dragging it to the Trash folder on the left or by selecting it in the objects list and clicking the Delete to trash button. You may also open a backup and click the corresponding button on the title bar. If you delete the entire backup, it is placed in the Trash and can be restored with all its snapshots within two weeks, unless you delete it from the Trash too. After this, it is deleted completely and irreversibly." }, { "url": "https://docs.skyvia.com/backup/working-with-backups/monitoring-backup-activity.html", "product_name": "Backup", "content_type": "Documentation", "content": "Product: Backup. Documentation Backup Working with Backups Monitoring Backup Activity You can monitor the Backup log on its Activities tab. It displays a timeline with all snapshots and restore operations performed by this backup. To open the operation details, just click the needed operation. You can select any period to display activity for. You can also click any of the displayed records to see its details. Displayed Information The backup activities log displays every snapshot and restore operation, indicating its status and time. Backup operation status is indicated with circles of different colors, showing whether an operation is currently running, succeeded, failed, or was canceled. The operation date and time are displayed on the left of the timeline, and the numbers of records are displayed on the right. You can click any operation in the timeline to see its additional details. Activity Period By default, the Activities tab displays the current backup activity since its creation, starting from the latest operation. To see the older activities, scroll down the timeline or use the Date Range filter.
You can select one of the predefined periods or a custom period. If you select Custom range , you can specify any period. Enter the start and end dates manually in the box or click the start date and then the end date in the calendar. Snapshot Activity For the snapshot operations, Skyvia displays the number of backed up records, the number of records modified since the previous snapshot, and the number of errors, if any. If you click it, Skyvia additionally displays the snapshot start and end time and the number of backed up records per object on the Details pane. The Details pane also displays error messages, if any. For more information, see Troubleshooting . You can also click the View Snapshot button on this pane to open the snapshot details on the Snapshots tab of the backup. There you can view, export, and manage backed up data of this and other snapshots. Restore Activity For restore operations, Skyvia displays the numbers of successfully restored and failed records, if any. If you click it, Skyvia additionally displays the snapshot start and end time and the numbers of records inserted, updated, and deleted in the data source for each object. It also displays the number of failed records per object. If there are any, the number of failed records is a link, which downloads a per-record error log in CSV format. You can download and study it to determine why each record failed to restore. For more information, see Troubleshooting ." }, { "url": "https://docs.skyvia.com/collaboration.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Collaboration Skyvia offers a convenient collaborative environment both for legal entities, such as companies and organizations, and for individuals, such as private customers, consultants, etc. There is a variety of collaboration tools you can use in Skyvia — ranging from shared accounts and subscription plans to shared workspaces and objects.
With Skyvia, companies can easily increase workforce productivity, improve intercompany and team communication, and conveniently set up and manage the entire organization’s platform for working with data while controlling costs and ensuring security. It is also important to mention that our platform separates the concepts of user profile and account . A profile contains the user’s personal information (first name, last name, job title, password, and other credentials) and serves to associate the user with his/her account(s) and roles in certain workspaces. Meanwhile, an account is an identity created for a user in the system to manage subscriptions and workspaces, which belong to this account. Key Terms User is a person/company registered in Skyvia, having a unique name, password, and other credentials stored in the Profile. Account is an identity created for a person/company in the system to associate subscriptions and workspaces with and to manage them. Any subscriptions to Skyvia products and all the payment information, including invoices, are related to a certain account and managed solely by the account admin. Workspace is a working area of a user or a team of users. A workspace may represent your private workspace if you work alone or may represent the workspace of the company you were invited to. A workspace combines all the objects (integrations, queries, etc.), created by you or by the company, in one place. Workspace role is a group of certain permissions assigned to a user in the workspace. The permission itself is a right to access, manage, or perform other actions with one or several types of Skyvia objects (integrations, backups, etc.) in the workspace. Subscription is a fixed annual or monthly pricing plan selected and paid by the customer to use Skyvia products. In total, Skyvia offers 4 main products, each having separate subscription plans.
A customer can invite other users to share his/her account and subscription plan, as well as workspaces and objects. Please note that each user can have access to several accounts and several workspaces. Users can switch between accounts and workspaces when they need to; their rights and roles in each account or workspace may differ. Introduction to Users’ Roles and Privileges Skyvia separates users’ rights in the account from users’ rights in the workspace. This is done to ensure overall account, workspace, and data security. At the account level, there can be two kinds of account users: with or without administrative rights and privileges. At the workspace level, there can be four standard user roles with the corresponding permissions. These roles are assigned to account users depending on the tasks they should perform in the workspace. User Rights in the Account In Skyvia, a user can be an account admin or an account member , or both, but for separate accounts. The member status gives minimum rights to users, while the administrator status gives full administrative privileges. User Roles in the Workspace Invited users can access a certain workspace and its objects solely based on the roles they were assigned in this workspace. Each workspace role has a set of associated permissions. Permissions can be general, like managing workspace settings or workspace objects (creating, updating, deleting), usually assigned to a workspace admin or developer, or separate, like viewing or executing certain types of Skyvia objects, usually assigned to workspace members or supporters. Skyvia offers 4 standard, ready-to-use roles, which can be assigned to account members to work in the workspace: administrator, developer, member, and supporter. These roles have appropriate, predefined permissions.
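The role-to-permission model described above can be thought of as a simple lookup table; the permission names below are illustrative placeholders, not Skyvia's actual permission list:

```python
# Hypothetical permission sets for the four standard workspace roles;
# the exact permissions are assumptions for illustration only.
STANDARD_ROLES = {
    "administrator": {"manage_workspace", "manage_objects",
                      "view_objects", "execute_objects"},
    "developer":     {"manage_objects", "view_objects", "execute_objects"},
    "member":        {"view_objects", "execute_objects"},
    "supporter":     {"view_objects"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a workspace role grants a given permission."""
    return permission in STANDARD_ROLES.get(role, set())
```

A custom role in this model would simply be another entry in the table with a hand-picked permission set.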
For more specialised needs, you can easily create your own custom roles and select the permissions you want from the available list. You can assign both standard and custom roles to invited users. Custom roles can be created and added to the workspace only by the account admin. A workspace admin can assign workspace custom roles to users, but cannot create such roles himself/herself. Benefits of Workspace Roles Workspace roles give you an opportunity to onboard new users with ease and with no access errors. You save time and effort by associating new users with roles from the start. With roles, you do not have to set up each new user one by one, manually applying the same permission set over and over to each user when adding them to the workspace. Instead, you can use automatic selection of all users to add them to the workspace, and once they are added, you can conveniently change/update their roles in a couple of seconds, even if you add users from the entire organization to a single workspace. Workspace Security When it comes to security, we follow best practices to protect your data and help you feel confident about doing business with us. For security reasons, we have developed the Personal workspace functionality. When migrating users to the new workspace functionality, Skyvia automatically changes users’ default workspaces to personal ones. The personal workspace has been developed for compatibility with the old Skyvia. In old Skyvia, users could not collaborate in a common workspace, sharing connections, integrations, queries, and other Skyvia objects with each other, but they could share the same subscriptions to Skyvia products after being invited to the account. Now, we have implemented the possibility for users to team up in common workspaces. When migrating several users of one account to the new Skyvia interface, we separate their workspaces so that they cannot see each other’s objects until they allow it themselves.
That is why the system automatically enables the Personal Workspace toggle for workspaces of current Skyvia users when migrating them to the new Skyvia interface with the new workspace structure. Please note that you can change one of your workspaces to personal on your own to limit access of other users, including other account admins, to it. After that, other users will not be able to view, edit, or do any other actions with your personal workspace. You can make a workspace personal only if you are the single user of this workspace. In case there are other workspace members, you will not be able to make it personal unless you delete the other members first. Your default workspace can be your personal workspace. To make your workspace personal, go to the default workspace settings, click General on the left, and turn on the Personal workspace toggle. Popular Collaboration Scenarios in Skyvia A secure, convenient collaborative environment is what many small, medium-sized, or large project-oriented companies need to ensure effective work of their employees. Among our corporate clients, which are companies with more than one user, the following two collaboration scenarios are the most popular: The manager registers in Skyvia, invites one or two technical specialists to the account, and assigns them a certain workspace role — usually administrator or developer. The technical specialists configure workspace settings and create connections, integrations, and other necessary Skyvia objects. Later, the technical specialists can hand their workspace admin rights back to the manager who invited them. The technical specialist registers in Skyvia and automatically receives the account admin status. The specialist customizes the workspace, i.e. creates necessary objects, like connections to cloud apps or databases, integrations, queries, endpoints, etc., invites other users as account members, invites a manager as the second account admin, and assigns him/her the workspace admin rights.
Later, if needed, the manager can revoke the account admin rights from the technical specialist." }, { "url": "https://docs.skyvia.com/common-platform-features/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Common Platform Features This section contains the description of Skyvia tools and features that are used across multiple Skyvia products. Please select the corresponding item from the navigation menu to find out more about it." }, { "url": "https://docs.skyvia.com/common-platform-features/expression-editor.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Common Platform Features Expression Editor Expression Editor is a visual interface for building expressions using operators, variables, functions, and properties. Expressions can be used to map fields, create conditions, generate file names, and more. This chapter tells you how to use Expression Editor and explains its role in Skyvia. Getting Familiar With the UI By the end of this section, you will be familiar with the UI elements of the Expression Editor and know how to use them. Expression Area The Expression Area takes up the major part of the Expression Editor’s UI. Use it to create and edit your expressions. You can use the sidebar or the operators menu to add elements to the Expression Area. Operators Menu You can find the Operators Menu at the bottom. It provides the list of operators that you can use to create expressions. Click an operator to add it to the Expression Area. Sidebar You can find the sidebar on the right. It gives you quick access to the properties, [variables](https://docs.skyvia.com/data-integration/data-flow/parameters-and-variables.html) , and [functions](https://docs.skyvia.com/expression-syntax/functions.html) . Combine them with the operators to create complex expressions. Click the preferred item on the sidebar to add it to the Expression Area.
The arrow next to the item means that there are nested items in it. To quickly find a needed item, use search . Preview The Preview pane allows you to check the expression results by manually setting values for properties and variables. Validation Use the Validate button to check if your expression is valid. If it is not, you can check the reason by clicking Details. Moreover, Expression Editor automatically underlines the invalid elements of the expression in the Expression Area. Expression Editor provides code completion. To use code completion, start typing your expression and Skyvia will offer you the autocomplete options. Expression Syntax The syntax used to create expressions is described in detail in the [Expression Syntax](https://docs.skyvia.com/expression-syntax/) chapter of the documentation. Read it to learn about identifiers, data types, literals, operators, and functions. Expression Editor Usage Expression Editor is widely used throughout Skyvia products. You can open it whenever you see the corresponding icon. To find examples of the most common use cases, check the examples below. Mapping Fields to Expressions You can map fields to expressions on the Mapping Definition page by choosing Expression from the dropdown. As an example, you can map the FullName field to the following expression: FirstName + ' ' + LastName Setting Conditions In Data Flow, Control Flow, and Automation, components use conditions to navigate the execution logic. Conditions are expressions that are evaluated to a boolean value. An example of such an expression is City == 'London' . Generating File Names While loading data to JSON or CSV targets, you can choose to generate the name of the file based on an expression instead of typing it in manually.
For example, you can use the following expression to add a timestamp to the file name: 'CompanyReport' + ' ' + string(get_date())" }, { "url": "https://docs.skyvia.com/common-platform-features/mapping-editor.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Common Platform Features Mapping Editor Mapping Editor is a tool that enables you to transform and map data with the help of an output schema, various mapping types, [functions](https://docs.skyvia.com/expression-syntax/functions.html) , [properties](https://docs.skyvia.com/data-integration/data-flow/parameters-and-variables.html) , and [variables](https://docs.skyvia.com/data-integration/data-flow/parameters-and-variables.html) . In Mapping Editor, you work with input data and output data. Input data may come from various places and in different formats. The Mapping Editor’s task is to transform this data into the format expected in the output. The output data format is defined by the Output Schema . Mapping Editor provides four mapping types: Expression , Object Map , Array Map , and Array . Expression Expression mapping allows you to create complex mapping logic, make calculations, and transform data with the help of expressions. Use functions, properties, and variables combined with the logical operators to complete the task. Object Map Use Object Map to add an object type property. Each object type property may contain a number of nested properties. For each of those nested properties, you may use Object Map or Array Map to create a more complex nested structure or use Expression to map them. To add an object type property, do the following: Add new or select an existing property in Output Schema . Select Object Map . Click + next to the property name to add a nested property. Array Map Array Map is used to create a new array of objects based on the existing one. Use Array Map to create an array of objects type property.
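For local experimentation, the three expression patterns shown above (field concatenation, a boolean condition, a timestamped file name) can be mimicked in plain Python; `datetime.now()` stands in for Skyvia's `get_date()` function, and the date formatting is only an assumption about what `string()` would produce:

```python
from datetime import datetime

# Concatenation, as in the Skyvia expression: FirstName + ' ' + LastName
def full_name(first_name: str, last_name: str) -> str:
    return first_name + " " + last_name

# Boolean condition, as in: City == 'London'
def condition(city: str) -> bool:
    return city == "London"

# Timestamped file name, as in: 'CompanyReport' + ' ' + string(get_date())
def report_file_name() -> str:
    return "CompanyReport" + " " + datetime.now().strftime("%Y-%m-%d")
```

These helpers are a sketch for reasoning about the expressions, not a substitute for Skyvia's own expression engine.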
Each array of objects type property contains List and Item . List is used to map the array, and Item is used to configure the structure of objects in the array. To create and map an array of objects property, do the following: Add new or select an existing property in Output Schema . Click Array Map . Map List to the array of objects property. Define the structure of the object inside the array by adding properties to Item with the help of Object Map . Use Expression to map those properties. Mapping Examples Example 1. Concatenate Two Properties Task : You have an object with FirstName , LastName , and Title properties. You want to receive an object with FullName and Title properties instead. Solution : Create a FullName property in the Output Schema . Choose the Expression mapping type and enter the following expression into the expression editor: FirstName + ' ' + LastName . Example 2. Nested Object Mapping Task: You have an object with OrderId , CustomerFirstName , and CustomerLastName properties. You want to receive an object with a different structure, where customer data is stored in a nested Customer object. Solution: Create a Customer property in the Output Schema and choose Object Map . Click + next to Customer to add the FirstName property to the Customer object. Click + next to Customer to add the LastName property to the Customer object. Map the LastName and FirstName properties to CustomerLastName and CustomerFirstName respectively. Example 3. Concatenate Two Properties in a Nested Object Task: You have an object with a nested Customer object with FirstName and LastName properties. You want to change the nested object structure to store the first and last name data in a FullName property. Solution: Create a Customer property in the Output Schema and choose Object Map . Click + next to Customer to add the FullName property to the Customer object. Click FullName and choose Expression .
Enter the following expression into the expression editor: Customer.FirstName + ' ' + Customer.LastName . Note that you should use Customer.FirstName instead of FirstName as you deal with a nested property. Example 4. Concatenate Two Properties in Each Object in the Nested Array of Objects Task: You have an object with a Company property and a nested array of Contacts objects with FirstName , LastName , and Email properties. You want to receive an object with a different structure and different property names. Solution: Create an Account property in the Output Schema . Choose Expression and map Account to Company . Create a Users property in the Output Schema and choose Array Map to make it a nested array of objects. List and Item will appear. Map List to the Contacts array. Click + next to Item to add a FullName property. Click the FullName property, choose Expression , and enter the following expression: FirstName + ' ' + LastName Click + next to Item to add an Email property. Map Email to Email ." }, { "url": "https://docs.skyvia.com/common-platform-features/working-with-csv.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Common Platform Features Working with CSV Skyvia uses CSV files for data-related tasks. CSV files may use different format options to define the file structure. This article highlights the possible CSV formatting by describing each CSV option separately. CSV Separator The CSV separator is used to divide the fields in a row from each other. Possible values: Comma (,) — most commonly used; Semicolon (;); Slash (/); Vertical bar (|). Text Qualifier The text qualifier is a character that encloses values in CSV files. It defines the starting and ending points of the values. The field value may contain a comma. The comma may be recognized as a field separator.
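Stepping back to Mapping Editor's Example 4 above: the same Array Map transformation can be sketched in plain Python to make the input and output shapes concrete. This is a sketch of the resulting data shapes, not Skyvia's implementation:

```python
# Example 4 as a plain transformation: rename Company to Account and
# rebuild the Contacts array as Users with FullName and Email properties.
def map_record(record: dict) -> dict:
    return {
        "Account": record["Company"],          # Expression: Company
        "Users": [                             # Array Map over Contacts
            {
                "FullName": c["FirstName"] + " " + c["LastName"],
                "Email": c["Email"],
            }
            for c in record["Contacts"]        # Item defines each object's shape
        ],
    }
```

Feeding in `{"Company": "Acme", "Contacts": [{"FirstName": "Jane", "LastName": "Doe", "Email": "jane@acme.io"}]}` yields an object with an `Account` string and a `Users` array of `FullName`/`Email` objects, matching the target structure described in the example.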
The text qualifier defines this comma as a part of the value, not as the separator. Possible values: an empty string, a single quotation mark (‘), or a double quotation mark (“). Row Delimiter The row delimiter is a character or a set of characters that separates data rows in CSV files. Possible values: CRLF — a carriage return-line feed combination, used in Windows; the default value for Export and Query Advanced Export in Skyvia. CR — carriage return (Mac pre-OSX). LF — line feed (Linux, Mac OSX). When importing a CSV file, Skyvia tries to detect the row delimiter automatically. Locale The CSV file locale name determines locale settings, including the DateTime format, number format, string collation, currency format, etc. If you export the CSV file with the default locale value English (United States) , the date-time field values match that locale’s regional settings. If you select another locale value, for example, English (Canada) , the same date values are formatted according to that locale. The locale also dictates the delimiter used between data values in the CSV file. If you select a locale that uses a comma as a decimal separator, the CSV file data values will be separated by a semicolon. The default locale value in Skyvia is determined by the browser settings (preferred language). Check [Data Types and Limitations](https://docs.skyvia.com/data-integration/common-package-features/data-types-and-limitations.html) for more information. Code Page This option determines how the source values will be interpreted in the CSV file. By default, Skyvia uses UTF-8, which can encode all of the possible character code points in Unicode. If the source field values contain special characters from different languages, you can select the UTF-8 code page value and keep all the values in the result CSV file.
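The CSV options described here (separator, text qualifier, row delimiter, encoding) correspond directly to parameters in most CSV libraries. A small sketch with Python's csv module, using a comma separator, a double-quote text qualifier, and a CRLF row delimiter (the data values are illustrative):

```python
import csv
import io

# Comma-separated, double-quote-qualified, CRLF-delimited CSV,
# mirroring the options described above.
rows = [["Name", "Note"], ["Acme", "sells a, b, and c"]]

buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quotechar='"',
                    quoting=csv.QUOTE_MINIMAL, lineterminator="\r\n")
writer.writerows(rows)
data = buf.getvalue()

# The comma inside "sells a, b, and c" is protected by the text qualifier,
# so reading the file back recovers the original value intact.
parsed = list(csv.reader(io.StringIO(data), delimiter=",", quotechar='"'))
```

With a semicolon separator the embedded comma would need no qualifier at all, which is exactly the interaction between locale, separator, and qualifier the documentation describes.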
If you select another option, for example, Cyrillic (Windows) [1251] , only the Cyrillic characters will be parsed correctly in the result file. CSV Opportunities in Skyvia Importing, Extracting from CSV Files CSV files can be used as the data source for [Import](https://docs.skyvia.com/data-integration/import/configuring-import.html) and [Data Flow](https://docs.skyvia.com/data-integration/data-flow/components.html#csv-source) integrations. CSV files can be uploaded manually from your PC or automatically from the available [storage](https://docs.skyvia.com/connectors/file-storages/) services. Exporting, Loading to CSV Files CSV files can be used as the Target for [Export](https://docs.skyvia.com/data-integration/export/export-package.html) and [Data Flow](https://docs.skyvia.com/data-integration/data-flow/components.html#csv-target) integrations. CSV files can be downloaded manually to your PC or automatically to the available storage services. You can also download query results into a CSV file in Skyvia [Query](https://docs.skyvia.com/query/) ." }, { "url": "https://docs.skyvia.com/concepts.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Concepts This topic introduces basic terms used in this documentation and the Skyvia interface. Products Skyvia provides several different, but integrated products for solving different data-related tasks. These are the following products: Data Integration , Backup , Query , and Connect . Each of the products has its own set of pricing plans and is priced separately. If you use just one product, you don’t need to pay for the other products. Objects Another key Skyvia concept is objects. Everything that a user can create in Skyvia is called a Skyvia object. This term unites connections, integrations, backups, user queries, connect endpoints, etc. Don’t confuse Skyvia objects with cloud objects.
Skyvia objects are items that you create in Skyvia, while cloud objects are like tables with data in the cloud data sources that Skyvia works with. When using each Skyvia product, you create Skyvia objects of the corresponding kind — integrations in Data Integration , backups in the Backup product, queries in the Query product, and endpoints in Connect . Besides, there are two common kinds of Skyvia objects that are used by all products: connections and agents, which are used for connecting Skyvia to data sources. Connections Connections are one of the key Skyvia concepts. Skyvia works with data sources via connections to them. They represent sets of connection parameters required to access data from the corresponding connector . Connections to some of the supported cloud applications also store a metadata cache — the information about cloud objects in this data source and their fields. Connections you create can be used in all Skyvia products and are not priced separately. You can create as many connections to any data sources as you need in Skyvia, and this won’t increase the price. You can also have multiple connections to the same data source if necessary. Agents Agents are instances of the Skyvia agent application — an application for connecting Skyvia to data sources on the users’ computers or in their local networks. To identify an instance, each created Agent has a unique key , which you should download and place with the application. Integrations An integration is a unit of work in the Data Integration product that performs a set of data ETL operations between source and target connections. Integrations consist of tasks — units of the data extraction, transformation, and loading process. An integration can contain one or more tasks. An integration is scheduled and executed as a whole; you cannot execute its tasks separately. The integration log stores information about integration runs and their results — success and error information.
Backups A backup is a configuration of backing up data from a cloud data source. It also contains a set of snapshots — when a backup runs, it creates a snapshot of the data source data specified in the backup configuration. Endpoints In the Connect product, you create web API endpoints providing an OData or SQL API. An endpoint is an API layer for data from a connection, which makes the data available via the endpoint URL. Endpoints store their security settings (the set of users and IP address ranges for which access is allowed). OData endpoints also store endpoint model settings, determining which data source data are available via the endpoint. Skyvia also logs all the requests to an endpoint in the endpoint log . Queries A query simply represents a visual query model (if a query is built via the visual query builder) or an SQL command text. Profile and Account The Profile contains the user’s personal information (first name, last name, job title, password, and other credentials) and serves to associate the user with their account and with roles in certain workspaces. In the Profile, the user can manage email notifications and Skyvia newsletter subscription settings. The Account contains the user’s subscriptions to Skyvia products, the user’s payment details, and payment history. If you are an account admin, you can invite other users to your account. Other users invited to the account as members can share subscription resources but cannot manage them or view any payment details. Account settings can be managed only by the account admin. The account also contains the user’s workspaces with objects (connections, integrations, backups, etc.). Every user added to the workspace by the account/workspace admin is assigned a certain role, based on which this user can perform tasks and other actions, having limited or no rights to manage overall account settings. You can find more information about profiles and accounts in the Profile Management and Account Management sections."
}, { "url": "https://docs.skyvia.com/connect/", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect. Documentation Connect Overview Skyvia Connect is an API-server-as-a-service product that allows you to quickly and easily create web API endpoints to provide access to your data from anywhere. A convenient GUI allows you to easily create a web API for your data without typing a line of code. You get a ready-to-use endpoint URL, and there is no need to care about API server hosting, deployment, or administration at all. Currently, Skyvia Connect allows you to create two kinds of endpoints: OData endpoints and SQL endpoints. OData Endpoints OData endpoints allow access to your data via the OData protocol. OData is a widely accepted open standard protocol for data access over the Internet. It provides queryable and interoperable RESTful APIs for working with data. You can find more information about the OData standard and more useful OData tools and resources at [www.odata.org](https://www.odata.org/) . The OData protocol is supported in a wide variety of OData consumer applications: BI tools, integration solutions, like Salesforce Connect , programming libraries and frameworks, etc. After you create an OData endpoint, you simply need to specify its URL, and you can work with its data in your OData consumer tool. SQL Endpoints SQL endpoints provide a custom API that allows executing SQL queries against the data source and obtaining the returned data in JSON format. DML SQL commands are also supported. For database SQL endpoints, DDL is also supported. You can work with data of an SQL endpoint via our custom ADO.NET provider or via the ODBC driver , designed specially for SQL endpoints. This means that you can use usual SQL statements via standard ADO.NET and ODBC interfaces against any data source that you can connect to from Skyvia.
Creating Endpoints Skyvia offers a convenient GUI wizard that allows creating an endpoint in a few simple steps without coding. You can start creating a new endpoint using the New menu. First, you need to select a connection to publish data from or create a new one. Then, for an OData endpoint, specify which data (tables and columns) should be available via the endpoint. For SQL endpoints, all the tables and columns available via the selected connection are published automatically, and this step is omitted. Note that you cannot publish data from a custom query - only data from cloud objects and database tables and views. OData endpoints allow you to hide some columns. If you need to publish the result of a custom query against a database, you can use a view for this purpose. After this, you need to configure endpoint security - the user accounts and IP addresses for which access is allowed. Finally, you need to specify the endpoint name and, for OData endpoints, some additional settings. For more details, see the How to Configure OData Endpoint and How to Configure SQL Endpoint topics. Managing Endpoints Like other Skyvia objects, endpoints are available in the object list, where you can manage them in the same way: organize them into folders, edit or delete them, filter by name or data source, etc. To create a new endpoint, click the corresponding link in the New menu. For detailed instructions on using the endpoint wizard, please see the How to Configure OData Endpoint in Simple Mode , How to Configure OData Endpoint in Advanced Mode , and SQL Endpoints topics. Active/Inactive Endpoints At any time, you can activate or deactivate an endpoint - either by using its corresponding Quick Action in the object list or in the endpoint details.
When an endpoint is inactive, its data cannot be accessed, and it returns an error message stating that it is inactive for any request. Note that you cannot activate an endpoint if it is invalid or uses features not provided in your Connect pricing plan . The free pricing plan allows only one active endpoint. Endpoint Security Skyvia Connect allows you to create an additional security layer over your data source. You can create user accounts with passwords for your endpoints and pass these credentials to the users you want to share your data with. You don't need to share your original data source credentials with them. You can also limit access to your endpoints by specifying the IP address ranges for which access is allowed. Together with exposing only the necessary entities, this makes data sharing via Skyvia Connect convenient and secure. See more details about user account and IP configuration in the Security Settings topic. If you don't specify any users, the endpoint data is publicly available to anyone without authentication, and if you don't specify any IP address ranges, the endpoint data is available from any IP by default. Logging Skyvia provides advanced monitoring functionality for created endpoints. For each endpoint, you can view a detailed log of all requests to the endpoint, with their URLs, user names, executed SQL statements, error messages (if any), etc. See more details about logging in the Monitoring Endpoint Activity topic." }, { "url": "https://docs.skyvia.com/connections/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connections Connections in Skyvia are used for connecting to various data sources and working with their data. They represent sets of authentication parameters required to access data from the corresponding data source and some settings that determine how Skyvia works with the data source.
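The IP-range restriction described under Endpoint Security can be sketched with Python's standard ipaddress module. This is only an illustration of the allow-list concept, not Skyvia's actual implementation; the function name and range format are illustrative.

```python
import ipaddress

def ip_allowed(client_ip, allowed_ranges):
    """Return True if client_ip falls within any allowed CIDR range.
    An empty allow-list means access from any IP (the documented default)."""
    if not allowed_ranges:
        return True
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(r) for r in allowed_ranges)

print(ip_allowed("10.0.5.17", ["10.0.0.0/16"]))    # True: inside the allowed subnet
print(ip_allowed("203.0.113.9", ["10.0.0.0/16"]))  # False: outside every range
print(ip_allowed("203.0.113.9", []))               # True: empty list = open access
```

The last case mirrors the note above: with no IP ranges configured, the endpoint answers requests from any address, so configure both users and ranges for sensitive data.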
Connections are used in all Skyvia products, and you need to create a connection to a data source so that Skyvia could interact with it. Skyvia does not limit the number of connections that you can use, and you can create and use any number of connections to supported data sources on any pricing plan. If necessary, you can also create multiple connections, using different parameters/data source accounts, to the same data source. Creating Connections Creating a connection takes two steps: First you open the list of connectors and select one you want to connect to. Skyvia supports various connectors, including popular cloud applications, relational databases, cloud data warehouses, and file storage services, and even allows defining custom connectors . After you select a connector, you need either to specify the required authentication parameters or perform web login depending on the authentication used by the connector. You can see more details on the necessary parameters or a detailed procedure of creating a connection in the topics about specific connectors in the Connectors section. In Skyvia, you can open the connector list and create connections in different ways, from different Skyvia tools. Currently, you can create a connection in the following ways: You can create a connection, like any other object, by clicking +NEW in the top menu and selecting Connection in the list on the left. In this case, you create a connection in full-page editors. Alternatively, you can create a connection from the editor of any other Skyvia object, which uses connection. Wherever you can select an existing connection in a list, you can also select + New connection . Or, when selecting a connection in wizards, like Backup Wizard or SQL Endpoint Wizard, you can click the + Add new button to create a new connection. In this case, the connection is created in a dialog window. 
Finally, some wizards, like [Create Endpoint for Looker Studio Wizard](https://app.skyvia.com/#/wizards/gds) , include connection creation steps as their pages. Managing Connections As well as the other Skyvia objects, connections are available in the object list, and you can manage them as well as other Skyvia objects. You can organize them into folders, edit or delete them, filter by the connector used, etc. To display only connections, switch to the Connections tab on the workspace toolbar. For connections, Skyvia displays the information when a connection was created and last modified, the connector used, and whether the connection is direct, or it connects via an agent. The following Quick Actions are available for connections: Test Connection , View Dependencies , Query SQL and Query Builder . By clicking a certain connection, you are transferred to the Connection Editor. Here you can view and modify connection parameters, clone, test connections or delete them. Different Kinds of Authentication Different connectors use different authentication kinds, and thus, connections to them are configured differently. For example, for connectors with OAuth authentication, you don\u2019t need to enter your login and password in Skyvia. You just sign in to the cloud application itself, and Skyvia does not know your username and password at all. On the contrary, for connectors with username/password authentication, you need to specify and store your username and password in Skyvia. Some connectors support multiple kinds of authentication. You may select the authentication to use when creating a connection. You can find more information about different authentication kinds in the Authentication topic. Metadata Connection metadata is the information on the structure of data available in the connected data source. 
Metadata includes the list of available cloud objects or database tables and views, their fields with data types, and information about primary keys and relations between the connection objects. Database Metadata Metadata is treated differently for different data sources. For databases and cloud data warehouses, metadata is queried every time the connection is opened, so you always work with the actual metadata. Cloud App Metadata For cloud applications, the metadata-related behavior varies. For some applications, like Salesforce or Dynamics 365, metadata can be customized, and Skyvia sees these customizations by default. For other cloud applications, custom objects and fields can be added, but querying them takes more time and reduces performance. To access these custom fields, you may need to explicitly set the corresponding options when creating or editing a connection. For example, NetSuite, Mailchimp, and Salesforce Marketing Cloud connections have such settings in their connection editors. Retrieving metadata from a cloud application can be slow. To avoid re-reading metadata each time, Skyvia caches cloud app metadata. Once it retrieves metadata, it works with the metadata cache and won't detect metadata changes. To see metadata changes, you need to clear the metadata cache. For more information on how to configure the metadata cache, read the Metadata Cache topic. Some cloud apps don't support metadata customizations at all, and their metadata is constant. Others support customization, but Skyvia can work only with their predefined objects and fields. For such data sources, metadata information is not read from the data source itself; it is considered constant and is predefined in Skyvia. Connecting to Local Databases Connecting to local database servers can be challenging because on-premise databases are usually behind a restrictive firewall and/or have no dedicated external IPs.
Skyvia allows connecting to these servers in two ways: Direct connections . Skyvia connects to the database server directly. This requires database servers to be available from the Internet. You may need to configure port forwarding if the server does not have a dedicated external IP and add Skyvia\u2019s IPs to the Firewall access rules. See Skyvia IPs and more information in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections . Skyvia connects to the database server via a secure Agent application. You create an Agent , download and install the Agent application on a computer that has both Internet access and access to the database server, and create Agent connections . Data is securely transmitted between the Agent and Skyvia in an encrypted form, and it doesn\u2019t require a dedicated external IP or port and usually does not require additional firewall configuration. Agent also often provides better performance for querying local databases because of protocol optimizations." }, { "url": "https://docs.skyvia.com/connections/agent-connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connections Agent Connections When connecting to on-premise databases, like SQL Server, Oracle, MySQL and related, or PostgreSQL, you can select Connection Mode : Direct - Skyvia connects to the database server directly. This requires database servers to be available from the Internet. Agent - Skyvia connects to the database server via a secure Agent application. You create an Agent , download and install the Agent application on a computer that has both Internet access and access to the database server. Data is securely transmitted between the Agent and Skyvia in an encrypted form, and it doesn\u2019t require a dedicated external IP or port, and usually does not require additional firewall configuration. Agent with Alias - This mode also uses the Agent application. 
For this mode, you need to add a specific configuration file with connection settings and aliases to the Agent application folder. Additionally, ODBC connections can only be Agent or Agent with Alias connections. Creating Agent Connections An Agent connection is created like a usual database connection. You only need to select the Agent Connection Mode and then select the Agent to use. After this, enter the corresponding database connection parameters. Note that you need to specify the connection parameters for an Agent connection as if you were connecting from the computer where the Agent is installed. For example, specify the local IP (or domain name) of the database server instead of the external one if the Agent application is installed on a computer in the same local network. Creating Agent with Alias Connections To use such connections, you need to add a special configuration file to the folder where the Agent application is installed. This configuration file contains connection parameters for the database connections that you plan to use via this Agent and the corresponding connection aliases. Features of Agent with Alias connections: All the connection parameters are stored only on the computer where the Agent application is installed, in the configuration file. They are not sent to or stored on the Skyvia servers. The Agent application creates the actual connections to local databases, so the Skyvia service does not need to know the parameters at all. If you add such a configuration file with connections and aliases, the Agent application allows creating only connections with aliases; it no longer allows normal Agent connections. In this way, you can limit which of your resources can be connected to via this Agent instance. To create an Agent with Alias connection, perform the following steps: Download and install the Skyvia Agent . In the Connection Editor, choose Agent with Alias as the connection mode.
Click Generate Alias Configuration File and fill in the required fields to create a connection to your database. This information is used temporarily to generate the configuration file and is not stored. Click Download File . Move the alias file to the Skyvia Agent folder, typically located at C:\\Program Files (x86)\\Skyvia Agent. In the Connection Editor, select your Agent and enter the name of the Alias file without including the file extension. The generated file is usually named skyvia_security_agent ." }, { "url": "https://docs.skyvia.com/connections/authentication.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connections Authentication Different data sources use different authentication methods in order to allow access to their data. For Skyvia users, these authentication methods have differences in what parameters are required to create a connection and how to obtain them, what parameters are entered and stored on Skyvia. There are also internal differences on which steps Skyvia performs and how it authenticates using the data source API, but they are transparent for Skyvia users. Here are the most common authentication methods used by data sources: OAuth 1 or 2 API key or secret User name and password Some data sources support multiple authentication methods or use different methods with different API versions. In such cases Skyvia allows you to select the authentication method to use. OAuth Authentication OAuth authentication is used by many cloud applications and services. When creating a connection with OAuth authentication, Skyvia redirects you to the corresponding cloud service page. You sign in on this cloud service page, not on Skyvia\u2019s, and Skyvia does not have access to the credentials entered. After you sign in, some applications allow you to also specify the access rights that will be provided to this connection and some will only ask whether you allow access. 
When you confirm access, the application generates an OAuth token that grants access. This token is stored on Skyvia and is used to access the data source data. For example, let's look at Salesforce connection creation. Salesforce supports multiple authentication methods, and OAuth 2.0 is selected by default. You just click the Sign In with Salesforce button, and a new browser window opens, prompting you to sign in to Salesforce. Pay attention to the address bar in the new browser window: it starts with login.salesforce.com . This means that you are entering your credentials on the Salesforce page, not on Skyvia's. In this case, Skyvia does not have access to your credentials at all; it only stores an automatically generated token in the connection settings. Please note that an OAuth token may expire or be revoked over time, and your OAuth connection may become invalid. For example, if you see an error like 'expired access/refresh token', this means that you need to edit your connection and perform the sign-in and access-granting procedure again. API Key or Secret This authentication is another way of granting access to data source APIs without providing actual credentials. In this case, the data source generates an API key or a set of parameters that you need to enter in Skyvia's connection editor. So first you go to your cloud app and obtain or generate the required API key there, and then enter it into Skyvia's connection editor. The key is usually found in the integration settings or a similar location. For example, in Freshdesk, you can find your API key by clicking your profile icon and then the Profile settings link. You can then see your API key in the upper right part of the Profile settings page. Simply copy this key and paste it into the Skyvia connection editor. As in the previous case, your login and password are neither accessed by nor stored on Skyvia.
User Name and Password Authentication There are actually multiple kinds of authentication that require a username and password. There is HTTP Basic authentication, where the username and password are sent with every request, and there is the authentication used by databases, where the username and password are sent when establishing a session, and this session is then used for executing multiple commands. From the Skyvia user's point of view, they are similar in that they require your data source credentials to be specified and stored in the connection settings. Skyvia stores the necessary connection parameters encrypted using AES 256-bit encryption, so you don't need to worry about their security. Our employees don't have access to the connection strings of our users. Such authentication is used for databases like Oracle, MySQL, PostgreSQL, and SQL Server. Some cloud sources, like Salesforce, also support it. For Salesforce, you can select either OAuth 2.0 or User Name & Password authentication. However, to use the latter, you also need the Security token , which you can generate in your Salesforce settings. Other Ways of Authentication Some data sources may use other authentication methods. Several data sources require creating an API account or registering an integration record in the data source settings to obtain the required parameters. In some cases, this procedure is used for obtaining an OAuth token instead of performing a sign-in to a cloud app. In any case, you can find information about the required connection parameters for each specific data source either in our documentation, in the Connectors section, or in the documentation of the specific data source.
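As a small illustration of the HTTP Basic scheme mentioned above, the Authorization header sent with every request is just the Base64-encoded 'username:password' pair. This is a generic sketch of the standard scheme, not Skyvia code:

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authorization header value:
    'Basic ' + base64('username:password')."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```

Because Base64 is an encoding, not encryption, the credentials are effectively sent in the clear on every request, which is why services storing such credentials (as Skyvia does, with AES-256 encryption at rest) must always use HTTPS on the wire.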
}, { "url": "https://docs.skyvia.com/connections/how-to-configure-local-database-server-to-access-it-from-skyvia.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connections How to Configure Local Database Server to Access It from Skyvia To connect directly to a database server, Skyvia requires it to be accessible from the Internet (using the TCP protocol). For this, several conditions must be met. The Firewall must allow access to the database server port. Additionally, the database server must have the corresponding login registered and must be accessible from Skyvia. SQL Server must allow access via the TCP/IP protocol and SQL Server authentication. Additionally, if you are connecting to a computer in your local network , you should use port forwarding . This topic contains some information about configuring databases that is useful for creating direct connections from Skyvia. As an alternative, for local database servers, you can also use agent connections . In most cases, agent connections do not require any additional database or firewall configuration. Skyvia will access your server from the IPs 40.118.246.204, 13.86.253.112, and 52.190.252.0. How to Configure Windows Firewall to Allow Access from Skyvia Please note that this information is about configuring the standard Windows Firewall. If you use another firewall application or another operating system, read its documentation for information on how to allow access. Skyvia requires port 1433 (the default SQL Server port) to be open. To open this port in Windows Firewall, perform the following steps: Open Windows Firewall with Advanced Security using the Start menu or Start screen. In the tree in the left part of the Windows Firewall with Advanced Security window, right-click the Inbound Rules node and then click New Rule on the shortcut menu. In the New Inbound Rule Wizard dialog box, click Port and then click Next .
Specify the 1433 port number in the Specific local port box and click Next . Click Allow the connection and then click Next . Select the Public , Private , and Domain checkboxes and click Next . Enter a Name and Description for the rule and click Finish . How to Configure PostgreSQL Server for Remote Access If your PostgreSQL server is not configured for remote access, you may need to modify its postgresql.conf and pg_hba.conf files. In the postgresql.conf file, check the listen_addresses setting. Either set it to listen_addresses = '*' to allow connections from any IP address, or list the necessary IPs separated by commas in this setting. In the latter case, add the Skyvia IPs (40.118.246.204, 52.190.252.0, and 13.86.253.112) to the list. In the pg_hba.conf file, add the following lines to the end of the file: host all all 40.118.246.204/32 md5\nhost all all 13.86.253.112/32 md5\nhost all all 52.190.252.0/32 md5 This allows access from the Skyvia IPs for all PostgreSQL users. For more information, please refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html) . After modifying these files, please restart the server. How to Configure SQL Server for Connecting via TCP/IP The TCP/IP protocol is usually enabled in SQL Server configuration by default, and you may omit the steps below. However, if your SQL Server has a custom configuration with the TCP/IP protocol disabled, you can enable it in the following way: Click the Start button and type SQL Server Configuration Manager . Then press Enter. In the tree in the left part of the SQL Server Configuration Manager window, expand the SQL Server Network Configuration node and click the Protocols for SQL node. If the TCP/IP protocol has the status Disabled in the right pane of the SQL Server Configuration Manager window, right-click it and then click Enable on the shortcut menu.
Right-click the TCP/IP protocol in the right pane of the SQL Server Configuration Manager window and then click Properties . On the Protocol tab of the TCP/IP Properties dialog box, make sure that the Enabled property is set to Yes ; if not, set it to Yes . Then click the IP Addresses tab. On the IP Addresses tab, for the IP corresponding to the external IP address of your server, set the Enabled property to Yes . Then click OK . You then need to restart SQL Server for the configuration changes to be applied. In the tree in the left part of the SQL Server Configuration Manager window, click the Protocols for SQL node, then right-click the corresponding SQL Server instance in the right pane and click Restart on the shortcut menu. How to Create SQL Server Authentication Logins You can use various applications or tools to manage authentication logins on SQL Server. Here we will show how to do it with Microsoft SQL Server Management Studio. Since Windows Authentication cannot be used from outside the domain, logins that will be used for connecting from Skyvia must use SQL Server authentication. To create SQL Server authentication logins with Microsoft SQL Server Management Studio, perform the following steps: In SQL Server Management Studio Object Explorer, expand the node of the server instance on which you want to create a new login. Right-click the Security node, point to New , and click Login . Enter the name of the new user in the Login name box. Select SQL Server authentication . Enter a password for the new login in the Password box and reenter it in the Retype Password box. In the Default database list, select a default database for the login or leave the default value master . Change other login settings if necessary. Click OK . How to Configure SQL Server for Using Mixed Mode Authentication As written above, SQL Server authentication must be used for connecting to SQL Server from the Internet.
So, if your SQL Server is configured to use Windows Authentication only, you need to configure it to use Mixed mode authentication. Mixed mode authentication allows using both SQL Server Authentication and Windows Authentication. We will use Microsoft SQL Server Management Studio again in order to configure mixed mode authentication. Perform the following steps: In SQL Server Management Studio Object Explorer, right-click the node of the server instance you want to configure and then click Properties . In the Select a page pane, click Security . Select SQL Server and Windows Authentication mode and click OK . SQL Server Management Studio will display the message that SQL Server should be restarted. Click OK . In Object Explorer, right-click your server and then click Restart ." }, { "url": "https://docs.skyvia.com/connections/index.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connections Connections Connections in Skyvia are used for connecting to various data sources and working with their data. They represent sets of authentication parameters required to access data from the corresponding data source and some settings that determine how Skyvia works with the data source. Connections are used in all Skyvia products, and you need to create a connection to a data source so that Skyvia could interact with it. Skyvia does not limit the number of connections that you can use, and you can create and use any number of connections to supported data sources on any pricing plan. If necessary, you can also create multiple connections, using different parameters/data source accounts, to the same data source. Creating Connections Creating a connection takes two steps: First you open the list of connectors and select one you want to connect to. 
Skyvia supports various connectors, including popular cloud applications, relational databases, cloud data warehouses, and file storage services, and even allows defining custom connectors . After you select a connector, you need either to specify the required authentication parameters or perform web login depending on the authentication used by the connector. You can see more details on the necessary parameters or a detailed procedure of creating a connection in the topics about specific connectors in the Connectors section. In Skyvia, you can open the connector list and create connections in different ways, from different Skyvia tools. Currently, you can create a connection in the following ways: You can create a connection, like any other object, by clicking +NEW in the top menu and selecting Connection in the list on the left. In this case, you create a connection in full-page editors. Alternatively, you can create a connection from the editor of any other Skyvia object, which uses connection. Wherever you can select an existing connection in a list, you can also select + New connection . Or, when selecting a connection in wizards, like Backup Wizard or SQL Endpoint Wizard, you can click the + Add new button to create a new connection. In this case, the connection is created in a dialog window. Finally, some wizards, like [Create Endpoint for Looker Studio Wizard](https://app.skyvia.com/#/wizards/gds) , include connection creation steps as their pages. Managing Connections As well as the other Skyvia objects, connections are available in the object list, and you can manage them as well as other Skyvia objects. You can organize them into folders, edit or delete them, filter by the connector used, etc. To display only connections, switch to the Connections tab on the workspace toolbar. 
For connections, Skyvia displays the information when a connection was created and last modified, the connector used, and whether the connection is direct, or it connects via an agent. The following Quick Actions are available for connections: Test Connection , View Dependencies , Query SQL and Query Builder . By clicking a certain connection, you are transferred to the Connection Editor. Here you can view and modify connection parameters, clone, test connections or delete them. Different Kinds of Authentication Different connectors use different authentication kinds, and thus, connections to them are configured differently. For example, for connectors with OAuth authentication, you don\u2019t need to enter your login and password in Skyvia. You just sign in to the cloud application itself, and Skyvia does not know your username and password at all. On the contrary, for connectors with username/password authentication, you need to specify and store your username and password in Skyvia. Some connectors support multiple kinds of authentication. You may select the authentication to use when creating a connection. You can find more information about different authentication kinds in the Authentication topic. Metadata Connection metadata is the information on the structure of data available in the connected data source. Metadata includes the list of cloud objects or database tables and views available, their fields with their data types, information about primary keys and relations between the connection objects. Database Metadata For different data sources metadata is treated differently. For databases and cloud data warehouses metadata is queried every time when the connection is opened, and you always work with the actual metadata. Cloud App Metadata For different cloud applications, the metadata-related behavior is different. For some applications, like Salesforce or Dynamics 365 metadata can be customized, and Skyvia can see these customizations by default. 
For other cloud applications, custom objects and fields can be added, but querying them takes more time and reduces performance. In order to access these custom fields, you may need to explicitly set corresponding options while creating or editing a connection. For example, NetSuite, Mailchimp, or Salesforce Marketing Cloud connections have corresponding settings in their connection editors. Retrieving metadata from a cloud application can be a slow process. In order to avoid re-reading metadata each time, Skyvia caches cloud app metadata. So, once it retrieves metadata, it works with metadata cache , and it won\u2019t detect metadata changes. To see metadata changes, you will need to clear the metadata cache. For more information on how to configure metadata cache, read the Metadata Cache topic. Some cloud apps don\u2019t support metadata customizations at all, and their metadata is constant. Some support customization, but Skyvia can work only with their predefined objects and fields. For such data sources, metadata information is not read from the data source itself. It is considered constant and is predefined in Skyvia itself. Connecting to Local Databases Connecting to local database servers can be challenging because on-premise databases are usually behind a restrictive firewall and/or have no dedicated external IPs. Skyvia allows connecting to these servers in two ways: Direct connections . Skyvia connects to the database server directly. This requires database servers to be available from the Internet. You may need to configure port forwarding if the server does not have a dedicated external IP and add Skyvia\u2019s IPs to the Firewall access rules. See Skyvia IPs and more information in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections . Skyvia connects to the database server via a secure Agent application. 
You create an Agent, download and install the Agent application on a computer that has both Internet access and access to the database server, and create Agent connections. Data is transmitted between the Agent and Skyvia in an encrypted form. This approach doesn't require a dedicated external IP or port and usually does not require additional firewall configuration. The Agent also often provides better performance for querying local databases because of protocol optimizations.

Metadata Cache

For cloud data sources that allow modifying their objects and adding custom objects and/or fields, Skyvia caches metadata by default. Skyvia caches metadata because reading metadata from a cloud source may take much time, and reloading it every time a connection is used decreases performance. However, you may change this behavior when creating or editing a connection to such a data source by changing the value of the Metadata Cache parameter. This parameter determines for how long the cached metadata for the connection is considered valid. You can configure this time interval or reset it manually in the Connection Details window by clicking the Clear now link near the Metadata Cache parameter. The time the cache was last cleared is displayed next to the Clear now link. You can also disable the metadata cache completely, and metadata will be queried whenever necessary. By default, the Metadata Cache parameter is set to Infinite, and the cache is reset only manually. If the metadata of your cloud application changes often, you may need to change this setting.

Metadata Changes

If the metadata of your cloud app changes, by default, Skyvia knows nothing about it.
If some fields or objects that are used in your integrations, backups, or endpoints are deleted in the data source, this causes errors the next time the integration or backup runs, or the endpoint data are queried. If an object is added in the data source, existing backups, integrations, etc. will not fail; however, Skyvia will not know about these changes until the metadata cache is cleared. Unless you set the Metadata Cache parameter of your connection to Disabled or to a short period, you will need to edit your connection and click the Clear now link. In some cases, you will also need to edit your backups or integrations and apply the corresponding changes to them. You can easily find out which objects use a specific connection by clicking the View Dependencies button in the connection details.

List of Connectors with Metadata Cache

Salesforce, Dynamics 365, SugarCRM, Zoho CRM, NetSuite*, Salesforce Marketing Cloud*, Mailchimp*, HubSpot, Marketo, Magento

* These connectors also have additional parameters determining whether their custom objects or fields are available in Skyvia.

Connectors SDK

Skyvia allows creating your own custom connectors to data sources that are not yet supported but have a REST API. With the SDK (Software Development Kit), you can create a custom connector, configure basic settings, define objects and fields, and configure procedures. You can use the schema reference provided in the REST Connector section to define your custom REST connector.

REST Connector

One of the options for connecting to a data source that is not supported directly is via a REST connector.
When setting up the REST connector, you must specify JSON code that describes the connector API. This section contains the reference of the connector configuration code. It consists of two sections:

The ProviderConfiguration section contains primary settings.
The Metadata section describes connector metadata. It describes connector Objects, Procedures, and complex Types, and allows you to override most connector behavior settings for specific objects and procedures.

Metadata

The Metadata section contains settings that define connector objects, stored procedures, and complex types. It contains the following sections:

Objects: settings that define the objects and their fields in the connector.
Types: settings that define complex structured objects (complex types).
Procedures: settings that define the connector's stored procedures.
Metadata Section Example

```json
"Metadata": {
  "Objects": [{
    "Name": "Customers",
    "Url": "/customers",
    "ResultPath": "data",
    "Columns": [{
      "Name": "Id",
      "APIPath": "id",
      "DbType": "String",
      "Length": 50,
      "Primary": true,
      "Createable": false,
      "Updateable": false
    }, {
      "Name": "Name",
      "APIPath": "name",
      "DbType": "String",
      "Length": 100,
      "Required": true
    }, {
      "Name": "Balance",
      "APIPath": "balance",
      "DbType": "Int32"
    }, {
      "Name": "Addresses",
      "APIPath": "addresses",
      "DbType": "JsonArray",
      "SubType": "AddressType"
    }]
  }],
  "Types": [{
    "Name": "AddressType",
    "Columns": [{
      "Name": "Country",
      "APIPath": "country",
      "DbType": "String",
      "Length": 50
    }, {
      "Name": "City",
      "APIPath": "city",
      "DbType": "String",
      "Length": 100
    }, {
      "Name": "Addr",
      "APIPath": "addr",
      "DbType": "String",
      "Length": 500
    }]
  }],
  "Procedures": [{
    "Name": "Ping",
    "Url": "/accounts",
    "Method": "GET",
    "ResultPath": "data"
  }]
}
```

Objects

Objects is a section that defines the objects and their fields in the connector. It contains object configurations.
Object configurations contain the following settings:

Name: The name of the object.
Url: The object endpoint URL.
Method: The web request method.
ResultPath: The path to the response result.
BodyPattern: The object body format settings.
FieldsParameterSupported: Indicates whether the object supports querying only a part of the object fields by specifying the field list in a parameter.
FieldsParameterName: The name of the web request URL parameter which contains the field list.
Headers: The headers of the web request for reading the object.
PagingStrategy: Object pagination settings.
ErrorHandling: Error processing settings.
ConstantParameters: Constant parameters of the web request for reading the object.
Columns: The section of object field settings.
ParentRelations: The section of settings that configure object relationships.
RetrieveSingleOperation: The section for configuring the operation of retrieving object records by IDs.
CRUD Operations: Sections for configuring the object's CRUD operations.

For example,

```json
"Objects": [{
  "Name": "Tickets",
  "Url": "/tickets?include=requester,tags",
  "ResultPath": "tickets",
  "Columns": [{
    "Name": "Id",
    "APIPath": "id",
    "DbType": "Int64",
    "Primary": true,
    "ReturningResultJPath": "ticket.id",
    "Createable": false,
    "Updateable": false
  }, {
    "Name": "GroupId",
    "APIPath": "group_id",
    "DbType": "Int64",
    "FilterOperations": [{
      "Operation": "Equals",
      "ParameterName": "group_id",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "LessThan",
      "ParameterName": "group_id",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "GreaterThan",
      "ParameterName": "group_id",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "LessThanOrEquals",
      "ParameterName": "group_id",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "GreaterThanOrEquals",
      "ParameterName": "group_id",
      "ParameterType": "UrlParameter"
    }]
  }, {
    "Name": "DepartmentId",
    "APIPath": "department_id",
    "DbType": "Int64"
  }, {
    "Name": "Status",
    "APIPath": "status",
    "DbType": "String",
    "NativeDbType": "Int32",
    "Required": true,
    "Enum": true,
    "EnumValues": [{
      "Name": "Open",
      "Value": "2"
    }, {
      "Name": "Pending",
      "Value": "3"
    }, {
      "Name": "Resolved",
      "Value": "4"
    }, {
      "Name": "Closed",
      "Value": "5"
    }],
    "FilterOperations": [{
      "Operation": "Equals",
      "ParameterName": "status",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "LessThan",
      "ParameterName": "status",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "GreaterThan",
      "ParameterName": "status",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "LessThanOrEquals",
      "ParameterName": "status",
      "ParameterType": "UrlParameter"
    }, {
      "Operation": "GreaterThanOrEquals",
      "ParameterName": "status",
      "ParameterType": "UrlParameter"
    }]
  }],
  "ParentRelations": [{
    "Field": "DepartmentId",
    "RelatedTable": "Departments",
    "RelatedField": "Id"
  }],
  "RetrieveSingleOperation": {
    "Url": "/tickets/?include=tags,requester,problem,assets",
    "ResultPath": "ticket",
    "Arguments": [{
      "ColumnName": "Id",
      "ParameterName": "ticket_id",
      "ParameterType": "UrlPath"
    }]
  },
  "InsertOperation": {
    "Url": "/tickets",
    "Method": "POST",
    "ReturningStrategy": {
      "Type": "Row"
    },
    "InputType": "JsonBody"
  },
  "UpdateOperation": {
    "Url": "/tickets/",
    "Arguments": [{
      "ColumnName": "Id",
      "ParameterName": "ticket_id",
      "ParameterType": "UrlPath"
    }],
    "Method": "PUT",
    "ReturningStrategy": {
      "Type": "Row"
    },
    "InputType": "JsonBody"
  },
  "DeleteOperation": {
    "Url": "/tickets/",
    "Arguments": [{
      "ColumnName": "Id",
      "ParameterName": "ticket_id",
      "ParameterType": "UrlPath"
    }],
    "Method": "DELETE"
  }
}]
```

General Settings

Name

The name of the object.

Url

The object endpoint URL. It can be an absolute or relative URL. An absolute URL is a full URL, such as https://api.example.com/accounts. A relative URL, for example, /customers, is resolved against the base URL. For example, if the base URL is https://api.example.com and the relative URL is /customers, the object's URL is formed by concatenating the base and relative URLs: https://api.example.com/customers. If the URL value starts with the '/' character, Skyvia treats it as a relative URL and appends it to the BaseUrl.

Method

The web request method, for example, GET, POST, PUT. The default method is GET.

ResultPath

The path to a response field containing the endpoint call's result. More details are available in the ProviderConfiguration ResultPath topic.

BodyPattern

A setting that describes the object body format. For more details, refer to the ProviderConfiguration BodyPattern setting.

FieldsParameterSupported

This setting defines whether the source object supports specifying the field list as an endpoint URL parameter for getting data. The default value is false. For more details, refer to the FieldsParameterSupported setting in the ProviderConfiguration section.
FieldsParameterName

This setting determines the name of the web request URL parameter which contains the list of fields. For more details, see the FieldsParameterName setting in the ProviderConfiguration section.

Headers

The headers of the web request for reading the object. Specify Headers as a JSON array of objects with the Key and Value properties. These headers are merged with the respective constant headers specified in the ProviderConfiguration Headers section.

PagingStrategy

This section defines the specific paging strategy for an object. Add this section if it differs from the ProviderConfiguration PagingStrategy section.

ErrorHandling

Determines the specific object behavior in case of errors. Add this section if the behavior differs from the general settings provided in the ProviderConfiguration ErrorHandling section.

ConstantParameters

The constant parameters of the web request for reading the object. These constant parameters are merged with the respective constant parameters specified in the ProviderConfiguration ConstantParameters section.
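As an illustration of the Key/Value format described above, an object-level Headers section might look like the following sketch. The object, endpoint, and header values here are invented for illustration; they are not taken from a real connector.

```json
{
  "Name": "Invoices",
  "Url": "/invoices",
  "ResultPath": "data",
  "Headers": [{
    "Key": "Accept",
    "Value": "application/json"
  }, {
    "Key": "X-Api-Version",
    "Value": "2023-10"
  }]
}
```

At request time, these two headers would be sent together with any constant headers declared in the ProviderConfiguration Headers section.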
Columns

This section contains column configurations, which define connector object fields:

Name: The name of the field.
APIPath: The path to the field in the response.
DMLAPIName: The field API name for CRUD operations.
InsertAPIName: The field API name for the INSERT operation.
UpdateAPIName: The field API name for the UPDATE operation.
JPath: The JSON path to the corresponding property in the response.
SingleResultJPath: The JSON path to the corresponding property in the response for the RetrieveSingleOperation.
ReturningResultJPath: The JSON path to the corresponding property in the response after performing the INSERT or UPDATE operation.
Primary: Defines whether the field is the primary key.
DbType: Defines the field data type.
SubType: Defines the element type for a complex JSON object or JSON array field.
Length: Max number of characters for text type fields.
Precision: Max number of significant digits for a numeric field.
Scale: Number of digits after the decimal point for numeric fields.
TimeStampStoreMode: The timestamp data format setting.
DateTimeFormat: Defines how to process DateTime data.
DateFormat: Defines how to process Date data.
TimeFormat: Defines how to process Time data.
Nullable: Defines if the field accepts the null value.
Required: Defines whether the field is required or optional.
Createable: Defines if the field supports the INSERT operation.
Updateable: Defines if the field supports the UPDATE operation.
ReadOnly: Defines whether the field is read-only.
DefaultValue: Allows setting the default value for the field.
NullAs: Defines a value the source uses as a null value.
ExcludeNullValuesFromUpdate: Defines whether to pass null values to this field during the UPDATE operation.
ExtendedRequest: Defines whether this field can be obtained only via requesting a record by its ID.
Enum: Defines if the field has a set of specific valid values.
MultiEnum: Defines if the field is an array of specific valid values.
EnumValues: Defines the set of specific valid values for the field.
StrictEnumValues: Defines whether to check if the field values match the list of valid values.
ExcludeFromFieldsParameter: Use this setting to exclude a field from the list specified in the FieldsParameterName parameter.
Expression: Defines a field value using an expression.
FilterOperations: Section that specifies filters natively supported by the data source API.

Name

The field name the user will see when querying the object data. For example, {"Name":"Amount"}.

APIPath

The name of the respective property in a JSON object corresponding to a record. If the property is nested, the APIPath specifies its path with the nested property names separated by dots. The nesting level is unlimited. For example, the data source returns a record in the following format:

```json
{
  "address": {
    "country": "USA",
    "city": "New York",
    "contactInfo": {
      "phone": "1111",
      "fax": "1111"
    }
  }
}
```

The columns in the corresponding object configuration should look like this:

```json
{
  "Name": "AddressCountry",
  "APIPath": "address.country",
  "DbType": "String",
  "Createable": true,
  "Updateable": true
}, {
  "Name": "AddressCity",
  "APIPath": "address.city",
  "DbType": "String",
  "Createable": true,
  "Updateable": true
}, {
  "Name": "AddressPhone",
  "APIPath": "address.contactInfo.phone",
  "DbType": "String",
  "Createable": true,
  "Updateable": true
}, {
  "Name": "AddressFax",
  "APIPath": "address.contactInfo.fax",
  "DbType": "String",
  "Createable": true,
  "Updateable": true
}
```

If a field belongs to a foreign key with AggregateParent = true in the ParentRelations section, APIPath must be specified relative to the parent object web response, not to the current object. For more details, see the example.
You can use the <@=parent> token in the APIPath to access the object one level higher than the current object. It allows displaying nested arrays as separate objects that include some fields from their parent objects, for example, their parent IDs. For example, the data source has the CustomerList object storing customers, and customer records have the Addresses JSON array field, storing customer addresses. We want to represent customer addresses as a separate object and have a relation to the corresponding customers in it by the customer ID field (which we label as CustomerID).

```json
{
  "Name": "CustomerAddresses",
  "Url": "/customer",
  "ResultPath": "CustomerList.Addresses",
  "Columns": [{
    "Name": "ID",
    "APIPath": "ID",
    "DbType": "Guid",
    "Createable": false,
    "Updateable": false
  }, {
    "Name": "CustomerID",
    "APIPath": "<@=parent>.ID",
    "DbType": "Guid",
    "Createable": false,
    "Updateable": false
  }]
}
```

If the dot character is a part of the name in JSON, and not the separator in the hierarchy, you need to quote the name with the {} characters, for example:

```json
{
  "Name": "Amount",
  "APIPath": "{my.amount}"
}
```

If the path to the corresponding property in JSON is more complex, for example, if it is inside a JSON array, or API names in the hierarchy include spaces or other special characters, you need to specify the JPath setting instead of APIPath. The JPath setting supports more advanced JSON path notation.

DMLAPIName

This setting is used when the data source API uses different property names/paths for DML operations and for selecting data. In this case, it specifies the corresponding API name of the field for CREATE/UPDATE operations.
For example,

```json
"Columns": [{
  "Name": "OrderId",
  "APIPath": "order_id",
  "DbType": "String",
  "DMLAPIName": "tracking.order_id"
}]
```

Use the same notation for this setting as for the APIPath setting.

InsertAPIName

Use this setting when the data source API uses different property names/paths for CREATE and for UPDATE operations. In this case, it specifies the corresponding API name of the field for CREATE operations. Use the same notation for this setting as for the APIPath setting. For example, in our Stripe connector, the BankAccounts object Country field is configured like this:

```json
{
  "Name": "Country",
  "APIPath": "country",
  "InsertAPIName": "source.country",
  "DbType": "String",
  "Length": 2,
  "Required": true,
  "Updateable": false
}
```

UpdateAPIName

Use this setting when the data source API uses different property names/paths for CREATE and for UPDATE operations. In this case, it specifies the corresponding API name of the field for UPDATE operations. Use the same notation for this setting as for the APIPath setting. For example, in our Stripe connector, the Plans object ProductId field is configured like this:

```json
{
  "Name": "ProductId",
  "APIPath": "product",
  "InsertAPIName": "product.id",
  "UpdateAPIName": "product",
  "DbType": "String"
}
```

JPath

The JSON path to search for a property with a field value in the web response. This is the alternative setting for APIPath in more complex cases. If a field is defined via JPath, it automatically becomes Required = false, Createable = false, and Updateable = false. If a field belongs to a foreign key with AggregateParent = true in the ParentRelations section, JPath must be specified relative to the parent object web response, not to the current object. A JSON path is an expression which queries and gets data from a JSON structure at the specified JSON structure elements.
You must follow the JSON Path syntax to specify it correctly. You need to use escaping for fields with special characters in their names. Names with dots, spaces, or other special characters must be quoted with square brackets. Data sources may sometimes return arrays which include elements with a space character in their names. For example,

```json
{
  "fields": [{
    "first name": [{
      "value": "John",
      "label": "main",
      "is_primary": true
    }]
  }]
}
```

You can set JPath in the following format to get such field values:

```json
"JPath": "fields.['first name'][0].value"
```

SingleResultJPath

Specifies the JSON path to search for a property with a field value (relative to ResultPath). Use this setting when the data source API uses a different property name/path for the operation of retrieving a single record by ID (RetrieveSingleOperation). You need to use escaping for fields with special characters in their names. Names with dots, spaces, or other special characters must be quoted with square brackets.

ReturningResultJPath

Specifies the JSON path to search for a property with a field value (relative to ResultPath). Use this setting when:

ReturningStrategy is defined for an object or for the connector globally.
The field path in the INSERT/UPDATE operation result differs from the path in APIPath or JPath.

You need to use escaping for fields with special characters in their names. Names with dots, spaces, or other special characters must be quoted with square brackets.

Primary

Defines whether the field is the primary key or not. It accepts true and false values. The default value is false. Set Primary = true for every composite key field to define a composite primary key. We recommend specifying the primary key fields at the beginning of the object configuration.

DbType

Defines the field data type. The default data type is String.
```json
{
  "Name": "FirstName",
  "APIPath": "first_name",
  "DbType": "String"
}
```

The list of possible data types is the following:

Boolean: A simple type representing Boolean values of true or false.
Byte: An 8-bit unsigned integer ranging in value from 0 to 255.
Int16: An integral type representing signed 16-bit integers with values between -32768 and 32767.
Int32: An integral type representing signed 32-bit integers with values between -2147483648 and 2147483647.
Int64: An integral type representing signed 64-bit integers with values between -9223372036854775808 and 9223372036854775807.
Single: A floating point type representing values ranging from approximately 1.5 x 10^-45 to 3.4 x 10^38 with a precision of 7 digits.
Double: A floating point type representing values ranging from approximately 5.0 x 10^-324 to 1.7 x 10^308 with a precision of 15-16 digits.
Decimal: A simple type representing values ranging from 1.0 x 10^-28 to approximately 7.9 x 10^28 with 28-29 significant digits.
Time: A type representing a time value.
Date: A type representing a date value.
DateTime: A type representing a date and time value.
DateTimeOffset: Date and time data with time zone awareness. The date value range is from January 1, 1 AD through December 31, 9999 AD. The time value range is 00:00:00 through 23:59:59.9999999 with an accuracy of 100 nanoseconds. The time zone value range is -14:00 through +14:00.
String: A type representing Unicode character strings.
Guid: A globally unique identifier (GUID).
Binary: A variable-length stream of binary data ranging between 1 and 8,000 bytes.
JsonArray: A JSON array.
JsonObject: A JSON object.

SubType

Specifies the data type for complex JSON object or JSON array fields (having DbType set to JsonArray or JsonObject). For JsonObject fields, it can be the name of a complex type. For JsonArray fields, it can be the name of a complex type or the name of a simple data type, and it specifies the type of the array elements.
If the SubType is not specified, the JSON field content is passed as is, without further processing and mapping to a specific subtype field structure.

Length

Max number of characters for text type fields. Default values are: 38 for Guid, int.MaxValue for JSON Object or JSON Array, 255 for other text types.

Precision

Max number of significant digits for a numeric field. For example,

```json
{
  "Name": "Amount",
  "APIPath": "amount",
  "DbType": "Decimal",
  "Precision": 15,
  "Scale": 5
}
```

Scale

Number of digits after the decimal point for numeric fields.

```json
{
  "Name": "Amount",
  "APIPath": "amount",
  "DbType": "Decimal",
  "Precision": 15,
  "Scale": 5
}
```

TimeStampStoreMode

Use this setting to redefine the TimeStampStoreMode from the ProviderConfiguration section for the specific field. For example,

```json
{
  "Name": "AvailableOn",
  "APIPath": "available_on",
  "DbType": "DateTime",
  "TimeStampStoreMode": "SecondsSinceEpoch"
}
```

DateTimeFormat

This setting defines how to process DateTime data. If you omit it, it inherits the DateTimeFormat value from the ProviderConfiguration section.

DateFormat

This setting defines how to process Date data. If you omit it, it inherits the DateFormat value from the ProviderConfiguration section.

TimeFormat

This setting defines how to process Time data. If you omit it, it inherits the TimeFormat value from the ProviderConfiguration section.

Nullable

This setting defines if the field can accept null values. Accepts true or false values. The default value is true.

Required

This setting defines whether the field is required or optional to fill in. Accepts true or false values. The default value is false.

DMLKey

This setting defines if you can use this field in the WHERE condition of the UPDATE or DELETE operations to facilitate record search and processing. Accepts true or false values.
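The DMLKey description above can be sketched as a column configuration. This is a hypothetical example (the Email field and its object are invented, not taken from a real connector) showing a field marked as usable in UPDATE/DELETE WHERE conditions:

```json
{
  "Name": "Email",
  "APIPath": "email",
  "DbType": "String",
  "Length": 255,
  "DMLKey": true
}
```

With such a configuration, a statement like UPDATE ... WHERE Email = 'user@example.com' could locate records by this field instead of the primary key.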
Createable

A boolean setting which defines if the field supports the INSERT operation. Accepts true or false values. The default value is true. You must declare the InsertOperation section to use the INSERT operation.

Updateable

A boolean setting which defines if this field supports the UPDATE operation. Accepts true or false values. The default value is true. You must declare the UpdateOperation section to use the UPDATE operation.

ReadOnly

This boolean setting defines if the field value can be modified. Accepts true or false values. The default value is false.

DefaultValue

This setting defines the default value for the field.

```json
{
  "Name": "Amount",
  "APIPath": "amount",
  "DbType": "Decimal",
  "DefaultValue": 0
}
```

NullAs

This setting defines a value the source uses as an empty value. This setting impacts the web requests for CRUD operations. For example, some source uses "name": "" instead of "name": null. In this case, you specify "NullAs": "".

ExcludeNullValuesFromUpdate

Use this setting to omit fields with null values during the UPDATE operation. The connector will omit fields for which null values are passed in the request.

ExtendedRequest

A source API may return only part of the fields for some objects when querying multiple records. To query the values of the missing fields, Skyvia can perform additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. This setting defines whether it is necessary to perform such additional web requests by a record ID to get this field value for each record. Skyvia performs additional requests only if at least one field with ExtendedRequest = true is present in the web request. If no fields with ExtendedRequest = true are selected in the request, no additional requests are performed. Use this setting only if the RetrieveSingleOperation setting is defined for the object.
ExtendedRequest is boolean and accepts the true and false values. The default value is false. For example, Pipedrive has the Filters object. When querying all records, the value in the Conditions field returns an empty result. If you set ExtendedRequest to true for the Conditions field, Skyvia will perform an additional web request for each record and obtain all Conditions field values by record IDs. Consider the source API limits when using this setting. We recommend using it only for fields whose data is really necessary or required.

Enum

This setting defines if the field has a set of specific valid values. Enum accepts true or false values. Always use this setting together with the EnumValues setting.

MultiEnum

This setting defines if the field is an array of specific valid values. MultiEnum accepts true or false values. Always use this setting together with the EnumValues setting.

EnumValues

This setting defines the set of specific valid values for the field if Enum or MultiEnum is true. It is represented as key-value pairs: [{Name1:value1}, {Name2:value2}]. For example, the Status field valid values are: canceled, chargeable, consumed, failed. The Columns setting for the Status field will look like:

```json
{
  "Name": "Status",
  "APIPath": "status",
  "DbType": "String",
  "Enum": true,
  "EnumValues": [{
    "Name": "Canceled",
    "Value": "canceled"
  }, {
    "Name": "Chargeable",
    "Value": "chargeable"
  }, {
    "Name": "Consumed",
    "Value": "consumed"
  }, {
    "Name": "Failed",
    "Value": "failed"
  }]
}
```

Here, Name is the name of the list item displayed to the Skyvia user. Value is the internal data source API value of the corresponding list item.

StrictEnumValues

Determines whether you can assign only values from the EnumValues list to this field, with custom values not allowed. The default value is false.
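For comparison with the Enum example above, a MultiEnum field (an array restricted to a list of valid values) could be sketched as follows. This is a hypothetical configuration; the Tags field is invented, and the exact DbType/SubType combination for array-valued fields may differ per connector:

```json
{
  "Name": "Tags",
  "APIPath": "tags",
  "DbType": "JsonArray",
  "SubType": "String",
  "MultiEnum": true,
  "EnumValues": [{
    "Name": "Urgent",
    "Value": "urgent"
  }, {
    "Name": "Internal",
    "Value": "internal"
  }]
}
```

As with Enum, each Name is what the Skyvia user sees, and each Value is what the source API actually stores for an element of the array.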
ExcludeFromFieldsParameter

Use this setting to exclude a field from the list passed in the FieldsParameterName parameter. For example, it is necessary if the data source always returns this field and does not allow passing the field name in the field list.

Expression

This setting defines a field value using an expression. You can use it both to define a completely virtual calculated field and to redefine the value of an existing field. A virtual field defined by an expression becomes Createable = false and Updateable = false by default. However, you can define a real field with an Expression setting and explicitly set Createable = true or Updateable = true and specify APIName, InsertAPIName, or UpdateAPIName for such a field. When specifying an expression, use the API names of the response object properties. You must specify API names without parentheses or quotes. If the expression uses the name of a nested JSON property, separate the names with dots (for example, shipping.country). You can find the Expression syntax and the list of supported functions and operators in our Expression Syntax documentation. For example, the Mailjet API returns null for the UpdatedDate field if this field has never been updated since its creation. Thus, we declared the Expression setting so that never-updated records take the CreatedDate value when the UpdatedDate is null:

```json
{
  "Name": "UpdatedDate",
  "APIPath": "ModifiedAt",
  "DbType": "DateTime",
  "Expression": "datetime(is_null_or_empty(string(ModifiedAt)) ? CreatedAt : ModifiedAt)"
}
```

FilterOperations

This section contains filter operation configurations, which determine the filtering operations natively supported by the data source API. For example,

```json
{
  "Name": "Amount",
  "APIPath": "amount",
  "DbType": "Int32",
  "FilterOperations": [{
    "Operation": "Equals",
    "ParameterName": "Amount",
    "ParameterType": "UrlPath"
  }, {
    "Operation": "LessThan",
    "ParameterName": "Amount",
    "ParameterType": "UrlPath"
  }]
}
```

Every FilterOperation setting has the following nested settings:

Operation: Defines the type of filter operation supported by the source API. Valid values are Equals, LessThan, LessThanOrEquals, GreaterThan, GreaterThanOrEquals.
ParameterType: Defines how the filter value is passed in the web request. Valid values:
  UrlPath: The value is a part of the URL, for example, https://us11.api.mailchimp.com/3.0/lists/list_id
  UrlParameter: The value is a parameter in the web request URL, for example, https://us11.api.mailchimp.com/3.0/lists?Name=mylist
  BodyParameter: The value is contained in the web request body, for example, POST https://us11.api.mailchimp.com/3.0/lists {"Name":"mylist"}
ParameterName: The name of the URL parameter (when ParameterType = UrlParameter) or web request property (when ParameterType = BodyParameter) where the filter value is passed.
Required: Defines whether the filter is required or not. Accepts true or false values. The default value is false. If the filter is required, you can't execute the web request to this object without a filter by this field.
DbType: Redefines the filter argument data type. If omitted, the argument data type is inherited from the field DbType.
SubType: Redefines the filter argument subtype. If omitted, the argument subtype is inherited from the field SubType.
TimeStampStoreMode: Redefines the filter argument TimeStampStoreMode.
If omitted, the argument TimeStampStoreMode is inherited from the field TimeStampStoreMode.

#### DateTimeFormat

Redefines the filter argument DateTimeFormat. If you omit it, the argument DateTimeFormat is inherited from the field DateTimeFormat.

#### DateFormat

Redefines the filter argument DateFormat. If you omit it, the argument DateFormat is inherited from the field DateFormat.

#### TimeFormat

Redefines the filter argument TimeFormat. If you omit it, the argument TimeFormat is inherited from the field TimeFormat.

#### NullAs

Redefines the filter argument NullAs. If you omit it, the argument NullAs is inherited from the field NullAs.

### ParentRelations

This section contains parent relationship configurations that define the parent relationships of the current object with other objects. Parent relationship configurations have the following settings:

#### Field

The foreign key field that refers to the parent object.

#### RelatedTable

The name of the parent object.

#### RelatedField

The key field in the parent object by which the foreign key relation is built.

#### AggregateParent

Some data sources allow getting records from certain child objects only by specifying the corresponding parent primary key (ID) values. This setting defines whether querying data from this object is performed via the parent primary key corresponding to this relation. Accepts true or false values; the default value is false.

When you define child object foreign key fields that act as references to parent objects, their APIPath or JPath values must relate to the parent object web response. Skyvia does not require the ID of the parent object from the user in such cases. If you don't specify the IDs of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record. This allows querying child objects without knowing their parents, but this method takes more time and consumes more API calls.
It uses at least one API call for every parent object record.

#### Example 1: Invoices and InvoiceLineItems Relationship

There is a parent Invoices object and a child InvoiceLineItems object. The InvoiceLineItems object references the Invoices object by the InvoiceId field. The source API allows retrieving records from the InvoiceLineItems object only via IDs from the parent Invoices object: to retrieve each set of records from InvoiceLineItems, we specify the InvoiceId in the request URL. We set AggregateParent = true in the ParentRelations section. It automatically processes such a situation and gets the records from InvoiceLineItems.

Example of the parent Invoices object:

```json
{
  "Name": "Invoices",
  "Url": "/invoices",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Primary": true, "Createable": false, "Updateable": false },
    { "Name": "CustomerId", "APIPath": "customer", "DbType": "String", "Required": true, "Updateable": false },
    { "Name": "Total", "APIPath": "total", "DbType": "Int32", "Createable": false, "Updateable": false }
  ]
}
```

Example of the child InvoiceLineItems object:

```json
{
  "Name": "InvoiceLineItems",
  "Url": "/invoices//lines",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Primary": true, "Createable": false, "Updateable": false },
    {
      "Name": "InvoiceId",
      "APIPath": "id",
      "DbType": "String",
      "Length": 50,
      "FilterOperations": [
        { "Operation": "Equals", "ParameterName": "InvoiceId", "ParameterType": "UrlPath" }
      ]
    },
    { "Name": "Amount", "APIPath": "amount", "DbType": "Int32" }
  ],
  "ParentRelations": [
    { "Field": "InvoiceId", "RelatedTable": "Invoices", "RelatedField": "Id", "AggregateParent": true }
  ]
}
```

In this example, the child object foreign key InvoiceId field APIPath is set to "APIPath": "id", because such an APIPath relates to the parent object web response.

#### Example 2: ParentRelations for Objects with Nested Relationships

This is how the ParentRelations settings are defined in our SurveyMonkey connector. There are three objects: Surveys, SurveyPages, and SurveyPageQuestions. The APIPath of the SurveyId field in the SurveyPages and SurveyPageQuestions objects is set to id, because it relates to the parent Surveys object web response. The APIPath of the PageId field in the SurveyPageQuestions object is set to id as well (it relates to the parent SurveyPages object web response).

```json
{
  "Name": "Surveys",
  "Url": "/surveys",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Length": 190, "Primary": true, "Createable": false, "Updateable": false },
    { "Name": "Title", "APIPath": "title", "DbType": "String", "Required": true, "Length": 1000 },
    { "Name": "Nickname", "APIPath": "nickname", "DbType": "String" },
    { "Name": "ResponseCount", "APIPath": "response_count", "DbType": "Int32", "Createable": false, "Updateable": false }
  ]
}
```

```json
{
  "Name": "SurveyPages",
  "Url": "/surveys//pages",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Length": 190, "Primary": true, "Createable": false, "Updateable": false },
    {
      "Name": "SurveyId",
      "APIPath": "id",
      "DbType": "String",
      "Length": 190,
      "Required": true,
      "Updateable": false,
      "FilterOperations": [
        { "Operation": "Equals", "ParameterName": "survey_id", "ParameterType": "UrlPath" }
      ]
    },
    { "Name": "Title", "APIPath": "title", "DbType": "String", "Length": 1000 },
    { "Name": "Description", "APIPath": "description", "DbType": "String", "Length": 4000 },
    { "Name": "QuestionCount", "APIPath": "question_count", "DbType": "Int32", "Createable": false, "Updateable": false }
  ],
  "ParentRelations": [
    { "Field": "SurveyId", "RelatedTable": "Surveys", "RelatedField": "Id", "AggregateParent": true }
  ]
}
```

```json
{
  "Name": "SurveyPageQuestions",
  "Url": "/surveys//pages//questions",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Length": 190, "Primary": true, "Createable": false, "Updateable": false },
    {
      "Name": "PageId",
      "APIPath": "id",
      "DbType": "String",
      "Length": 190,
      "Required": true,
      "DMLKey": true,
      "FilterOperations": [
        { "Operation": "Equals", "ParameterName": "page_id", "ParameterType": "UrlPath" }
      ]
    },
    {
      "Name": "SurveyId",
      "APIPath": "id",
      "DbType": "String",
      "Length": 190,
      "Required": true,
      "DMLKey": true,
      "FilterOperations": [
        { "Operation": "Equals", "ParameterName": "survey_id", "ParameterType": "UrlPath" }
      ]
    },
    { "Name": "Heading", "APIPath": "heading", "DbType": "String", "Length": 2000, "Createable": false, "Updateable": false },
    { "Name": "Position", "APIPath": "position", "DbType": "Int32" },
    { "Name": "Subtype", "APIPath": "subtype", "DbType": "String", "Required": true, "ExtendedRequest": true, "Updateable": false }
  ],
  "ParentRelations": [
    { "Field": "PageId", "RelatedTable": "SurveyPages", "RelatedField": "Id", "AggregateParent": true },
    { "Field": "SurveyId", "RelatedTable": "Surveys", "RelatedField": "Id" }
  ]
}
```

### RetrieveSingleOperation

This section defines the operation of fetching a record by its primary key. We highly recommend specifying this setting if the source API allows getting object records by their IDs. For example, with RetrieveSingleOperation defined, the connector performs the request SELECT * FROM Object WHERE Id = ... via one API call. If you omit RetrieveSingleOperation, the connector reads all the object records into the cache and then performs the request against this cache instead of calling the object directly. This takes more time and more API calls.

```json
"RetrieveSingleOperation": {
  "UrlSuffix": "/",
  "Arguments": [
    { "ColumnName": "Id", "ParameterName": "id", "ParameterType": "UrlPath" }
  ]
}
```

#### Url

Absolute or relative URL to retrieve a single object. Use this parameter if the base URL for retrieving a single object differs from the object URL.

#### UrlSuffix

Suffix for the URL parameter, added at the end of the URL. For example, the Customers object URL is /customers, and the URL suffix in RetrieveSingleOperation is /customer_id. In this case, a web request with a WHERE condition by the primary key field will have the URL /customers/ followed by the customer ID value.
#### ResultPath

The path to the field that contains the result record JSON object in the response. Specify ResultPath in RetrieveSingleOperation if it differs from the ResultPath specified in the object configuration. For example, the whole object ResultPath is "issues", and the ResultPath for the request by ID is "issue":

```json
{
  "Name": "Issues",
  "Url": "/issues.json",
  "ResultPath": "issues",
  "Columns": [
    { "Name": "Id", "JPath": "$.id", "DbType": "Int32", "Primary": true }
  ],
  "RetrieveSingleOperation": {
    "Url": "/issues/.json",
    "ResultPath": "issue",
    "Arguments": [
      { "ColumnName": "Id", "ParameterName": "issue_id", "ParameterType": "UrlPath" }
    ]
  }
}
```

#### Method

Web request method. If skipped, the object's Method field value is used.

#### Arguments

This section contains argument configurations, which define the arguments of RetrieveSingleOperation. Argument configurations contain the following settings:

#### ColumnName

Name of the key field for RetrieveSingleOperation.

#### ParameterName

Name of the web request parameter.

#### ParameterType

Defines the type of the argument. Valid values are the following:

| Value | Description |
| --- | --- |
| UrlPath | Value is a part of the URL |
| UrlParameter | Value is a URL parameter in a web request |
| BodyParameter | Value is contained in the web request body |

You must specify all the fields needed for the record search in the Arguments setting. If the object URL refers to a parent object, you must also specify the field by which the object is related in the Arguments setting. For example, the TransferReversals object relates to the Transfers object by an aggregate relation (its URL is /transfers/TransferId/reversals). You can get the TransferReversals records only by specifying the TransferId value. We specify the TransferId setting in Arguments when declaring the RetrieveSingleOperation for such an object.
```json
{
  "Name": "TransferReversals",
  "Url": "/transfers//reversals",
  "ResultPath": "data",
  "Columns": [
    { "Name": "Id", "APIPath": "id", "DbType": "String", "Primary": true, "Createable": false, "Updateable": false },
    { "Name": "Amount", "APIPath": "amount", "DbType": "Int32", "Updateable": false },
    {
      "Name": "TransferId",
      "APIPath": "id",
      "DbType": "String",
      "Required": true,
      "Updateable": false,
      "FilterOperations": [
        { "Operation": "Equals", "ParameterName": "TransferId", "ParameterType": "UrlPath" }
      ]
    }
  ],
  "ParentRelations": [
    { "Field": "TransferId", "RelatedTable": "Transfers", "RelatedField": "Id", "AggregateParent": true }
  ],
  "RetrieveSingleOperation": {
    "UrlSuffix": "/",
    "Arguments": [
      { "ColumnName": "Id", "ParameterName": "reversal_id", "ParameterType": "UrlPath" },
      { "ColumnName": "TransferId", "ParameterName": "TransferId", "ParameterType": "UrlPath" }
    ]
  }
}
```

### CRUD Operations

Settings for INSERT, UPDATE, and DELETE operations for the connector's objects are defined in the InsertOperation, UpdateOperation, and DeleteOperation sections. If any of these sections is not specified in an object configuration, the corresponding DML operation is not available for the object. Every CRUD operation section contains the following settings:

#### UrlSuffix

A suffix added at the end of the operation URL. For example, we perform the POST request to the Lists object, and the web request URL should look like /lists.create. We declare the following UrlSuffix: "UrlSuffix": ".create".

#### Url

Specify the URL for the CRUD operation if it significantly differs from the GET request URL.

#### Method

Web request method.
Default values are POST for InsertOperation, PUT for UpdateOperation, and DELETE for DeleteOperation.

#### BodyPattern

Request body format. More details are available in the BodyPattern section of the ProviderConfiguration topic.

#### Headers

Additional HTTP request headers that are sent for this operation. Specify Headers as a JSON array of objects with the Key and Value properties. They are merged with the Headers specified in the ProviderConfiguration section, as well as with the headers in the corresponding setting of an object.

#### ConstantParameters

Allows you to set constant parameters for the specific operation. These parameters are merged with the ConstantParameters specified in the ProviderConfiguration section, as well as with the ConstantParameters specified on the object level. For example:

```json
"ConstantParameters": [
  {
    "ParameterName": "resource_type",
    "ParameterType": "BodyParameter",
    "DbType": "String",
    "Value": "lead"
  }
]
```

#### InputType

Defines how the request parameters are passed in the web request. Valid values are:

| Value | Description |
| --- | --- |
| JsonBody | Default value. Request parameters are contained in the request body |
| RestParameters | Request parameters are passed as URL parameters separated by the & character |
| UrlencodedBody | Request parameters are passed as key-value pairs, where the key is the parameter name and the value is the parameter value |

#### Arguments

This section contains argument configurations, which define the arguments of the corresponding DML operation. Argument configurations contain the following settings:

#### ParameterName

Argument name.

#### ParameterType

The type of the argument. Valid values are the following:

| Value | Description |
| --- | --- |
| UrlPath | Value is a part of the URL |
| UrlParameter | Value is a URL parameter in a web request |
| BodyParameter | Value is contained in the web request body |

#### ColumnName

Name of the key field for the current operation.

#### ReturningStrategy

Section that determines how to process the returned data fields after the INSERT or UPDATE operation is performed.
\nSee the ReturningStrategy in the ProviderConfiguration section. Example: Declaring all CRUD Operations for the Customers Object 1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46 { \"Name\" : \"Customers\" , \"Url\" : \"/customers\" , \"ResultPath\" : \"data\" , \"Columns\" : [{ \"Name\" : \"Id\" , \"APIPath\" : \"id\" , \"Primary\" : true , \"Createable\" : false , \"Updateable\" : false }, { \"Name\" : \"Email\" , \"APIPath\" : \"email\" }], \"InsertOperation\" : { \"UrlSuffix\" : \"\" , \"Method\" : \"POST\" , \"InputType\" : \"RestParameters\" , \"ReturningStrategy\" : { \"Type\" : \"Row\" } }, \"UpdateOperation\" : { \"UrlSuffix\" : \"/\" , \"Arguments\" : [{ \"ColumnName\" : \"Id\" , \"ParameterName\" : \"customer_id\" , \"ParameterType\" : \"UrlPath\" }], \"Method\" : \"PUT\" , \"InputType\" : \"RestParameters\" , \"ReturningStrategy\" : { \"Type\" : \"Row\" } }, \"DeleteOperation\" : { \"UrlSuffix\" : \"/\" , \"Arguments\" : [{ \"ColumnName\" : \"Id\" , \"ParameterName\" : \"customer_id\" , \"ParameterType\" : \"UrlPath\" }], \"Method\" : \"DELETE\" } }" }, { "url": "https://docs.skyvia.com/connectors-sdk/rest-connector/metadata/procedures.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors SDK REST Connector Metadata Procedures The Procedures section contains stored procedure configurations. Their settings define the connector stored procedures - auxiliary connector objects that allow you to implement specific operations and execute endpoints that cannot be represented as classic connector table objects. For example, the Ping procedure checks whether the connection is valid, or the SendMessage procedure, which sends a message, etc. 
You can call a stored procedure, for example, as a command text of the form CALL procedure_name(:parameter1, :parameter2, ..., :parameterN) in the ExecuteCommand action in a Target component of a Data Flow or in Query. Connector stored procedures cannot participate in SELECT or CRUD requests.

A stored procedure configuration includes the following settings:

| Setting | Description |
| --- | --- |
| Name | The name of the stored procedure |
| Url | The procedure endpoint URL |
| Method | The web request method |
| BodyType | The type of a web request body |
| ResultPath | The path to the response result |
| BodyPattern | Body format settings of the procedure request |
| Headers | HTTP request headers |
| PagingStrategy | Stored procedure result paging strategy settings |
| ErrorHandling | Error processing settings |
| ConstantParameters | Parameters with constant values that are sent with every request |
| Parameters | The stored procedure parameter settings |
| Results | The settings for the stored procedure result |

Here is an example of a stored procedure configuration. This is the InitiateUserInvite stored procedure from the TeamworkCRM connector. It initiates an invitation for a user.
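As an illustrative sketch (assuming the CALL command text syntax described above, with the parameter names taken from the InitiateUserInvite configuration), such a procedure could be invoked like this:

```
CALL InitiateUserInvite(:email, :firstName, :lastName, :title)
```

Each named parameter value is then passed in the web request according to the parameter's ParameterType, in this case as properties of the JSON request body.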
```json
{
  "Name": "InitiateUserInvite",
  "Url": "/users/actions/initiateInvite.json",
  "Method": "POST",
  "BodyPattern": "{ \"invite\": <$body$> }",
  "BodyType": "Json",
  "Parameters": [
    { "Name": "email", "APIPath": "email", "DbType": "String", "Required": true, "ParameterType": "BodyParameter" },
    { "Name": "firstName", "APIPath": "firstName", "DbType": "String", "Required": true, "ParameterType": "BodyParameter" },
    { "Name": "lastName", "APIPath": "lastName", "DbType": "String", "Required": true, "ParameterType": "BodyParameter" },
    { "Name": "title", "APIPath": "title", "DbType": "String", "Required": true, "ParameterType": "BodyParameter" }
  ],
  "ResultPath": ""
}
```

#### General Settings

#### Name

Stored procedure name.

#### Url

The stored procedure endpoint URL. It can be an absolute or relative URL. If the URL value starts with the '/' character, Skyvia treats it as a relative URL and appends it to the BaseUrl. Otherwise, it treats it as an absolute URL.

#### Method

A web request method, for example, GET, POST, PUT. This setting is required.

#### BodyType

The type of web request body. Valid values are:

| Value | Description |
| --- | --- |
| None | Web request body is not sent |
| Json | Web request body is sent in JSON format |
| Urlencoded | Web request body is sent as key-value pairs, where the key is a web request parameter name, and the value is a body value |

#### ResultPath

A path to the response field containing the result of the procedure endpoint execution. See ProviderConfiguration for more information.

#### BodyPattern

Request body format. See ProviderConfiguration for more information.

#### Headers

Additional HTTP request headers that are sent every time you call the procedure.
Specify Headers as a JSON array of objects with the Key and Value properties. These headers are merged with the respective constant headers specified in the ProviderConfiguration Headers section.

#### PagingStrategy

Allows getting records by pages if the source supports paging for the stored procedure endpoint. Add this section if paging for this stored procedure differs from the paging configured in the ProviderConfiguration section. See ProviderConfiguration for more information.

#### ErrorHandling

Allows you to determine the procedure behavior in case of errors. Add this section if this behavior for this stored procedure must differ from the behavior configured in the ProviderConfiguration section. See ProviderConfiguration for more information.

#### ConstantParameters

Allows you to set constant web request parameters sent with every procedure call. These constant parameters are merged with the respective constant parameters specified in the ProviderConfiguration ConstantParameters section.

#### Parameters

This section contains stored procedure parameter configurations for every stored procedure parameter. These configurations have the following settings:

#### Name

Stored procedure parameter name.

#### ParameterType

Specifies how the parameter value is passed in the web request. The default value is UrlParameter. Valid values:

| Value | Description |
| --- | --- |
| UrlPath | Value is a part of the URL |
| UrlParameter | Value is a parameter in the web request URL |
| BodyParameter | Value is contained in the web request body |

#### APIPath

The name of the parameter in the data source API. If the parameter is nested in the API, the APIPath should have the following format: parent object name.parameter name. The nesting level is not limited. If the dot character is a part of the name in JSON, and not the separator in the hierarchy, you need to use quotation with the {} characters.

#### DbType

Specifies the parameter data type. The default value is String. You can find the supported types in the Objects topic.
The special JsonArray and JsonObject types mean that the parameter value is an array or a JSON object, correspondingly.

#### SubType

Specifies the data type for complex JSON object or JSON array parameters (having DbType set to JsonArray or JsonObject). For JsonObject parameters, it can be a name of a complex type. For JsonArray parameters, it can be the name of a complex type or the name of a simple data type, and it specifies the type of array elements. If SubType is not specified, the JSON parameter content is passed as is, without further processing and mapping to a specific subtype field structure.

#### Length

Number of characters for text type fields. Default values are: 38 for Guid, int.MaxValue for JSON Object or JSON Array, 255 for other text types.

#### Precision

Max number of significant digits for a numeric field.

#### Scale

Number of digits after the decimal point for numeric fields.

#### TimeStampStoreMode

This setting specifies the timestamp format for the procedure. If you omit it, the corresponding global setting from the ProviderConfiguration section is used. See ProviderConfiguration for more information.

#### DateTimeFormat

This setting specifies how to process the source DateTime data for the procedure. If you omit it, the corresponding global setting from the ProviderConfiguration section is used. See ProviderConfiguration for more information.

#### DateFormat

This setting specifies how to process the source Date data for the procedure. If you omit it, the corresponding global setting from the ProviderConfiguration section is used. See ProviderConfiguration for more information.

#### TimeFormat

This setting specifies how to process the source Time data for the procedure. If you omit it, the corresponding global setting from the ProviderConfiguration section is used. See ProviderConfiguration for more information.

#### Nullable

Defines if the field can accept null values. The default value is true.

#### Optional

Use this setting to define whether the parameter is optional or required.
Accepts the true and false values.

#### DefaultValue

This setting sets the default value for the parameter. It is useful when Optional = true.

#### NullAs

This setting specifies a value that the source uses as an empty value. For example, some sources use "name": "" instead of "name": null. In this case, you specify "NullAs": "".

#### Enum

This setting specifies if the parameter has a set of specific valid values. Enum accepts true or false values. Always use this setting together with the EnumValues setting.

#### MultiEnum

This setting specifies if the field is an array of specific valid values. MultiEnum accepts true or false values. Always use this setting together with the EnumValues setting.

#### EnumValues

This setting defines the set of valid values for the field if Enum or MultiEnum is true. It is represented as key-value pairs: [{Name1:value1}, {Name2:value2}]. Name, in this case, is a list item display name available to the connector end user. Value is an internal API value of the valid enum list item.

#### StrictEnumValues

Determines whether you can assign only values from the EnumValues list to this field, so that custom values are not allowed. The default value is false.

#### Results

This section contains column configurations with settings defining the fields of the result set returned by the stored procedure. This section is not needed if the stored procedure does not return a result set.

#### Name

A field name displayed to the user after the procedure execution, for example, {"Name":"Amount",...}.

#### APIPath

The JSON path to the result field in the web response. If the result field is nested, you specify the parent and child fields in the format: parent setting name.setting name. The nesting level is unlimited.

#### JPath

JSON path to a property with the field value in the response. For fields with special characters in their names, you need to use escaping. For example, the JSON path for a field with a dot in its name, LocationName.ID, looks like ['LocationName.ID'].
#### DbType

Specifies the field data type. The default data type is String, for example, {"Name":"FirstName", "APIPath":"first_name", "DbType":"String", ...}. You can find the supported types in the Objects topic.

#### SubType

Specifies the data type for complex JSON object or JSON array result fields (having DbType set to JsonArray or JsonObject). For JsonObject result fields, it can be a name of a complex type. For JsonArray result fields, it can be the name of a complex type or the name of a simple data type, and it specifies the type of array elements. If SubType is not specified, the JSON field content is passed as is, without further processing and mapping to a specific subtype field structure.

#### Length

Number of characters for text type fields. Default values are: 38 for Guid, int.MaxValue for JSON Object or JSON Array, 255 for other text types.

#### Precision

Max number of significant digits for a numeric field.

#### Scale

Number of digits after the decimal point for numeric fields.

#### TimeStampStoreMode

Use this setting to redefine the TimeStampStoreMode from the ProviderConfiguration section.

#### DateTimeFormat

This setting specifies how to process the source DateTime data. The default value is yyyy-MM-ddTHH:mm:ssZ. See ProviderConfiguration for more information.

#### DateFormat

This setting specifies how to process the source Date data. The default value is yyyy-MM-dd. See ProviderConfiguration for more information.

#### TimeFormat

This setting specifies how to process the source Time data. The default value is HH:mm:ssZ. See ProviderConfiguration for more information.

#### Nullable

A boolean setting which specifies if the field can accept null values. The default value is false.

#### Enum

This setting specifies if the field has a set of specific valid values. Enum is boolean and accepts true or false values. Always use this setting together with the EnumValues setting.

#### MultiEnum

This setting specifies if the field is an array of specific valid values. MultiEnum is boolean and accepts true or false values.
Always use this setting together with the EnumValues setting.

#### EnumValues

This setting defines the set of specific valid values for the field if Enum or MultiEnum is true. It is represented as key-value pairs: [{Name1:value1}, {Name2:value2}]. Name, in this case, is a list item display name available to the connector end user. Value is an internal API value of the valid enum list item.

#### Expression

This setting allows creating virtual calculated fields in the data set returned by a stored procedure. When building an expression, use the API names of the response fields. You must specify API names without parentheses or quotes. If the expression uses the name of a nested field, separate the names with a dot (for example, shipping.country). You can find the Expression syntax and the list of supported functions and operators in our [Expression Syntax](https://docs.skyvia.com/expression-syntax/) documentation.

### Types

This section contains complex type configurations. Their settings define complex structured objects, called complex types. These types can represent complex JSON objects. Complex types can have fields of other complex types or arrays of complex types.

Types are not independent connector objects. They are auxiliary objects, which cannot have relationships or participate in SELECT queries. You can't implement CRUD operations for them.

You can use complex types to define:

- JSON object and JSON array fields in the connector objects
- JSON object and JSON array fields in other complex types
- stored procedure parameters for passing JSON object and JSON array values
- stored procedure result fields of JSON object and JSON array type

You can assign these complex types to the SubType settings in the Columns, Parameters, and Results sections.
### Settings

A complex type declaration may contain the following settings:

**Name**
The name of the complex type.

**Columns**
The Columns section contains column configurations whose settings define complex type fields. Column configuration settings for complex types are almost identical to the column configuration settings for objects, except for some properties associated with operations that complex types don't support. See the Columns section for information about these settings.

### Type Example

The following example demonstrates the InvoiceLineItemsType complex type from the TMetric connector. Here is the complex type configuration code:

```json
"Types": [
  {
    "Name": "InvoiceLineItemsType",
    "Columns": [
      { "Name": "UnitCount", "APIPath": "unitCount", "DbType": "Decimal" },
      { "Name": "UnitPrice", "APIPath": "unitPrice", "DbType": "Decimal" },
      { "Name": "UnitAmount", "APIPath": "unitAmount", "DbType": "Decimal" },
      { "Name": "Description", "APIPath": "description", "DbType": "String" },
      { "Name": "ItemType", "APIPath": "itemType", "DbType": "String" }
    ]
  }
]
```

And here is how you define a column configuration for a field storing a JSON array of invoice line items:

```json
{
  "Name": "Items",
  "APIPath": "items",
  "DbType": "JsonArray",
  "SubType": "InvoiceLineItemsType",
  "Createable": false,
  "Updateable": false,
  "ExtendedRequest": true
}
```

## ProviderConfiguration

The ProviderConfiguration section contains the main provider settings that determine the connector authentication and behavior.
Most of the behavior settings can be overridden for specific connector objects and stored procedures in the Metadata section. The ProviderConfiguration section contains the following configuration sections and settings:

| Setting | Description |
|---|---|
| AuthenticationType | The type of web request authentication |
| AuthenticationToken | Section determining how the authentication token is passed |
| BaseUrl | Initial part of the API URL |
| ResultPath | The path to the field that contains the result collection of JSON objects in the response |
| BodyPattern | The setting to configure an unusual web request body format |
| Headers | Defines additional constant web request headers |
| FieldsParameterSupported | Indicates whether the connector supports querying only a part of the object fields by specifying the field list in a parameter |
| FieldsParameterName | The name of the web request URL parameter which contains the field list |
| TimeStampStoreMode | The timestamp data format |
| DateTimeFormat | Defines the text format for DateTime data processing |
| DateFormat | Defines the text format for Date data processing |
| TimeFormat | Defines the text format for Time data processing |
| RateLimitThrottling | Section for configuring API limits usage |
| PagingStrategy | Section for pagination settings |
| ErrorHandling | Section for error processing configuration |
| ReturningStrategy | Section for configuration of returned data processing |
| ConstantParameters | Section to configure parameters sent with all web requests |

ProviderConfiguration section example:

```json
"ProviderConfiguration": {
  "BaseUrl": "https://api.stripe.com/v1",
  "AuthenticationType": "Basic",
  "TimeStampStoreMode": "SecondsSinceEpoch",
  "FieldsParameterSupported": true,
  "FieldsParameterName": "opt_fields",
  "ResultPath": "data",
  "Headers": [{ "Key": "X-API-Version", "Value": 2 }],
  "PagingStrategy": {
    "Type": "LimitOffset",
    "LimitParameterName": "limit",
    "OffsetParameterName": "offset",
    "PageSize": 100
  },
  "ErrorHandling": {
    "ErrorMarkJPath": "ok",
    "ErrorMarkConditionValue": true,
    "APILimitMessage": "request over limit",
    "ErrorMessageJPath": "error_description"
  }
}
```

### Authentication Settings

The following settings define the web request authentication used to access the source API.

**AuthenticationType**
The setting which defines how the connector authenticates web requests. Valid values are the following:

| Value | Description |
|---|---|
| None | No authentication |
| Basic | Requires the basic authentication parameters, such as username and password |
| AuthenticationToken | Authentication via an authentication token |
| OAuth2 | Authentication via the [OAuth2.0](https://oauth.net/2/) authorization protocol |

For example, `"AuthenticationType": "Basic"`.

**AuthenticationToken**
If you use the AuthenticationToken or OAuth2 type, specify how the authentication token is passed.

**TokenName**
Name of the parameter which accepts the token value.

**TokenType**
This setting defines the way to pass the token. For example:

```json
"AuthenticationToken": {
  "TokenType": "Header",
  "TokenName": "Bearer",
  "HeaderName": "Authorization"
}
```

Valid values are the following:

| Value | Description |
|---|---|
| Header | Token value is located in a header, such as a Bearer token in the Authorization header |
| UrlParameter | Token value is located in the URL as a parameter. For example, `https://us11.api.mailchimp.com/3.0/lists?apikey=11111`, where 11111 is the token value |
| UrlPath | Token value is located in the URL as a part of it. For example, `https://us11.api.mailchimp.com/token/3.0/lists` |

**HeaderName**
Name of the header for the Header token type. The default value is Authorization.

### General Settings

**BaseUrl**
The base URL is the initial part of the API URL. All API endpoints are relative to the base URL.
For example, `https://acuityscheduling.com/api/v1` or `https://app.asana.com/api/1.0`.

**ResultPath**
The path to the field that contains the result collection of JSON objects in the response. The field with the result collection of objects may be located on the root level of the data source response or nested within another object.

Root field ResultPath: if the field is located on the root level of the response,

```json
{
  "data": []
}
```

its ResultPath value in the connector will look like this:

```json
"ProviderConfiguration": {
  "ResultPath": "data"
}
```

Nested field ResultPath: if the field is nested within another object,

```json
{
  "response": {
    "data": []
  }
}
```

you must specify the full path in the ResultPath, for example:

```json
"ProviderConfiguration": {
  "ResultPath": "response.data"
}
```

In the Metadata section, you can also define or redefine this setting for a particular object.

**BodyPattern**
Data source APIs require passing the web request body in a specific format. Usually, data sources accept the web request body in JSON object format. For example:

```json
{
  "firstName": "John",
  "lastName": "Doe"
}
```

By default, the connector passes the web request body as a JSON object. Thus, you can skip this setting if the data source API requires passing the web request body as a JSON object. However, if the data source requires an unusual web request body format, you can define this format in the BodyPattern setting.

When the request body is an array:

```json
[
  {
    "firstName": "John",
    "lastName": "Doe"
  }
]
```

its BodyPattern will have the following format:

```json
"BodyPattern": "[ <$body$> ]"
```

When the request body is an array inside the data property:

```json
{ "data": [ { "name": "rndwork.com" } ] }
```

its BodyPattern will have the following format:

```json
"BodyPattern": "{ \"data\" : [ <$body$> ] }"
```

When the request body is a root object inside the data property:

```json
{ "data": { "name": "rndwork.com" } }
```

its BodyPattern will have the following format:

```json
"BodyPattern": "{ \"data\" : <$body$> }"
```

In the Metadata section, you can define or redefine this setting for a particular object.

**Headers**
Constant HTTP request headers that are added to every request to the connector. Specify Headers as a JSON array of objects with the Key and Value properties. For example:

```json
"ProviderConfiguration": {
  "Headers": [{
    "Key": "X-API-Version",
    "Value": 2
  }]
}
```

Headers specified in the ProviderConfiguration section apply to all REST requests in the connector. If you want to apply a specific header to a particular object, you can define this setting directly in the Objects configuration section.

**FieldsParameterSupported**
This setting determines whether the connector supports querying only a part of the object fields. In this case, the list of fields is passed as a URL parameter. This functionality helps to improve API performance when reading objects. The default value is false.

**FieldsParameterName**
This setting determines the name of the web request URL parameter which contains the list of fields to query. Use this setting if FieldsParameterSupported is set to true. The default value is field. The name field is typical for most APIs. You can define or redefine this setting for a particular object in the Metadata section.
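Taken together, the two field-list settings above might be configured as in the following sketch; the parameter name `fields` is only an assumption here, so use whatever parameter name your API documents:

```json
"ProviderConfiguration": {
  "FieldsParameterSupported": true,
  "FieldsParameterName": "fields"
}
```

With such a configuration, when only some columns of an object are queried, the connector would be expected to pass them in the URL, along the lines of `?fields=Id,Name`.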
### Date and Time Settings

The following settings define the format of date and time data passed from and to the source API.

**TimeStampStoreMode**
Format in which timestamp data is processed in the connector.

```json
"ProviderConfiguration": {
  "TimeStampStoreMode": "SecondsSinceEpoch"
}
```

Possible values are:

| Value | Description |
|---|---|
| Text | Default value. Indicates that the date is processed in text format |
| SecondsSinceEpoch | Indicates that the date is processed as the number of seconds passed since midnight of January 1, 1970. This format has to be converted to DateTime |
| MillisecondsSinceEpoch | Indicates that the date is processed as the number of milliseconds passed since midnight of January 1, 1970. This format has to be converted to DateTime |

You can also define or redefine this setting for a particular field in the Metadata section.

**DateTimeFormat**
This setting defines how to operate with DateTime data. Use it when TimeStampStoreMode = Text. The default value is `yyyy-MM-ddTHH:mm:ssZ`. To set the DateTimeFormat to `MM-dd-YYYY HH:mm` in all objects and all fields, specify the following statement:

```json
"ProviderConfiguration": {
  "DateTimeFormat": "MM-dd-YYYY HH:mm"
}
```

**DateFormat**
This setting defines how to operate with Date data. Use it when TimeStampStoreMode = Text. The default value is `yyyy-MM-dd`.

**TimeFormat**
This setting defines how to operate with Time data. The default value is `HH:mm:ssZ`.

You can define and redefine these settings for a specific field in the Metadata section.

### RateLimitThrottling

RateLimitThrottling is the allowed number of API requests per unit of time. It helps to avoid the "429 Too Many Requests" error. If the number of API calls reaches the limit per the specified time interval, the connector waits until this time interval ends and sends web requests afterward.
RateLimitThrottling works on a single connection and can't prevent the 429 error if there are several parallel connections to one source. Example of allowing five web requests per 1 second:

```json
"RateLimitThrottling": {
  "RequestsLimit": 5,
  "TimeInterval": 1000
}
```

The RateLimitThrottling setting contains two nested settings.

**RequestsLimit**
Maximum allowed number of web requests during the time span defined in the TimeInterval setting.

**TimeInterval**
Time span in milliseconds during which the maximum number of web requests can be sent.

### PagingStrategy

Pagination is a technique that splits the retrieved data into pages according to the defined strategy. You must define the PagingStrategy if the source uses pagination. Otherwise, the source returns only the first page. If the pagination differs from object to object, you can define or redefine the PagingStrategy in the Metadata section for particular objects.

#### Common Paging Strategy Settings

**InputType**
This setting defines the way paging strategy settings are passed. Valid values are:

| Value | Description |
|---|---|
| RestParameters | Default value. Settings are passed as URL parameters of the call, separated by the & character |
| UrlencodedBody | Settings are passed in the call body as key-value pairs |

**Type**
Determines the strategy type. Most PagingStrategy settings depend on the Type specified. The list of possible values is:

| Type | Description |
|---|---|
| WithoutPaging | No paging is used |
| LimitOffset | Page size and offset are specified |
| PageNo | Page number must be passed in the request |
| NextPageToken | The data source returns a token of the next page that must be passed in the request |
| NextPageUrl | The data source returns the URL of the next page |

See the detailed description of paging strategy types and the settings required for them below.

#### WithoutPaging

With this type selected, all the records are returned at once (when the source doesn't support pagination).
```json
"PagingStrategy": {
  "Type": "WithoutPaging"
}
```

#### LimitOffset

Use this type when records are requested in pages using limit and offset parameters. For example:

```json
"PagingStrategy": {
  "Type": "LimitOffset",
  "LimitParameterName": "limit",
  "OffsetParameterName": "offset",
  "PageSize": 100
}
```

**LimitParameterName**
Name of the limit parameter. The default value is limit.

**OffsetParameterName**
Offset parameter name. If you skip it, the default value is offset.

**PageSize**
Defines the number of records selected in one call. For example, PageSize = 100 means that the first API call selects the first 100 records, and the subsequent API call selects another 100, from 101 to 200.

**StartIndex**
This setting defines the offset counting start (first page index). The default value is 0, but some sources require setting it to 1.

#### PageNo

Use this type when a web request must include the required page number. It is passed as a URL parameter of the request. The number of the next page is specified in every subsequent request.

This paging strategy supports different ways for the data source to indicate whether the next page is available. The data source response may contain an indicator of the next page's existence, the total number of pages, or the total number of records. Alternatively, if the data source does not return any such information, Skyvia can stop querying the next pages when the number of records in the received page is less than the specified page size. If the page size is also unknown, Skyvia continues to query next pages while the current page still contains data. Different settings are used for the different ways of detecting whether the next page exists.
For example, it may look like this:

```json
"PagingStrategy": {
  "Type": "PageNo",
  "PageNoParameterName": "page",
  "PageSizeParameterName": "per_page",
  "PageSize": 60
}
```

All the possible settings which may be defined for the PageNo type are the following:

**PageNoParameterName**
Name of the page number parameter in the request URL.

**HasMoreJPath**
This setting is used if the response contains an indicator that more pages exist. It specifies the JSON path to the response property indicating whether the next page with data exists. For example, the source response contains the following information:

```json
{
  "data": {
    "pages": {
      "hasMoreRecords": true,
      "page": 3,
      "totalPages": 10
    }
  }
}
```

In this case, the HasMoreJPath setting for the connector's paging strategy looks like this:

```json
"HasMoreJPath": "data.pages.hasMoreRecords"
```

**TotalPageCountJPath**
This setting is used if the response contains the total number of pages. It specifies a JSON path to the response property with the overall number of pages. If the response doesn't contain an indicator of the next page's existence, reading continues while the page number is less than this value. For example, the ChartMogul source API returns the following pagination information in the object response:

```json
{
  "per_page": 50,
  "page": 1,
  "current_page": 1,
  "total_pages": 4
}
```

Thus, the PagingStrategy for the ChartMogul connector will look like this:

```json
"PagingStrategy": {
  "Type": "PageNo",
  "PageNoParameterName": "page",
  "TotalPageCountJPath": "total_pages",
  "PageSizeParameterName": "per_page",
  "PageSize": 50
}
```

**TotalRowsJPath**
JSON path to search for the total number of records on all pages. If there is no HasMoreJPath value in the response, reading continues while the total number of records read is less than this value.
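As an illustration of the record-count-based approach, a PageNo strategy relying on TotalRowsJPath might look like the following sketch; the `meta.total_count` response path and the parameter names are hypothetical, not taken from a real connector:

```json
"PagingStrategy": {
  "Type": "PageNo",
  "PageNoParameterName": "page",
  "TotalRowsJPath": "meta.total_count",
  "PageSizeParameterName": "per_page",
  "PageSize": 100
}
```

With such a configuration, reading would stop once the number of records retrieved reaches the total reported at the `meta.total_count` path.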
**PageSizeParameterName**
Name of the request parameter which defines the page size (optional). If the default page size is known, but there is no corresponding parameter name, you can leave the PageSizeParameterName empty and specify the default number of records on one page as the PageSize value. In this case, the connector can determine that the next page does not exist.

**PageSize**
Number of records loaded per page. If it needs to be set in the request, specify it together with the corresponding PageSizeParameterName. You can also use it to determine the end of reading if you don't specify HasMoreJPath, TotalPageCountJPath, or TotalRowsJPath. If you know the page size used by the data source, you can specify it without the PageSizeParameterName. In this case, the connector stops requesting the next page after it receives a page with fewer records than the specified page size. The PageSize setting also has a special value, -1, which means that the page size is unknown. In this case, the page size is not passed in requests. If you set the PageSize value to -1 and HasMoreJPath, TotalPageCountJPath, and TotalRowsJPath are not specified, reading continues as long as the response returns at least one record.

**StartIndex**
The index of the first page. The default value is 1.

#### NextPageToken

Use this paging type when the data source returns the token of the subsequent page in the response body or header, and this token must be passed when requesting the next page.

```json
"PagingStrategy": {
  "Type": "NextPageToken",
  "TokenJPath": "status.next_page",
  "NextPageParameterName": "next_page_token",
  "PageSizeParameterName": "limit",
  "PageSize": 500
}
```

**NextPageParameterName**
Name of the parameter used for passing the next page token in the request.

**TokenJPath**
JSON path to search for the next page token in the response body. Required if TokenHeaderName is not specified.
**TokenHeaderName**
The name of the response header which contains the next page token. Required if TokenJPath is not specified.

**PageSizeParameterName**
Name of the parameter which determines the page size. If you omit it, the request does not contain the page size.

**PageSize**
Number of records loaded in response to one call. It is desirable to specify it to determine the end of reading more accurately. If it is not declared and PageSizeParameterName is not set, reading continues until requests stop returning records. If PageSizeParameterName is set, the default PageSize value is 100.

#### NextPageUrl

Use this paging type when the data source returns a URL for reading the next page. It is returned in the response body or header and is used as the URL for the next call.

```json
"PagingStrategy": {
  "Type": "NextPageUrl",
  "UrlHeaderName": "Link",
  "PageSizeParameterName": "per_page",
  "PageSize": 100
}
```

**UrlJPath**
JSON path to search for the next page URL in the response body. Required if UrlHeaderName is not set.

**UrlHeaderName**
Name of the header where the next page URL is located. Required if UrlJPath is not specified. The header value looks like `URL;rel="next"`. The response may return several values, but you need only the values containing `rel="next"`. If the response doesn't include a header with the `rel="next"` attribute, there are no more pages to read.

**PageSizeParameterName**
Name of the parameter which defines the page size. If not specified, the page size is not included in the call.

**PageSize**
Number of records loaded per page. It is desirable to specify it to determine the end of reading more accurately. If the page size is not defined and PageSizeParameterName is not specified, reading continues until calls stop returning records or a URL for the next call. If PageSizeParameterName is set, the default PageSize value is 100.

**Prefix**
Prefix that is added to the URL before reading the next page.
For example, if the server returns a relative URL, you need to add the base URL of the connector.

**Suffix**
Suffix that is added to the URL before reading the next page. It may be some additional request parameters.

### ErrorHandling

If an API request returns an error instead of a response, you can process it and determine the connector behavior in case of errors. For example:

```json
"ErrorHandling": {
  "ErrorMarkJPath": "ok",
  "ErrorMarkConditionValue": false,
  "ErrorMessageJPath": "error_description",
  "ObjectNotFoundErrors": [
    "may not be available",
    "not found"
  ],
  "HtmlResponseError": "Connection settings are invalid. Please check your Domain, User, and/or Password values.",
  "FatalErrors": [
    "The API call for this organization has exceeded the maximum call rate limit"
  ],
  "Failover": {
    "FailoverErrors": [
      "Rate limited due to excessive requests",
      "Too Many Requests",
      "429"
    ],
    "MaxRetries": 5,
    "MinDelay": 500
  }
}
```

**ErrorMessageJPath**
JSON path to search for the message in the error response. You can specify several search paths separated by semicolons. For example:

```json
"ErrorHandling": {
  "ErrorMessageJPath": "meta.message"
}
```

Use the `$full_error_response$` value to display the whole error response as is. For example:

```json
"ErrorHandling": {
  "ErrorMessageJPath": "$full_error_response$"
}
```

If the value for this setting is not set, or the error message can't be found by the specified path, the search tries to find error tags in the response. It starts with errors or Errors and then checks the tags error, Error, message, Message, statusmessage, title, Title, Exception.

**ObjectNotFoundErrors**
When Skyvia queries records with a filter by an ID value, it expects either a record or an empty result. However, some data sources return an error instead of an empty response when you request a non-existing record by an ID.
In such cases, this error needs to be ignored and an empty result returned. By default, Skyvia ignores messages containing the Not Found, No Result found, or Gone occurrences. Some data sources may return atypical errors. For such cases, you can specify a JSON array of error message strings to ignore in the ObjectNotFoundErrors setting, for example, `"ObjectNotFoundErrors": [ "may not be available", "not found" ]`. The error is ignored if the error message contains at least one of the specified strings.

**HtmlResponseError**
Use this setting for cases when the source returns an HTML page instead of an error message. It allows replacing the HTML page with a readable error message, for example, `Connection settings are invalid. Please check your Domain, User, and/or Password values.` In this case, all the error information sent from the source server is lost.

**ErrorMarkJPath**
This setting defines the JSON path in the response where you can find the error. Use it if the source does not generate an exception in case of failure but instead immediately sends an error message in a field of the JSON response.

**ErrorConditionJPath and ErrorConditionValue**
Use these settings if the source returns a field with a flag indicating that the request ended with an error, instead of returning the error itself. In this case, specify the JSON path to check whether the execution failed or succeeded in ErrorConditionJPath, and set the value indicating that the request failed in ErrorConditionValue. For instance, `"ErrorConditionJPath": "ok", "ErrorConditionValue": false`.

**FatalErrors**
Use this setting to define an array of keywords inside error messages which indicate a fatal error. In case of such an error, the execution stops immediately, without additional attempts to succeed.

**Failover**
Use this setting if you want the connector to retry an API request that failed with certain errors. For example, many sources limit the number of API calls per unit of time.
If this limit is exceeded (e.g., during integration execution in Skyvia), the source returns an error, and the execution fails. Failover makes the execution retry automatically after such a failure.

**FailoverErrors**
The array of string message parts of the server error messages that indicate that the execution must be retried.

**MaxRetries**
The maximum number of attempts to retry the request. If all the attempts fail, an error is returned. The default value is 10.

**MinDelay**
MinDelay is the minimal delay in milliseconds between attempts to retry a request. By default, the minimal delay is 300 milliseconds. The delay is not constant and increases with the number of attempts using the exponential backoff algorithm. MinDelay determines only its minimal value.

### ReturningStrategy

This section determines how to process the data returned from the data source for INSERT or UPDATE operations. ReturningStrategy section example:

```json
"ProviderConfiguration": {
  "ReturningStrategy": {
    "ResultPath": "data",
    "Type": "Row"
  }
}
```

If you need to override the ReturningStrategy settings for a particular object in the Objects section, specify the separate InsertReturningStrategy and UpdateReturningStrategy settings for the INSERT and UPDATE operations respectively.

**Type**
This setting determines what data is returned from the data source after INSERT or UPDATE. Valid values are the following:

| Value | Description |
|---|---|
| NotSupported | Nothing is returned |
| Key | After inserting a record, the API returns only its ID. If there are other fields to return, another request has to read the needed fields using record IDs. It is recommended to have the RetrieveSingleOperation defined for objects with this ReturningStrategy type. Otherwise, the INSERT operation can take significantly more time |
| Row | After inserting or updating a record, the response returns all the record fields |

**ResultPath**
Path to the returned data, if needed.
For example, you need to specify it if the returned data is not on the root level of the response body but is nested.

**KeyHeaderName**
This setting defines the name of the response header which contains the ID of the inserted object. Use it when the record ID is returned not in the response itself but in its headers. This setting is used only with the Key type.

### ConstantParameters

You can set constant parameters sent with all requests for all the objects. This setting is needed when you have to pass the same parameters for every object, for example, for authentication. For example:

```json
"ProviderConfiguration": {
  "ConstantParameters": [
    {
      "ParameterName": "club_secret",
      "ParameterType": "UrlParameter",
      "DbType": "String",
      "Value": "CS-19195-ACCESS-WO0qRGel5fKZMnMSvDNKofmq4"
    }
  ]
}
```

ConstantParameters is an array of JSON objects with the following settings.

**ParameterName**
Constant parameter name.

**ParameterType**
This setting determines how the values are passed in the web request. Valid values are:

| Value | Description |
|---|---|
| UrlPath | Value is passed as a part of the URL |
| UrlParameter | Value is passed as a URL parameter of the REST request |
| BodyParameter | Value is passed in the REST request body |
**DbType**
Defines the parameter data type. The list of possible data types is the following:

| Data Type | Description |
|---|---|
| Boolean | A simple type representing Boolean values of true or false |
| Byte | An 8-bit unsigned integer ranging in value from 0 to 255 |
| Int16 | An integral type representing signed 16-bit integers with values between -32768 and 32767 |
| Int32 | An integral type representing signed 32-bit integers with values between -2147483648 and 2147483647 |
| Int64 | An integral type representing signed 64-bit integers with values between -9223372036854775808 and 9223372036854775807 |
| Single | A floating point type representing values ranging from approximately 1.5 x 10^-45 to 3.4 x 10^38 with a precision of 7 digits |
| Double | A floating point type representing values ranging from approximately 5.0 x 10^-324 to 1.7 x 10^308 with a precision of 15-16 digits |
| Decimal | A simple type representing values ranging from 1.0 x 10^-28 to approximately 7.9 x 10^28 with 28-29 significant digits |
| Time | A type representing a time value |
| Date | A type representing a date value |
| DateTime | A type representing a date and time value |
| DateTimeOffset | Date and time data with time zone awareness. Date value range is from January 1, 1 AD through December 31, 9999 AD. Time value range is 00:00:00 through 23:59:59.9999999 with an accuracy of 100 nanoseconds. Time zone value range is -14:00 through +14:00 |
| String | A type representing Unicode character strings |
| Guid | A globally unique identifier (GUID) |
| Binary | A variable-length stream of binary data ranging between 1 and 8,000 bytes |
| JsonArray | JSON array |
| JsonObject | JSON object |

**Value**
Defines the constant parameter value.

**Disable**
The condition under which the parameter isn't used (for example, if the associated setting is NULL). It is usually used in conjunction with macros in Skyvia.
### Example: ConstantParameters when Authentication Parameters Are Required for Every Object

If a connector requires passing authentication parameters for every object, you can either add them to the Url setting of every object in the Objects section:

`"Url": "/clubmembers?club_secret=CS-19195-ACCESS-WO0qRGel5fKZMnMSvDNKofmq4&club_key=HWGUG4xmhI7EWL7UOQ2Ew2V0vbmyjBUi2yfSOYoXCk"`

Or you can configure ConstantParameters in the ProviderConfiguration section like this:

```json
"ConstantParameters": [{
  "ParameterName": "club_secret",
  "ParameterType": "UrlParameter",
  "DbType": "String",
  "Value": "CS-19195-ACCESS-WO0qRGel5fKZMnMSvDNKofmq4"
}, {
  "ParameterName": "club_key",
  "ParameterType": "UrlParameter",
  "DbType": "String",
  "Value": "HWGUG4xmhI7EWL7UOQ2Ew2V0vbmyjBUi2yfSOYoXCk"
}]
```

As a result, the Url setting for objects in the Metadata section is simplified and looks like the following: `"Url": "/clubmembers"`.

## Connectors

Connections in Skyvia are used for connecting to various data sources and working with their data. Skyvia provides connectors to various data sources that you can select in order to create connections. Each connector has its own set of settings and a list of available actions. Connectors can be divided into three main categories: cloud apps, relational databases, and file storages. Additionally, Skyvia includes several special connectors:

- OData connector for data sources that support the OData protocol.
- REST connector for data sources that have a REST API.
- ODBC connector for connecting to data sources via ODBC drivers.
- Backup connector for connecting to data backed up by Skyvia Backup.

For information about connections in general, please read the Connections section.
For the information on how to create a connection using a specific connector, its features and limitations please select a connector from the list below: Cloud Apps: Accounting: Avalara FreshBooks QuickBooks Desktop QuickBooks Online Sage Accounting Xero Zoho Books Analytics Amplitude FullStory Google Analytics Google Analytics 4 Zoho Analytics Ads & Conversion: ChartMogul Facebook Ads Google Ads LinkedIn Ads Microsoft Ads Outbrain Segment TikTok Ads X Ads Communication: CallRail ClickSend Fireflies.ai Front Gmail Heymarket Slack Twilio Zoom Zulip CRM: Affinity Agile CRM Booqable Capsule Close Dynamics 365 Dynamics 365 Business Central Exact Online Follow Up Boss Freshsales Classic Freshsales Suite HubSpot Insightly CRM LionOBytes NetSuite (SOAP) NetSuite V2 (REST) Nimble Pipedrive Pipeline CRM Pipeliner CRM RepairShopr Salesforce Streak SugarCRM Teamwork CRM Zendesk Sell Zoho CRM Developer Tools: Azure Application Insights Azure DevOps ServiceNow Documents: Formstack Documents E-commerce: AfterShip BigCommerce Cin7 Core Inventory Drip Magento Recharge Sendcloud Shippo ShipStation Shopify Starshipit WooCommerce Zoho Inventory Zoho Invoice Email Marketing: AWeber Brevo CleverReach Constant Contact Email Octopus G Suite Intercom Iterable Mailchimp Mailgun Mailjet MailerLite PersistIQ SendGrid SendPulse Forms & Surveys: Delighted FormCrafts Formstack Jotform SurveyMonkey Survicate Typeform Help Desk: Freshdesk Freshservice HelpDesk Help Scout Teamwork Desk Zammad Zendesk Zoho Desk Human Resources: BambooHR Greenhouse Zoho People Marketing Automation: ActiveCampaign Customer.io GetResponse Kit Klaviyo Marketo Marketing Cloud Reply Unbounce Yotpo Misc: Acumatica Airtable DigitalOcean Elasticsearch GitHub OData Okta Outreach PagerDuty SharePoint Lists SmartSuite Thinkific UserVoice WordPress Payment Processing: Chargebee ChargeOver Maxio Advanced Billing Paddle Recurly Square Stripe Zoho Billing Zuora Product Management: Aha! 
Onfleet ProdPad Zoho Sprints Project Management: Asana ClickUp Jira Jira Service Management Monday.com Motion Paymo Podio Productive.io Scoro Teamwork Trello Wrike Zoho Projects Scheduling & Booking: Acuity Scheduling Calendly Float Social Media: Discourse Facebook Pages Pinterest Spreadsheets: Excel Online Google Sheets Smartsheet Task Management: Todoist Team Collaboration: Confluence Cloud Hive Time Tracking: Clockify Everhour Harvest Jibble My Hours Tempo Timely TMetric Toggl Track QuickBooks Time Databases and Cloud Data Warehouses: AlloyDB Amazon Redshift Azure Synapse Analytics Google BigQuery MySQL Oracle PostgreSQL Snowflake SQL Server File Storages: Amazon S3 Azure Blob Storage Azure File Storage Box Dropbox Google Drive OneDrive FTP SFTP SharePoint Files Zoho WorkDrive" }, { "url": "https://docs.skyvia.com/connectors/actions/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Action is an operation performed in order to obtain or store data in a data source. Each connector has a list of available actions, which can be used in different Skyvia components. You can find information about supported actions, and whether an action differs for a specific connector, in the Connectors documentation. However, most actions are common for most cloud app and database connectors. See the list of Common Actions for more information. Actions themselves are also connector-specific, which means that even settings for the same action may differ for various connectors. However, if we perform the same action for the same connector, it is configured in the same way regardless of where it is used. Action Settings and Parameters Most actions have two sets of options. One is usually the same for the same action, and the other depends on the data source metadata and on how you configure the first set. We call the first set of options action settings, and the second one action parameters.
For example, in the Insert action settings we specify the target table to insert records to, and the list of autogenerated columns to obtain the generated values for inserted records. After we specify the target table, the list of action parameters corresponds to the target table columns. The list of parameters is usually updated automatically while you edit other action settings. You can also manually add parameters to the list or delete them from it. Parameter Mapping Action parameters must be mapped in order to determine what data an action must query or store. What you can map to action parameters depends on where the action is used. For actions in the Source component of a data flow, you can map action parameters to the data flow variables or data flow parameters. For actions in the Lookup or Target component of a data flow, you may use data flow variables and input columns for mapping. For actions in control flow components, you can map action parameters to the control flow variables or control flow parameters. You can edit parameter mapping by clicking the button near Parameters. It opens the Mapping Editor, where you can add or delete parameters, rename them, and map them using Skyvia expressions. Common Actions Many actions are common for most of the cloud apps and databases. Here is the list of such common actions: Execute Query Execute Command Lookup Insert Update Delete Test" }, { "url": "https://docs.skyvia.com/connectors/actions/delete-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Delete Action This action deletes records in the data source. Action Settings Setting Description Table The target table to delete records from. Action Parameters Delete action parameters correspond to the target table ID/primary key fields. Result The action deletes a record in the specified table in the connector. Example Here is an example of the Delete action in the Target component of Data Flow.
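The settings-to-parameters relationship described for the Insert action can be sketched as follows; the table dictionary, column names, and helper are illustrative assumptions, not Skyvia metadata or API:

```python
# Illustrative sketch: once an action setting such as the target table
# is chosen, the action parameter list is derived from that table's
# columns. Table and column names here are made up.
TABLE_COLUMNS = {
    "Accounts": ["Id", "Name", "Industry", "CreatedDate"],
}

def derive_insert_parameters(table, autogenerated=("Id", "CreatedDate")):
    """Insert parameters correspond to the writable target-table columns;
    autogenerated columns are queried back via the Returning setting."""
    return [c for c in TABLE_COLUMNS[table] if c not in autogenerated]

derive_insert_parameters("Accounts")  # ["Name", "Industry"]
```

Editing a setting (choosing a different table) would regenerate this list, which mirrors how the parameter list refreshes while you edit other action settings.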
}, { "url": "https://docs.skyvia.com/connectors/actions/execute-command-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Execute Command Action This action executes an SQL command against the data source. It can be used to both query and modify data. Settings Setting Description SQL Statement The SQL statement to execute against the connector. The Query Editor can be used to compose SQL Statement. Parameters Action parameters are just SQL parameters that you use in your command. Result Data, returned by the specified SQL Statement. Examples Here is an example of Execute Command action in an import . It shows how you can use the result of an SQL query as a source in import. It executes a complex SQL SELECT statement to import a report from BigCommerce to Google Sheets. Here is another example of using Execute Command action in the Target component of Data Flow . This example shows how you can use stored procedures in data flows via the Execute Command action. Here the Data Flow imports Salesforce contacts to Reply and immediately pushes them to a campaign using the corresponding Reply stored procedure." }, { "url": "https://docs.skyvia.com/connectors/actions/execute-query-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Execute Query Action This action executes the specified query against the data source. You can add parameters to your queries and preview a query while designing your integration. To add a parameter to a query, add a filter of the Parameter type to a query model, and map the parameter to a value using Mapping Editor . To test your query, use the Preview feature. You can also preview queries with parameters. \nClick Parameters to enter the parameter values and preview query with them. Action Settings Setting Description Query model The query to execute against the connector. 
Usually, the query model for the action can be configured with a visual Query Builder . Action Parameters Action parameters are the query parameters used in a filter. Result A set of records returned by the specified query. Examples Here is an example of Execute Query action in an Export . It shows a query model built with our Query Builder. Another example shows how to detect new records in the source that doesn\u2019t have fields storing creation or modification time. We use data flow with the Source component, which detects new records by maximum Record Id using the Execute Query action. This data flow remembers the last processed record Id and compares it with the source record Ids during every run." }, { "url": "https://docs.skyvia.com/connectors/actions/insert-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Insert Action This action writes new records to the data source. Action Settings Setting Description Table The target table to insert records to. Returning The fields to query for the inserted records from the target. It is often useful to query back certain fields that are autogenerated in the target data source when a record is inserted. For example, you may need to know the autogenerated values of the inserted record primary key or Id, or an exact record creation timestamp. Action Parameters Insert action parameters correspond to the target table fields that allow inserting data. You must map at least the parameters corresponding to the required target table fields. Result The action inserts a record to the specified table in the connector. It returns the values of fields, listed in the Returning setting. Example Here is an example of the Insert action in the Target component of Data Flow . This example shows the Insert action that inserts Accounts and gets the Ids of the inserted records." 
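The new-record detection pattern from the Execute Query example above (remember the maximum processed Record Id, then select only records with a greater Id on the next run) can be sketched with mocked data:

```python
# Sketch of the "detect new records by maximum Record Id" pattern: the
# flow stores the highest Id it has processed and, on every run, queries
# only records with a larger Id. Records are mocked for illustration.
def new_records_since(records, last_max_id):
    fresh = [r for r in records if r["Id"] > last_max_id]
    next_state = max((r["Id"] for r in fresh), default=last_max_id)
    return fresh, next_state

source = [{"Id": 1}, {"Id": 2}, {"Id": 3}]
fresh, state = new_records_since(source, last_max_id=1)
# fresh contains records 2 and 3; state (3) is stored for the next run
```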
}, { "url": "https://docs.skyvia.com/connectors/actions/lookup-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Lookup This action queries records, matching to an input record, from the data source. Action Settings Setting Description Table An object to query records from. Keys The lookup table fields, by which input records and lookup table records are matched. Result Columns The columns from the matched records, returned as the result. Action Parameters Lookup action parameters correspond to the selected Keys . Result Result Column values from the matched records. Example Here is an example of Lookup action in the Lookup component of Data Flow . This example shows the Lookup action that matches Zoho CRM accounts with the input records by their names and returns the matched Zoho CRM account IDs." }, { "url": "https://docs.skyvia.com/connectors/actions/test-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Test Action This action whether the tests whether the connection is valid. It is available only in Control Flow . Action Settings This action has no settings. Action Parameters This action does not use action parameters . Result The action tests whether the connection is valid. It throws an exception if it is invalid, so it\u2019s best used inside a Try Catch branch. Example Here is an example of the Test action in the Action component of Control Flow . This example tests the connection before actual data loading, and writes a custom message into the error log in case the connection is invalid." }, { "url": "https://docs.skyvia.com/connectors/actions/update-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Actions Update Action This action updates existing records in the data source. Action Settings Setting Description Table The target table to update records in. 
Keys The target table fields, by which input records and target table records are matched. This parameter is optional; if you don’t set it up, the primary key fields are used for matching. Action Parameters Update action parameters correspond to the target table fields that allow updating data, the primary key fields, and the Keys fields. You must map the parameters corresponding to the following fields: the Keys fields if Keys are specified, the primary key fields if Keys are not set, and the fields that you want to update. Result The action updates a record with the matching Keys in the specified table. Example Here is an example of the Update action in the Target component of Data Flow. This example shows the Update action that updates Accounts in Salesforce." }, { "url": "https://docs.skyvia.com/connectors/backup_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Backup Connections Skyvia allows creating connections to your backup snapshots. This means that you can create a connection either to a backup snapshot for a specific date or to the most recent snapshot from a backup, and then use it in your integrations or query backed-up data with the query tool. Backed-up data are always read-only, so you cannot modify them. You cannot import data to your backup snapshots or synchronize them with another data source. You can use them only as Source in Import, Export, and Data Flow integrations. In query, you also cannot execute DML statements against backup connections. Only SELECT statements are allowed. When querying data from backup connections, you need to use the SQLite syntax of SELECT statements, not Transact-SQL. Backup connections can either always point to the most recent backup snapshot or point to a backup snapshot for a specific date and time. How to Create Backup Connection To create a backup connection, perform the following steps: Click +NEW in the top menu.
Open the Select Connector page by clicking Connection in the menu on the left. In the opened page, find and click Backup . In the Connection Editor page, specify a connection name that will be used to identify the connection. In the Source list, select the backup to connect to. If you want to always connect to the most recent snapshot of the backup, simply click the Create Connection button and omit the next step. Otherwise, if you want to connect to a specific backup snapshot (for a specific date), select Specific Date in the Type list. In the Date list, select the backup snapshot you want to connect to and click Create Connection ." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Skyvia supports the following cloud data sources: Accounting: Avalara FreshBooks QuickBooks Desktop QuickBooks Online Sage Accounting Xero Zoho Books Analytics Amplitude FullStory Google Analytics Google Analytics 4 Zoho Analytics Ads & Conversion: ChartMogul Facebook Ads Google Ads LinkedIn Ads Microsoft Ads Outbrain Segment TikTok Ads X Ads Communication CallRail ClickSend Fireflies Front Gmail Heymarket Slack Twilio Zoom Zulip CRM: Affinity Agile CRM Booqable Capsule Close Dynamics 365 Dynamics 365 Business Central Exact Online Follow Up Boss Freshsales Classic Freshsales Suite HubSpot Insightly CRM Keap LionOBytes NetSuite (SOAP) NetSuite V2 (REST) Nimble Pipedrive Pipeline CRM Pipeliner CRM RepairShopr Salesforce Streak SugarCRM Teamwork CRM Zendesk Sell Zoho CRM Developer Tools: Azure Application Insights Azure DevOps ServiceNow Documents: Formstack Documents E-commerce: AfterShip BigCommerce Cin7 Core Inventory Drip Magento Recharge Sendcloud Shippo ShipStation Shopify Starshipit WooCommerce Zoho Inventory Zoho Invoice Email Marketing: AWeber Brevo CleverReach Constant Contact Email Octopus G Suite Intercom Iterable Mailchimp Mailgun Mailjet MailerLite 
PersistIQ SendGrid SendPulse Forms & Surveys: Delighted FormCrafts Formstack Jotform SurveyMonkey Survicate Typeform Help Desk: Freshdesk Freshservice HelpDesk Help Scout Teamwork Desk Zammad Zendesk Zoho Desk Human Resources: BambooHR Greenhouse Zoho People Marketing Automation: ActiveCampaign Customer.io GetResponse Kit Klaviyo Marketo Marketing Cloud Reply Unbounce Yotpo Misc: Acumatica Airtable DigitalOcean Elasticsearch GitHub OData Okta Outreach PagerDuty SharePoint Lists SmartSuite Thinkific UserVoice WordPress Payment Processing: ChargeOver Chargebee Maxio Advanced Billing Paddle Recurly Square Stripe Zoho Billing Zuora Product Management: Aha! Onfleet ProdPad Zoho Sprints Project Management: Asana ClickUp Jira Jira Service Management Monday.com Motion Paymo Podio Productive.io Scoro Teamwork Trello Wrike Zoho Projects Scheduling & Booking: Acuity Scheduling Calendly Float Social Media: Discourse Facebook Pages Pinterest Spreadsheets: Excel Online Google Sheets Smartsheet Task Management: Todoist Team Collaboration: Confluence Cloud Hive Time Tracking: Clockify Everhour Harvest Jibble My Hours Tempo Timely TMetric Toggl Track QuickBooks Time" }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/activecampaign_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ActiveCampaign [ActiveCampaign](https://www.activecampaign.com) is an e-mail marketing automation solution to help marketers send fewer emails while still achieving better results. Data integration: Skyvia supports importing data to and from ActiveCampaign, exporting ActiveCampaign data to CSV files, replicating ActiveCampaign data to relational databases, and synchronizing ActiveCampaign data with other cloud apps and relational databases. Backup: Skyvia Backup supports ActiveCampaign backup. Query: Skyvia Query supports ActiveCampaign.
Establishing Connection To create a connection with ActiveCampaign, you need to specify the Account to connect to and its API Key. Account — your ActiveCampaign subdomain name (for example, you have to specify only the yoursubdomain part of the API URL https://yoursubdomain.api-us1.com). API Key — an automatically generated REST API key used for connecting to ActiveCampaign. Getting Credentials Perform the following steps to obtain ActiveCampaign credentials: [Login](https://www.activecampaign.com/login) to ActiveCampaign. Click Settings on the left and click Developer. Copy the API Key and the subdomain part of the API URL. Creating Connection Enter the obtained connection parameters into the corresponding boxes in the Skyvia connection editor. Connector Specifics Object Peculiarities Read-only Objects The following objects are read-only in ActiveCampaign: Automations, Campaigns, ContactTags, EventTracking, Messages, Segments, SiteTracking, Tags, UserGroups. Custom Objects Skyvia supports ActiveCampaign custom objects. The following custom field types are available for custom objects: ActiveCampaign Type DbType and Notes Text String Textarea String Number Decimal(9, 3) Money Consists of two fields: {Name}_Currency: String; {Name}_Value: Decimal(38, 2) Date Date DateTime DateTime Dropdown String Multiselect String. This type does not support native filtering. Custom Fields The following ActiveCampaign objects contain custom fields: Accounts, Contacts, Deals. Custom fields support the INSERT and UPDATE operations. The following custom field types are available in this connector: ActiveCampaign Type DbType and Notes Text String. The default length is 1000 characters. The length increases to 4000 characters if the field name is memo or note, contains description, comment, notes, or address, or ends with url, reason, or keywords. If the name contains content or html, its length increases to 2147483647 characters. Textarea String Number Double.
Only available for Accounts and Deals objects. Money Consists of two fields: {Name}_Currency: String; {Name}_Value: Decimal(38, 2). Only available for Accounts and Deals objects. Date Date DateTime DateTime Dropdown String ListBox String Radio Button String Check Box String Hidden String Skyvia sends additional API requests to retrieve each custom field value, which may affect performance. To improve speed, you can exclude these fields or clear the Use Custom Fields checkbox. Contacts Status field values are not displayed by default when querying. The Status field is used for filtering only. You can filter contacts by the following Status values: Any, Unconfirmed, Active, Unsubscribed, Bounced . ActiveCampaign API supports the > and < operators and doesn\u2019t support >= and <= operators natively. Thus, when you set filters by CreatedDate and UpdatedDate fields, using the >= and <= operators, they work like > and < , respectively. Nested Objects The OrderProducts field in the ECommerceOrders stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the objects containing either CreatedDate or UpdatedDate field. Skyvia does not support Synchronization for the following objects: Addresses, Brandings, ContactTags, DealStages, EventTracking, EventTrackingEvents, Groups, Organizations, Segments, SiteTracking, SiteTrackingDomains, Tags, Webhooks . DML Operations Support Operations Objects INSERT, UPDATE, DELETE Addresses, Connections, Contacts, DealNotes, Deals, DealStages, DealTasks, DealTaskTypes, Forms, Groups, Lists, Accounts, Pipelines, Users, Webhooks INSERT, UPDATE ContactNotes, Messages INSERT, DELETE EventTrackingEvents, SiteTrackingDomains UPDATE Brandings Supported Actions Skyvia supports all the common actions for ActiveCampaign." 
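The >= / <= filter caveat described above can be illustrated with plain Python; the one-second boundary widening shown is an illustrative client-side workaround, not a Skyvia setting:

```python
from datetime import datetime, timedelta

# ActiveCampaign's API only supports strict > and < comparisons, so a
# >= filter on CreatedDate/UpdatedDate behaves like > and drops records
# sitting exactly on the boundary. Widening the boundary by one second
# (an assumed workaround for illustration) keeps the boundary record.
records = [
    {"Id": 1, "CreatedDate": datetime(2024, 1, 1, 0, 0, 0)},
    {"Id": 2, "CreatedDate": datetime(2024, 1, 2, 0, 0, 0)},
]
boundary = datetime(2024, 1, 1)

as_sent = [r for r in records if r["CreatedDate"] > boundary]  # >= acts like >
widened = [r for r in records if r["CreatedDate"] > boundary - timedelta(seconds=1)]
# as_sent misses record 1; widened includes both records
```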
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/acuityscheduling_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Acuity Scheduling [Acuity Scheduling](https://acuityscheduling.com/) is an online appointment scheduling software, where clients can schedule appointments, pay, and complete intake forms online 24/7. Data integration : Skyvia supports importing data to and from Acuity Scheduling, exporting Acuity Scheduling data to CSV files, and replicating Acuity Scheduling data to relational databases. Backup : Skyvia Backup does not support Acuity Scheduling backup. Query : Skyvia Query supports Acuity Scheduling. Establishing Connection To [create connection](https://docs.skyvia.com/connections/#creating-connections) with Acuity Scheduling you need to sign in with your AcuityScheduling account using your credentials. Creating Connection To create Acuity Scheduling connection, perform the following steps: Click Sign In with Acuity Scheduling . In the opened window, enter an email address used when registering in Acuity Scheduling and click Next . In the next window, enter your password and click Allow Access . Additional Parameters Suppress Extended Requests For the Appointments object Acuity Scheduling API does not return the ScheduledBy field when querying. To query the ScheduledBy values, Skyvia performs additional extended requests. Such API requests can be performed for each record of such object. However, this can decrease performance and significantly increase the number of API calls used. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Incremental Replication and Synchronization Only Appointments and AppointmentPayments objects support replication with Incremental Updates . Synchronization is not supported for Acuity Scheduling. 
DML Operations Support Operations Objects INSERT, UPDATE, DELETE Appointments INSERT, DELETE Blocks, Certificates DELETE Clients Supported Actions Skyvia supports all the common actions for Acuity Scheduling." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/acumatica_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Acumatica [Acumatica](https://www.acumatica.com/) is an enterprise resource planning (ERP) platform helping small and midsize businesses with financial management, customer relationship management (CRM), project accounting, and distribution management. Data integration: Skyvia supports importing data to and from Acumatica, exporting Acumatica data to CSV files, replicating Acumatica data to relational databases, and synchronizing Acumatica data with other cloud apps and relational databases. Backup: Skyvia Backup does not support Acumatica. Query: Skyvia Query supports Acumatica. Establishing Connection To create a connection to Acumatica, set the Connection Mode, specify the URL, enter the Client Secret and Client ID, sign in with Acumatica, and select the endpoint. Getting Credentials To locate the Client Secret and Client ID, create an OAuth 2.0 client. Go to your Acumatica ERP deployment, click More Items on the left, and choose Integration. Click Show more -> Connected Applications. Leave Client ID as it is; it is generated automatically after you save the form. Give your OAuth client a name. In the Flow drop-down list, select Authorization Code. On the SECRETS tab, click Add shared Secret. Copy the Client Secret value from the Value box. Switch to the REDIRECT URIS tab. Click + and specify the following URL: https://app.skyvia.com/oauthcallback/acumatica. Save the OAuth client and copy the generated Client ID.
Creating Direct Connection If Acumatica ERP is installed on your local server, make sure your server is available for external connections. Skyvia will access your server from the following IPs: 40.118.246.204, 13.86.253.112, and 52.190.252.0. To connect to Acumatica, select the Direct connection mode and enter the following connection parameters. Enter the Acumatica ERP instance URL in the URL box. Specify the obtained Client Secret and Client ID in the corresponding boxes in the Connection Editor. Click Sign In with Acumatica. Click Yes, Allow to grant the permissions to your OAuth client. Select the Endpoint from the drop-down list. Creating Agent Connection Install the Skyvia Agent application on a computer from which the Acumatica server is available. To connect to Acumatica, select the Agent connection mode and enter the following connection parameters. Select the agent application from the drop-down list. Enter the Acumatica ERP instance URL in the URL box. Specify the obtained Client Secret and Client ID in the corresponding boxes in the Connection Editor. Click Sign In with Acumatica. Click Yes, Allow to grant the permissions to your OAuth client. Select the Endpoint from the drop-down list. Additional Connection Parameters Metadata Cache You can specify the time after which the Metadata Cache expires. Connector Specifics Supported Versions There are several versions of Acumatica servers. A single Acumatica server can support endpoints of different versions. Skyvia currently supports the Contract v4 endpoints 22.200.001 and 20.200.001. Object Peculiarities There are two types of Acumatica objects: standard objects and the *Attachments objects. Read-only Objects The following Acumatica objects are read-only: FinancialPeriod, FinancialYear, ItemSalesCategory, LaborCostRate, PhysicalInventoryReview, ProFormaInvoice.
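A minimal guard illustrating the read-only list above; the object names come from the documentation, while the helper itself is hypothetical and not part of Skyvia:

```python
# DML against Acumatica read-only objects fails, so a pre-flight check
# like this (illustrative only) can reject an INSERT/UPDATE/DELETE
# before any API call is made.
ACUMATICA_READ_ONLY = {
    "FinancialPeriod", "FinancialYear", "ItemSalesCategory",
    "LaborCostRate", "PhysicalInventoryReview", "ProFormaInvoice",
}

def ensure_writable(obj_name):
    if obj_name in ACUMATICA_READ_ONLY:
        raise ValueError(f"{obj_name} is read-only in Acumatica")
    return obj_name
```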
Incremental Replication and Synchronization Skyvia supports Synchronization and Replication with Incremental Updates for Acumatica objects containing the CreatedDateTime or LastModifiedDateTime fields. DML Operations Support Skyvia supports the following DML operations for Acumatica. Operation Object INSERT, UPDATE, DELETE Standard objects INSERT, UPDATE *Attachments Supported Actions Skyvia supports all the common actions for Acumatica." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/affinity_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Affinity [Affinity](https://www.affinity.co/) is an online customer relationship management tool focused on contacts and profile creation automation, easy dealing, and pipeline management. Data integration: Skyvia supports importing data to and from Affinity, exporting Affinity data to CSV files, and replicating Affinity data to relational databases. Backup: Skyvia Backup does not support Affinity. Query: Skyvia Query supports Affinity. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Affinity in Skyvia, you have to enter the API key. Getting Credentials To obtain the Affinity API key, follow the instructions below: Go to Affinity and click the user icon in the upper left corner of the page. Click Settings. Scroll down to the DATA section and click API. Generate the API key and copy its value. The API key is available for copying only once, when it is created. Creating Connection Paste the obtained API key into the corresponding box in the Connection Editor. Additional Connection Parameters Suppress Extended Requests For some objects, the Affinity API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests may be performed for each record of such an object.
However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Persons InteractionDates_FirstEmailDate, InteractionDates_LastEmailDate, InteractionDates_LastEventDate, InteractionDates_LastInteractionDate, InteractionDates_NextEventDate, InteractionDates_FirstEventDate, Interactions_FirstEmail_Date, Interactions_FirstEmail_PersonIds, Interactions_LastEmail_Date, Interactions_LastEmail_PersonIds, Interactions_LastEvent_Date, Interactions_LastEvent_PersonIds, Interactions_LastInteraction_Date, Interactions_LastInteraction_PersonIds, Interactions_NextEvent_Date, Interactions_NextEvent_PersonIds, Interactions_FirstEvent_Date, Interactions_FirstEvent_PersonIds Organizations InteractionDates_FirstEmailDate, InteractionDates_LastEmailDate, InteractionDates_LastEventDate, InteractionDates_LastInteractionDate, InteractionDates_NextEventDate, InteractionDates_FirstEventDate, Interactions_FirstEmail_Date, Interactions_FirstEmail_PersonIds, Interactions_LastEmail_Date, Interactions_LastEmail_PersonIds, Interactions_LastEvent_Date, Interactions_LastEvent_PersonIds, Interactions_LastInteraction_Date, Interactions_LastInteraction_PersonIds, Interactions_NextEvent_Date, Interactions_NextEvent_PersonIds, Interactions_FirstEvent_Date, Interactions_FirstEvent_PersonIds To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Skyvia has the following limitations for Affinity: Object Peculiarities RelationshipStrengths The RelationshipStrengths object is read-only. To retrieve data from this object, you must use a filter by the ExternalId. Reminders When importing data to the Reminders object, you have to map one of the following fields in addition to the required fields: Person_Id, Organization_Id, or Opportunity_Id. Persons The WithInteractionDates, WithInteractionPersons, and withCurrentOrganizations fields are used for filtering only.
They return empty results by default when querying. Organizations The WithInteractionDates and WithInteractionPersons fields are used for filtering only. They return empty results by default when querying. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: EntityFiles, ListEntries, Notes, Reminders. These tables have only the CreatedDate field and do not have the UpdatedDate field. This means that Incremental Replication considers only new records. Skyvia does not support Synchronization for Affinity. DML Operations Support Skyvia supports DML operations for the following Affinity objects: Operation Object INSERT, UPDATE, DELETE ListEntryFieldValues, Notes, Opportunities, OpportunityFieldValues, OrganizationFieldValues, Organizations, PersonFieldValues, Persons, Reminders, WebhookSubscriptions INSERT, DELETE Fields, ListEntries INSERT Tasks Supported Actions Skyvia supports all the common actions for Affinity." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/aftership_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources AfterShip [AfterShip](https://www.aftership.com/) is an automated shipment tracking solution that helps eCommerce businesses track their online shipments and keep their customers updated on the status of their deliveries from their online shops. AfterShip supports a wide variety of shipping services worldwide, provides users with a dashboard to check the status of shipments across multiple carriers, and sends out emails and notifications automatically at different shipment stages. Data integration: Skyvia supports importing data to and from AfterShip, exporting AfterShip data to CSV files, replicating AfterShip data to relational databases, and synchronizing AfterShip data with other cloud apps and relational databases. Backup: Skyvia Backup does not support AfterShip backup.
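The CreatedDate-only incremental behavior noted above for Affinity can be sketched with mocked rows; the data and helper are illustrative:

```python
from datetime import datetime

# Without an UpdatedDate field, an incremental run can only pick up
# records created after the previous run; later edits to older records
# go unnoticed. Rows are mocked for illustration.
def incremental_pull(rows, last_run):
    return [r for r in rows if r["CreatedDate"] > last_run]

rows = [
    {"Id": 1, "CreatedDate": datetime(2024, 1, 1)},  # edited later, still skipped
    {"Id": 2, "CreatedDate": datetime(2024, 2, 1)},  # new since last run
]
incremental_pull(rows, last_run=datetime(2024, 1, 15))  # only record 2
```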
Query: Skyvia Query supports AfterShip. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to AfterShip, specify your API Key. API Key is a REST API key for connecting to AfterShip. You can manage an API key in the AfterShip interface; more details are available below. Getting Credentials To get your API Key for the AfterShip REST API, perform the following steps: Sign in to AfterShip. Click Settings in the menu on the left. Select API keys. Copy your API key. Creating Connection Paste the obtained API key to the API Key field in the connection editor and select the store. Additional Connection Parameters Store Name This parameter enables the commerce objects in the connector: Stores, Orders, Product, Fulfillments. Use this parameter if you work with at least one store. If you don’t work with the commerce objects, don’t use this parameter. Connector Specifics Object Peculiarities Product To get data about a product, use a filter by Id when querying. Nested Objects The following fields store complex structured data in JSON format: Object Field Orders Items Product Variants Fulfillments LineItems, Trackings You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. Incremental Replication and Synchronization Skyvia supports Incremental Replication for the Fulfillments, Orders, Trackings, and Stores objects. Skyvia supports Synchronization for the Trackings object only.
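The unwind-into-separate-tables idea above can be sketched as follows; the helper and the ParentId key name are illustrative, not the connector's actual output schema:

```python
import json

# A JSON-array field such as Orders.Items is split into child rows keyed
# by the parent record's Id, which is roughly what replicating nested
# data into separate tables produces. Names here are illustrative.
def unwind(parent_rows, json_field, parent_key="Id"):
    children = []
    for row in parent_rows:
        for item in json.loads(row[json_field]):
            children.append({"ParentId": row[parent_key], **item})
    return children

orders = [{"Id": 7, "Items": '[{"Sku": "A-1"}, {"Sku": "B-2"}]'}]
unwind(orders, "Items")
# two child rows, each carrying ParentId 7 and its own Sku
```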
DML Operations Support Operations Objects INSERT, UPDATE, DELETE Trackings INSERT, UPDATE Fulfillments, Orders, Stores, Product Stored Procedures Skyvia represents part of the supported AfterShip features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UpdateTrackingStatus To update the tracking status, use the following command: call UpdateTrackingStatus(:tracking_number,:reason) Parameter Description Tracking_number Tracking number value Reason Valid values are DELIVERED, LOST, or RETURNED_TO_SENDER Supported Actions Skyvia supports all the common actions for AfterShip." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/agile_crm_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Agile CRM [Agile CRM](https://www.agilecrm.com) is an all-in-one CRM with sales, marketing, and service features in one platform. Data integration : Skyvia supports importing data to and from Agile CRM, exporting Agile CRM data to CSV files, replicating Agile CRM data to relational databases, and synchronizing Agile CRM data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Agile CRM. Query : Skyvia Query supports Agile CRM. Establishing Connection To connect to Agile CRM, you need to specify your Subdomain , User , API Key or Password , and select the Filter for Tickets . Getting Credentials To obtain your Agile CRM API key, click your profile icon in the top right corner and then click Admin Settings . Then switch to the Developers & API tab and copy your API key under REST API . Creating Connection Specify your Subdomain , User , and API Key or Password : Subdomain - the fragment of your Agile CRM URL; for example, in the URL https://yoursubdomain.agilecrm.com/ you need only the yoursubdomain part. User - the user email to sign in with.
API Key or Password - here you can use either the user’s Agile CRM password or API Key. See how to obtain the Agile CRM API Key below. Select the Filter for Tickets to use. Usually, you need to select the All Tickets filter. Only tickets that match this filter will be available via this connection. Connector Specifics Object Peculiarities Events When reading data from the Events object, the Contacts field returns related contacts as an array of JSON objects with all the Contacts data. When writing data to this field, you need to specify the value as a JSON array of ID values of existing contacts, for example [\"4928077869678592\"] . Documents The Documents object returns all the documents linked to Contacts , Cases , Deals , or not linked to anything at all. You can see these relations via the ContactIds , CaseIds , and DealIds fields. When importing data to this object, you can map either the ContactIds field or the DealIds field. The value must be specified as a JSON array of the corresponding id values: [\"4789662314463232\",\"6659552797327360\"] . DealNotes, ContactNotes and CompanyNotes When you load data to the DealNotes , ContactNotes , or CompanyNotes object, values for DealIds , ContactIds , or CompanyIds , respectively, must be specified as a JSON array of corresponding ID values. If you insert a record with multiple ID values in such an array, multiple corresponding records are actually created, but the execution log shows only one record. Moreover, the id value in this log is not the id of each created note but the first id specified in the array. DealNotes , ContactNotes , and CompanyNotes do not support replication with incremental updates despite having fields that store record creation and modification time.
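As an illustration, the JSON array values described above can be produced with a few lines of code. This is a generic sketch; the `ids_to_json_array` helper is hypothetical and not part of Skyvia or Agile CRM.

```python
import json

def ids_to_json_array(ids):
    """Serialize record IDs to the JSON-array-of-strings format expected
    by fields such as ContactIds, CaseIds, DealIds, and CompanyIds."""
    return json.dumps([str(i) for i in ids], separators=(",", ":"))

# A single related contact for the Events object's Contacts field
contacts_value = ids_to_json_array([4928077869678592])

# Multiple linked contacts for the Documents object's ContactIds field
document_contacts_value = ids_to_json_array(["4789662314463232", "6659552797327360"])
```

The same array-of-strings shape also applies wherever the connector expects a list of ID values in a single field.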
Incremental Replication and Synchronization The following Agile CRM objects support Synchronization: Companies , Contacts , Deals .\nThe following Agile CRM objects support Replication with Incremental Updates: Campaigns , Companies , Contacts , ContactTagsWithTime , DealNotes , Deals , Events , Filters , Tasks , Tickets , TicketNotes . DML Operations Support Operation Object INSERT, UPDATE, DELETE Companies, Contacts, Deals, Events, Tasks INSERT, DELETE CompanyNotes, ContactNotes, Tickets INSERT, UPDATE DealNotes, Documents INSERT TicketNotes Stored Procedures Skyvia represents part of the supported Agile CRM features as stored procedures. \nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UpdateContactLeadScore To update the contact lead score, use the following command: call UpdateContactLeadScore(:Id,:LeadScore) UpdateContactStarValue To update the contact star value, use the following command: call UpdateContactStarValue(:Id,:StarValue) AddContactTags To add tags to a contact, use the following command: call AddContactTags(:Id,:Tags) Specify the list of tags as a JSON array of strings, for example [\"Tag1\",\"Tag2\",\"Tag3\"] , as the :Tags parameter. DeleteContactTags To delete the specified tags, use the following command: call DeleteContactTags(:Id,:Tags) . Specify the list of tags as a JSON array of strings, for example [\"Tag1\",\"Tag2\",\"Tag3\"] , as the :Tags parameter. Supported Actions Skyvia supports all the common actions for Agile CRM." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/aha_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Aha! [Aha!](https://www.aha.io) is web-based roadmap software. With Aha! you can set strategies, prioritize features, and share visual plans, as well as crowdsource feedback and engage a community. You can also use Aha! as a fully extendable agile development tool.
Data integration : Skyvia supports importing data to and from Aha!, exporting Aha! data to CSV files, replicating Aha! data to relational databases, and synchronizing Aha! data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Aha!. Query : Skyvia Query supports Aha!. Establishing Connection When [creating a connection](https://docs.skyvia.com/connections/#creating-connections) to Aha!, you have to enter a subdomain and log in to Aha! via OAuth 2.0. The OAuth authentication token is stored on the Skyvia server. Your Aha! account credentials are not stored on the Skyvia side. Getting Credentials Subdomain To obtain the subdomain value for the Aha! connection, do the following: Log in to Aha! and click the gear symbol in the top right corner of the page. Click Account and select Profile in the left menu. Copy the subdomain value from the Custom domain box. API Key To obtain the API Key for the Aha! connection, do the following: Log in to Aha!, click the gear symbol in the top right corner of the page, and select Personal . Select Developer on the left and click Generate API Key . Name the API key and click Generate API key . Copy the generated value and store it in a safe place. The API Key is available only once, when generated. You can’t access it later. Creating Connection To create an Aha! connection, perform the following steps: Paste your Aha! subdomain to the Subdomain field; you can find it in your Aha! account profile. Enter the API Key to the corresponding box in the connection editor. Additional Connection Parameters Suppress Extended Requests For some objects, the Aha! API returns only part of the fields when querying multiple records. In order to query values of additional fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object.
However, this can decrease performance and significantly increase the number of API calls used.\nThe additional fields are the following: OBJECT FIELD ProductPersonas Color CreativeBriefs Color Features Release_Id Epics Release_Id To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Some objects, like Goals , Initiatives , Epics , Releases , etc., contain the Watchers field. When querying data, this field returns no data, but in import it is available for mapping, and you can specify data that will be imported. The data must be specified as a JSON array of user Ids, for example [\"123456\", \"789456\"] , where 123456 and 789456 are Ids from the Users object. The values for the ScoreFacts field (in the Ideas object) should be provided as a JSON array of objects, like the following: [{\"name\": \"Benefit\", \"value\": 10}, {\"name\": \"Effort\", \"value\": 3}] , where the name field contains the Score name, and the value field contains the Score value. When querying data from the Features object, data in the Goals field is displayed as a JSON array of objects like the following: [\n {\n \"id\":\"7010029382627821579\",\n \"name\":\"Goal_Sk_Insert_0002\",\n \"url\":\"https://tohito-perfect.aha.io/strategic_imperatives/IT-G-11\",\n \"resource\":\"https://tohito-perfect.aha.io/api/v1/goals/IT-G-11\",\n \"created_at\":\"2021-09-20T14:54:13.186Z\",\n \"description\":\n {\"id\":\"7010029382751012013\",\n \"body\":\"Description body for Goals - from Skyvia import\",\n \"created_at\":\"2021-09-20T14:54:13.216Z\",\n \"attachments\":[]}\n }\n] However, when loading data to the Features object, you need to specify Goal ids as an array of numbers (ids from the Goals table), for example [6984435874395374467] or [6984435874395374467,6984435874150517722] .
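For instance, the ScoreFacts and Goal id values described above can be built as follows. This is a minimal sketch; the variable names are illustrative only.

```python
import json

# ScoreFacts for the Ideas object: a JSON array of {name, value} objects
score_facts_value = json.dumps(
    [{"name": "Benefit", "value": 10}, {"name": "Effort", "value": 3}],
    separators=(",", ":"),
)

# Goal ids for loading into the Features object: an array of numbers,
# unlike the array of objects returned when querying the Goals field
goal_ids_value = json.dumps(
    [6984435874395374467, 6984435874150517722], separators=(",", ":")
)
```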
Incremental Replication and Synchronization Replication with Incremental Updates is not supported for the following objects: AccountBackups, CapacityEstimateValues, CapacityScenarios, EpicCapacityInvestments, FeatureCapacityInvestments, IdeaOrganizations, InitiativeCapacityInvestments, ProductCapacityInvestments, ProductRoles, ProductTeams, ProductUsers, StrategyModels, StrategyPositions, Teams, Workflows . Some objects, like Products , ProductPersonas , Notes , Releases , Competitors , Features , ToDos , ProductIntegrations and AccountIntegrations , return the value of their updated_at field only when querying separate records by their id. When querying all records, they do not return this field. This means that this field cannot be used when performing replication with incremental updates or synchronization. Replication with incremental updates and synchronization can detect and sync newly created records in these objects using their created_at field, but cannot detect changes made to existing records. Skyvia supports Synchronization for such Aha! objects as EpicComments, FeatureComments, FeatureRequirements, GoalComments, Goals, IdeaComments, IdeaEndorsements, InitiativeComments, Initiatives, ProductIdeas, ProductPersonas, Products, ReleaseComments, ReleasePhaseComments, RequirementComments, ToDoComments .
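Conceptually, the created_at-only detection described above behaves like the sketch below. This is not a Skyvia feature, just an illustration with a hypothetical helper; it assumes ISO 8601 UTC timestamps, which compare correctly as plain text.

```python
def new_records_since(records, last_run_time):
    """Return records created after the previous replication run.

    Detection relies on created_at only, so records that were modified
    but not newly created are never returned.
    """
    return [r for r in records if r["created_at"] > last_run_time]

# Hypothetical sample rows in the shape of Aha! query results
records = [
    {"id": "1", "created_at": "2021-09-20T14:54:13.186Z"},
    {"id": "2", "created_at": "2021-09-25T09:10:00.000Z"},
]
fresh = new_records_since(records, "2021-09-21T00:00:00.000Z")
```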
DML Operations Support OPERATION OBJECT INSERT, UPDATE, DELETE CapacityEstimateValues, Competitors, CreativeBriefs, EpicComments, Epics, FeatureComments, FeatureRequirements, Features, GoalComments, Goals, IdeaComments, IdeaEndorsements, IdeaOrganizations, InitiativeComments, Initiatives, Notes, ProductIdeas, ProductPersonas, ReleaseComments, ReleasePhaseComments, ReleasePhases, Releases, RequirementComments, ToDoComments, ToDos INSERT, UPDATE EpicCapacityInvestments, FeatureCapacityInvestments, IdeaUsers, InitiativeCapacityInvestments, Products INSERT, DELETE IdeaSubscriptions, ProductRoles INSERT AccountIntegrations, IdeaPublicComments, ProductIntegrations, ProductUsers DELETE EpicRecordLinks, FeatureRecordLinks, GoalRecordLinks, IdeaRecordLinks, InitiativeRecordLinks, NoteRecordLinks, ReleasePhaseRecordLinks, ReleaseRecordLinks UPDATE Users Supported Actions Skyvia supports all the common actions for Aha!." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/airtable_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Airtable [Airtable](https://www.airtable.com/) is a powerful cloud database-spreadsheet hybrid platform for collaboration and data management. Data integration : Skyvia supports importing data to and from Airtable, exporting Airtable data to CSV files, replicating Airtable data to relational databases and replicating other cloud apps\u2019 data to Airtable, and synchronizing Airtable data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Airtable. Query : Skyvia Query supports Airtable. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) with Airtable, you have to sign in to Airtable with your credentials and select the database to connect to. Creating Connection To connect to Airtable, perform the following steps: In the Connection Editor, click Sign in with Airtable . 
Enter your Email and Password . Add the Airtable bases and workspaces you want to grant Skyvia access to and click Grant access . Select the Base in the Connection Editor. Connector Specifics Object Peculiarities All the Airtable objects are custom. There are three types of Airtable objects: standard tables, attachments tables, and comments tables. Standard Tables Standard tables are the tables created by the user and present in the Airtable UI. Attachments Skyvia supports getting the content of file attachments. The information about attachments is stored as an array of nested JSON objects in the Attachments columns of the standard tables. For user convenience, Skyvia represents the information about attachments as separate read-only tables with the _Attachments suffix added to the parent table name. \nFor example, you can find the information about the attachments added to the Tasks table in the Tasks_Attachments table. Comments The information about the comments is stored in the tables with the _Comments suffix added to the parent table name.\nEvery standard table has the corresponding _Comments table by default. DML Operations Support Operation Object INSERT, UPDATE, DELETE All the standard tables and _Comments tables Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates and Synchronization for all standard tables and _Comments tables. Detecting Newly Created Records Skyvia detects the newly created records using the default InternalCreatedDate field. Detecting Modified Records To detect the changed records, Skyvia automatically searches for the field of the lastModifiedTime type. Such a field may have any name in the standard tables. In the _Comments tables, it has the default name InternalUpdatedDate . If the table has no lastModifiedTime fields, the changed records are not detected by Synchronization and Incremental Replication.
If you want the Replication to track the changed records in the standard tables, you have to add a field of the lastModifiedTime type to the table manually via the Airtable UI before the Replication run. If you add the lastModifiedTime field after running the Replication, you need to delete and re-create the target table. Replication Target You can replicate cloud apps data to Airtable. Airtable sets the primary key field automatically. It always uses the first field of the source object as the primary field. Airtable API does not support the Drop Tables replication option. If you enable this option in your replication, you receive an error. \nYou can delete the tables in the Airtable UI manually, if needed. Supported Actions Skyvia supports all the common actions for Airtable." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/amplitude_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Amplitude [Amplitude](https://amplitude.com/) is product analytics software that provides self-service insights to help businesses better understand their customers. Data integration : Skyvia supports importing data to and from Amplitude, exporting Amplitude data to CSV files, and replicating Amplitude data to relational databases. It does not support synchronizing Amplitude data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Amplitude. Query : Skyvia Query supports Amplitude. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Amplitude, you need to select the Server Region to use and specify the API Key and API Secret . Getting Credentials You can find your API Key and API Secret on the Amplitude Project Settings page. To find them, perform the following steps: In your workspace, click Settings on the bottom left.
Under ORG SETTINGS , click Projects , and then select the project to obtain the API key for. Finally, on the Project Settings page, copy the API Key and Secret Key. Creating Connection In the Skyvia Connection Editor, select the Server Region. Paste the obtained API Key and Secret Key values to the corresponding boxes. Additional Connection Parameters Skyvia uses the events API to query Events data. This API has a limit of 4 GB of data per API call, and it allows querying data for a certain period. If the limit is exceeded, Amplitude returns an error. In the Advanced Settings , you can control the total period for which events are queried and the period covered by one API call. The Events Export Period parameter determines the total period for which the events are queried when no filter on the ServerUploadTime field is specified. The Events Export Delta parameter determines the length of the period for which events are queried with one API call, in hours. By default, Skyvia queries events for the last 30 days and queries events for one day per API call. If you have too many events, and the volume of data per day can exceed the 4 GB limit, decrease the value of the Events Export Delta parameter. On the other hand, if you have few events, you can increase this value to use fewer API calls. Connector Specifics Object Peculiarities Only the Id field of the Annotations object is marked as a primary key in Skyvia. For other objects, primary keys are not marked in the metadata. Users and Groups objects are not supported. When importing data to the Events table, you are not required to specify UserId and Type values that exist in the corresponding objects. If you insert a new Type value, the corresponding record is also created in the EventTypes object automatically. Incremental Replication and Synchronization Skyvia does not support Synchronization for Amplitude.
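To pick an Events Export Delta value, a rough calculation like the following can help. This arithmetic sketch is not a Skyvia feature; the 4 GB figure is the per-call limit mentioned above, and the safety factor is an arbitrary assumption.

```python
# Back-of-the-envelope sizing for the Events Export Delta parameter.
API_CALL_LIMIT_GB = 4

def suggest_delta_hours(avg_gb_per_day, safety_factor=0.5):
    """Largest whole number of hours per call that keeps the expected
    data volume under a chosen fraction of the 4 GB per-call limit."""
    gb_per_hour = avg_gb_per_day / 24
    max_hours = (API_CALL_LIMIT_GB * safety_factor) / gb_per_hour
    # Clamp to the valid range: at least 1 hour, at most the 24-hour default
    return max(1, min(24, int(max_hours)))
```

For example, at roughly 12 GB of events per day, this suggests a delta of 4 hours, while light event volumes keep the 24-hour default.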
Replication with Incremental Updates is supported only for the Events object. The ServerUploadTime field is used to determine record creation time. Please note that new events become available via the Amplitude API only after 2-3 hours. It is recommended to run Replication with Incremental Updates no more often than every 3 hours. If you run Replication with Incremental Updates more often, some records can be skipped and never replicated. DML Operations Support Operation Object INSERT, UPDATE, DELETE EventCategories, EventTypes, UserProperties INSERT Annotations, Events Supported Actions Skyvia supports all the common actions for Amplitude." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/asana_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Asana [Asana](https://asana.com) is a collaborative web tool that helps teams organize, track, and manage their work. Data integration : Skyvia supports importing data to and from Asana, exporting Asana data to CSV files, replicating Asana data to relational databases, and synchronizing Asana data with other cloud apps and relational databases. Backup : Skyvia Backup supports Asana backup. Query : Skyvia Query supports Asana. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Asana, you simply log in with Asana. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Skyvia does not store your credentials. Creating Connection To create an Asana connection, perform the following steps: On the Connection Editor page, specify a connection name that will be used to identify the connection. Click Sign In with Asana . Enter your Asana credentials and click Log in . Connector Specifics Object Peculiarities Tasks The Asana API allows querying tasks filtered by an Id of the task or of a related object, like a project, tag, or section.
When reading Asana tasks, Skyvia queries them by their parent projects. This means that tasks that do not belong to any project are read by Skyvia only when querying a task by its Id. You can filter tasks by CompletedAt using the >= operator (returns tasks that are either incomplete or have been completed since this time) or by UpdatedDate using the >= operator. When querying data from Asana Tasks , Skyvia gets task project Ids as a JSON array of objects ( [{\"id\":724859473695142}] ). However, when loading data to Tasks , you need to specify project Ids as an array of numbers (id values) ( [724859473695142] or [724859473695142,731306691185212,766026167260953] ). Skyvia does not support the Restore operation for the Tasks object in Backup. To restore tasks, you should export your Tasks backup to a CSV file and then use it for restoring. Associating a Task to a Section To associate a new task to a specific section, specify the IDs of the project and section in the Memberships field when mapping. For example, to associate a new task to the project with ID 269451171754166 and the section 1143617431313814 , map the Memberships field to the value in the following format. [\n{\"project\":\"269451171754166\",\"section\":\"1143617431313814\"}\n] To associate an existing task to a specific section, perform import into the SectionTasks object. \nSpecify the SectionId and TaskId when mapping. You can optionally indicate a specific position of the task by mapping the InsertBefore or InsertAfter fields to the ID of the task before or after which the moved task should be placed. Attachments Skyvia does not support modifying data of the Asana Attachments object. You can only get data from it, and that data does not include the attachment body. Projects To avoid timeouts for the Projects object, you can use filter by the TeamId when querying. TimeTrackingEntries Use filter by the TaskId to increase performance when querying the TimeTrackingEntries object.
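The Memberships value from the example above can be assembled like this. The helper is hypothetical and not part of the Asana or Skyvia API.

```python
import json

def membership_value(project_id, section_id):
    """Build the Memberships mapping value that places a new task
    into a specific project and section: a JSON array with one object."""
    return json.dumps([{"project": project_id, "section": section_id}],
                      separators=(",", ":"))

value = membership_value("269451171754166", "1143617431313814")
```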
Custom Fields Custom field values for specific tasks and projects are stored as arrays in the CustomFields fields of the Tasks and Projects objects. For user convenience, Skyvia additionally represents the content of the CustomFields arrays in the separate read-only TaskCustomFields and ProjectCustomFields objects. Depending on the custom field type, you can find its values in the DisplayValue_TextOrEnum and DisplayValue_Number fields. To map specific custom field values from Asana to another app, you can use the following mapping example in your integrations: In the mapping editor of your integration, select the Source Lookup mapping. Lookup Object = TaskCustomFields . Result Column = DisplayValue_TextOrEnum (for string fields) or DisplayValue_Number (for numeric fields). Lookup Key Column = Name , Constant = the name of the custom field whose value you need to obtain. Lookup Key Column = TaskId , Column = Id . Lookup Key Column = ProjectId , Column = ProjectId . See Lookup Mapping for more information about lookups. Querying all the records from the TaskCustomFields and ProjectCustomFields objects may take a lot of time and API calls. We strongly recommend using filters by the ProjectId field. This significantly reduces the number of API calls and saves query time. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: Attachments, Projects, ProjectStatuses, Portfolios, Sections, Subtasks, Tags, Tasks, TaskStories, TimeTrackingEntries, UserTasks, Webhooks . Skyvia tracks only new records for the Portfolios and TimeTrackingEntries objects. Synchronization is supported for the Projects, Tasks, TimeTrackingEntries, UserTasks objects. \nSkyvia tracks only new records for the TimeTrackingEntries object.
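The Source Lookup described above can be pictured as a simple in-memory search over TaskCustomFields rows. This is only a conceptual sketch; the row shape is assumed from the fields mentioned in the documentation, and the helper is hypothetical.

```python
def lookup_custom_field(rows, task_id, field_name):
    """Find the value of a named custom field for a given task, taking
    DisplayValue_TextOrEnum for string fields and DisplayValue_Number
    for numeric fields."""
    for row in rows:
        if row["TaskId"] == task_id and row["Name"] == field_name:
            return row.get("DisplayValue_TextOrEnum") or row.get("DisplayValue_Number")
    return None

# Hypothetical sample rows in the shape of the TaskCustomFields object
rows = [
    {"TaskId": "724859473695142", "Name": "Priority",
     "DisplayValue_TextOrEnum": "High", "DisplayValue_Number": None},
    {"TaskId": "724859473695142", "Name": "Effort",
     "DisplayValue_TextOrEnum": None, "DisplayValue_Number": 8},
]
```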
DML Operations Support Operation Object INSERT, UPDATE, DELETE Projects, Portfolios, Sections, SubTasks, Tasks, TimeTrackingEntries INSERT, UPDATE Tags, TaskStories INSERT, DELETE ProjectStatuses INSERT TeamUsers, Webhooks UPDATE Workspace Supported Actions Skyvia supports all the common actions for Asana." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/avalara_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Avalara [Avalara](https://avalara.com) is a cloud-based platform that automates tax compliance for businesses, including tax calculation, filing, exemption management, and e-invoicing. Data integration : Skyvia supports importing data to and from Avalara, exporting Avalara data to CSV files, replicating Avalara data to relational databases, and synchronizing Avalara data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Avalara. Query : Skyvia Query supports Avalara. Establishing Connection To create a connection to Avalara, authenticate with one of the following options: Username/Password or Account ID/License Key . Getting Credentials Account ID To obtain an Account ID, perform the following steps: Log in to your Avalara account. Click Account . Copy Account ID . License Key To obtain a License Key, perform the following steps: Log in to your Avalara account. In the navigation bar, select Settings > License and API keys . Click Generate new key . Avalara emails the license key and account information to account administrators. Creating Username/Password Connection To connect to Avalara, specify Username and Password . Creating Account ID/License Key Connection To connect to Avalara, specify Account ID and License Key . Additional Connection Parameters Is Sandbox Use this parameter to connect to the Avalara sandbox environment.
Suppress Extended Requests For some objects, the Avalara API returns only part of the fields when querying multiple records. To query values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Reports Content To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Retrieving binary data from the Content field is limited to around 4 records per minute, which can significantly impact performance during bulk operations. Connector Specifics Object Peculiarities Reports When records are inserted into the Reports object, the data in the Content field of these newly inserted records becomes available in a few minutes, after the report Status changes to Complete . Filtering Specifics Avalara supports the following native filters : Object Fields (Operators) APConfigSetting Id (=, In), CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Accounts Id (=, In), CrmId, Name (=, !=, IsNull, IsNotNull, In, Like), EffectiveDate, CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), AccountStatusId, AccountTypeId (=, !=, IsNull, IsNotNull, In), CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between), IsSamlEnabled, IsDeleted (=, !=, IsNull, IsNotNull) Batches Id (=, In), Name, BatchAgent, Options, FileName, ContentType (=, !=, IsNull, IsNotNull, In, Like), AccountId, CompanyId, FileId, ContentLength, RecordCount, CurrentRecord, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between), Type, Status (=, !=, IsNull, IsNotNull, In), StartedDate, CompletedDate, CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=) BatchFiles BatchId, CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Certificates Id (=), CompanyId (=, In)
CertificateAttributes CertificateId (=), CompanyId (=, In) ListCustomFieldsForCertificate CertificateId (=), CompanyId (=, In) Companies Id (=, In), IsDefault, IsActive, IsFein, HasProfile, IsReportingEntity, WarningsEnabled, IsTest, InProgress (=, !=, IsNull, IsNotNull), TaxDependencyLevelId, RoundingLevelId (=, !=, IsNull, IsNotNull, In), CompanyCode, SstPid, Name, TaxpayerIdNumber, DefaultCountry, BaseCurrencyCode, MossId, MossCountry (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), AccountId, ParentCompanyId, DefaultLocationId, BusinessIdentificationNo, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) CompanyParameters CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Contacts Id (=, In), ContactCode, FirstName, MiddleName, LastName, Line1, Line2, Line3, City, Region, PostalCode, Country, Email, Phone, Title (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) CostCenters Id (=, In), CostCenterCode, EntityUseCode (=, !=, IsNull, IsNotNull, In, Like), CompanyId, DefaultItemIdentifier, EffectiveDate, EndDate, CreatedUser, ModifiedUser, CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Customers, CustomerAttributes, ListCustomFieldsForCustomer CustomerCode (=), CompanyId (=, In) DataSources Id (=, In), IsEnabled, IsSynced, IsAuthorized (=, !=, IsNull, IsNotNull), Source, Name, Instance,\tExternalState (=, !=, IsNull, IsNotNull, In, Like), CreatedDate,\tUpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) DistanceThresholds Id (=, In), ThresholdExceeded (=, !=, IsNull, IsNotNull), Type (=, !=, IsNull, IsNotNull, In), OriginCountry, DestinationCountry (=, !=, IsNull, IsNotNull, In, Like), EffDate, EndDate 
(=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) GLAccount CompanyId (=, In) Items Id (=, In), ItemCode, Description, ItemGroup, ItemType, Category (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, TaxCodeId, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Locations Id (=, In), IsDefault, IsRegistered (=, !=, IsNull, IsNotNull), AddressTypeId, AddressCategoryId (=, !=, IsNull, IsNotNull, In), LocationCode, Description, Line1, Line2, Line3, City, County, Region, PostalCode, Country, DbaName, OutletName (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, EffectiveDate, EndDate, LastTransactionDate, RegisteredDate, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) LocationParameters LocationId, CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) MultiDocument Id (=, In), Type (=, !=, IsNull, IsNotNull, In), Code (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), AccountId, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Nexus Id (=, In), HasLocalNexus, HasPermanentEstablishment, IsSellerImporterOfRecord (=, !=, IsNull, IsNotNull), JurisTypeId,\tJurisdictionTypeId, NexusTypeId, Sourcing, LocalNexusTypeId (=, !=, IsNull, IsNotNull, In), Country,\tRegion, JurisCode, JurisName, ShortName, SignatureCode, StateAssignedNo, NexusTaxTypeGroup (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate, CompanyId, EffectiveDate, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=) NexusParameters NexusId, CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Notifications Id (=, In), NeedsAction, Dismissed (=, !=, IsNull, IsNotNull), ReferenceObject, \tSeverityLevelId, Category, Topic, Message, ActionName, 
ActionLink (=, !=, IsNull, IsNotNull, In, Like), ActionDueDate, DismissedDate, ExpireDate, CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), AccountId, ReferenceId, DismissedByUser, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) ServiceTypesDefinitions Id (=, In), Description (=, !=, IsNull, IsNotNull, In, Like) Reports Id (=, In) Settings Id (=, In), CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between), Set, Name, Value (=, !=, IsNull, IsNotNull, In, Like) Subscriptions Id (=, In), AccountId, SubscriptionTypeId, EffectiveDate, CreatedUser, ModifiedUser, EndDate (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=) TaxCodes Id (=, In), IsPhysical, IsActive, IsSSTCertified (=, !=, IsNull, IsNotNull), TaxCode, TaxCodeTypeId, Description, ParentTaxCode, EntityUseCode (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, GoodsServiceCode, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) TaxRules Id (=, In), IsAllJuris, IsSTPro (=, !=, IsNull, IsNotNull), JurisdictionTypeId, TaxRuleTypeId (=, !=, IsNull, IsNotNull, In), TaxCode, StateFIPS, JurisName, JurisCode, EntityUseCode, TaxTypeCode, RateTypeCode, Options, Description, Country, Region, Sourcing, TaxTypeGroup, TaxSubType, UnitOfBasis, CurrencyCode (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, TaxCodeId, Value, Cap, Threshold, EffectiveDate, EndDate, UomId, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) Transactions CompanyCode (=, In) Upcs Id (=, In), IsSystem (=, !=, IsNull, IsNotNull), Upc, LegacyTaxCode, Description (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), CompanyId, EffectiveDate, EndDate, Usage, CreatedUser, ModifiedUser (=, !=, 
IsNull, IsNotNull, <, <=, >, >=, In, Between) Users Id (=, In), IsActive (=, !=, IsNull, IsNotNull), SecurityRoleId, PasswordStatus (=, !=, IsNull, IsNotNull, In), UserName, FirstName, LastName, Email, PostalCode, SubjectId (=, !=, IsNull, IsNotNull, In, Like), MigratedDate, CreatedDate, UpdatedDate (=, !=, IsNull, IsNotNull, <, <=, >, >=), AccountId, CompanyId (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) ProductClassificationSystemsDefinitions Id (=, In), SystemCode, Description, CustomsValue (=, !=, IsNull, IsNotNull, In, Like) ParametersDefinitions AccountId (=, In) RegionsDefinitions CountryCode, Code, Name, Classification (=, !=, IsNull, IsNotNull, In, Like), StreamlinedSalesTax, IsRegionTaxable (=, !=, IsNull, IsNotNull) CountriesDefinitions Code, Name (=, !=, IsNull, IsNotNull, In, Like) TaxCodesDefinitions Id (=, In), IsPhysical, IsActive, IsSSTCertified\t(=, !=, IsNull, IsNotNull), TaxCode, TaxCodeTypeId, Description, ParentTaxCode (=, !=, IsNull, IsNotNull, In, Like), CreatedDate, ModifiedDate, CompanyId, GoodsServiceCode, CreatedUser, ModifiedUser (=, !=, IsNull, IsNotNull, <, <=, >, >=, In, Between) TaxSubTypesDefinitions Id (=, In), TaxTypeGroup, TaxSubType, Description (=, !=, IsNull, IsNotNull, In, Like) TaxTypeGroupsDefinitions Id (=, In), TaxTypeGroup, Description (=, !=, IsNull, IsNotNull, In, Like) CertificateAttributesDefinitions Id (=, In), Name, Description (=, !=, IsNull, IsNotNull, In, Like), IsSystemCode (=, !=, IsNull, IsNotNull) Nested Objects Avalara includes objects with fields that store complex, structured data in JSON format. The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To replicate nested data into separate tables with the new replication runtime, select the Separate Tables option for Unwind Nested Objects. 
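Conceptually, unwinding a nested-object field into separate tables works by splitting each record into a parent row and a set of child rows linked back to the parent key. The following is a minimal Python sketch of that idea only; Skyvia performs this internally when the Separate Tables option is selected, and the `ParentId` link column here is a hypothetical illustration, not a documented field name.

```python
def unwind(records, nested_field, parent_key="Id"):
    """Split records with a nested array field into parent rows and child rows.

    Illustrative sketch only: mimics what a 'Separate Tables' unwinding
    produces. Mutates the input records by removing the nested field.
    """
    parents, children = [], []
    for rec in records:
        nested = rec.pop(nested_field, []) or []
        parents.append(rec)
        for child in nested:
            # Link each child row back to its parent record (hypothetical key name)
            children.append({**child, "ParentId": rec[parent_key]})
    return parents, children

# A Certificates-like record whose Attributes field holds nested JSON
certs = [{"Id": 104, "Attributes": [{"Name": "Resale"}, {"Name": "Exempt"}]}]
parents, children = unwind(certs, "Attributes")
print(parents)   # [{'Id': 104}]
print(children)  # [{'Name': 'Resale', 'ParentId': 104}, {'Name': 'Exempt', 'ParentId': 104}]
```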
The list of objects with specified fields that store complex data structures is the following: Object Field Nested Object Certificates Customers CustomersType PoNumbers PoNumbersType Attributes CertificateAttributesType Histories HistoriesType Jobs JobsType Logs LogsType InvalidReasons InvalidReasonsType CustomFields CustomFieldsType Customers Certificates CertificatesType CustomFields CustomFieldsType ExposureZones ExposureZonesType BillTos CustomersType ShipTos CustomersType Attributes CustomerAttributesType ActiveCertificates ActiveCertificatesType Histories HistoriesType Jobs JobsType Logs LogsType ShipToStates ShipToStatesType Items Properties KeyValueType Classifications ItemClassificationsType Parameters ItemParametersType Tags TagsType ItemStatus ItemStatusType HsCodeClassificationStatus HsCodeClassificationStatusType Locations Parameters ParametersType MultiDocument Documents DocumentsType Nexus Parameters ParametersType TaxRules TaxRuleProductDetail TaxRuleProductDetailType Transactions Lines TransactionLineType Addresses AddressesType LocationTypes LocationTypesType Summary SummaryType TaxDetailsByTaxType TaxDetailsByTaxType Parameters ParametersType UserDefinedFields UserDefinedFieldsType Messages MessagesType InvoiceMessages InvoiceMessagesType ProductClassificationSystemsDefinitions Countries CountriesType RegionsDefinitions LocalizedNames LocalizedNamesType CountriesDefinitions LocalizedNames LocalizedNamesType DocumentsType Lines TransactionLineType Addresses AddressesType LocationTypes LocationTypesType Summary SummaryType Messages MessagesType TransactionLineType Details DetailsType AccountPayableSalesTaxDetails DetailsType NonPassthroughDetails DetailsType LineLocationTypes TransactionLineLocationType Parameters ParametersType UserDefinedFields UserDefinedFieldsType TaxAmountByTaxTypes TaxAmountByTaxType TaxDetailsByTaxType TaxSubTypeDetails TaxSubTypeDetailsType InvoiceMessagesType LineNumbers ValueType Incremental Replication and 
Synchronization Skyvia supports Replication with Incremental Updates for Accounts, Batches, Companies, Contacts, DataSources, Items, Locations, MultiDocument, Nexus, Notifications, Subscriptions, TaxCodes, TaxRules, Upcs, and Users objects. Skyvia supports Synchronization for Companies, Contacts, DataSources, Items, and TaxRules objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Certificates, Companies, CompanyParameters, Contacts, CostCenters, Customers, DataSources, DistanceThresholds, GLAccount, Items, LocationParameters, NexusParameters, TaxRules INSERT, UPDATE APConfigSettings INSERT, DELETE Batches, CertificateAttributes, CustomerAttributes, Locations, Nexus, Settings, TaxCodes, Users INSERT Reports, Upcs Stored Procedures Skyvia represents part of the supported Avalara features as stored procedures. You can call a stored procedure, for example, as the text of a command in the ExecuteCommand action in a Target component of a Data Flow or in Query . RequestCertificateSetup To request a tax exemption certificate, use the command: call RequestCertificateSetup(:companyId) An example of a command: call RequestCertificateSetup(832800) LinkCustomersToCertificate To link one or more customers to a specified certificate, use the command: call LinkCustomersToCertificate(:certificateId, :companyId, :customerCodes) Parameter Name Description certificateId ID of the certificate. companyId ID of the company. customerCodes Customer code as a string (‘code’) or array of strings ([‘code’, ‘code1’]).
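When several customer codes are passed, the parameter value is a JSON array serialized as a string, as shown in the parameter description above. A minimal Python sketch of producing that value (the helper name is hypothetical; only the output format is taken from the documentation):

```python
import json

def customer_codes_arg(codes):
    """Format the customerCodes parameter value: a bare string for a single
    code, or a compact JSON array string for several codes."""
    if len(codes) == 1:
        return codes[0]
    # separators=(",", ":") yields the compact form shown in the examples
    return json.dumps(codes, separators=(",", ":"))

print(customer_codes_arg(["customerCode"]))                    # customerCode
print(customer_codes_arg(["customerCode1", "customerCode2"]))  # ["customerCode1","customerCode2"]
```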
An example of a command: call LinkCustomersToCertificate(104, 832800, 'customerCode') or: call LinkCustomersToCertificate(104, 832800, '[\"customerCode1\",\"customerCode2\"]') UnlinkCustomersFromCertificate To unlink one or multiple customers from a specified certificate, use the command: call UnlinkCustomersFromCertificate(:certificateId, :companyId, :customerCodes) Parameter Name Description certificateId ID of the certificate. companyId ID of the company. customerCodes Customer code as a string (‘code’) or array of strings ([‘code’, ‘code1’]). An example of a command: call UnlinkCustomersFromCertificate(104, 832800, 'customerCode') or: call UnlinkCustomersFromCertificate(104, 832800, '[\"customerCode1\",\"customerCode2\"]') LinkCertificatesToCustomer To link one or multiple certificates to a specified customer, use the command: call LinkCertificatesToCustomer(:customerCode, :companyId, :certificateIds) Parameter Name Description customerCode Customer code. companyId ID of the company. certificateIds Certificate ID as a number (10) or array of numbers ([10, 100]). An example of a command: call LinkCertificatesToCustomer('customerCode', 832800, 104) or: call LinkCertificatesToCustomer('customerCode', 832800, '[104, 105]') UnlinkCertificatesFromCustomer To unlink one or multiple certificates from a specified customer, use the command: call UnlinkCertificatesFromCustomer(:customerCode, :companyId, :certificateIds) Parameter Name Description customerCode Customer code. companyId ID of the company. certificateIds Certificate ID as a number (10) or array of numbers ([10, 100]). An example of a command: call UnlinkCertificatesFromCustomer('customerCode', 832800, 104) or: call UnlinkCertificatesFromCustomer('customerCode', 832800, '[104, 105]') Supported Actions Skyvia supports all the common actions for Avalara."
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/aweber_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources AWeber [AWeber](https://www.aweber.com/) is cloud-based sales email automation software for small teams with templates for any occasion, a drag-n-drop email designer, dynamic content, and more. Data integration : Skyvia supports importing data to and from AWeber, exporting AWeber data to CSV files, and replicating AWeber data to relational databases. Backup : Skyvia Backup does not support AWeber backup. Query : Skyvia Query supports AWeber. Establishing Connection When [creating a connection](https://docs.skyvia.com/connections/#creating-connections) to AWeber, you have to log in with AWeber. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Your AWeber credentials are not stored on the Skyvia side. Creating Connection To create an AWeber connection, perform the following steps: In Skyvia Connection Editor, click Sign In with AWeber . In the window that opens, enter your AWeber credentials, select the I’m not a robot checkbox, and click Allow access . Save the connection. Connector Specifics Object Peculiarities Subscribers When you add a new record to the Subscribers table, map the Tags field to create Tags. When you add new tags to an existing record or remove the tags inside the record, map the TagsAdd or TagsRemove fields. These fields are used only for the UPDATE operation and return no data when queried. When updating the Statuses field, the valid values for mapping are subscribed or unsubscribed . Broadcasts The Broadcasts data is split into three objects: DraftBroadcasts , SentBroadcasts , and ScheduledBroadcasts . When performing the INSERT operation into the DraftBroadcasts table, you must map the BodyHtml or BodyText field in addition to the required AccountId, ListId, and Subject fields.
Incremental Replication and Synchronization AWeber objects don’t support Synchronization. Skyvia supports Replication with Incremental Updates only for the LandingPages object. DML Operations Support Operation Object INSERT, UPDATE, DELETE DraftBroadcasts UPDATE, DELETE Subscribers Stored Procedures Skyvia represents part of the supported AWeber features as stored procedures. You can call a stored procedure, for example, as the text of a command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddSubscribers To create a new subscriber, use the command call AddSubscribers(:accountId, :listId, :email, :ad_tracking, :ip_address, :last_followup_message_number_sent, :misc_notes, :name, :tags, :update_existing) . PARAMETER NAME DESCRIPTION Account_Id The Customer Account Id (Required parameter) List_Id The individual subscriber list Id within the AWeber Customer Account (Required parameter) Email Customer email (Required) Ad_tracking The customer ad tracking field Ip_address The subscriber’s IP address Last_followup_message_number_sent The sequence number of the last followup message sent to the subscriber Misc_notes Miscellaneous notes Name The subscriber’s name Tags A list of tags added to the subscriber in JSON format. Update_existing Enum value; if a subscriber is already present on the list, the subscriber will be updated. An example of the command: call AddSubscribers(1934349,6302150, 'subscr.2007@gmail.com', 'website', '', 1001, 'misc notes', 'Subscriber_MO_2007', '[\"tag1\",\"tag2\",\"tag3\"]', false) .
ScheduleBroadcast To assign a schedule to a DraftBroadcasts record and move it to the ScheduledBroadcasts object, use the command call ScheduleBroadcast(:accountId,:listId,:broadcastId,:scheduled_for) PARAMETER NAME DESCRIPTION Account_Id The Customer Account Id (Required parameter) List_Id The individual subscriber list Id within the AWeber Customer Account (Required parameter) BroadcastId The Id of a broadcast for scheduling (Required parameter) Scheduled_for Scheduled time for sending the broadcast message, ISO-8601 formatted (Required parameter) CancelScheduleBroadcast To unschedule a ScheduledBroadcasts record, use the command call CancelScheduleBroadcast(:accountId,:listId,:broadcastId) PARAMETER NAME DESCRIPTION Account_Id The Customer Account Id (Required parameter) List_Id The individual subscriber list Id within the AWeber Customer Account (Required parameter) BroadcastId The Id of a broadcast for scheduling (Required parameter) Supported Actions Skyvia supports all the common actions for AWeber." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/azure_app_insights_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Azure Application Insights [Azure Application Insights](https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview?tabs=net) is a cross-platform service for software diagnostics data collection, analysis, and visualization. Data integration : Skyvia supports importing data from Azure Application Insights, exporting Azure Application Insights data to CSV files, and replicating Azure Application Insights data to relational databases. Backup : Skyvia Backup does not support Azure Application Insights. Query : Skyvia Query supports Azure Application Insights. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Azure Application Insights, you have to specify App Id, User, and Password.
Connector Specifics Skyvia cannot write data to Azure Application Insights, its data is read-only. Object Peculiarities Events The events-related data is stored in separate tables according to the event type ( AvailabilityResults, CustomEvents, Dependencies, Exceptions, PageViews, Requests, Traces ). Every table stores events of a specific type. All events-related tables have the following specifics: When you select data from these tables, by default Azure Application Insights API returns data for the period from 1990-01-01 to the query execution date and time value, if no other filters are applied. To redetermine the time range for the selected data, you can add a filter by the CreatedDate field. Filtering with the >= and <= operators is performed directly via the API. Filtering with all other operators (=, <, >) and by relative dates is performed via cache: Skyvia reads all the table records and caches all the existing data, then applies the filter to this cache. Although all events-related tables have an Id column containing a GUID format value, these values are NOT always unique. Thus, these tables do not have primary keys. Events-related tables may store a great number of records (tens of thousands of records could be added in just one day). Thus, we recommend considering this when creating export or replication integrations, where Skyvia reads all the existing records. We strongly recommend using the filter by CreatedDate to reduce the time interval for which the data is retrieved. Metrics The metrics-related data is stored in separate tables according to the metrics type. Every table stores metrics of a specific type. All metrics-related tables have the following specifics: When you select data from these tables, by default Azure Application Insights API returns data for the period from 1990-01-01 to the query execution date and time value, if no other filters are applied.
To redetermine the time range for the selected data, you can add filters by the Start and/or End columns. Filter by the Start/End fields supports only the = (equal to) operator. SegmentedMetrics The SegmentedMetrics -related data is stored in separate tables corresponding to the metrics-related tables. By default, when querying the SegmentedMetrics tables without any filters, the Azure Application Insights API returns an empty result. To obtain the data about specific segments with metrics in query results, you have to add a filter by the Interval field. The Interval value has to be passed in the ISO 8601 duration format. For example, PT1M stands for a duration of one minute; more details on the ISO 8601 duration format are available [here](https://www.twilio.com/blog/parse-iso8601-duration-javascript) . Same as for the events- and metrics-related tables, to redetermine the time range for the selected data, you can add filters by the Start and/or End columns. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the events-related tables ( AvailabilityResults, CustomEvents, Dependencies, Exceptions, PageViews, Requests, Traces ). These tables have only the CreatedDate field and do not have the UpdatedDate field, which means that only new records are considered by Incremental Replication. Skyvia does not support Synchronization for Azure Application Insights. Supported Actions Azure Application Insights connector supports the Execute Command (SELECT operation only), Execute Query and Lookup actions."
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/azure_devops_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Azure DevOps [Azure DevOps](https://azure.microsoft.com/en-us/products/devops/) is an end-to-end solution providing services for collaboration during the application development lifecycle: version control, reporting, requirements management, project management, automated builds, testing, and release management capabilities. Data integration : Skyvia supports importing data to and from Azure DevOps, exporting Azure DevOps data to CSV files, replicating Azure DevOps data to relational databases, and synchronizing Azure DevOps data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Azure DevOps backup. Query : Skyvia Query supports Azure DevOps. Establishing Connection When [creating a connection](https://docs.skyvia.com/connections/#creating-connections) to Azure DevOps, you have to log in to Azure DevOps via OAuth 2.0 and enter the Azure Organization name. Getting Credentials You can find the Organization value on the Azure DevOps main page in the list of available organizations or in the page URL. For example, if the URL looks like dev.azure.com/your_organization , you enter only the your_organization part of the URL in Skyvia. Creating Connection To create an Azure DevOps connection in Skyvia, perform the following steps: Enable the Third-party application access via OAuth option for your organization. To do this, sign in with Azure DevOps and go to Organization settings -> Security -> Policy . Click Sign In with Azure DevOps on the Connection Editor page. Enter your credentials. Give your consent to the access request from Skyvia. Specify the Organization . Additional Connection Parameters Use Custom Fields Select this checkbox to make Azure DevOps custom fields available in Skyvia.
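The Organization value described under Getting Credentials is simply the first path segment of the dev.azure.com URL. A small Python sketch of that extraction (illustrative only; the helper name and the fabrikam/contoso organization names are hypothetical):

```python
from urllib.parse import urlparse

def organization_from_url(url):
    """Extract the Organization value from an Azure DevOps page URL,
    e.g. dev.azure.com/your_organization -> your_organization."""
    # urlparse needs a scheme to split netloc from path correctly
    parsed = urlparse(url if "://" in url else "https://" + url)
    # The organization is the first path segment after dev.azure.com
    return parsed.path.strip("/").split("/")[0]

print(organization_from_url("https://dev.azure.com/fabrikam/MyProject"))  # fabrikam
```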
Connector Specifics Object Peculiarities NotificationSubscriptions When you create an account in Azure DevOps, several default records are created automatically in the NotificationSubscriptions object. Such records contain an empty UpdatedDate field, and they cannot be replicated incrementally and synchronized.\nManually inserted records in this object contain the values in the UpdatedDate field. Such records can be replicated incrementally and synced. You must map the Filter_type field to perform the import to the NotificationSubscriptions object. \nOther fields may become required for mapping depending on the specified Filter_type value. Profile This object is read-only. To get data from this object, set the filter by the ProfileId field. To get data about the current authorized user, use the filter ProfileId = me . Make sure you have the right to access the specified profile. Otherwise, you get an error. WorkItems The WorkItems object supports the following operators for filtering: =, >, <, >=, <=. You can select a maximum of 20000 records by one query from this object. There are several work item types: Bug, Feature, Epic, Test Case, Impediment, Task, or Product Backlog Item. Work item types define the list of fields for mapping when importing to this object. Required fields for mapping may differ depending on the work item type. WorkItemProcessQueries and WorkItemProcessChildrenQueries WorkItemProcessQueries object stores parent folders and queries data only. Data from child folders and queries is stored in the Children array field.\nFor user convenience, the child array field values are stored in the separate WorkItemProcessChildrenQueries object. To insert a record into the WorkItemProcessChildrenQueries object, you must map the parent folder or query Id. WorkItemRevisions This object is read-only. It is related to Project and WorkItems objects as a child object. 
Due to Azure DevOps API specifics, when you query the WorkItemRevisions object, Skyvia gets data from both parent objects first. Thus, it takes time and additional API calls to query this object. We recommend using filters by the Project and/or WorkItemId field. Complex Structured Objects Some of the Azure DevOps objects have complex hierarchic structures. These objects are WorkItemTrackingComments , ProjectDashboardWidgets , and TeamDashboardWidgets . You can update these objects only via their parent objects. For example, to update ProjectDashboardWidgets , Azure DevOps API requires mapping the ProjectDashboards.Id field. Optionally, you can map the name of the corresponding Project. To get TeamDashboardWidgets records, Azure DevOps API requires mapping the TeamDashboards.Id field. Optionally, you can map the related Teams.Id . Skyvia does not require filtering by the parent object record Id when querying data from a child object or importing data to such an object. Without filtering by parent record Id, Skyvia first queries all the parent object records, reading each record Id. Then Skyvia looks up child object records for each parent object record Id. This approach allows querying child objects without knowing their parents, but it consumes time and API calls: at least one API call for every parent object record. Thus, working with such objects may affect performance. We strongly recommend using filters on the parent object fields when querying data from child objects. For example, to update the WorkItemTrackingComments , map the Id field, and optionally add mappings for ProjectId and WorkItemId . In this case, Skyvia has the parent ProjectId and WorkItemId values and retrieves the child WorkItemTrackingComments records directly without going through all records in the WorkItems object. Custom Fields The WorkItems and WorkItemRevisions objects support custom fields.
Work items may differ for each project, thus custom fields are supported on the project level. When you add custom fields to the project, Skyvia creates additional objects containing custom fields for each project with the names in the following format: _WorkItems and _WorkItemRevisions . Custom fields added for a specific work item type are available for other work item types within the project. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: AuditStreams, NotificationSubscriptions, Projects, TfvcShelvesets, TfvcLabels, WorkItemProcessChildrenQueries, WorkItemProcessQueries, WorkItems, WorkItemTrackingComments. The TfvcShelvesets object contains the CreatedDate field and doesn’t contain the UpdatedDate field. Thus, only new records can be processed by Incremental Replication. Synchronization is supported for the following objects: AuditStreams, NotificationSubscriptions, Projects, WorkItemProcessChildrenQueries, WorkItemProcessQueries, and WorkItemTrackingComments . DML Operations Support Skyvia supports the following DML operations for Azure DevOps objects: Operation Object INSERT, UPDATE, DELETE AuditStreams, NotificationSubscriptions, ProjectDashboardWidgets, Projects, TeamDashboardWidgets, Teams, WorkItemProcessChildrenQueries, WorkItemProcessQueries, WorkItems, WorkItemTrackingComments, WorkItemTrackingProcesses INSERT, DELETE ProjectDashboards, TeamDashboards DELETE WorkIterations Supported Actions Skyvia supports all the common actions for Azure DevOps." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/bamboohr_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources BambooHR [BambooHR](https://www.bamboohr.com/) is an all-in-one HR platform that provides employee database and reporting, payroll, time and benefits, and hiring and onboarding solutions.
Data integration : Skyvia supports importing data to and from BambooHR, exporting BambooHR data to CSV files, replicating BambooHR data to relational databases, and synchronizing BambooHR data with other cloud apps and relational databases. Backup : Skyvia Backup does not support BambooHR. Query : Skyvia Query supports BambooHR. Establishing Connection To create a connection to BambooHR, specify the Company Domain and authorize using your credentials or an API Key. Getting Credentials Company Domain The company domain is located in the address bar when you are logged in to BambooHR. The text just before .bamboohr.com is your domain. API Key To locate an API Key, perform the following steps: Go to BambooHR and click the user icon. Click API Keys . Click Add New Key . Name the API Key and click Generate Key . The API Key is displayed only once, when you create it. Copy it and keep it in a safe place to access it later. Creating API Key Connection To connect to BambooHR using the API Key, do the following: Specify the Company Domain . Select the API Key in the Authentication box. Enter the obtained API Key into the corresponding box in the Connection Editor. Creating OAuth Connection To connect to BambooHR using OAuth 2.0, perform the following steps: Specify the Company Domain . Select the OAuth 2.0 in the Authentication box. Click Sign-in with BambooHR . Enter your email and password, and click Log In . Additional Connection Parameters Suppress Extended Requests BambooHR API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests to query the values of the missing fields, one API request for each record of such an object. This can decrease performance and significantly increase the number of API calls used.
The additional fields are the following: OBJECT FIELD EmployeeFiles Content CompanyFiles Content EmployeePhotos Photo Applications ApplicantPhoneNumber, ApplicantAddressAddressLine1, ApplicantAddressAddressLine2, ApplicantAddressCity, ApplicantAddressState, ApplicantAddressZipcode, ApplicantAddressCountry, ApplicantLinkedinUrl, ApplicantTwitterUsername, ApplicantWebsiteUrl, ApplicantAvailableStartDate, ApplicantEducationInstitution, ApplicantEducationLevelId, ApplicantEducationLevelLabel, JobHiringLeadEmployeeId, JobHiringLeadFirstName, JobHiringLeadLastName, JobHiringLeadJobTitleId, JobHiringLeadJobTitleLabel, ResumeFileId, CoverLetterFileId, MovedTo, MovedFrom, AlsoConsideredForCount, DuplicateApplicationCount, ReferredBy, DesiredSalary, CommentCount, EmailCount, AttachmentCount, EventCount, QuestionsAndAnswers, Attachments, ApplicationReferences. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities EmployeeDependents When you select data from this object, the Country and State fields return country and state names respectively. However, if you import data to these fields, you have to specify the country and state ISO codes, not names.
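Because EmployeeDependents returns names on SELECT but expects ISO codes on import, a name-to-code translation step is typically needed between the two. A minimal sketch of that step, with a hypothetical partial mapping (the dictionaries below are illustrative, not part of the BambooHR API):

```python
# Hypothetical partial lookups; a real integration would use a full ISO table
COUNTRY_CODES = {"United States": "US", "Canada": "CA", "Germany": "DE"}
STATE_CODES = {"California": "CA", "Texas": "TX", "Ontario": "ON"}

def to_iso(record):
    """Translate Country/State names in a selected record into the ISO codes
    expected when importing back; unknown names are passed through unchanged."""
    out = dict(record)
    out["Country"] = COUNTRY_CODES.get(record["Country"], record["Country"])
    out["State"] = STATE_CODES.get(record["State"], record["State"])
    return out

print(to_iso({"Country": "United States", "State": "California"}))
# {'Country': 'US', 'State': 'CA'}
```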
Filtering Specifics BambooHR API supports the following native filters: Object Fields and Operators Employees Id ( = ), UpdatedDate ( >= , > ) EmployeePhotos Id ( = ) EmployeeFiles Id ( = ) TimeOffRequests Id ( = ), EmployeeId ( = ), Status ( = , In), TimeOffTypeId ( = , In) EmployeeTimeOffPolicies EmployeeId ( = ) EstimateFutureTimeOffBalances EmployeeId ( = ) EmployeeDependents Id ( = ) EmployeeGoalComments EmployeeId ( = ), GoalId ( = ) AlignableGoalOptions EmployeeId ( = ) Applications StatusId ( = ), JobId ( = ), CreatedDate ( >= , > ) EmployeeTrainings EmployeeId ( = ) EmployeeProjects EmployeeId ( = ) TimesheetEntries EmployeeId , Id , ( = ), Date ( >= , > ) JobInfo, EmploymentStatus, Compensation, Dependents, Contacts, EmergencyContacts, Earnings, Bonus, Commission, EmployeeVisas, EmployeeEducation, EmployeePassports, EmployeeDriverLicenses, EmployeeCertifications, EmployeeStockOptions, EmployeeAssets, EmployeeCreditCards, EmployeeCovidTests, EmployeeCovidVaccinations, EmployeeCovidVaccinationExemptions, EmployeeCovidExposures, EmployeeEquityGrants, EmployeeProjectPayRates, CustomAssets EmployeeId ( = ) Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Nested Objects The following fields store complex structured data in JSON format. Object Field Nested Object TimeOffRequests Dates TimeOffRequestDatesType BenefitDeductionTypes SubTypes BenefitSubType EmployeeGoals SharedWithEmployeeIds IdType Milestones GoalMilestoneType Applications QuestionsAndAnswers ApplicationsQuestionsAndAnswersType EmployeeProjects Tasks EmployeeProjectTasksType You can use our Nested Objects mapping feature in the Import integrations to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. 
Custom Objects / Custom Fields Only the Employees object supports custom fields. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Employees, EmployeeFiles, CompanyFiles, and EmployeeGoalComments objects. Skyvia supports Synchronization for the objects which support INSERT and UPDATE operations and contain the CreatedDate or UpdatedDate field. DML Operations Support Operation Object INSERT, UPDATE, DELETE CompanyFiles, EmployeeFiles, EmployeeGoals, EmployeeGoalComments, EmployeeTrainings, TrainingTypes, TrainingCategories INSERT, UPDATE Employees, EmployeeDependents, EmployeePhotos, TimeOffRequests INSERT EmployeeFileCategories, CompanyFileCategories Supported Actions Skyvia supports all the common actions for BambooHR." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/bigcommerce_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources BigCommerce [BigCommerce](https://www.bigcommerce.com) is a leading e-commerce platform for running an online store. Data integration: Skyvia supports importing data to and from BigCommerce, exporting BigCommerce data to CSV files, replicating BigCommerce data to relational databases, and synchronizing BigCommerce data with other cloud apps and relational databases. Backup: Skyvia Backup supports BigCommerce backup. Query: Skyvia Query supports BigCommerce. Establishing Connection Skyvia supports connecting to BigCommerce using legacy Basic authentication and OAuth authentication (using Store Credentials). Connections with OAuth authentication using App Credentials cannot be manually created by the user. Basic authentication is not supported for API v3 connections. BigCommerce strongly recommends using OAuth authentication. Basic authentication support is left in Skyvia for compatibility purposes for API v2 connections.
Getting Credentials

Parameters for OAuth (Store Credentials) Authentication

To get parameters for OAuth authentication using Store Credentials, you must create an API user. For this, perform the following actions:

1. Sign in to your BigCommerce Control Panel.
2. In the menu on the left, click Settings.
3. Scroll the settings page down and click API Accounts. You can see a list of API accounts displaying the assigned scopes.
4. To create a new account, click + Create API Account. Required parameters are displayed only once, when creating an API account. It is not possible to view these parameters for an already created account. If you haven't stored the parameters for an existing account anywhere, the only way to obtain them is to create a new API account.
5. Specify the API user name and select the necessary OAuth scopes to allow Skyvia access. You can find available OAuth scopes in the [BigCommerce](https://developer.bigcommerce.com/docs/start/authentication) documentation.
6. Click Save. The necessary connection parameters are displayed and automatically downloaded as a text file. In Skyvia, you need the client ID and access token from there. Be sure to store these parameters somewhere, as you won't be able to see them for this API account anymore.

Parameters for Basic Authentication

You can get parameters for basic authentication by viewing them for your legacy API accounts. To do it, perform the following steps:

1. Sign in to your BigCommerce Control Panel.
2. In the menu on the left, click Settings.
3. Scroll the settings page down and click API Accounts. A list of API accounts displaying the assigned scopes opens. Scroll it down to the bottom.
4. At the bottom of this list, click the Click here link in "Click here to access existing legacy API accounts". A list of legacy accounts opens.
5. Click the three-dotted button for the required account, and then click Edit.
6. Copy the required API Path and API Token.
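The Store Id connection parameter is the store hash that appears in both the API Path and the Control Panel URL, so it can also be pulled out programmatically. A minimal sketch, assuming the URL follows the standard `store-<hash>.mybigcommerce.com` shape (`extract_store_hash` is a hypothetical helper, not part of Skyvia):

```python
import re


def extract_store_hash(control_panel_url: str) -> str:
    """Return the store hash (the Store Id connection parameter)
    from a BigCommerce Control Panel URL."""
    match = re.search(r"//store-([^.]+)\.mybigcommerce\.com", control_panel_url)
    if match is None:
        raise ValueError("not a BigCommerce Control Panel URL")
    return match.group(1)
```

For example, for `https://store-kj3jh4c.mybigcommerce.com/manage/dashboard` the helper returns `kj3jh4c`.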
Creating Connection

OAuth Authentication

To connect to BigCommerce using OAuth authentication (Store Credentials), set the following parameters:

- Select the API Version to use: v2 or v3.
- Select Store Credentials for the Authentication parameter to use OAuth authentication.
- Specify the Store Id: the store hash from the API Path. For example, when you log in to your BigCommerce store Control Panel, a URL similar to https://store-kj3jh4c.mybigcommerce.com/manage/dashboard opens in your browser. The kj3jh4c part of this URL is the required store Id.
- Specify the Client Id: the Client Id of your API account.
- Insert the Access Token: the OAuth access token to log in with.

Creating a connection with App Credentials authentication manually is not possible.

Basic Authentication

Basic authentication is considered deprecated. It uses BigCommerce legacy API accounts, which cannot be created in BigCommerce stores created after July 31, 2018.

To connect to BigCommerce using basic authentication, you need to specify a URL to connect to and a user name and authentication token to log in with:

- Select the API Version. Only API v2 supports basic authentication.
- Set Basic for the Authentication parameter.
- Specify the URL: the API Path (the path where all XML requests to BigCommerce should be sent).
- Specify the User Id: the user name to log in with.
- Insert the Authentication Token: an automatically generated key that must be included in the API requests to BigCommerce.

Additional Connection Parameters

Use Custom Fields

This checkbox is available only for BigCommerce API v3 connections. It determines whether you can get product custom fields via the CustomFields field of the Products object through this connection.
If this checkbox is selected, this field returns a JSON array with information about custom fields and their values for products, if such are available. Otherwise, it always returns null values. This checkbox does not affect working with custom fields for Customers and CustomerAddresses, and it does not affect access to Product custom fields via the ProductCustomFields object. Processing custom fields may take additional time and API calls, so it is recommended to select this checkbox only if you need to work with Product custom fields via this connection.

Connector Specifics

BigCommerce API Versions Support

Skyvia supports both BigCommerce API v2 and API v3 connections. The lists of objects and their structure are different for BigCommerce API v2 and API v3. The main differences concern the objects storing data about products, their variants, options, prices, brands, etc. Here are the objects that are supported via API v3 and, thus, have different data structures in API v2 and v3 connections:

| API v3 | API v2 |
| --- | --- |
| Brands | Brands |
| Categories | Categories |
| Customers | Customers |
| CustomerFormFieldValues | CustomerCustomFields |
| CustomerAddressFormFieldValues | CustomerAddressCustomFields |
| ProductsBulkPricingRules | ProductsBulkPricingRules |
| ProductCustomFields | ProductCustomFields |
| ProductImages | ProductImages |
| ProductReviews | ProductReviews |
| ProductComplexRules | ProductRules |
| ComplexRulesCondition | |
| Products | Products |
| ProductVariants | ProductSkus |
| ProductVideos | ProductVideos |
| ProductVariantOptionValues | OptionSets |
| ProductVariantOptions | Options |
| VariantOptionValues | OptionSetsOptions |
| OptionValues | ProductOptions |
| BrandMetafields | ProductConfigurableFields |
| CategoryMetafields | ProductGoogleProductSearch |

The objects present in both columns have clear counterparts in API v3 and v2 and just have different internal structures. However, product options have a completely different structure, and the objects storing them do not have such clear counterparts.
In addition, BigCommerce API v3 provides access to new objects not available via API v2: ProductMetafields, ProductVariantMetafields, PriceLists, PriceListRecords, ProductModifiers, ProductModifiersValues, CatalogSummary, CategoryTrees, Channels, CustomerFormFieldValues, CustomerAddressFormFieldValues, CustomerAttributes, Carts, CartLineItems, CartCustomItems, CartGiftCertificateItems, CartMetafields, SystemLogs, TaxRates, TaxZones, TaxProperties. Other BigCommerce objects are accessed via API v2 in both API v2 and API v3 connections.

Object Peculiarities

Read-only Objects

The following BigCommerce objects are read-only: BlogTags, Countries, CustomerAddressCustomFields, CustomerCustomFields, OrderCoupons, OrderMessages, OrderProductOptions, OrderProducts, OrderShippingAddresses, OrderStatuses, OrderTaxes, PaymentMethods, ProductConfigurableFields, ProductGoogleProductSearch, ProductOptions, ProductReviews, States, Store, SystemLogs, TaxClasses.

Related Objects

Some BigCommerce objects can be accessed only via their parent objects. For example, to query ProductCustomFields, the BigCommerce API requires the Id value of the corresponding Product record. Skyvia does not require this parent Id from the user. If you don't specify the Ids of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their Ids, and then queries child object records for each parent record. This allows querying child objects without knowing their parents, but it takes much time and consumes many API calls: at least one API call for every parent object record (for example, product). Working with ProductCustomFields, for instance, can therefore be slow. Thus, it is recommended to use filters on the parent object fields when querying data from such child objects. This reduces the number of parent records for which child object data must be queried.
Complex Structured Data

Some BigCommerce objects store complex structured data: Orders, OrderShipments, Customers, CustomerAddresses, Carts, TaxRates, other objects that have custom fields, and some other objects with nested objects and arrays. For example, an order can have several lines and several shipping addresses. Skyvia represents this information as fields in the JSON format. In the case of Orders, these fields are named Products and ShippingAddresses. Here is an example of the Products field value from the Orders object:

```json
[
  {
    "Id": 37,
    "ProductId": 77,
    "Name": "[Sample] Fog Linen Chambray Towel - Beige Stripe",
    "Sku": "SLCTBS-70D1759E",
    "PriceExTax": 49,
    "PriceIncTax": 49,
    "Quantity": 1,
    "ProductOptions": [
      {
        "Id": 29,
        "OptionId": 18,
        "OrderProductId": 37,
        "ProductOptionId": 108,
        "DisplayName": "Size",
        "DisplayValue": "L",
        "Value": "71",
        "Type": "Multiple choice",
        "Name": "Apparel sizes",
        "DisplayStyle": "Rectangle"
      },
      {
        "Id": 30,
        "OptionId": 3,
        "OrderProductId": 37,
        "ProductOptionId": 133,
        "DisplayName": "Color",
        "DisplayValue": "Yellow",
        "Value": "12",
        "Type": "Swatch",
        "Name": "Colors"
      }
    ],
    "OrderId": 128,
    "OrderAddressId": 36,
    "Type": "physical",
    "BasePrice": 49,
    "PriceTax": 0,
    "BaseTotal": 0,
    "TotalExTax": 49,
    "TotalIncTax": 49,
    "TotalTax": 0,
    "Weight": 0,
    "BaseCostPrice": 0,
    "CostPriceIncTax": 0,
    "CostPriceExTax": 0,
    "CostPriceTax": 0,
    "IsRefunded": false,
    "RefundAmount": 0,
    "ReturnId": 0,
    "BaseWrappingCost": 0,
    "WrappingCostExTax": 0,
    "WrappingCostIncTax": 0,
    "WrappingCostTax": 0,
    "QuantityShipped": 0,
    "FixedShippingCost": 0,
    "OptionSetId": 14
  }
]
```

For user convenience, lines of these objects are also available as separate records via read-only objects whose names combine the name of the corresponding parent object and its JSON fields: OrderShippingAddresses, OrderProducts, OrderProductOptions, CustomerCustomFields, etc. Since these objects (presenting JSON data in table form) are read-only, they are not available as a target in import or synchronization integrations. To modify lines or shipping addresses of orders, etc., you need to provide values in JSON format for the corresponding field of the corresponding main object (Orders, etc.).

These objects are available in Backup, but you cannot restore data to them because they are read-only. Since they store the same information as the corresponding fields of the main objects, you don't actually need to back them up: all the information in such objects is present in the corresponding main objects, and you can back up and restore data in the main object only.

Custom Template Association Objects

The CustomTemplateAssociations object stores all associations of a Stencil theme's custom templates to products, categories, brands, and pages. The EntityId and EntityType fields store the IDs of the corresponding entities and their type: product, category, brand, or page. We strongly recommend filtering by both the EntityId and EntityType fields when querying data from this object.

There are additional objects in our BigCommerce connector, added for user convenience to reduce API call usage when querying. These objects store template associations separated per type of associated records: ProductTemplateAssociations, CategoryTemplateAssociations, BrandTemplateAssociations, PageTemplateAssociations. Use them to get template associations of a specific type.
These objects store the entity ID in the ProductId, CategoryId, BrandId, and PageId fields, respectively.

Wishlists

The BigCommerce Wishlists object also has a similar field, Items, with a JSON array of wishlist items. When you create a new record in this object, you need to specify its value as a JSON array, as for the previously mentioned objects. However, the object for its items, WishlistItems, is not read-only, and you can update existing wishlists by adding or deleting records in the WishlistItems object. The Items field of the Wishlists object is not available for the UPDATE operation.

Categories

If you use the Multi-Storefront feature, you can add categories to a specific catalog tree. When mapping, specify the tree ID in the TreeId field. If you don't know the ID of the tree, you can find it in the CategoryTrees object. When adding a root category with no nested categories, set the ParentId field to 0.

Channels

When you import records to this object, you must specify valid values from the list for the Type, Platform, and Status fields.

| Field | List of valid values |
| --- | --- |
| Type | pos, marketplace, storefront, marketing |
| Platform | clover, square, stripe, talech, vend, amazon, belk, catch, ebay, etsy, hudsons_bay, macys, mirakl, overstock, pinterest, target_plus, tiktok, wayfair, wish, walmart, google_shopping, facebook, google, instagram, acquia, bigcommerce, bloomreach, deity, drupal, gatsby, next, vue, wordpress, custom |
| Status | active, prelaunch, inactive, connected, disconnected, archived, deleted, terminated |

When importing records to the Channels object, you must use valid type and platform combinations. Find the valid combinations in the table below.
| Type | Supported Platforms |
| --- | --- |
| pos | clover, square, stripe, talech, vend, custom |
| marketplace | amazon, belk, catch, ebay, etsy, facebook, hudsons_bay, google, instagram, macys, mirakl, overstock, pinterest, target_plus, tiktok, wayfair, wish, walmart, custom |
| marketing | facebook, google, google_shopping (deprecated), instagram, custom |
| storefront | acquia, bigcommerce, bloomreach, deity, drupal, gatsby, next, vue, wordpress, custom |

Allowed values for a channel's status vary by channel type and platform. Use the valid combinations listed below.

| Type | Platform | Allowed Statuses |
| --- | --- | --- |
| storefront | acquia, bigcommerce, bloomreach, deity, drupal, gatsby, next, vue, wordpress, custom | prelaunch, active, inactive, archived, deleted |
| marketing, marketplace, pos | clover, square, stripe, talech, vend, amazon, belk, catch, ebay, etsy, facebook, hudsons_bay, google, instagram, macys, mirakl, overstock, pinterest, target_plus, tiktok, wayfair, wish, walmart, google_shopping (deprecated), custom | connected, disconnected, archived, deleted |

Customers

This object has a complex structure. The Channels, FormFields, Attributes, and Addresses fields contain nested objects. They are text fields in JSON Array format. You can insert no more than 10 array items into the Attributes and Addresses fields. Use the child CustomerAddresses and CustomerAttributes objects to import more items at once.

CustomerAddresses

This object has a complex structure. The FormFields field contains nested objects. It is a text field in JSON Array format. The AddressType field accepts the following valid values: residential and commercial.

CustomerAttributeValues

When updating records in this object, we recommend mapping the Id, CustomerId, and AttributeId fields. It will increase performance significantly.

CustomerMetafields

We recommend filtering by the CustomerId paired with the Id field for better querying performance.
The PermissionSet field accepts the following valid values: app_only, read, write, read_and_sf_access, write_and_sf_access.

Carts

Use a filter by Id to select data from the Carts object. The LineItems, GiftCertificates, CustomItems, Coupons, and Discounts fields contain nested arrays. These fields are text fields in the JSON Array format.

When you create a cart, it must not be empty; you must map at least one of the following fields for a successful data import. Map the LineItems field when adding a cart with products, the GiftCertificates field when adding gift certificates, or the CustomItems field when adding custom items.

You must specify at least the product ID and quantity in the LineItems field to create a record in the Carts object successfully. For example, [{ProductId:35018, Quantity:1}]. If the product demands choosing a variant (for instance, it requires selecting size or color before checkout), you must specify the product variant Id in the LineItems field. For instance, [{ProductId:35018, VariantId: 34963, Quantity:2}].

If you add custom items to the cart, you must specify at least the item SKU, name, quantity, and price in the CustomItems field. For example, [{Sku:"Sku-custom-beer", Name:"Beer", Quantity: 3, ListPrice: 5.5}].

When you add a cart with a gift certificate, you must specify the certificate name, theme, quantity, amount, sender name and email, and recipient name and email. For example, [{Name:"My Day", Theme:"Birthday", Quantity:1, Amount: 25, SenderName: "Jon", SenderEmail: "Jon@gmail.com", RecipientName: "Jim", RecipientEmail: "Jim@gmail.com", Message: "Hello"}].

For user convenience, the data from the LineItems, GiftCertificates, and CustomItems fields is represented in Skyvia as separate auxiliary objects: CartLineItems, CartGiftCertificateItems, and CartCustomItems. You can use these objects for data import. For example, you can add lines to existing carts using them.
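Because a malformed LineItems value fails only at import time, it can help to validate and serialize cart lines up front. A minimal sketch, assuming only the ProductId/Quantity minimum described above (`line_items_json` is a hypothetical helper, not part of Skyvia):

```python
import json


def line_items_json(items: list[dict]) -> str:
    """Serialize cart lines for the LineItems field of the Carts
    object. Each item needs at least ProductId and Quantity; add
    VariantId when the product requires choosing a variant."""
    for item in items:
        missing = {"ProductId", "Quantity"} - item.keys()
        if missing:
            raise ValueError(f"LineItems entry is missing {sorted(missing)}")
    return json.dumps(items)
```

For example, `line_items_json([{"ProductId": 35018, "VariantId": 34963, "Quantity": 2}])` produces a JSON array suitable for mapping to the LineItems field, while an item without Quantity raises an error before any API call is spent.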
CartLineItems

Use a filter by the CartId field to get data from this object successfully. The GiftWrappingTogether and GiftWrappingId fields are used for data import only; they return empty results by default when querying.

The CartLineItems object has the Options field, representing an array of additional product options. Use this field when a product has options but not variants. It is useful when the product was created using API v2 or when there are additional options (product modifiers) from the seller, not the manufacturer. For example, the Options field looks like this: [{"NameId":240,"ValueId":98,"Name":"test2","Value":"test2"}]. The NameId indicates the option ID, and the ValueId indicates the option value ID. You can find these IDs in the ProductModifiersValues object. You can omit the Options field if the product has variants.

When updating the CartLineItems object, you must map the ProductId field in addition to the required Id and CartId fields.

CartCustomItems

Use a filter by the CartId field to get data from this object successfully. We also recommend filtering by the Id field for better querying performance. To insert data to this object successfully, map the CartId, Sku, Name, Quantity, and ListPrice fields when importing data.

CartGiftCertificateItems

Use a filter by the CartId field to get data from this object successfully. We also recommend filtering by the Id field for better querying performance. The Themes field accepts the following valid values: Birthday, Boy, Celebration, Christmas, General, Girl. To insert or update data in this object successfully, specify the following fields: CartId, Name, Theme, Quantity, Amount, SenderName, SenderEmail, RecipientName, RecipientEmail.

CartMetafields

Use a filter by the CartId field to get data from this object successfully. We also recommend filtering by the Id field for better querying performance.
The PermissionSet field accepts the following valid values: app_only, read, write, read_and_sf_access, write_and_sf_access. To insert data in this object successfully, specify the following fields: CartId, PermissionSet, Namespace, Key, Value. The Key indicates the name of the field, and the Value indicates its value.

TaxRates

To import data to TaxRates successfully, you must map the Name, TaxZoneId, and ClassRates fields. This object has a complex structure: the ClassRates field is text in JSON Array format. Each ClassRates array item contains two fields, TaxClassId and Rate. For example, the ClassRates field value looks like this: [{TaxClassId:0, Rate:4.5}].

TaxZones

This object has a complex structure. The Locations field is text in JSON Array format. It contains the CountryCode field and the array fields SubdivisionCodes and PostalCodes. The CustomerGroups field is an array of customer group IDs, for example, [13279,13280].

ProductTaxProperties

When you perform the DELETE operation, the BigCommerce API allows deleting all tax property values from a product. It doesn't allow deleting a single property value.

TaxSettings

The BigCommerce API allows only updating the existing records in this object. You can't insert or delete records from this object. The InvoicePriceDisplayStrategy field accepts the following valid values: ZONE, INCLUSIVE, EXCLUSIVE. The FallbackStrategy field accepts the following valid values: FIXED, BASIC, DISABLE.
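The ClassRates array for a TaxRates import is small enough to build from a plain mapping of tax class IDs to rates. A minimal sketch (`class_rates_json` is an illustrative helper, not part of Skyvia):

```python
import json


def class_rates_json(rates: dict[int, float]) -> str:
    """Build the ClassRates JSON array for a TaxRates import,
    e.g. {0: 4.5} becomes a one-item array with TaxClassId 0
    and Rate 4.5."""
    return json.dumps(
        [{"TaxClassId": tax_class_id, "Rate": rate}
         for tax_class_id, rate in rates.items()]
    )
```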
Incremental Replication and Synchronization

Replication with Incremental Updates enabled is supported for the following objects: BrandMetafields, Carts, CartLineItems, CartMetafields, CategoryMetafields, Channels, Customers, CustomerAttributes, CustomerAttributeValues, OrderMessages, OrderShipments, OrderMetafields, Orders, PriceLists, PriceListRecords, ProductImages, ProductMetafields, ProductReviews, Products, ProductVariantMetafields, Subscribers, SystemLogs, TaxProperties, Transactions. The OrderMessages, OrderShipments, and Transactions objects have the CreatedDate field and do not have the DateModified field, which means that only new records can be replicated for them.

Synchronization is supported for the following objects: BrandMetafields, Carts, CategoryMetafields, CartMetafields, Channels, Customers, CustomerAttributes, CustomerAttributeValues, OrderMetafields, Orders, PriceLists, PriceListRecords, ProductImages, ProductMetafields, ProductReviews, Products, ProductVariantMetafields, Subscribers, TaxProperties.
DML Operations Support

Skyvia supports the following DML operations for BigCommerce:

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | BlogPost, BrandMetafields, Brands, Carts, CartLineItems, CartGiftCertificateItems, CartMetafields, Categories, CategoryTrees, CategoryMetafields, Coupons, CustomerAddresses, CustomerAttributes, CustomerAttributeValues, CustomerGroups, CustomerMetafields, Customers, GiftCertificates, OrderMetafields, Orders, OrderShipments, Pages, PriceLists, ProductComplexRules, ProductCustomFields, ProductImages, ProductMetafields, ProductModifiers, ProductModifiersValues, ProductReviews, Products, ProductsBulkPricingRules, ProductVariantMetafields, ProductVariantOptions, ProductVariantOptionsValues, ProductVariants, ProductVideos, Redirects, ShippingMethods, ShippingZones, Subscribers, TaxRates, TaxZones, TaxProperties, Wishlists |
| INSERT, UPDATE | Channels, CustomerFormFieldValues, CustomerAddressFormFieldValues, ProductTaxProperties |
| INSERT, DELETE | BrandTemplateAssociations, CartCustomItems, CategoryTemplateAssociations, CustomTemplateAssociations, PageTemplateAssociations, PriceListAssignments, ProductTemplateAssociations, WishListItems |
| INSERT | Refunds |
| UPDATE | TaxSettings |
| DELETE | PriceListRecords |

Supported Actions

Skyvia supports all the common actions for BigCommerce.

Booqable

[Booqable](https://booqable.com/) is an all-in-one rental software for order management, online booking, and inventory.

Data integration: Skyvia supports importing data to and from Booqable, exporting Booqable data to CSV files, replicating Booqable data to relational databases, and synchronizing Booqable data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Booqable.

Query: Skyvia Query supports Booqable.
Establishing Connection

To create a connection to Booqable, specify the Subdomain and API Key.

Getting Credentials

Subdomain

Your Booqable subdomain is the company name you specified while registering in Booqable. You can find it in your Booqable URL. For example, if the URL is https://mysubdomain.booqable.com/, the subdomain value is mysubdomain.

API Key

To locate an API key, perform the steps below:

1. Go to Booqable.
2. Click the user icon in the bottom left corner and select User Settings.
3. Scroll down to the Authentication Methods block and click New authentication method.
4. Choose Token and give it a name.
5. Copy the generated API Key.

Creating Connection

To connect to Booqable, enter your Booqable subdomain and paste the obtained API key to the corresponding boxes in the connection editor.

Connector Specifics

Object Peculiarities

Customers and ProductGroups

When you delete records from these objects, the records become archived. The deleted records are available for querying. To query a deleted record, specify its ID in a filter.

Customers

The Properties field returns null values when querying; it is designed for INSERT and UPDATE operations. The Properties, Notes, and Orders fields display data only when you query specific records using a filter by Id. When you insert data to the Properties field, provide its values in JSON format. For example:

```json
[
  {
    "type": "Property::Address",
    "name": "Main",
    "address1": "Pieter stuyvesantweg 153",
    "address2": "Unit 504",
    "zipcode": "8937AH",
    "city": "Leeuwarden",
    "country": "Netherlands"
  },
  {
    "type": "Property::Phone",
    "name": "Phone",
    "value": "+315812345678"
  }
]
```

Orders

Every newly inserted order has the new status and is not visible when querying the Orders object. To make it visible among the Orders records, use the SaveOrderAsConcept stored procedure.
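The Properties JSON shown above can be assembled programmatically rather than hand-written. A minimal Python sketch; `address_property` and `phone_property` are illustrative helpers, and the `Property::Address` / `Property::Phone` type names come from the example above:

```python
import json


def address_property(name: str, address1: str, city: str,
                     zipcode: str, country: str, address2: str = "") -> dict:
    """One Property::Address entry for the Customers Properties array."""
    return {"type": "Property::Address", "name": name, "address1": address1,
            "address2": address2, "zipcode": zipcode, "city": city,
            "country": country}


def phone_property(name: str, value: str) -> dict:
    """One Property::Phone entry for the Customers Properties array."""
    return {"type": "Property::Phone", "name": name, "value": value}


# Serialize the array for mapping to the Properties field.
properties_json = json.dumps([
    address_property("Main", "Pieter stuyvesantweg 153", "Leeuwarden",
                     "8937AH", "Netherlands", address2="Unit 504"),
    phone_property("Phone", "+315812345678"),
])
```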
Custom Fields

Booqable supports custom fields. You can add them via the Booqable UI. The Orders and Customers custom fields are listed in the PropertiesAttributes field.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for all Booqable objects. Skyvia supports Synchronization for the Customers, OrderLineItems, and ProductGroups objects.

DML Operations Support

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | Customers, OrderLineItems, ProductGroups |
| INSERT, UPDATE | Orders |
| DELETE | OrderPlannings |

Stored Procedures

Skyvia represents part of the supported Booqable features as stored procedures. You can call a stored procedure, for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query.

SaveOrderAsConcept

To make a new order visible to everyone, use the command: call SaveOrderAsConcept(:order_id)

ReserveOrder

The following command reserves an order and books all the products in it: call ReserveOrder(:order_id)

This action is only allowed when the order status is either new or concept.

Supported Actions

Skyvia supports all the common actions for Booqable.

Calendly

[Calendly](https://calendly.com/) is a powerful scheduling software to organize meetings and appointments between individuals and organizations. Calendly eliminates email back-and-forth and helps save time, so businesses can be more efficient, provide better service, and increase sales.

Data integration: Skyvia supports importing data to and from Calendly, exporting Calendly data to CSV files, and replicating Calendly data to relational databases.

Backup: Skyvia Backup does not support Calendly.

Query: Skyvia Query supports Calendly.
Establishing Connection

To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Calendly, you need to enter a Calendly Token.

Getting Credentials

You can manage Calendly personal access tokens in the Calendly interface:

1. Log in to [Calendly](https://calendly.com/) and click Integrations at the top.
2. In the left menu, select API and connectors and click API and webhooks.
3. Click Generate New Token.
4. Click Copy token.

Creating Connection

In the Skyvia connection editor, paste the obtained Calendly token.

Suppress Extended Requests

The Calendly API returns only part of the fields for the ScheduledEventInviteesNoShow object when querying multiple records. To query the Invitee field values, Skyvia performs additional extended requests, potentially one per record of such an object. Additional requests can decrease performance and significantly increase the number of API calls used. To reduce the number of API calls, select the Suppress Extended Requests checkbox. Some of the fields in such objects then return empty values when querying, even if they have values in Calendly, because the Calendly API does not return them without extended requests.

Connector Specifics

Object Peculiarities

Users

You must set a filter by the UUID field to get data from the Users object. The UUID value looks like this: 0a45ce25-50b8-4405-bb37-83ff7d75255f.

UserEventTypes

To get data from the UserEventTypes object, you must set a filter by the UserUUID field.

OrganizationScheduledEvents

The UPDATE operation for the OrganizationScheduledEvents object cancels the specified event. The Calendly API allows updating only the Cancellation_Reason field, for events in the Active status or events that have not taken place yet.

ScheduledEventInviteesNoShow

To add a record to the ScheduledEventInviteesNoShow object, you must map the URI record value from the ScheduledEventInvitees object in the Invitee field for the row you want to add.
The Invitee field value looks like this: https://api.calendly.com/scheduled_events/a9a7b9cb-66b0-49a1-b60a-ae47ca2b968d/invitees/80ab1394-bb67-40f1-adde-3297204

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following Calendly objects: OrganizationEventTypes, OrganizationInvitations, OrganizationMemberships, OrganizationScheduledEvents, OrganizationWebhookSubscriptions, ScheduledEventInvitees, ScheduledEventInviteesNoShow.

Skyvia does not support Synchronization for Calendly.

DML Operations Support

Skyvia supports DML operations for the following Calendly objects:

| Operation | Objects |
| --- | --- |
| INSERT, DELETE | OrganizationInvitations, OrganizationWebhookSubscriptions, ScheduledEventInviteesNoShow |
| UPDATE | OrganizationScheduledEvents |
| DELETE | OrganizationMemberships |

Supported Actions and Actions Specifics

Skyvia supports all the common actions for Calendly.

CallRail

[CallRail](https://app.callrail.com) is the call tracking and marketing analytics software. It helps match inbound calls, texts, forms, and live chats to marketing campaigns to find out what's working.

Data integration: Skyvia supports importing data to and from CallRail, exporting CallRail data to CSV files, and replicating CallRail data to relational databases.

Backup: Skyvia Backup does not support CallRail.

Query: Skyvia Query supports CallRail.

Establishing Connection

To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to CallRail, you need to enter your CallRail API Key and Account Number.

Getting Credentials

Account Number: a nine-digit account number assigned to your CallRail account when it is created. You can read more about your CallRail account number [here](https://support.callrail.com/hc/en-us/articles/5711743926541).
API Key: the REST API key used for connecting to CallRail. Perform the following steps to find the API Key:

1. Sign in to [CallRail](https://app.callrail.com).
2. Click Settings in the menu on the left.
3. Click Integrations in the toolbar and go to API Keys. Here you will find the list of created API Keys.
4. Click Create API V3 Key and copy its value.

The API Key is available for copying in full only right after its creation; later on, it is partially hidden in your CallRail account. So store your API Key in a safe place.

Creating Connection

Paste your API Key and your Account Number.

Connector Specifics

Object Peculiarities

Tags

If the TagLevel field is empty or mapped to the Company value, map the CompanyId field together with the other required fields to import data to this object.

Companies

When deleting records from the Companies object, the records are not actually deleted. Their status changes from Active to Disabled.

Trackers

The Type field value of the Trackers object defines the list of fields required for mapping. When Type = 'Session', you must map the PoolSize and PoolNumbers fields in addition to the fields required by default. When Type = 'Source', you must map the TrackingNumbers field in addition to the fields required by default.

Filtering by DateRange

The DateRange field in the Calls, FormSubmissions, and TextMessages objects is used for filtering only and does not return any data when querying. By default, the Calls, FormSubmissions, and TextMessages objects display data for the last 30 days, including the current date. To display data for other date ranges, set filters by the DateRange field. You can use the following DateRange field values when filtering: Recent, Today, Yesterday, Last seven days, Last 30 days, This month, Last month, This year, Last year, All time.
Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for such CallRail objects: Accounts, CallPageViews, Calls, Companies, OutboundCalls, Trackers, Users, and Tags . Skyvia does not support Synchronization for CallRail. DML Operations Support Operation Objects INSERT, UPDATE, DELETE Companies, Integrations, IntegrationTriggers, Notifications, SummaryEmails, Tags, Trackers, Users INSERT, DELETE OutboundCallerIds, TextMessages INSERT, UPDATE Calls INSERT FormSubmissions Supported Actions and Actions Specifics Skyvia supports all the common actions for CallRail." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/capsule_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Capsule [Capsule](https://capsulecrm.com/) is an online customer relation management system that builds stronger customer relationships, improves sales and saves time. Data integration : Skyvia supports importing data to and from Capsule, exporting Capsule data to CSV files, replicating Capsule data to relational databases, and synchronizing Capsule data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Capsule. Query : Skyvia Query supports Capsule. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) , log in with your Capsule account and give your permission for Skyvia to access your account. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Skyvia does not store your credentials. Creating Connection To connect to Capsule, perform the following steps: Click Sign In with Capsule . In the opened window, enter your organization name to use in your Capsule account and click Continue . In the next window, enter your credentials to sign in. Finally, click Allow to approve access request. 
After the OAuth token has been generated, save the connection. Connector Specifics Object Peculiarities Parties The Parties object contains two types of records: person and organization . These types determine the fields required for mapping when importing data to this object. Person type requires the FirstName and LastName . Organization type requires the Name . Entries The Entries object contains three types of records: notes, tasks, emails . [Capsule API](https://developer.capsulecrm.com/v2/operations/Entry#createEntry) allows creating the Entries records of the note type only. When importing data to this object, you must map the Type field by a constant equal to the Note value. \nAlso, all records from the Entries object should be related to one of the parent objects \u2014 Party, Opportunity or Case . Thus, when importing, you must map the Id of the Party, Opportunity or Case record the Note will be connected to. CaseTracks and OpportunityTracks When importing data to the CaseTracks and OpportunityTracks objects, filter by the Definition field. This field returns empty results when querying these objects. You can obtain information about track definitions from the TrackDefinitions object. Incremental Replication and Synchronization Skyvia supports Synchronization and Replication with Incremental Updates for the following objects: Cases, LostReasons, Entries, Milestones, Opportunities, Parties, Pipelines, RestHooks, Tasks, Teams, and TrackDefinitions . DML Operations support Skyvia supports the following import operations for Capsule objects: Operation Object INSERT, UPDATE, DELETE CaseCustomFields, Cases, CaseTags, CaseTracks, Categories, Entries, Milestones, Opportunities, OpportunityCustomFields, OpportunityTags, OpportunityTracks, Parties, PartyCustomFields, PartyTags, Tasks, LostReasons, ActivityTypes, CustomTitles INSERT, DELETE CaseParties, OpportunityParties, RestHooks UPDATE Users Supported Actions Skyvia supports all the common actions for Capsule."
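The Entries import rules above (Type mapped by a constant Note value, plus the Id of exactly one parent record) can be sketched as a small validation helper. This is a hypothetical illustration, not part of Skyvia or the Capsule API; the `Content` field name and the helper itself are assumptions.

```python
def build_entry_row(content, party_id=None, opportunity_id=None, case_id=None):
    """Build a row for importing into the Capsule Entries object.

    Per the rules above: Type is always the constant 'Note', and the record
    must reference exactly one parent -- Party, Opportunity or Case.
    """
    parents = {"PartyId": party_id, "OpportunityId": opportunity_id, "CaseId": case_id}
    chosen = {name: value for name, value in parents.items() if value is not None}
    if len(chosen) != 1:
        raise ValueError("map the Id of exactly one parent: Party, Opportunity or Case")
    row = {"Type": "Note", "Content": content}  # 'Content' is a hypothetical field name
    row.update(chosen)
    return row

row = build_entry_row("Follow-up call notes", party_id=11001)
print(row)
```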
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/chargebee_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Chargebee [Chargebee](https://www.chargebee.com/) is a subscription billing and revenue management platform. It helps companies offering subscription services automate their billing procedures. Data integration : Skyvia supports importing data to and from Chargebee, exporting Chargebee data to CSV files, replicating Chargebee data to relational databases, and synchronizing Chargebee data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Chargebee backup. Query : Skyvia Query supports Chargebee. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) with Chargebee, you need to know the API key and Subdomain values. Getting credentials Subdomain Subdomain is a value that comes in the URL before chargebee.com. For example, if your Chargebee link looks like this: https:// myaccount .chargebee.com, use myaccount as your Subdomain. API Key API Key is a key that is used to authenticate Skyvia and control your access to the Chargebee. To locate your API Key, perform the following steps: Log in to your [Chargebee account](https://www.chargebee.com/) . Click the gear icon on the left to navigate to Settings -> select Configure Chargebee and scroll down to API Keys and Webhooks . Click API Keys . Copy the available API Key or create a new one. Creating Connection Paste the obtained Subdomain and API Key to the corresponding boxes of the Connection Editor in Skyvia. Connector Specifics Object Specifics Coupons To import records to the Coupons object, map either the DiscountAmount or DiscountPercentage field value. It depends on whether DiscountType is set to Fixed Amount or Percentage . 
If the ApplyOn value is set to Each Specified Item , you need to map either ItemConstraints or ItemConstraintCriteria . Otherwise, the Import generates an error \u201cConstraint should be passed for at least one item_type, or it has to be all/specific/criteria.\u201d CreditNotes To import records to the CreditNotes object, in addition to the required ReferenceInvoiceId field, you must map either the Total or LineItems field. Also, depending on your account settings, you may need to map the CreateReasonCode if it is set as mandatory in Chargebee. Invoices To import records to the Invoices object, map either the CustomerId or SubscriptionId fields. Additionally, you must map either the Charges or LineItems array field. Find an entry with ItemType = Charge in the ItemPrices table, copy the value from the Id column, and paste it to the LineItems array as item_price_id . Quotes To import records to the Quotes table, you must map either the ItemPrices or the Charges field in addition to the required CustomerId field. Otherwise, the Import generates the error \u201cAt least one nonrecurring addon or charge item should be present.\u201d UnbilledCharges To import records to the UnbilledCharges , besides the required SubscriptionId field, you also must map either the ItemPrices or the Charges field. Charges value example:\n[{"amount":77, "description": "test import"}] ItemPrices value example:\n[{"item_price_id": "item_pr_1"}] VirtualBankAccounts The Email field is not required for VirtualBankAccounts as its value is taken from the Customers object. 
However, if Email is not specified in Customers , you will receive an error \u201cvirtual_bank_account[email] : cannot be blank.\u201d Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: Comments, Coupons, CreditNotes, Customers, Gifts, InvoiceLinkedOrders, Invoices, ItemFamilies, ItemPrices, Items, Orders, PaymentSources, Quotes, Subscriptions, Transactions, UnbilledCharges, VirtualBankAccounts . Synchronization is supported for the following objects: Customers, Gifts, Invoices, PaymentSources, Quotes, Subscriptions . The Id value in the Coupons, ItemFamilies, Items and ItemPrices objects is not autogenerated and should be set manually. Thus, Synchronization of the Coupons, ItemFamilies, Items and ItemPrices objects causes the error \u201cRequired column \u2018Id\u2019 is missing.\u201d DML Operations Skyvia supports the following DML operations for Chargebee objects: Operation Object INSERT, UPDATE, DELETE Coupons, CouponSets, CustomerContacts, Customers, Invoices, ItemFamilies, ItemPrices, Items, Orders, PaymentSources, Quotes, Subscriptions INSERT, DELETE Comments, CreditNotes, UnbilledCharges, VirtualBankAccounts INSERT, UPDATE Gifts Stored Procedures Skyvia represents part of the supported Chargebee features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . CopyCoupon The following command copies a coupon from one site to another. Copying of archived coupons is not supported. call CopyCoupon(:from_site, :id_at_from_site) PARAMETER NAME DESCRIPTION From_site Your Chargebee site name having the coupon to be copied. 
Id_at_from_site Id of the coupon to be copied UnarchiveCoupon To unarchive a specific coupon using the coupon Id, use the command call UnarchiveCoupon(:coupon_id) DeleteUnusedCouponCodes The following command deletes all the unused coupon codes from a specific coupon set. call DeleteUnusedCouponCodes(:coupon_set_id) ArchiveCouponCode Use the following command to archive a coupon code, thereby making it inactive. The archived coupon code cannot be applied to any subscription. call ArchiveCouponCode(:coupon_code) UpdatePaymentMethodForCustomer To update payment method details for a customer, use the command call UpdatePaymentMethodForCustomer(:customer_id, :payment_method_type, :payment_method_reference_id) PARAMETER NAME DESCRIPTION Customer_id The Chargebee customer\u2019s identifier Payment_method_type Type of payment source (Enum). Valid values are card, paypal_express_checkout, amazon_payments, direct_debit, generic, alipay, unionpay, wechat_pay, ideal, google_pay, sofort, bancontact, giropay, dotpay, upi, netbanking_emandates Payment_method_reference_id The reference Id. In the case of Amazon and PayPal this will be the billing agreement Id. For GoCardless direct debit this will be \u2018mandate Id\u2019. In the case of card this will be the identifier provided by the gateway/card vault for the specific payment method resource. UpdateBillingInfoForCustomer Use the following command to [update the billing information](https://apidocs.chargebee.com/docs/api/customers?prod_cat_ver=2#update_billing_info_for_a_customer) for a customer, including the billing_address and vat_number attributes of the customer. 
call UpdateBillingInfoForCustomer(:customer_id, :vat_number, :vat_number_prefix, :entity_identifier_scheme, :entity_identifier_standard, :registered_for_gst, :business_customer_without_vat_number, :is_einvoice_enabled, :billing_address_first_name, :billing_address_last_name, :billing_address_email, :billing_address_company, :billing_address_phone, :billing_address_line1, :billing_address_line2, :billing_address_line3, :billing_address_city, :billing_address_state_code, :billing_address_state, :billing_address_zip, :billing_address_country, :billing_address_validation_status) Though Customer_id is the only required parameter, you must fill out all the parameters due to stored procedure specifics. PARAMETER NAME DESCRIPTION Customer_id The Chargebee customer\u2019s identifier Vat_number The VAT/tax registration number for the customer Vat_number_prefix An overridden value for the first two characters of the full VAT number Entity_identifier_scheme The Peppol BIS scheme associated with the vat_number of the customer Entity_identifier_standard Currently only iso6523-actorid-upis is supported and is used by default when not provided Registered_for_gst Boolean. Confirms that a customer is registered under GST Business_customer_without_vat_number Boolean. Confirms that a customer is a valid business without an EU/UK VAT number Is_einvoice_enabled Boolean. 
Determines whether the customer is e-invoiced Billing_address_first_name, billing_address_last_name, billing_address_email, billing_address_company, billing_address_phone, billing_address_line1, billing_address_line2, billing_address_line3, billing_address_city, billing_address_state_code, billing_address_state, billing_address_zip, billing_address_country, billing_address_validation_status Parameters for billing_address More information about parameters is available [here](https://apidocs.chargebee.com/docs/api/customers?prod_cat_ver=2#update_billing_info_for_a_customer:~:text=customer_id%7D/update_billing_info-,Input%20Parameters,-vat_number) AssignPaymentRole The following command assigns the primary or backup payment role or unassigns the role for the payment source based on the preference for the payment collection. call AssignPaymentRole(:customer_id, :payment_source_id, :role) PARAMETER NAME DESCRIPTION Customer_id The Chargebee customer\u2019s identifier Payment_source_id Payment source id this role will be assigned to Role Payment source role. Valid values are primary, backup, none RecordExcessPaymentForCustomer The following command records any excess payments made by the customer, such as advance payments. Such payments will be automatically applied to future invoices. call RecordExcessPaymentForCustomer(:customer_id, :comment, :transaction.amount, :transaction.currency_code, :transaction.date, :transaction.payment_method, :transaction.reference_number) PARAMETER NAME DESCRIPTION Customer_id The Chargebee customer\u2019s identifier Comment Comment for an operation Transaction.amount The payment transaction amount in cents Transaction.currency_code The currency code (ISO 4217 format) for the transaction. Required if Multicurrency is enabled Transaction.date Timestamp (UTC) in seconds Transaction.payment_method Valid values: cash, check, bank_transfer, other, custom Transaction.reference_number Reference number if needed. 
For example, check number in case of check payment method RefundCreditNote Use the following command to refund a credit note to the payment source associated with the transaction. call RefundCreditNote(:credit_note_id, :refund_amount, :customer_notes, :refund_reason_code) PARAMETER NAME DESCRIPTION Credit_note_id Identifier of the credit note to be refunded Refund_amount The amount to be refunded. If not specified, the entire refundable amount for this credit_note is refunded Customer_notes The note for the operation. Available in customer-facing documents, for example, the credit note PDF Refund_reason_code Reason code for the refund. Must be one from a list of reason codes set in the Chargebee app in Settings > Configure Chargebee > Reason Codes > Credit Notes > Refund Credit Note ClaimGift Use the following command to switch the gift status from unclaimed to claimed : call ClaimGift(:gift_id) CancelGift The following command cancels the gift with the specified Id. call CancelGift(:gift_id) You can cancel only gifts in scheduled and unclaimed states. StopDunningForInvoice The following command stops dunning for Payment Due invoices enabled for Auto Collection. call StopDunningForInvoice(:invoice_id, :comment) When dunning is stopped, the status of the invoice will be changed to Not Paid. ApplyPaymentsForInvoice To apply excess payments to an invoice, use the command call ApplyPaymentsForInvoice(:invoice_id, :comment, :transactions) PARAMETER NAME DESCRIPTION Invoice_id The identifier of the invoice to apply payment to Comment Internal comment for the action Transactions Optional. Transaction Id or multiple Ids in the array format. Excess payments available with the customer will be applied against this invoice if this parameter is not passed. 
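Several stored-procedure parameters above expect a Unix timestamp, for example Transaction.date in RecordExcessPaymentForCustomer, which is a UTC timestamp in seconds. A minimal Python sketch for producing such a value (the helper name is ours, not part of Skyvia or Chargebee):

```python
from datetime import datetime, timezone

def to_unix_seconds(dt: datetime) -> int:
    """Convert a datetime to a UTC timestamp in seconds, as expected by
    parameters such as Transaction.date. Naive datetimes are treated as UTC."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

ts = to_unix_seconds(datetime(2024, 1, 15, 12, 0, 0))
print(ts)  # -> 1705320000
```

The resulting integer can be passed as the `:transaction.date` parameter of the `call` command.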
ApplyCreditsForInvoice To apply available credits to an invoice, use the command call ApplyCreditsForInvoice(:invoice_id, :comment, :credit_notes) PARAMETER NAME DESCRIPTION Invoice_id The identifier of the invoice to apply credit to Comment Internal comment for the action Credit_notes Optional. Credit Note number or numbers in the array format. Available refundable credits with the customer will be applied against this invoice if this parameter is not passed ClosePendingInvoice To finalize a pending invoice, use the command call ClosePendingInvoice(:invoice_id, :comment, :invoice_note, :remove_general_note, :invoice_date, :notes_to_remove) PARAMETER NAME DESCRIPTION Invoice_id The identifier of the invoice to close Comment Internal comment for the action. It is not displayed in any document Invoice_note A note for this particular invoice Remove_general_note Boolean. Removes the general note from this invoice if set to true Invoice_date Timestamp (UTC) in seconds. Must lie between the date when the invoice was generated and the current date Notes_to_remove Note Id or multiple Ids in the array format CollectPaymentForInvoice To collect payments for payment_due and not_paid invoices, use the command call CollectPaymentForInvoice(:invoice_id, :amount, :authorization_transaction_id, :payment_source_id, :comment) PARAMETER NAME DESCRIPTION Invoice_id The identifier of the invoice to collect payment for Amount Amount to be collected in cents. If this parameter is not passed, the entire amount due will be collected Authorization_transaction_id Authorization transaction to be captured Payment_source_id Payment source to be used for this payment Comment Internal comment for the action. 
Is not displayed in any document RefundInvoice To refund an invoice, use the command call RefundInvoice(:invoice, :refund_amount, :comment, :customer_notes, :credit_note_reason_code, :credit_note_create_reason_code) PARAMETER NAME DESCRIPTION Invoice The identifier of the invoice to be refunded Refund_amount The amount to be refunded in cents Comment Internal comment for the action Customer_notes Customer notes to be filled in the Credit Note Credit_note_reason_code Valid values: product_unsatisfactory, service_unsatisfactory, order_change, order_cancellation, waive, other Credit_note_create_reason_code Reason code for creating the credit note. Must be one from a list of reason codes set in the Chargebee app in Settings > Configure Chargebee > Reason Codes > Credit Notes > Create Credit Note AssignOrderNumber The following command assigns an order number to the order based on the settings, if not already assigned. call AssignOrderNumber(:order_id) CancelOrder To cancel an order and create a refundable credit note for the order, use the command call CancelOrder(:order_id, :cancellation_reason, :customer_notes, :comment, :cancelled_at, :credit_note_total) PARAMETER NAME DESCRIPTION Order_id The Id of the order to be canceled Cancellation_reason Valid values: shipping_cut_off_passed, product_unsatisfactory, third_party_cancellation, product_not_required, delivery_date_missed, alternative_found, invoice_written_off, invoice_voided, fraudulent_transaction, payment_declined, subscription_cancelled, product_not_available, others, order_resent Customer_notes Customer notes to be filled in the Credit Note Comment Internal comment for the action Cancelled_at The time at which the order was cancelled Credit_note_total Credit Note amount in cents CreateAuthorizationPayment The following command authorizes a specific amount in a customer\u2019s credit card, which can be collected within a period. 
call CreateAuthorizationPayment(:customer_id, :payment_source_id, :currency_code, :amount) PARAMETER NAME DESCRIPTION Customer_id Identifier of the customer Payment_source_id Payment source to be used for authorizing the transaction Currency_code The currency code (ISO 4217 format) of the transaction amount Amount The amount to be blocked VoidAuthorizationTransaction The following command voids the specific authorization transaction to release blocked funds from the customer\u2019s card. call VoidAuthorizationTransaction(:transaction_id) RefundPayment The following command refunds an online payment. Applicable only for transactions of the payment type. You can only refund a transaction with a success status. call RefundPayment(:transaction_id, :amount, :comment) For more information about the parameters, please refer to the Chargebee [API documentation](https://apidocs.chargebee.com/docs/api?prod_cat_ver=2) . Supported Actions Skyvia supports all the common actions for Chargebee." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/chargeover_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ChargeOver [ChargeOver](https://chargeover.com/) is a cloud-based billing tool that automates payment acceptance, invoicing, and contacting customers. Data integration : Skyvia supports importing data to and from ChargeOver, exporting ChargeOver data to CSV files, replicating ChargeOver data to relational databases, and synchronizing ChargeOver data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ChargeOver. Query : Skyvia Query supports ChargeOver. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to ChargeOver in Skyvia, you must specify the Subdomain, Public Key, Private Key, and Source Timezone. 
Getting Credentials Subdomain You can obtain the ChargeOver Subdomain from the URL of the ChargeOver page you work with. For example, if the URL is https:// yoursubdomain .chargeover.com/, you need only the yoursubdomain part. Public Key and Private Key To obtain the Public Key and Private Key, perform the following steps: Sign in to ChargeOver and click the gear icon in the top right corner of the ChargeOver page. In the left menu, click Developer -> More Dev Tools . Select the REST API block and click Get Started . Enable the REST API by selecting Yes from the drop-down list. Copy the Public and Private Keys values, store them in a safe place, and click Save at the bottom. Once you save your keys, you will no longer be able to see or retrieve the API private key/password. Creating Connection To connect to ChargeOver in Skyvia, paste the obtained keys to the corresponding fields in the Connection Editor. Additional Connection Parameters Suppress Extended Requests For some objects, ChargeOver API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The list of additional fields is the following: OBJECT FIELD Customers Tags Subscriptions LineItems Invoices LineItems, Sent, Schedule Quotes LineItems To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Use Custom Fields Select this checkbox to make ChargeOver custom fields available in Skyvia. Connector Specifics Object Peculiarities The Pricemodel field of the Items table is a JSON array. 
If you insert data into this field, you must specify the values in the JSON array format.\nFor example: [\n {\n \"currency_iso4217\": \"USD\",\n \"base\": 195.95,\n \"paycycle\": \"mon\",\n \"pricemodel\": \"fla\"\n },\n {\n \"currency_iso4217\": \"CAD\",\n \"base\": 195.95,\n \"paycycle\": \"mon\",\n \"pricemodel\": \"fla\"\n }\n ] Filtering Specifics ChargeOver objects support filters with the AND operator. Int32, DateTime, Date and Decimal fields support operators = , > , >= , < , <= . String fields support the = operator. \nUse filters by the fields below to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Object Fields Invoices InvoiceId, CustomerId, BillState, ShipState, BillCountry, ShipCountry, Date, InvoiceStatusStr, InvoiceStatusState, PackageId, CurrencyId, Token, CreatedDate, UpdatedDate and the first three custom fields Transactions TransactionId, CustomerId, GatewayId, CurrencyId, Token, ExternalKey, GatewayStatus, GatewayTransid, Amount, Fee, TransactionType, TransactionMethod, TransactionDetail, CreatedDate, TransactionStatusStr, TransactionStatusState Quotes QuoteId, CustomerId, BrandId, CurrencyId, Token, ExternalKey, Date, BillState, ShipState, BillCountry, ShipCountry, QuoteStatusStr, QuoteStatusState CreditCards CreditCardId, CustomerId, ExternalKey, Token Items ItemId, Name, ExternalKey, AccountingSku, ItemType Subscriptions PackageId, CustomerId, Token, BrandId, Paymethod, PayCycle, Nickname, BillState, ShipState, PackageStatusStr, PackageStatusState, ExternalKey, CreatedDate, UpdatedDate, CancelDatetime Contacts UserId, ExternalKey, FirstName, LastName, Email, Phone, UserTypeId, Username Customers CustomerId, ParentCustomerId, BillState, ShipState, BillCountry, ShipCountry, ExternalKey, Token, Company, SuperuserId, SuperuserEmail, CreatedDate, UpdatedDate and the first five custom fields AdminWorkers AdminId, ExternalKey, 
Email, FirstName, LastName Custom Fields Skyvia supports the custom fields in the AdminWorkers, Contacts, Customers, Invoices, InvoiceLineItems, Items, Quotes, QuoteLineItems, Subscriptions, SubscriptionLineItems and Transactions objects. Custom fields support the INSERT and UPDATE operations. You can add custom fields of the following types. ChargeOver Type DbType Text (single line) String Text (multi line) String Dropdown String Date Date Number Decimal Url String Rich text/HTML String Due to ChargeOver API specifics, if you delete a custom field, it remains visible in Skyvia. If you try to insert or update data in the deleted custom field, you will get an error \u201cYou must provide a valid [custom_XX] value\u201d. Nested Objects The following fields store complex structured data in JSON format. These objects are read-only. Object Field Invoices LineItems, Schedule Subscriptions, Quotes LineItems You can replicate these objects into databases or data warehouses using our new replication runtime. For this, select the Use New Replication Runtime option in the Replication, enable the Unwind Nested Objects option, and select Separate Tables. Incremental Replication and Synchronization Incremental Replication is supported for the following objects: Customers, Contacts, Subscriptions, Items, ItemTierSets, CreditCards, Coupons, AdminWorkers, Invoices, Quotes, Transactions . The CreditCards, Coupons, Transactions, and Quotes objects have only the CreatedDate field and do not have the UpdatedDate . This means that Incremental Replication detects only new records for these objects. Synchronization is supported for the following objects: Customers, Subscriptions, Items, Contacts . 
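The ChargeOver Filtering Specifics above boil down to a simple rule: Int32, DateTime, Date and Decimal fields accept the comparison operators, while String fields accept only equality. A hypothetical Python helper (ours, not a Skyvia API) encoding that rule:

```python
# Operators supported by ChargeOver filters, per the Filtering Specifics above.
COMPARISON_OPS = {"=", ">", ">=", "<", "<="}

def is_supported_filter(field_type: str, operator: str) -> bool:
    """Return True if ChargeOver supports the operator for the field type."""
    if field_type in {"Int32", "DateTime", "Date", "Decimal"}:
        return operator in COMPARISON_OPS
    if field_type == "String":
        return operator == "="
    return False

print(is_supported_filter("String", ">"))  # -> False: strings support only =
```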
DML Operations Support Skyvia supports the following DML operations for ChargeOver objects: Operation Objects INSERT, UPDATE, DELETE Contacts, Customers, Items, Subscriptions INSERT, DELETE CreditCards INSERT, UPDATE Invoices, Quotes INSERT Transactions Stored Procedures Skyvia represents part of the supported ChargeOver features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . CancelSubscription To cancel the subscription, use the command call CancelSubscription(:Id,:Comments,:CancelReasonId) SuspendSubscription To pause the subscription for an undefined period of time, use the command call SuspendSubscription(:Id,:Suspendfrom_datetime,:Suspendto_datetime) UnsuspendSubscription To renew the suspended subscription, use the command call UnsuspendSubscription(:Id) Supported Actions Skyvia supports all the common actions for ChargeOver." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/chargify_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Maxio Advanced Billing [Maxio Advanced Billing](https://www.maxio.com/) (formerly Chargify) is subscription management software for B2B SaaS. The software is built for the evolving needs of fast-growth companies. Data integration : Skyvia supports importing data to and from Maxio, exporting Maxio data to CSV files, replicating Maxio data to relational databases, and synchronizing Maxio data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Maxio Advanced Billing. Query : Skyvia Query supports Maxio Advanced Billing. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Maxio Advanced Billing, you simply need to specify your subdomain and API Key. 
To start creating a connection, follow the steps below: Getting Credentials Subdomain \u2014 the subdomain of your Maxio site \u2014 the fragment of its URL after \u2018https://\u2019 before the first dot. For example, if the URL looks like https:// your_subdomain .chargify.com/, use the your_subdomain part of the URL as a Subdomain value. API Key \u2014 Maxio API key. Read more on creating and managing API Keys in the [Maxio documentation](https://docs.maxio.com/hc/en-us/articles/24294819360525-Advanced-Billing-API-Keys) . Creating Connection Enter the Subdomain and API Key in the corresponding fields in the Skyvia Connection Editor. Additional connection parameters Suppress Extended Requests Maxio API returns only part of the fields for the Invoices object when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The list of such fields is the following: OBJECT FIELD Invoices Discounts, Taxes, Credits, Refunds, Payments, CustomFields To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Coupons When performing a DELETE operation, the record is not deleted. The record is archived (the date is specified in the ArchivedAt column). Subscriptions Maxio API does not provide a fixed list of the required fields for successful data import to this object. Thus, Skyvia doesn\u2019t mark any fields as required for mapping in import. However, you must map at least Product, Customer and credit card details depending on the specified product.\nYou can specify a product by mapping the ProductId or ProductHandle fields. You can specify a customer in two ways: You map the CustomerId by Constant to existing CustomerId value. 
You map the records from the objects with the Customer suffix in the name. You can specify the Payment profile (credit card details) using the records from the objects with the CreditCard or BankAccount suffixes in the names. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for such Maxio objects: Coupons, Customers, Events, Invoices, ProductPricePoints, Products, Subscriptions, and ProductFamilies . Skyvia supports Synchronization for such Maxio objects: Coupons, Customers, ProductPricePoints, Products, and Subscriptions . DML Operations support Skyvia supports the following DML operations for Maxio Advanced Billing objects: Operation Object INSERT, UPDATE, DELETE Coupons, Customers, ProductPricePoints, Products INSERT Invoices, ProductFamilies INSERT, UPDATE Subscriptions Supported Actions Skyvia supports all the common actions for Maxio Advanced Billing." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/chartmogul_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ChartMogul [ChartMogul](https://chartmogul.com/) is a real-time analytics and reporting tool for businesses with subscription billing. Data integration : Skyvia supports importing data to and from ChartMogul, exporting ChartMogul data to CSV files, replicating ChartMogul data to relational databases, and synchronizing ChartMogul data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ChartMogul. Query : Skyvia Query supports ChartMogul. Establishing Connection To create a connection to ChartMogul, you need to specify the API Key. Getting Credentials To get the ChartMogul API Key, perform the following steps: Log in to ChartMogul. Click on the user icon in the bottom left corner and select Admin . Select the ChartMogul owner in the users list and scroll down to the API Keys block. Copy the API Key. 
Creating Connection Enter the obtained API Key into the corresponding box in the Connection Editor. Connector Specifics Object Peculiarities PlanGroups The PlanIds values are not returned in the query results by default. However, when importing data to the PlanGroups object, you must map the PlanIds field. You must provide the PlanIds values in the array format: [\"id1\",\"id2\"] , where id1 and id2 are the Uuid values of the Plans object. Customers You can add custom fields in the Customers object. Custom field names and values are stored in JSON format in the Attributes_Custom field. To successfully insert data to the custom fields, you must map the Attributes_Custom field and pass its value in JSON format. You must specify the custom field type, name, and value: type - the custom field data type (valid data types for ChartMogul are String, Integer, Decimal, Timestamp, and Boolean). key - the custom field name. If the field name was not used before, a new custom field is created automatically. value - the custom field value. For example, [{\"type\": \"String\", \"key\": \"field_name\", \"value\": \"custom_field_value\"}] . To successfully update custom field records, you must map the Attributes_CustomUpdate field. You must specify the custom field name to update and its value in JSON format. For example, {\"field_name\":\"updated_custom_field_value\"} . You can update only the records where the custom fields already exist. Custom fields can be added, updated, or deleted via stored procedures . Contacts To successfully insert and update custom fields in the Contacts object, you must map the Custom field in JSON format. For example, [{\"key\":\"field_name\",\"value\":\"field_value\"}] . You can insert and update records in the existing fields already created in the ChartMogul UI.
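The JSON shapes described above are easy to get subtly wrong. As a rough illustration (plain Python, not a Skyvia or ChartMogul API; the helper names are ours), the values for the Attributes_Custom and Attributes_CustomUpdate fields can be built like this:

```python
import json

def custom_fields_insert_value(fields):
    """Build the JSON array expected in the Attributes_Custom field on INSERT.

    `fields` is a list of (type, key, value) tuples; valid ChartMogul types
    are String, Integer, Decimal, Timestamp, and Boolean.
    """
    return json.dumps(
        [{"type": t, "key": k, "value": v} for t, k, v in fields]
    )

def custom_fields_update_value(changes):
    """Build the JSON object expected in the Attributes_CustomUpdate field
    on UPDATE: {"field_name": "new_value", ...}."""
    return json.dumps(changes)

# Example: one String custom field for insert, one rename of its value for update.
insert_value = custom_fields_insert_value(
    [("String", "field_name", "custom_field_value")]
)
update_value = custom_fields_update_value(
    {"field_name": "updated_custom_field_value"}
)
```

Serializing with `json.dumps` (rather than hand-concatenating strings) keeps quoting and escaping valid, which matters because Skyvia passes the field value through as raw JSON.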
SubscriptionEvents To successfully insert data to the SubscriptionEvents object, you must map the required fields CustomerExternalId, EventType, EventDate , EffectiveDate and additional fields depending on the EventType value: SubscriptionId field is required for all event types except Retracted . RetractedEventId is required for the Retracted event type. Plan, Currency, and Amount are required for the Started, Start(Scheduled), Updated and Updated(Scheduled) event types. Invoices Invoices object stores complex structured data. Invoices can have several lines stored as nested JSON objects in the LineItems and Transactions fields. For user convenience, invoice lines are also available as separate records in InvoiceLineItems and InvoiceTransactions objects. They allow you to view these lines in a tabular form with Query, export them to CSV with Export, import them from ChartMogul to a cloud application or database, where these lines should be stored in a separate table, etc. To successfully import data to the nested invoice lines, you can use our Nested Objects mapping feature in Import. For this, you need to select the Nested Objects checkbox in the integration. Then, in the mapping settings, you can map the fields of invoice lines or transaction items. Metrics All the objects from the Metrics API group ( AllKeyMetrics, \nMonthlyRecurringRevenue, AnnualizedRunRate, AverageRevenuePerAccount, AverageSalePrice, CustomerCount, \nCustomerChurnRate, MRRChurnRate, and CustomerLifetimeValue ) have StartDate and EndDate fields, which can be used for filtering. If you don\u2019t use filters by dates when querying, then the StartDate is set to 01 January 2020, and the EndDate is set to the current date by default. You can also use the Interval field for filtering. If you don\u2019t use the filter by Interval when querying, its value is set to month by default. \nYou can set another filter for this field depending on the object: AllKeyMetrics object \u2014 day, week or month . 
MonthlyRecurringRevenue, AnnualizedRunRate, AverageRevenuePerAccount, CustomerCount objects — day, week, month or quarter . AverageSalePrice — month or quarter . We recommend using filters to limit the data volume and save API calls. DML Operations Support Operation Object INSERT, UPDATE, DELETE Customers, PlanGroups, Plans, SubscriptionEvents INSERT, DELETE DataSources, Invoices INSERT InvoiceTransactions Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the DataSources and SubscriptionEvents objects. Replication tracks only the new records for the DataSources object. Skyvia supports Synchronization for the SubscriptionEvents object. Stored procedures Skyvia represents part of the supported ChartMogul features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddCustomAttributesToCustomer To add a new custom field to the existing records in the Customers object, use the command call AddCustomAttributesToCustomer(:customer_uuid, :type, :key, :value) PARAMETER NAME DESCRIPTION Customer_uuid The ChartMogul UUID of the customer to add the custom field to Type Custom field data type (valid values are string, integer, decimal, timestamp, and boolean) Key Custom field name Value Custom field value UpdateCustomAttributesOfCustomer To update custom fields and their values, use the command call UpdateCustomAttributesOfCustomer(:customer_uuid, :custom) PARAMETER NAME DESCRIPTION Customer_uuid The ChartMogul UUID of the customer whose custom fields to update Custom Existing field name and its new value in the {"key":"value"} format RemoveCustomAttributesFromCustomer To remove a custom field from a Customers record, use the command call RemoveCustomAttributesFromCustomer(:customer_uuid, :custom) PARAMETER NAME DESCRIPTION Customer_uuid The ChartMogul UUID of the customer to remove the custom field from Custom Custom field name to be deleted in the following format: [\"field_name\"] To remove several fields at once, specify their names separated by commas, for example [\"field1\",\"field2\"...] . Supported Actions Skyvia supports all the common actions for ChartMogul." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/cleverreach_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources CleverReach [CleverReach](https://www.cleverreach.com/en-de/) is an email marketing tool that makes sending newsletters fast and easy. Data integration : Skyvia supports importing data to and from CleverReach, exporting CleverReach data to CSV files, and replicating CleverReach data to relational databases. Backup : Skyvia Backup does not support CleverReach. Query : Skyvia Query supports CleverReach. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) , log in with your CleverReach account. Creating Connection In the Connection Editor, click Sign In with CleverReach . Click Log in with my existing CleverReach account and connect if you already have an account. Enter your CleverReach credentials and click Log in now and connect Skyvia! Additional Connection Parameters Suppress Extended Requests CleverReach API returns only part of the fields for the GroupReceivers object when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The list of such additional fields is the following: OBJECT FIELD GroupReceivers Tags, Events and Order To reduce the number of API calls, select the Suppress Extended Requests checkbox.
However, please note that some of the fields in such objects will not be available in Skyvia (will return empty values) even if they have values in CleverReach, because its API does not return them without extended requests. Connector Specifics Object Peculiarities ReceiverOrders and GroupReceiverOrders The ReceiverOrders object displays the orders related to a specific receiver regardless of the list this receiver belongs to. If one receiver belongs to two lists, there will be two records with different GroupId values and equal values in all other fields in the ReceiverOrders table. The GroupReceiverOrders object displays the orders related to a specific receiver and the specific list this receiver belongs to. DML Operations Support Skyvia supports DML operations for the following CleverReach objects: Operation Object INSERT, UPDATE, DELETE Blacklists, GlobalAttributes, Groups, GroupAttributes, GroupReceivers, GroupReceiverOrders, ReceiverOrders UPDATE GroupReceiverAttributes Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates only for the Blacklists table. This table has only the CreatedDate field and no UpdatedDate field, which means that only newly created records can be replicated incrementally. Updated and deleted records do not take part in Incremental Replication. Skyvia does not support Synchronization for CleverReach. Stored Procedures Skyvia represents part of the supported CleverReach features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . ActivateReceiver To update the Activated field value, use the command call ActivateReceiver(:group_id,:Id) The value in the Deactivated field is set to null if it wasn't null.
PARAMETER DESCRIPTION Group_id The receiver group Id Id The receiver Id or email DeactivateReceiver To update the Deactivated field value, use command call DeactivateReceiver(:group_id,:Id) PARAMETER DESCRIPTION Group_id The receiver group Id Id The receiver Id or email RemoveReceiverTags To delete the specified tag in the Tags field, use command call RemoveReceiverTags(:Id,:Tags) PARAMETER DESCRIPTION Id The receiver Id or email Tags The list of tags for removal in the array format For example, if the specific record contains the array of tags [\u201cTag1\u201d,\u201dTag2\u201d,\u201dTag3\u201d] and you need to delete Tag2 and Tag3, then the Tags value should look like \u2018Tag2, Tag3\u2019 when you call the procedure. Supported Actions Skyvia supports all the common actions for CleverReach." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/clicksend_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ClickSend [ClickSend](https://www.clicksend.com) is a cloud business communication provider which helps you get and stay closer to your customers and staff using SMS, Email and Direct Mail via web app or API. Data integration : Skyvia supports importing data to and from ClickSend, exporting ClickSend data to CSV files, replicating ClickSend data to relational databases, and synchronizing ClickSend data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ClickSend backup. Query : Skyvia Query supports ClickSend. Establishing Connection To create a connection with ClickSend, you must specify the Username value and an API Key. Getting Credentials To obtain the credentials, perform the following steps: Sign-in to ClickSend and click on the key image near your user icon in the top right corner of your ClickSend page. Copy the credentials from the appeared dialog box. 
Creating Connection To create a connection between Skyvia and ClickSend, do the following: Enter a username. Enter your API Key. Click Create Connection . Connector Specifics Object Peculiarities When importing data to the Contacts table, you must map the required ListId field and one of the following fields: PhoneNumber, FaxNumber or Email . Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: Contacts, DeliveryIssues, EmailAddresses, EmailCampaigns, FaxHistory, LettersHistory, MasterEmailTemplates, MMSCampaigns, PostcardsHistory, SMSCampaigns, TransactionalEmailHistory, VoiceHistory . The Contacts object has both CreatedDate and UpdatedDate fields, thus, Incremental Replication detects both new records and updated records. For all other objects, the Incremental Replication detects only the new records because all these objects have the CreatedDate field and do not have the UpdatedDate field required for tracking the updates. Synchronization is supported for the Contacts object only. DML Operations Skyvia supports the following DML operations for ClickSend objects: Operation Object INSERT, UPDATE, DELETE ContactLists, Contacts, EmailDeliveryReceiptRules, FaxDeliveryReceiptRules, InboundFaxRules, InboundSMSRules, ReturnAddresses, SMSDeliveryReceiptRules, SMSTemplates, StrippedStringRules, Subaccounts, UserEmailTemplates, VoiceDeliveryReceiptRules INSERT, UPDATE EmailCampaigns, MMSCampaigns, ResellerAccounts, SMSCampaigns INSERT, DELETE EmailAddresses INSERT Accounts, DeliveryIssues, EmailToSMS, InboundSMS, SMSReceipts Stored Procedures Skyvia represents part of the supported ClickSend features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . 
SendAccountVerification To send the account for verification, use the command call SendAccountVerification(:country, :user_phone, :type) PARAMETER NAME DESCRIPTION Country Two-letter country code (ISO3166) User_phone User's phone number Type Type of verification, for example, sms VerifyAccount To verify an account with the activation token, use the command call VerifyAccount(:activation_token) ForgotUsername The following command sends an email with instructions to a user who forgot their username call ForgotUsername(:email) ForgotPassword The following command sends an email with a code for password reset to a user who forgot the password. call ForgotPassword(:username) After sending the code, execute the VerifyForgotPassword procedure and use this code as the activation_token parameter. VerifyForgotPassword To set a new password, use the following command call VerifyForgotPassword(:subaccount_id, :activation_token, :password) PARAMETER NAME DESCRIPTION Subaccount_id ID of the subaccount Activation_token The code received by the user as a result of the ForgotPassword procedure Password New password PurchaseRechargePackage To purchase a specific recharge package, use the command call PurchaseRechargePackage(:package_id) CopyContactToList To copy a contact from one list to another, use the command call CopyContactToList(:from_list_id, :contact_id, :to_list_id) The copied contact gets a new Id in the new list after that. TransferContactToList To move a contact from one list to another, use the command call TransferContactToList(:from_list_id, :contact_id, :to_list_id) The Id of the moved contact remains the same after executing the procedure.
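The difference between the two procedures can be sketched with a toy in-memory model (illustrative Python only; these structures and helper names are not part of the ClickSend API): copying assigns the contact a new Id in the target list, while transferring moves it with its Id intact.

```python
import itertools

# Monotonic counter standing in for server-side Id assignment.
_next_id = itertools.count(1000)

def copy_contact(lists, from_list, contact_id, to_list):
    """Model of CopyContactToList: the copy gets a new Id in the target list."""
    contact = dict(lists[from_list][contact_id])  # original stays in place
    new_id = next(_next_id)
    lists[to_list][new_id] = contact
    return new_id

def transfer_contact(lists, from_list, contact_id, to_list):
    """Model of TransferContactToList: the contact moves, keeping its Id."""
    contact = lists[from_list].pop(contact_id)
    lists[to_list][contact_id] = contact
    return contact_id
```

The practical consequence: after a copy, any mapping keyed on the old contact Id will not find the new record in the target list, while a transfer preserves Id-based references.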
RemoveOptedOutContacts To move all the opted-out contacts from the specified list to the opt-out list, use the command call RemoveOptedOutContacts(:list_id, :opt_out_list_id) SendEmailVerificationToken To send verification message to validate the specified email, use the command call SendEmailVerificationToken(:email_address_id) VerifyAllowedEmailAddress To confirm the email with activation token, use the command call VerifyAllowedEmailAddress(:email_address_id, :activation_token) CancelEmailCampaign To cancel the email campaign, use the command call CancelEmailCampaign(:email_campaign_id) CancelMMSCampaign To cancel the MMS campaign, use the command call CancelMMSCampaign(:mms_campaign_id) PurchaseDedicatedNumber To buy the dedicated number, use the command call PurchaseDedicatedNumber(:dedicated_number) CancelScheduledLetter To cancel the scheduled letter, use the command call CancelScheduledLetter(:message_id) CancelScheduledPostcard To cancel the postcard sending, use the command call CancelScheduledPostcard(:message_id) ResellerTransferCredit To transfer the balance to another account, use command call ResellerTransferCredit(:client_user_id, :balance, :currency) SendSMS To send an SMS, use the command call SendSMS(:from, :body, :to, :source) PARAMETER NAME DESCRIPTION From Sender Body The message text To Message receiver Source Method of sending e.g. \u2018wordpress\u2019, \u2018php\u2019, \u2018c#\u2019 All the parameters are required. SendSMSWithAllParameters Use the following command to send SMS with the full set of parameters call SendSMSWithAllParameters(:from, :body, :to, :source, :schedule, :custom_string, :list_id, :country, :from_email) PARAMETER NAME DESCRIPTION From Sender Body The message text To Message receiver Source Method of sending e.g. \u2018wordpress\u2019, \u2018php\u2019, \u2018c#\u2019 Schedule Time in unix format (integer). Leave blank for immediate delivery Custom_string Your reference List_id Your list ID if sending to a whole list. 
Can be used instead of 'to' Country Recipient's country From_email An email address where the reply should be emailed to. If omitted, the reply will be emailed back to the user who sent the outgoing SMS MarkSMSReceiptAsRead To mark all the receipts dated before the specified date as read, use the command call MarkSMSReceiptAsRead(:date_before) MarkInboundSMSAsRead To mark all the SMS dated before the specified date as read, use the command call MarkInboundSMSAsRead(:date_before) MarkSpecificInboundSMSAsRead To mark a specific inbound SMS as read, use the command call MarkSpecificInboundSMSAsRead(:message_id) CancelSMS To cancel the sending of a specific message, use the command call CancelSMS(:message_id) CancelSMSCampaign To cancel a specific SMS campaign, use the command call CancelSMSCampaign(:sms_campaign_id) GenerateNewAPIKey To generate a new API Key for the specified subaccount, use the command call GenerateNewAPIKey(:subaccount_id) The new API Key is returned in the response. SendVoiceMessage To send TTS (Text-to-speech) voice calls, use the command call SendVoiceMessage(:body, :to, :voice, :custom_string, :country, :source) PARAMETER NAME DESCRIPTION Body Message content To Recipient phone number Voice Male or female Custom_string Your reference. Will be passed back with all replies and delivery reports Country The country of the recipient Source Method of sending, e.g. 'wordpress', 'php', 'c#' SendVoiceMessageWithAllParameters Use the following command to send TTS (Text-to-speech) voice calls with the full set of parameters. call SendVoiceMessageWithAllParameters(:body, :to, :voice, :custom_string, :country, :source, :list_id, :lang, :schedule, :required_input, :machine_detection) PARAMETER NAME DESCRIPTION From Sender Body The message text To Message receiver Voice Male or female Custom_string Your reference Country Recipient's country Source Method of sending e.g.
'wordpress', 'php', 'c#' List_id Your list ID if sending to a whole list. Can be used instead of 'to' Lang Message language. Valid values are available [here](https://developers.clicksend.com/docs/rest/v3/#Send-Voice-Message) Schedule Time in unix format (integer). Leave blank for immediate delivery Required_input Receive a keypress from the recipient. Flag value must be 1 for yes or 0 for no. Machine_detection Detect answering machine or voicemail and leave a message. Flag value must be 1 for yes or 0 for no. CancelVoiceMessage To cancel sending a specific voice message, use the command call CancelVoiceMessage(:message_id) Supported Actions Skyvia supports all the common actions for ClickSend." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/clickup_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ClickUp [ClickUp](https://clickup.com/) is a cloud solution offering project management and collaboration tools such as task and time tracking, team chat, project whiteboard, etc. Data integration : Skyvia supports importing data to and from ClickUp, exporting ClickUp data to CSV files, replicating ClickUp data to relational databases, and synchronizing ClickUp data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ClickUp. Query : Skyvia Query supports ClickUp. Establishing Connection Sign in using your credentials to create a connection with ClickUp. Creating Connection To connect to ClickUp, perform the following steps: In the Connection Editor, click Sign In with ClickUp . Enter your email and password in the corresponding boxes and click Log In . Select the needed ClickUp workspace and click Connect Workspace . Additional Connection Parameters Suppress Extended Requests ClickUp API returns only part of the fields for some objects when querying multiple records.
To query the values of the missing fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD FolderlessGoals FolderName, History, PrettyUrl FolderGoals FolderName, History and PrettyUrl Lists Statuses FolderlessLists Statuses Connector Specifics Object Peculiarities Querying Deleted Records The deleted records are unavailable when you query all records in an object. However, ClickUp objects support soft delete operations. If an object supports filtering by ID, you can query a deleted record using a filter by its ID. TimeEntries When you query time entries, you get the records created for the last 30 days for the current user. To get time entries for another team member or all team members, use a filter by the Assignee field. To query time entries for multiple assignees, specify their IDs separated by commas, for example Assignee = 1234, 5678 . The Assignees field is designed for filtering only. If you query this field without filtering by it, it returns empty values. Teams_UserGroups The Members field stores a list of IDs in array format. The Members_Add and Members_Rem fields do not store data and return empty values when querying. They exist only for adding or removing members' list items. When inserting records to the Teams_UserGroups object, map the Members field in array format, for example [123, 789] . To add a new member to the existing group, use the UPDATE or UPSERT operation and map the Members_Add field specifying the user IDs you want to add. For example, to add a user with Id = 456 to the existing list [123, 789] , map the Members_Add field to the [456] value. As a result, the Members array will look like this [123, 456, 789] .
To remove a user from the list, use the UPDATE or UPSERT operation and map the Members_Rem field specifying the user IDs you want to remove. For example, to remove a user with Id = 789 from the list [123, 456, 789] , map the Members_Rem field to [789] value. As a result, the Members array will look like this [123, 456] . FolderlessGoals When inserting records to the FolderlessGoals , map the Owners field in JSON format, for example, [12345678, 23456789] . To add a new owner to the existing owners\u2019 list, use the UPDATE or UPSERT operation and map the Owners field specifying the user IDs you want to add. For example, to add a new owner with Id = 34567890 to the existing owners list [12345678, 23456789] , map the Owners field to the [34567890] value. As a result, the owners\u2019 list will look like this: [12345678, 23456789, 34567890] . To remove an owner from the list, use the UPDATE or UPSERT operation and map the OwnersRemove field specifying the user IDs you want to remove. For example, to remove an owner with Id = 12345678 from the existing list [12345678, 23456789, 34567890] , map the OwnersRemove field to the [12345678] value. As a result, the owners\u2019 list will look like [23456789, 34567890] . TaskComments and ListComments These objects display the last 25 comments for each task or list within a query. \nTo get earlier comments, filter by the Date and StartId fields. The StartId field doesn\u2019t return data when querying. It is used for filtering only. To increase query performance, you may filter by the TaskId field for the TaskComments object and the ListId field for the ListComments object. 
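To make the add/remove field behavior concrete, here is a small illustrative sketch (plain Python, not the ClickUp API) that reproduces the examples above; the same pattern applies to Members_Add / Members_Rem on Teams_UserGroups and to Owners / OwnersRemove on FolderlessGoals:

```python
def apply_members_update(members, members_add=None, members_rem=None):
    """Model of an UPDATE that maps the add/remove fields.

    `members` is the currently stored array; the add list appends IDs not
    already present, the remove list drops matching IDs.
    """
    result = [m for m in members if m not in (members_rem or [])]
    result += [m for m in (members_add or []) if m not in result]
    return sorted(result)

# Reproducing the documented examples:
apply_members_update([123, 789], members_add=[456])        # -> [123, 456, 789]
apply_members_update([123, 456, 789], members_rem=[789])   # -> [123, 456]
```

The key point is that you never send the full resulting array yourself; you map only the delta fields, and the stored array is recomputed on the server side.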
Filtering Specifics ClickUp API supports the following native filters: Object Operator Field TeamTasks = Status_Status, Parent, ListId, Project_Id, SpaceId, Archived IN Status_Status, Project_Id, SpaceId, ListId < , <= , > , >= CreatedDate, UpdatedDate, DateDone, DueDate TimeEntries > , >= Start < , <= End Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Nested Objects The following fields store complex structured data in JSON format. These fields are read-only. Object Field Lists Statuses FolderlessLists Statuses Workspaces Members You can replicate these objects into databases or data warehouses using our new replication runtime. For this, select the Use New Replication Runtime option in the Replication, enable the Unwind Nested Objects option, and select Separate Tables . Custom Fields The CustomFields field in the TeamTasks object stores the custom fields related to a specific task. You can add values to this field using the INSERT operation. To update or delete values in the CustomFields field, use stored procedures . Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the FolderlessGoals, FolderGoals, TeamTasks, and Teams_UserGroups objects. Skyvia detects only new records for the Teams_UserGroups object. Skyvia supports Synchronization for the FolderlessGoals and TeamTasks objects.
DML Operations Support Operation Object INSERT, UPDATE, DELETE FolderlessGoals, FolderlessLists, Folders, ListComments, Lists, Spaces, Tags, TaskComments, Team_UserGroups, TeamTasks, TimeEntries UPDATE, DELETE FolderGoals Stored Procedures UpdateTaskCustomField To update value in the TeamTasks object CustomFields field, use the following command. UpdateTaskCustomField(:task_id, :field_id, :valueArray, :valueObject, :valueString, :valueInt, :value_options) PARAMETER NAME DESCRIPTION Task_id Required. ID of the task you want to update field_id Required. Universal unique identifier (UUID) of the Custom Field you want to set ValueArray Parameter for JSON Array value update ValueObject Parameter for JSON Object value update ValueString Parameter for String value update ValueInt Parameter for numeric value update Value_options Parameter for drop-down menu option update RemoveTaskCustomField To delete custom field, use the following command RemoveTaskCustomField(:task_id,:field_id) Supported Actions Skyvia supports all the common actions for ClickUp." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/clockify_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Clockify [Clockify](https://clockify.me) is a time tracking service that lets you track teams activity and work hours across projects. Data integration : Skyvia supports importing data to and from Clockify, exporting Clockify data to CSV files, and replicating Clockify data to relational databases. Backup : Skyvia Backup does not support Clockify. Query : Skyvia Query supports Clockify. Establishing Connection To create a connection to Clockify, specify the API Key. Getting Credentials API Key To locate the API key, do the following. Go to [Clockify](https://app.clockify.me/) . Click the user icon and choose Preferences . Switch to the Advanced tab and click Generate . Click Generate again and copy the API key. 
The API key is displayed only once, when you generate it. Store it in a safe place to access it later. If you generate a new API key, the old one becomes invalid. Creating Connection To connect to Clockify, enter the obtained API key into the API Key box in the connection editor. Additional Connection Parameters Suppress Extended Requests Clockify API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests for each record of such an object to query the values of the missing fields. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Invoices BillFrom, ClientAddress, CompanyId, ContainsImportedExpenses, ContainsImportedTimes, Discount, DiscountAmount, Items, Note, Subject, Subtotal, Tax, Tax2, Tax2Amount, TaxAmount, UserId, VisibleZeroFields Expenses File You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Customized Workspace Objects Use this parameter to enable custom fields specific to each workspace. Connector Specifics Object Peculiarities Assignments When you query this object, the result includes data from 01.01.2024 00:00:00 to the current moment by default. To query data for another period, use filters by the Start and End fields. If you filter by the Start field only, the query returns data from the Start date till now. If you filter by the End field only, the query returns data from 01.01.2024 00:00:00 till the date specified in the filter. Holidays To insert records to this object, map the UserGroupIds or UserIds in array format. If you mapped the UserIds , also map the UserContains and UserStatus fields. If you mapped the UserGroupIds , also map the UserGroupContains and UserGroupStatus fields.
Consider the following valid field values when importing data to this object: Object Operation Value UserContains, UserGroupContains INSERT Contains , Does_Not_Contain UserContains, UserGroupContains UPDATE Contains , Does_Not_Contain , Contains_Only UserStatus, UserGroupStatus INSERT ALL, ACTIVE, INACTIVE UserStatus, UserGroupStatus UPDATE ALL, ACTIVE, INACTIVE, PENDING, DECLINED Policies You can get only records with an active status when querying this object. To get the archived records, use a filter by the Status field: set Status = ARCHIVED to get the archived records, or set Status = ALL to get all existing records. When you query specific records using a filter by the Id , you can get only active records. If you query an archived record by its Id , the query returns no records. The INSERT operation requires mapping the UserGroupIds or UserIds fields in array format, for example [\"66963721c855372fa3918cc0\"] . If you mapped the UserIds , also map the UserContains and UserStatus fields. If you mapped the UserGroupIds , also map the UserGroupContains and UserGroupStatus fields. Consider the following valid field values when importing data to this object: Object Operation Value UserContains, UserGroupContains INSERT Contains , Does_Not_Contain UserContains, UserGroupContains UPDATE Contains , Does_Not_Contain , Contains_Only UserStatus, UserGroupStatus INSERT ALL, ACTIVE, INACTIVE UserStatus, UserGroupStatus UPDATE ALL, ACTIVE, INACTIVE, PENDING, DECLINED TotalCapacityForUser The TotalhoursPerDay field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. When you query this object, the result includes data from 01.01.2024 00:00:00 to the current moment by default.
To query data for another period, use filters by the Start and End fields. If you filter by the Start field only, the query returns data from the specified Start date till now. If you filter by the End field only, the query returns data from 01.01.2024 00:00:00 till the date specified in the filter. Custom Fields Custom fields are available for the Projects, Users, TimeEntryForUsers, and TimeEntryInProgress objects. Clockify supports the following custom field types: Clockify DbType Text String Number Double Link String Switch Boolean Select String Select Multiple String Every Clockify workspace may have a separate set of custom fields. If you have custom fields, Skyvia adds an additional object with the Clockify workspace name as a prefix. For example, suppose there are Text and Checkbox fields in the Projects object in Client Workspace . Skyvia creates the additional Client Workspace Projects object, which repeats the Projects object content and also contains the Text and Checkbox custom fields. The Projects custom fields are read-only. If you use them in the Returning operation, Skyvia performs an additional API call to get their values due to Clockify API specifics. The Users custom fields are read-only. Incremental Replication and Synchronization Skyvia doesn't support Replication with Incremental Updates and Synchronization for the Clockify objects because they don't have the CreatedDate and UpdatedDate fields. DML Operations Support Clockify objects support the following operations.
Operation Object INSERT, UPDATE, DELETE Clients, ExpenseCategories, Expenses, Holidays, Invoices, Policies, Projects, Tags, Tasks, TimeEntryForUsers, TimeEntryInProgress, UserGroups INSERT Assignments, Payments, Users, Workspaces UPDATE MemberProfile Stored Procedures Skyvia represents part of the supported Clockify features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UpdateWorkspaceCostRate To update the workspace cost rate, use the command: call UpdateWorkspaceCostRate(:workspaceId,:amount,:since) PARAMETER NAME DESCRIPTION WorkspaceId Workspace identifier across the system Amount Integer cost rate amount Since Date and time in yyyy-MM-ddThh:mm:ssZ format UpdateWorkspaceBillableRate To update the workspace billable rate, use the command: call UpdateWorkspaceBillableRate(:workspaceId,:amount,:currency,:since) PARAMETER NAME DESCRIPTION WorkspaceId Workspace identifier across the system Amount Integer billable rate amount Currency Currency code. The default value is USD Since Date and time in yyyy-MM-ddThh:mm:ssZ format ArchiveExpenseCategory To archive an expense category, use the command: call ArchiveExpenseCategory(:workspaceId,:categoryId,:archived) PARAMETER NAME DESCRIPTION WorkspaceId Workspace identifier across the system CategoryId Category identifier across the system Archived True or False flag determining whether to archive the expense category AddUserToGroup To add a specific user to a group, use the command: call AddUserToGroup(:workspaceId,:userGroupId,:userId) RemoveUserFromGroup To remove a specific user from a group, use the command: call RemoveUserFromGroup(:workspaceId,:userGroupId,:userId) Supported Actions Skyvia supports all the common actions for Clockify.
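The Since parameter format above can be produced with standard date formatting. A minimal sketch, assuming a hypothetical workspace ID and amount:

```python
from datetime import datetime, timezone

def clockify_timestamp(dt: datetime) -> str:
    """Format a datetime as yyyy-MM-ddThh:mm:ssZ (UTC), the Since format above."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

since = clockify_timestamp(datetime(2024, 1, 1, tzinfo=timezone.utc))

# Hypothetical workspace ID and cost rate amount substituted into the command text:
command = f"call UpdateWorkspaceCostRate('WORKSPACE_ID', 50, '{since}')"
print(command)
```

Converting to UTC before formatting keeps the trailing Z literal honest for inputs in any time zone.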
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/close_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Close [Close](https://www.close.com/product) is an all-in-one high-performance CRM for growing sales teams and turning more leads into revenue. Data integration : Skyvia supports importing data to and from Close, exporting Close data to CSV files, replicating Close data to relational databases, and synchronizing Close data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Close backup. Query : Skyvia Query supports Close. Establishing Connection To create a connection to Close, select the authentication type and provide the required credentials. Getting Credentials API Key Log in to Close and click Settings on the left. Select API Keys . Click + New API Key . Enter the API Key name and click Create API Key . Copy the newly created API Key into the clipboard and click OK . The API Key is available only once when you create it. \nCopy the displayed API Key and store it somewhere safe. Creating API Key Connection Select the API Key authentication type. Enter the obtained API Key. Creating OAuth Connection Select the OAuth2 authentication type and click Sign In with Close . Enter your email and password. Allow Skyvia access your Close organization. Additional Connection parameters Use Custom Fields Select this checkbox to make Close custom fields available in Skyvia. Connector Specifics Object Peculiarities CustomActivityInstances Close users can create custom activities of the specific type. The list of custom user activities is stored in the CustomActivityInstances object. \nSkyvia creates a separate object for every custom activity type, because every activity type has its own set of custom fields. 
For example, if the user has two custom activity types, ActivityType1 and ActivityType2 , Skyvia will create the ActivityType1_CustomActivityInstances and ActivityType2_CustomActivityInstances objects. These objects will contain the fields available in the CustomActivityInstances and the custom fields unique to every specific activity type. For better performance, use filters by the CustomActivityTypeId and LeadId fields when querying. More information about the custom fields is available below . Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all Close objects except: ConnectedAccounts, Groups, LeadStatuses, OpportunityAggregateValues, OpportunityStatuses, Pipelines, ReportActivityMetrics, SendAs, SequenceSchedules . Skyvia supports Synchronization for such Close objects as ActivityCustomFields, ActivityEmails, ActivityNotes, ActivitySMS, ContactCustomFields, Contacts, CustomActivityInstances, CustomActivityTypes, EmailSequences, EmailTemplates, IntegrationLinks, LeadCustomFields, Leads, Opportunities, OpportunityCustomFields, Pipelines, Roles, SequenceSubscriptions, SharedCustomFields, SmartViews, Tasks .
DML Operations Support Skyvia supports the following DML operations for Close objects: Operation Object INSERT, UPDATE, DELETE ActivityCustomFields, ActivityEmails, ActivityNotes, ActivitySMS, Contacts, ContactCustomFields, CustomActivityInstances, CustomActivityTypes, EmailSequences, EmailTemplates, Groups, IntegrationLinks, LeadCustomFields, Leads, LeadStatuses, Opportunities, OpportunityCustomFields, OpportunityStatuses, Pipelines, Roles, SharedCustomFields, SmartViews, Tasks INSERT, UPDATE SequenceSubscriptions INSERT, DELETE ActivityCalls INSERT BulkDelete, BulkEdit, BulkEmails, BulkSequenceSubscriptions, Export, SendAs DELETE ActivityEmailThreads, ActivityMeetings, ActivityTaskCompleted UPDATE PhoneNumbers Stored Procedures Skyvia represents part of the supported Close features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddUserToGroup To add a specific user to a group, use the command call AddUserToGroup (:group_id, :user_id) DeleteUserFromGroup To remove a specific user from a group, use the command call DeleteUserFromGroup (:group_id, :user_id) Custom Fields The following Close objects contain custom fields: Contacts, Leads, Opportunities , *CustomActivityInstances . Custom fields support the INSERT and UPDATE operations. The following custom field types are available in our Close connector. Close Type DbType Text String. The default length is 1000 characters. The length increases to 4000 characters if the field name is memo or note ; contains description, comment, notes, or address ; or ends with url, reason , or keywords . If the name contains content or html , its length increases to 2147483647 characters. Textarea String Dropdown String. Enum field. Single or Multiple entries allowed. Number Double Date Date DateTime DateTime Contact (Single) String. Refers to the Contacts object. User (Single) String.
Refers to the Users object. Contact (Multiple) Array of strings. Includes the Ids of related contacts. User (Multiple) Array of strings. Includes the Ids of related users. Textarea HTML markup field, for example, a value wrapped in HTML tags. This type is available in the *CustomActivityInstances objects. Contact custom fields (Single and Multiple) in the Leads object don't support the INSERT operation. Supported Actions Skyvia supports all the common actions for Close. Troubleshooting The skip you set is larger than the maximum skip for this resource (max_skip = N ) Close API allows reading only a limited number of records in some objects. This limit may vary in different Close objects. If you exceed this limit when running an integration, it fails with an error of the following type: \u201cIntegration failed: The skip you set is larger than the maximum skip for this resource (max_skip = N ). Please refine your search query\u201d. N in the error message is the maximum allowed number of records, depending on the object. You can reduce the number of read records using filters with the following operators: OBJECT FIELD AND SUPPORTED OPERATOR Activities LeadId (=), CreatedDate (>=, <) ActivityCreated, ActivityCalls, ActivityEmails, ActivityEmailThreads, ActivityNotes, ActivityLeadStatusChanges, ActivityOpportunityStatusChanges, ActivityTaskCompleted, ActivitySMS LeadId (=), UserId (=), CreatedDate (>=, <)" }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/confluence_cloud_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Confluence Cloud [Confluence](https://www.atlassian.com/software/confluence) is Atlassian's corporate knowledge management collaboration software (wiki), written in Java. Skyvia supports the cloud version of Confluence. Data integration : Skyvia supports importing data to and from Confluence, exporting data from Confluence to CSV files, replicating data from Confluence to relational databases, and synchronizing Confluence data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Confluence. Query : Skyvia Query supports Confluence.
Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) , specify the site URL and log in with a Confluence account. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Skyvia does not store your credentials. Creating Connection To connect to Confluence, perform the following steps: Enter your Confluence site URL in the Site box; you can find it in your Jira account profile. Click Sign In with Confluence Cloud . Enter your Confluence credentials and click Log in . Select the site to connect and click Accept . Connector Specifics Object Peculiarities BlogPosts and Pages Status Field When you select data from BlogPosts and Pages , Confluence API returns only the records with the Current status. To get the deleted blog posts, use the filter Status = Trashed or Status = Deleted . To get the archived pages, use the filter Status = Archived . If you insert data to BlogPosts and Pages , the valid Status values are Current and Draft . If you update data in BlogPosts and Pages , the valid Status values are Current, Draft, Archived , and Deleted . Body Field Format Confluence API allows getting the Body field values in the Storage or Atlas Doc Format. By default, you get the Body value in the Storage format when querying. When you import data to BlogPosts and Pages , you can pass the Body values in any of the available formats: Storage, Wiki, or Atlas Doc Format. To do that, map the BodyRepresentation field to the corresponding format value and map the Body itself. The BodyRepresentation field is required for mapping to successfully import the Body values. If you don't map the Body field, the integration will create an empty blog post or page. Version_Number Field To successfully update the BlogPosts and Pages records, you must map the Version_Number field. Its value has to be equal to the current version value + 1.
For example, if the current version of the Pages record is 3, map the Version_Number to 4. BlogPostLabels, PageLabels, SpaceLabels The BlogPostLabels, PageLabels , and SpaceLabels are related to the BlogPosts, Pages, and Spaces by foreign keys. When you load data to the BlogPostLabels, PageLabels , and SpaceLabels , the Ids of imported records are not displayed in the integration log. -Properties Objects To successfully update the BlogPostAttachmentProperties, BlogPostProperties, PageAttachmentProperties, PageProperties , and SpaceProperties records, you must map the Version_Number field. Its value has to be equal to the current version value + 1. For example, if the current version of the BlogPostProperties record is 3, map the Version_Number to 4. BlogPostVersions and PageVersions When you delete records in the BlogPostVersions and PageVersions , the Version_Number values of the remaining parent records in the BlogPosts and Pages shift. For example, the Pages record has versions 1, 2, and 3. If you delete record number 2 from the PageVersions , the remaining versions 1 and 3 of the Pages record will shift to 1 and 2. -InlineComments Objects Inline comments are the comments added when users highlight the text directly in the content. To add such a comment via API (using data integration), you must map the comment Body , the content Id ( PageId or BlogPostId ), and the following fields: InlineCommentProperties_TextSelection \u2014 the text fragment which you add a comment for. InlineCommentProperties_TextSelectionMatchCount \u2014 the number of occurrences of the specified fragment. For example, 1 if the fragment occurs only once on the page. InlineCommentProperties_TextSelectionMatchIndex \u2014 the text fragment number, with indexes starting from 0. If the fragment occurs only once, specify 0.
If there are three occurrences of the \u201chello world\u201d fragment on a page, and you want to highlight the second occurrence, you should map 1 for InlineCommentProperties_TextSelectionMatchIndex and 3 for InlineCommentProperties_TextSelectionMatchCount . -FooterComments Objects Confluence Cloud allows users to reply to comments and other replies, which creates several nesting levels of replies. Skyvia reads the replies of the first nesting level (replies to comments) by default. To get the replies of deeper nesting levels, use the filter by the ParentCommentId . Incremental Replication and Synchronization Skyvia supports Synchronization for the following objects: BlogPosts, Pages, and SpaceProperties . Synchronization tracks only new records. Skyvia supports Replication with Incremental Updates for the following objects: BlogPostAttachmentVersions, BlogPostFooterCommentVersions, BlogPostInlineCommentVersions, BlogPosts, BlogPostVersions, PageAttachmentVersions, PageFooterCommentVersions, PageInlineCommentVersions, Pages, PageVersions, SpaceProperties, Tasks . Skyvia tracks only new records and doesn't track the updated records for all the objects except the Tasks .
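The match-count and match-index rules above lend themselves to a small helper. A minimal sketch; the helper name is ours, not part of Skyvia or the Confluence API:

```python
def text_selection_params(page_text: str, fragment: str, occurrence: int) -> dict:
    """Build the InlineCommentProperties_* mapping values described above.
    `occurrence` is 1-based (which occurrence of the fragment to highlight);
    the API's match index starts from 0."""
    count = page_text.count(fragment)
    if not 1 <= occurrence <= count:
        raise ValueError("the requested occurrence does not exist on the page")
    return {
        "InlineCommentProperties_TextSelection": fragment,
        "InlineCommentProperties_TextSelectionMatchCount": count,
        "InlineCommentProperties_TextSelectionMatchIndex": occurrence - 1,
    }

# Three occurrences of "hello world"; highlight the second one.
params = text_selection_params(
    "hello world, hello world, hello world", "hello world", 2
)
```

This reproduces the worked example above: three occurrences, second one highlighted, so the match count is 3 and the match index is 1.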
DML Operations Skyvia supports the following DML operations for the Confluence objects: Operation Object INSERT, UPDATE, DELETE BlogPostAttachmentProperties, BlogPostChildrenFooterComments, BlogPostChildrenInlineComments, BlogPostFooterComments, BlogPostInlineComments, BlogPostProperties, BlogPosts, PageAttachmentProperties, PageChildrenFooterComments, PageChildrenInlineComments, PageFooterComments, PageInlineComments, PageProperties, Pages, SpaceProperties, Spaces INSERT, DELETE BlogPostLabels, Groups, PageLabels, SpaceLabels DELETE BlogPostVersions, GroupMembers, PageVersions, SpaceWatchers UPDATE Tasks Stored Procedures Skyvia represents part of the supported Confluence features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddMemberToGroup This command adds an existing user to a specific group. call AddMemberToGroup(:groupId, :accountId) Supported Actions Skyvia supports all the common actions for Confluence Cloud." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/constantcontact_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Constant Contact [Constant Contact](https://www.constantcontact.com/) is an easy-to-use email marketing service that helps create effective email marketing and other online marketing campaigns to meet customers' business goals. Data integration : Skyvia supports importing data to and from Constant Contact, exporting Constant Contact data to CSV files, replicating Constant Contact data to relational databases, and synchronizing Constant Contact data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Constant Contact backup. Query : Skyvia Query supports Constant Contact.
Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) , log in with a Constant Contact account. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Skyvia does not store your credentials. Creating Connection To create a Constant Contact connection, perform the following steps: Click Sign In with Constant Contact . Enter your Constant Contact credentials and click Sign in . Give your consent to the list of requested permissions. Additional Connection Parameters Suppress Extended Requests Constant Contact API returns only part of the fields for some objects when querying multiple records. To query the values of additional fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The list of such additional fields is the following: OBJECT FIELD Campaigns CampaignActivities To reduce the number of API calls, select the Suppress Extended Requests checkbox. However, please note that some fields in such objects are unavailable in Skyvia (return empty values) even if they have values in Constant Contact, because its API does not return them without extended requests. Use Custom Fields Select this checkbox to make Constant Contact custom fields available in Skyvia. Connector Specifics Object Specifics Campaigns The Campaigns object stores data about campaign activities as a JSON array. If you need to load data into the Campaigns object, you must map the Name and Activities fields and provide values for the Activities field as JSON arrays, for example: [{ \"format_type\": 5, \"from_name\": \"NAME\", \"from_email\": \"email@test.com\", \"reply_to_email\": \"email@test.com\", \"subject\": \"SUBJECT\", \"html_content\": \"CONTENT\" }] Where NAME , email@test.com , SUBJECT , and CONTENT should be replaced with the respective values.
You may also add the preheader parameter and the physical_address_in_footer and document_properties objects to records in the JSON array. Campaign activity data, stored in the Activity field of the Campaigns object, is also available via the CampaignActivities object. Related Objects You can access some Constant Contact objects only via their parent objects. For example, to query CampaignActivities , Constant Contact API requires the ID of the corresponding Campaigns record. To get records from a number of other objects, Constant Contact API requires the ID of the corresponding CampaignActivity . These are the following objects: EmailLinksReport , EmailDidNotOpensReport , EmailBouncesReport , EmailForwardsReport , EmailOpensReport , EmailOptoutsReport , EmailSendsReport , EmailUniqueOpensReport , EmailPreviews , CampaignActivitySchedule , CampaignActivitySendHistory . Skyvia does not require the ID of the parent object from users. If you don't specify the IDs of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record. This allows querying child objects without knowing their parents, but this method takes time and consumes many API calls. It uses at least one API call for every parent object record. Thus, working with child objects without filtering them on their parents can be very slow and use additional API calls. We strongly recommend using filters by the parent object fields when querying data from child objects. This reduces the number of parent object records for which child object data must be queried.
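The Activities value for a Campaigns import, described earlier, is a JSON array. A minimal sketch of building it; all concrete values are the same placeholders used in the example above:

```python
import json

# One campaign activity, following the JSON-array shape from the Campaigns
# example above; NAME, the addresses, SUBJECT, and CONTENT are placeholders.
activities = [{
    "format_type": 5,
    "from_name": "NAME",
    "from_email": "email@test.com",
    "reply_to_email": "email@test.com",
    "subject": "SUBJECT",
    "html_content": "CONTENT",
}]

# The value mapped to the Activities field is the serialized array.
activities_value = json.dumps(activities)
print(activities_value)
```

Serializing the whole list (rather than a single object) keeps the mapped value a JSON array, as the Campaigns object expects.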
DML Operations Support Operation Object INSERT, UPDATE, DELETE Campaigns, ContactCustomFields, ContactLists, Contacts, Segments INSERT AccountEmails, CampaignActivitySchedule Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following Constant Contact objects: Campaigns, ContactCustomFields, ContactLists, Contacts, EmailBouncesReport, EmailDidNotOpensReport, EmailForwardsReport, EmailOpensReport, EmailOptoutsReport, EmailSendsReport, EmailUniqueOpensReport, Segments . Skyvia detects only the newly created records for the objects with the *Report suffix in their names. Skyvia supports Synchronization for the following Constant Contact objects: Campaigns, ContactCustomFields, ContactLists, Contacts, Segments . Custom Fields Constant Contact API supports custom fields for the Contacts object. You can add custom fields of the following types. Constant Contact Type DbType Text String Date Date Stored Procedures For user convenience, Skyvia represents part of the supported Constant Contact features as stored procedures. Procedures help to optimize connector performance and avoid exceeding API limits. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . GetContacts Use this command to select all the existing contacts. call GetContacts() GetContactsByIds This command selects particular contacts by their Ids. call GetContactsByIds(:Id) The Id parameter can be a single ID value or a JSON array of ID values. For example, use the following format to select a single contact: call GetContactsByIds('2b1ee4d2-44fb-11ef-ade7-fa163eb6961b') . Use the following format to select multiple contacts: call GetContactsByIds('[\"2b1ee4d2-44fb-11ef-ade7-fa163eb6961b\",\"770b9666-4540-11ef-b0e3-fa163e504c74\"]') . GetContactsByListIds This command selects particular contacts by their list Ids.
call GetContactsByListIds(:ListId) The ListId parameter can be a single ID value or a JSON array of ID values. For example, use the following format to select contacts from a single list: call GetContactsByListIds('4894476a-4a81-11ef-ab7d-fa163edfbcfb') . Use the following format to select contacts from multiple lists: call GetContactsByListIds('[\"4894476a-4a81-11ef-ab7d-fa163edfbcfb\",\"6764a04e-5bb2-11ef-a45b-fa163edfbcfb\"]') . GetContactsBySegmentId Use the following command to select contacts by the segment Id. You can find out the segment Id value by selecting the Segments object. call GetContactsBySegmentId(:SegmentId) Supported Actions Skyvia supports all the common actions for Constant Contact." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/convertkit_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Kit [Kit](https://convertkit.com/) (formerly ConvertKit) is the hub for audience growth, marketing automation, and digital product selling. Data integration : Skyvia supports importing data to and from Kit, exporting Kit data to CSV files, and replicating Kit data to relational databases. Backup : Skyvia Backup does not support Kit backup. Query : Skyvia Query supports Kit. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Kit in Skyvia, you must specify the API secret. Getting Credentials To obtain the API secret, log in to Kit and perform the following actions: Click the username in the top right corner of the page and select Settings . In the left menu, select Developer . Click Show in the API Secret box. Copy the API Secret. Creating Connection Paste the obtained value into the API secret box in Skyvia. Additional Connection Parameters Suppress Extended Requests Kit API returns only part of the fields for some objects when querying multiple records.
To query the values of additional fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The list of additional fields is the following: Object Field Broadcasts Description, Content, Public, PublishedAt, SendAt, ThumbnailALT, ThumbnailUrl, EmailAddress, EmailLayoutTemplate Sequences SendTime, SendTimeZone, RecipientRules_LandingPages, RecipientRules_Courses, RecipientRules_Tags, RecipientRules_Lists, Mon, Tue, Wed, Thr, Fri, Sat, Sun, SendTimeZoneAbbr, EmailTemplates The SequencesEmailTemplates object is read through the Sequences object. It means that Skyvia reads the Sequences object first, obtaining the EmailTemplates field using the extended requests, and then gets the existing SequencesEmailTemplates records for each SequenceId . To reduce the number of API calls, select the Suppress Extended Requests checkbox. However, please note that some of the fields in such objects will not be available in Skyvia (will return empty values) even if they have values in Kit, because its API does not return them without extended requests. Use Custom Fields Select this checkbox to make Kit custom fields available in Skyvia. Connector Specifics Filtering Specifics Kit API supports the following native filters: Object Operator Field Subscribers = EmailAddress CancelledSubscribers = EmailAddress Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but this may increase API call usage. Object Peculiarities Subscribers The Subscribers object stores the list of confirmed (active) subscribers. You can find canceled subscribers in the CancelledSubscribers object. You can select the unsubscribed users' records from CancelledSubscribers . You can get specific unsubscribed users' records from the Subscriptions object only using the filter by Id.
Such records are displayed with the canceled state value. Purchases The TranzactionId field is required for the INSERT operation. Skyvia updates the record if the mapped TranzactionId value already exists in the object. If the TranzactionId value is absent in the object, Skyvia inserts the record. Broadcasts To import into the Broadcasts object, at least one of the Subject, Description , or Content fields must be mapped. TagSubscriptions, FormSubscriptions, and SequenceSubscriptions Due to Kit API specifics, the Insert operation for the TagSubscriptions, FormSubscriptions, and SequenceSubscriptions objects works the following way. If a subscriber already exists for a particular tag, form, or sequence, the Insert operation will update the existing record in the TagSubscriptions, FormSubscriptions, or SequenceSubscriptions . Custom Fields The following Kit objects support custom fields: Subscribers, CancelledSubscribers, TagSubscriptions, FormSubscriptions, SequenceSubscriptions . All custom fields are strings. The Subscribers and CancelledSubscribers custom fields support the UPDATE operation. The TagSubscriptions, FormSubscriptions, and SequenceSubscriptions objects support the INSERT operation. Custom fields in the TagSubscriptions, FormSubscriptions , and SequenceSubscriptions objects have a Subscriber_* prefix in their names. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: Broadcasts, CancelledSubscribers, Forms, FormSubscriptions, Sequences, SequenceSubscriptions, Subscribers, SubscriberTags, Tags, TagSubscriptions . Incremental Replication detects only the new records because all the objects contain only the CreatedDate field, and there is no UpdatedDate field, which is required for replicating the updated records. Skyvia does not support Synchronization for Kit.
DML Operations Support Operation Object INSERT, UPDATE, DELETE Broadcasts, CustomFields INSERT, DELETE TagSubscriptions UPDATE Subscribers INSERT FormSubscriptions, Purchases, SequenceSubscriptions, Tags Stored Procedures Skyvia represents part of the supported Kit features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UnsubscribeSubscriber The following command unsubscribes an email address from all forms and sequences. call UnsubscribeSubscriber(:Email) DestroyWebhook To delete a webhook, use the command call DestroyWebhook(:id) CreateWebhook The following command creates a webhook that will be called when a subscriber event occurs. call CreateWebhook(:target_url,:event) PARAMETER NAME DESCRIPTION Target_url The URL that will receive subscriber data when the event is triggered Event JSON object that includes the trigger name and extra information about the event, for example { \"name\": \"subscriber.subscriber_activate\" } These are the available event types: \u201csubscriber.subscriber_activate\u201d \u201csubscriber.subscriber_unsubscribe\u201d \u201csubscriber.form_subscribe\u201d, required parameter :form_id [Integer] \u201csubscriber.course_subscribe\u201d, required parameter :course_id [Integer] \u201csubscriber.course_complete\u201d, required parameter :course_id [Integer] \u201csubscriber.link_click\u201d, required parameter :initiator_value [String] as a link URL \u201csubscriber.product_purchase\u201d, required parameter :product_id [Integer] \u201csubscriber.tag_add\u201d, required parameter :tag_id [Integer] \u201csubscriber.tag_remove\u201d, required parameter :tag_id [Integer] \u201cpurchase.purchase_create\u201d For example: call CreateWebhook('http://example.com/incoming','{\"name\": \"subscriber.subscriber_activate\"}') Supported Actions Skyvia supports all the common actions for Kit.
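The Event parameter shape above can be sketched as follows. A minimal sketch, assuming a hypothetical form ID; the target URL is the same placeholder used in the example above:

```python
import json

# subscriber.form_subscribe requires a form_id parameter, per the list above;
# 123 is a hypothetical form ID.
event = {"name": "subscriber.form_subscribe", "form_id": 123}

# Full command text, with the placeholder target URL:
command = "call CreateWebhook('{}','{}')".format(
    "http://example.com/incoming", json.dumps(event)
)
print(command)
```

Serializing the event dict keeps the second argument a valid JSON object, matching the shape the Event parameter expects.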
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/customerio_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Customer.io [Customer.io](https://customer.io) is an automated cloud messaging platform for marketing automation that allows crafting and sending data-driven emails, push notifications, and SMS messages. Skyvia uses Customer.io App API. Data integration : Skyvia supports importing data to and from Customer.io, exporting Customer.io data to CSV files, replicating Customer.io data to relational databases, and synchronizing Customer.io data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Customer.io backup. Query : Skyvia Query supports Customer.io. Establishing Connection To establish a connection, you need to select your account region and specify your app API key, Tracking API key, and Site ID. Skyvia uses Tracking API for working with the Customers object data, and APP API - for other objects. Getting Credentials You can manage your API keys in your [Customer.io account settings](https://fly.customer.io/settings/api_credentials) . See [Customer.io documentation](https://customer.io/docs/managing-credentials/) for more information. Note that you need both Tracking API Key and App API Key. Note that you cannot view an existing App API key. If you don\u2019t have one, you will need to create a new API Key. To create a new app API key perform the following steps: Sign in to Customer.io and open the API Credentials section of Customer.io Account Settings. You can either use this [link](https://fly.customer.io/settings/api_credentials?keyType=app) or click Settings -> Account Settings and then click API Credentials . On the App API Keys tab, click Create App API Key . Enter the API Key Name , for example, Skyvia and select Workspace . Click the Create App API Key button in the corresponding row of the App API Keys grid. 
The created API key is displayed. Copy and save your API Key. Note that this is the only time you can obtain it, and if you lose it, you will need to create a new one. To create a new tracking API key, switch to the Tracking API tab and perform the following steps: Click Create Tracking API Key . Enter the API Key Name , for example, Skyvia , and select Workspace . Copy and save your tracking API Key and Site Id . Note that this is the only time you can obtain it, and if you lose it, you will need to create a new one. Customer.io allows limiting API access by IP addresses. If you want to allow only Skyvia IP addresses, see [How to Configure Local Database Server to Access It from Skyvia](https://docs.skyvia.com/connections/how-to-configure-local-database-server-to-access-it-from-skyvia.html) for the list of Skyvia IP addresses to add. Creating Connection You need to specify the following parameters while creating a Customer.io connection: Select the Account Region if you use a non-US region. Enter your App API Key , Tracking API Key and Site Id . Connector Specifics Object Peculiarities Some Customer.io objects can be accessed only via their parent objects. For example, to query CustomerSegments , CustomerMessages , or CustomerActivities , Customer.io API requires the ID of the corresponding Customer (Skyvia uses the CioId field). Skyvia does not require the ID of the parent object from the user. If you don\u2019t specify the IDs of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record. This allows querying child objects without knowing their parents, but this method takes a lot of time and consumes many API calls. It uses at least one API call for every parent object record. Thus, working with child records can be slow. Because of this, it is recommended to use filters on the parent object fields when querying data from such child objects. 
This reduces the number of parent object records for which child object data must be queried. Customers Object When performing INSERT into the Customers object, you must specify a value either for the PersonId or for the Email field. The Update and Delete operations are performed using the CioId field for identifying a record to update or delete. Note that the PersonId and Email fields must have unique values. If you try assigning values already present in the Customers object to PersonId or Email when inserting or updating a record, you will get an error. Custom attributes of the Customers object are available via the CustomersAttributes object. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following objects: BroadcastActions, BroadcastMessages, Broadcasts, BroadcastTriggers, CampaignActions, Campaigns, Collections, CustomerMessages, Customers, Exports, Messages, NewsletterMessages, Newsletters, Snippets, TransactionalMessageDeliveries, TransactionalMessages . Besides, the following objects have only the CreateDate field, and thus, replication with Incremental Updates detects only the new records, but not updates to existing records: BroadcastMessages, BroadcastTriggers, CustomerMessages, Customers, Messages, NewsletterMessages, TransactionalMessageDeliveries . Only the Collections object supports synchronization. DML Operations Support Skyvia supports the following DML operations for Customer.io objects: Operations Objects INSERT, UPDATE, DELETE Collections, ReportingWebhooks, Customers UPDATE, DELETE Snippets INSERT, DELETE Segments UPDATE NewsletterVariants, CampaignActions Stored Procedures Skyvia represents part of the supported Customer.io features as stored procedures.\nYou can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . 
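The parent-child querying strategy described earlier (fetch all parent Customer records first, then at least one API call per parent) can be modeled in a few lines to show why a parent filter saves API calls. A purely illustrative sketch; the function and field names are not part of Skyvia:

```python
# Illustrative model of querying a child object such as CustomerSegments:
# without a parent filter, one call lists the parents, then at least one
# call is issued per parent record (the N+1 pattern described above).

def query_child_records(customers, fetch_children, parent_filter=None):
    """Return (records, api_calls) for a child object query."""
    parents = [c for c in customers if parent_filter is None or parent_filter(c)]
    api_calls = 1  # one call to list the parent Customer records
    records = []
    for parent in parents:
        records.extend(fetch_children(parent["CioId"]))
        api_calls += 1  # at least one call per parent record
    return records, api_calls

customers = [{"CioId": str(i)} for i in range(100)]
children = lambda cio_id: [{"CustomerId": cio_id}]

_, unfiltered_calls = query_child_records(customers, children)
_, filtered_calls = query_child_records(customers, children,
                                        parent_filter=lambda c: c["CioId"] == "7")
print(unfiltered_calls, filtered_calls)  # prints: 101 2
```

With 100 customers, an unfiltered child query costs 101 calls, while filtering down to one parent costs 2 — which is why filters on parent fields are recommended.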
SendTransactionalEmailWithTemplate and SendTransactionalEmailWithoutTemplate Use the following command to send an email with a template: call SendTransactionalEmailWithTemplate(:TransactionalMessageId, :To, :Identifiers_Email, :Body, :Subject, :From, :MessageData, :SendAt, :Bcc, :FakeBcc, :ReplyTo, :Preheader, :PlaintextBody, :Attachments, :Headers, :DisableMessageRetention, :SendToUnsubscribed, :Tracked, :QueueDraft, :DisableCssPreprocessing); Use this command to send an email without a template: call SendTransactionalEmailWithoutTemplate(:Body, :Subject, :From, :To, :Identifiers_Email, :MessageData, :SendAt, :Bcc, :FakeBcc, :ReplyTo, :Preheader, :PlaintextBody, :Attachments, :Headers, :DisableMessageRetention, :SendToUnsubscribed, :Tracked, :QueueDraft, :DisableCssPreprocessing); PARAMETER NAME DESCRIPTION TransactionalMessageId The id of the transactional message that you want to use as the template To Recipients\u2019 email addresses, separated by commas. Can contain up to 15 recipients Identifiers_Email Identifies the person represented by your transactional message by email Body The body of the email message to send. For the SendTransactionalEmailWithTemplate procedure, it overrides the template body Subject The subject of the email message to send. For the SendTransactionalEmailWithTemplate procedure, it overrides the template subject From The address that your email is from. This address must be verified by Customer.io. You can include a display-friendly name along with the address. For the SendTransactionalEmailWithTemplate procedure, it overrides the template from address MessageData An object containing the key-value pairs referenced using liquid in your message SendAt A timestamp to send the email message at, in Unix timestamp format Bcc Blind copy message recipients. Supports multiple email addresses separated by commas. 
Can contain up to 15 email addresses FakeBcc If true, instead of sending true copies to BCC addresses, Customer.io sends a copy of the message with the subject line containing the recipient address(es) ReplyTo The address that recipients can reply to if it needs to be different from the From address Preheader The block of preview text that users see next to, or underneath, the subject line in their inbox PlaintextBody You can use this parameter to override the automatically generated plaintext version of your message body Attachments A JSON dictionary of attachments with filenames as keys and base64-encoded file contents as values. The total size of all attachments must not exceed 2 MB. Some filetype extensions are restricted Headers An array of header JSON objects. Headers must be strings and cannot contain any non-ASCII characters or empty spaces. Some headers are reserved and cannot be overwritten DisableMessageRetention If true, the message body is not retained in the delivery history. For the SendTransactionalEmailWithTemplate procedure, this parameter overrides the corresponding template setting. The default value is false SendToUnsubscribed If true, the message is sent to unsubscribed recipients. For the SendTransactionalEmailWithTemplate procedure, this parameter overrides the corresponding template setting. The default value is true Tracked Enables tracking of email message opens and link clicks. The default value is false QueueDraft If true, the email message is held as a draft in Customer.io and is not sent directly to recipients. To send your message, go to the Deliveries and Drafts page DisableCssPreprocessing Disables CSS preprocessing. For the SendTransactionalEmailWithTemplate procedure, this parameter overrides the corresponding template setting. 
The default value is false IdentifyObject To create a record in the custom object, use the command call IdentifyObject(:ObjectTypeId, :ObjectId, :Attributes, :CioRelationships) PARAMETER NAME DESCRIPTION ObjectTypeId The ID of your custom object. Every custom object type has its ID. All the records in the custom object refer to the ObjectTypeId value ObjectId The unique identifier of the custom object record, for example, vehicle_1 Attributes Record attributes in the JSON object format CioRelationships People you want to associate with the record. Each item in the array represents a person. For example, you can relate the record with a Customer by Id For example, call IdentifyObject('1', 'vehicle_5', '{\"Make\":\"Volkswagen\",\"Model\":\"Golf R\"}', '[{\"identifiers\": {\"id\": \"baf708000001\"}}]') AddRelationship To add or change the associations of the custom object record with specific customers, use the command call AddRelationship(:ObjectTypeId, :ObjectId, :CioRelationships) PARAMETER NAME DESCRIPTION ObjectTypeId The ID of your custom object. Every custom object type has its ID. All the records in the custom object refer to the ObjectTypeId value ObjectId The unique identifier of the custom object record, for example, vehicle_1 CioRelationships People you want to associate with the record. Each item in the array represents a person. For example, you can relate the record with a Customer by Id For example, call AddRelationship('1', 'vehicle_4', '[{\"identifiers\": {\"id\": \"baf708000001\"}}]') DeleteObject To delete the record in the custom object, use the command call DeleteObject(:ObjectTypeId, :ObjectId) PARAMETER NAME DESCRIPTION ObjectTypeId The ID of your custom object. Every custom object type has its ID. All the records in the custom object refer to the ObjectTypeId value ObjectId The unique identifier of the custom object record, for example, vehicle_1 RemoveRelationship The following command removes the association between the custom object record and a person. 
call RemoveRelationship(:ObjectTypeId, :ObjectId, :CioRelationships) PARAMETER NAME DESCRIPTION ObjectTypeId The ID of your custom object. Every custom object type has its ID. All the records in the custom object refer to the ObjectTypeId value ObjectId The unique identifier of the custom object record, for example, vehicle_1 CioRelationships People you want to remove from the association with the record. Each item in the array represents a person. For example, you can specify the related Customer by Id For example, call RemoveRelationship('1', 'vehicle_4', '[{\"identifiers\": {\"id\": \"baf708000001\"}}]') Supported Actions Skyvia supports all the common actions for Customer.io." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/d365businesscentral_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Dynamics 365 Business Central [Dynamics 365 Business Central](https://www.microsoft.com/en-us/dynamics-365/products/business-central) is an ERP solution that provides extensive financial management tools for accurate expense and inventory tracking and budget management. Data integration : Skyvia supports importing data to and from Dynamics 365 Business Central, exporting Dynamics 365 Business Central data to CSV files, replicating Dynamics 365 Business Central data to relational databases, and synchronizing Dynamics 365 Business Central data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Dynamics 365 Business Central. Query : Skyvia Query supports Dynamics 365 Business Central. Establishing Connection Skyvia supports connection to Dynamics 365 Business Central Online and On-Premises.\nTo create a connection to Dynamics 365 Business Central, choose the authentication type and provide credentials accordingly. 
Getting Credentials Web Server Access Key To locate the Web Server Access Key, perform the following actions: Go to your Dynamics 365 Business Central instance web client. Click Search , type Users , and select the Users page. Choose the needed user and click its user name. Copy the existing Web Server Access Key or generate a new one. Server URL To find your server URL, do the following: Open the Business Central Administration Shell on your server. The PublicOdataBaseURL value is your Server URL. Creating Connection (OAuth) To connect to Dynamics 365 Business Central using OAuth authentication, perform the following steps: Click Sign In with Microsoft . Enter your email and password. Allow Skyvia to access your data. Select Environment from the drop-down list. Choose Company . Creating Connection (Basic) Make sure the [API access](https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/api-reference/v2.0/enabling-apis-for-dynamics-nav) is enabled on your Dynamics 365 Business Central server. \nTo connect to Dynamics 365 Business Central using Basic authentication, perform the following steps: Specify your Dynamics 365 Business Central server URL. Enter the User . Enter the Web Server Access Key . Select Company from the drop-down list. Additional Connection Parameters Metadata Cache You can specify the period after which Metadata Cache expires. Connector Specifics Object Peculiarities Objects with Binary Data Our Dynamics 365 Business Central connector contains objects with binary fields. Such objects are related to other objects. Their names consist of the parent object name and the object type. There are four types of such objects: Attachment, PdfDocument, Picture, DimensionSetLine. For example, there are ContactPicture, PurchaseInvoiceAttachment , and SalesCreditMemoPdfDocument objects. In this case, the Contact* , PurchaseInvoice* and SalesCreditMemo* parts correspond to the parent object names. 
And the *Picture , *Attachment , and *PdfDocument parts define the object types. The DocumentAttachment object is an exception. It is a separate object. Read-only objects The following Dynamics 365 Business Central objects are read-only: Account, AccountingPeriod, AgedAccountsPayable, AgedAccountsReceivable, Apicategoryroutes, BalanceSheet, CashFlowStatement, Company, ContactInformation, CurrencyExchangeRate, CustomerFinancialDetail, CustomerSale, Dimension, DimensionValue, EntityMetadata, Externalbusinesseventdefinitions, GeneralLedgerEntry, GeneralLedgerSetup, GeneralProductPostingGroup, IncomeStatement, InventoryPostingGroup, ItemLedgerEntry, PurchaseCreditMemoPdfDocument, PurchaseInvoicePdfDocument, PurchaseReceipt, PurchaseReceiptLine, RetainedEarningsStatement, SalesCreditMemoPdfDocument, SalesInvoicePdfDocument, SalesQuotePdfDocument, SalesShipment, SalesShipmentLine, TrialBalance, VendorPurchase . Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for objects with either the CreatedAt or UpdatedAt field.\nSkyvia supports Synchronization for objects that have either the CreatedAt or UpdatedAt field and support the INSERT and UPDATE operations. DML Operations Support Skyvia supports the following DML operations for Dynamics 365 Business Central objects. 
Operation Object INSERT, UPDATE, DELETE BankAccount, Contact, CountryRegion, Currency, Customer, CustomerPayment, CustomerPaymentDimensionSetLine, CustomerPaymentJournal, CustomerReturnReason, DefaultDimension, DocumentAttachment, Employee, Externaleventsubscriptions, FixedAsset, FixedAssetLocation, GeneralLedgerEntryAttachment, GeneralLedgerEntryDimensionSetLine, Item, ItemCategory, ItemVariant, Journal, JournalLine, JournalLineAttachment, JournalLineDimensionSetLine, Location, Opportunity, PaymentMethod, PaymentTerm, Project, PurchaseCreditMemo, PurchaseCreditMemoAttachment, PurchaseCreditMemoDimensionSetLine, PurchaseCreditMemoLine, PurchaseCreditMemoLineDimensionSetLine, PurchaseInvoice, PurchaseInvoiceAttachment, PurchaseInvoiceDimensionSetLine, PurchaseInvoiceLine, PurchaseInvoiceLineDimensionSetLine, PurchaseOrder, PurchaseOrderAttachment, PurchaseOrderDimensionSetLine, PurchaseOrderLine, PurchaseOrderLineDimensionSetLine, PurchaseReceiptDimensionSetLine, PurchaseReceiptLineDimensionSetLine, SalesCreditMemo, SalesCreditMemoAttachment, SalesCreditMemoDimensionSetLine, SalesCreditMemoLine, SalesCreditMemoLineDimensionSetLine, SalesInvoice, SalesInvoiceAttachment, SalesInvoiceDimensionSetLine, SalesInvoiceLine, SalesInvoiceLineDimensionSetLine, SalesOrder, SalesOrderAttachment, SalesOrderDimensionSetLine, SalesOrderLine, SalesOrderLineDimensionSetLine, SalesQuote, SalesQuoteAttachment, SalesQuoteDimensionSetLine, SalesQuoteLine, SalesQuoteLineDimensionSetLine, SalesShipmentDimensionSetLine, SalesShipmentLineDimensionSetLine, ShipmentMethod, TaxArea, TaxGroup, TimeRegistrationEntry, TimeRegistrationEntryDimensionSetLine, UnitOfMeasure, Vendor, VendorPayment, VendorPaymentDimensionSetLine, VendorPaymentJournal UPDATE, DELETE ItemPicture, CustomerPicture, ContactPicture, VendorPicture, EmployeePicture UPDATE CompanyInformation, ApplyVendorEntry Supported Actions Skyvia supports all the common actions for" }, { "url": 
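The incremental replication rule described above (only objects with a CreatedAt or UpdatedAt field qualify) boils down to fetching just the records changed since the previous run. A minimal in-memory sketch; the function, table, and timestamp values are illustrative, not part of Skyvia:

```python
from datetime import datetime, timezone

# Sketch of replication with incremental updates: only records whose
# tracking timestamp is newer than the last run are pulled.

def incremental_pull(rows, last_run, ts_field="UpdatedAt"):
    """Return the rows changed after last_run, judged by ts_field."""
    return [r for r in rows if r[ts_field] > last_run]

rows = [
    {"ID": 1, "UpdatedAt": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"ID": 2, "UpdatedAt": datetime(2024, 3, 1, tzinfo=timezone.utc)},
]
changed = incremental_pull(rows, datetime(2024, 2, 1, tzinfo=timezone.utc))
print([r["ID"] for r in changed])  # [2]
```

For objects that have only a CreatedAt field, the same logic applied to CreatedAt explains why only new records, not updates, are detected.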
"https://docs.skyvia.com/connectors/cloud-sources/dearinventory_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Cin7 Core Inventory [Cin7 Core Inventory](https://www.cin7.com/) (formerly, DEAR Inventory) is an online accounting and inventory management system designed to boost the operational efficiency and productivity of a business. Data integration : Skyvia supports importing data to and from Cin7 Core Inventory, exporting Cin7 Core Inventory data to CSV files, replicating Cin7 Core Inventory data to relational databases, and synchronizing Cin7 Core Inventory data with other cloud apps and relational databases. Backup : Skyvia Backup supports Cin7 Core Inventory backup. Query : Skyvia Query supports Cin7 Core Inventory. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) with Cin7 Core Inventory, you need to specify the Account ID and API Key . Getting Credentials To get the Account ID and API Key, perform the following steps: Open the following page: [https://inventory.dearsystems.com/ExternalAPI](https://inventory.dearsystems.com/ExternalAPI) (sign in to Cin7 Core Inventory, if you haven\u2019t signed in yet). Click Integrations on the left of the page. Click API -> + . Specify the integration name, for example, Skyvia , and then click Create . Copy the displayed Account ID and API Application Key . See [https://dearinventory.docs.apiary.io/#introduction/api-introduction](https://dearinventory.docs.apiary.io/#introduction/api-introduction) for more details. Creating Connection Paste your Account ID and API Key to the corresponding boxes in the connection editor. 
Connector Specifics Nested Objects The following objects store complex structured data in JSON format: Object Field Leads Contacts, Addresses Purchase Order_Lines, Order_AdditionalCharges, Order_Prepayments Sale Fulfilments, Invoices, CreditNotes SaleOrders Lines, AdditionalCharges Workflows Steps You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can also replicate the nested objects using the Unwind Nested Objects option. Select JSON Columns to replicate nested object fields as columns with JSON data into the target table or select Separate Tables to replicate nested object fields into additional tables in the database. Incremental Replication and Synchronization Synchronization and Replication with Incremental Updates are not supported for objects without the UpdatedDate field. Thus, only the Customers, Products, ProductFamilies, and Suppliers objects support Synchronization and Replication with Incremental Updates. 
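The difference between the two Unwind Nested Objects modes above can be sketched in a few lines: JSON Columns keeps the nested data as a JSON string in the parent row, while Separate Tables unwinds each nested item into a child row. The table layout and names here are illustrative assumptions, not Skyvia's actual output schema:

```python
import json

# Minimal sketch of replicating a record with a nested list field
# (e.g. SaleOrders.Lines) in the two modes described above.

def replicate(record, nested_field, mode):
    if mode == "JSON Columns":
        row = dict(record)
        row[nested_field] = json.dumps(record[nested_field])
        return {"parent": [row]}
    if mode == "Separate Tables":
        parent = {k: v for k, v in record.items() if k != nested_field}
        children = [{"ParentID": record["ID"], **item}
                    for item in record[nested_field]]
        return {"parent": [parent], nested_field: children}
    raise ValueError(mode)

sale_order = {"ID": "SO-1", "Lines": [{"SKU": "A", "Qty": 2}, {"SKU": "B", "Qty": 1}]}
print(replicate(sale_order, "Lines", "Separate Tables"))
```

In Separate Tables mode, each line lands in an additional table keyed back to its parent order; in JSON Columns mode, the lines stay embedded as one JSON column.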
DML Operations Support Skyvia supports the following DML operations for Cin7 Core Inventory objects: Operations Objects INSERT, UPDATE, DELETE AdvancedPurchase, AdvancedPurchasePayments, AdvancedPurchaseStockReceived, AttributeSets, BankTransfer, Brands, Carrier, InventoryWriteOffs, Journals, Leads, Locations, MoneyOperation, Opportunities, PaymentTerms, ProductCategories, Purchase, Sale, SalePayments, StockAdjustments, StockTakes, StockTransfers, Tasks, TaskCategories, UnitsOfMeasure, Webhooks, Workflows INSERT, UPDATE Customers, FixedAssetTypes, ProductFamilies, Products, Suppliers, Tax INSERT, DELETE Disassembly, FinishedGoods INSERT AdvancedPurchaseCreditNote, AdvancedPurchaseInvoices, AdvancedPurchaseManualJournals, AdvancedPurchasePutAway, DissasemblyOrders, FinishedGoodsOrders, FinishedGoodsPicks, PurchaseOrders, SaleCreditNoteTasks, SaleFulfilments, SaleInvoicesTasks, SaleManualJournals, SaleOrders, SaleQuotes, StockTransferOrders Supported Actions Skyvia supports all the common actions for Cin7 Core Inventory." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/delighted_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Delighted [Delighted](https://delighted.com) is a cloud-based customer experience (CX) management solution intended to help small to large-sized enterprises gather and analyze feedback from customers and target audiences by conducting surveys across a variety of platforms. Data integration : Skyvia supports importing data to and from Delighted, exporting Delighted data to CSV files, replicating Delighted data to relational databases, and synchronizing Delighted data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Delighted. Query : Skyvia Query supports Delighted. 
Establishing Connection To create a connection to Delighted, you need to specify your API Key \u2014 an automatically generated REST API key used for connecting to Delighted. Obtaining Credentials The API Key is displayed in the [Delighted API documentation](https://app.delighted.com/docs/api) in the Introduction topic if you are signed in to Delighted. Creating Connection Paste the obtained API key to the API Key field in the connection editor. Connector Specifics Object Peculiarities Responses When importing data to the Responses object, the value for the PersonProperties field should be provided as a JSON object, like the following: {\"Customer ID\":\"1234567\",\"Location\":\"USA\"} People When importing data to the People object, the required fields are not defined in the mapping since they depend on the selected Channel type. If you set Channel to Email , you must map the Email field. If you set Channel to SMS , you must map the PhoneNumber field. Metrics Objects Metrics* objects return aggregated information from survey responses as a single row. The set of metrics differs depending on the survey type. Therefore, a separate Metrics object with its own columns has been added for each type of survey. You need to use the Metrics object that matches the survey type in your account. The following Metrics* objects are supported in Skyvia: Metrics_5StarRating , Metrics_CES , Metrics_CSAT , Metrics_NPS , Metrics_ENPS , Metrics_PMF , Metrics_Smileys , Metrics_Thumbs . Incremental Replication and Synchronization Skyvia supports Synchronization for objects that contain the CreatedDate and UpdatedDate fields. Thus, only such objects as EmailAutopilotPeople and SMSAutopilotPeople can be fully synchronized. Please note that although such objects as SMSAutopilotConfiguration and EmailAutopilotConfiguration have CreatedDate/UpdatedDate fields, they are read-only. According to the Delighted documentation, they do not support data inserting/updating. 
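The channel-dependent mapping rule for the People object described above (Channel Email requires the Email field, Channel SMS requires PhoneNumber) can be expressed as a tiny validator. The helper is a hypothetical illustration, not part of Skyvia:

```python
# Required field per Channel value, per the People object rules above.
REQUIRED_BY_CHANNEL = {"Email": "Email", "SMS": "PhoneNumber"}

def validate_person(record):
    """Check that a People import record maps the field its Channel requires."""
    channel = record.get("Channel")
    required = REQUIRED_BY_CHANNEL.get(channel)
    if required is None:
        raise ValueError(f"unsupported Channel: {channel}")
    if not record.get(required):
        raise ValueError(f"Channel {channel} requires the {required} field")
    return True

print(validate_person({"Channel": "Email", "Email": "jane@example.com"}))
```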
Skyvia supports Replication with Incremental Updates for objects that contain either the CreatedDate or UpdatedDate fields. Such objects as Responses , EmailAutopilotPeople , SMSAutopilotPeople , SMSAutopilotConfiguration , and EmailAutopilotConfiguration have both CreatedDate and UpdatedDate fields; thus, they can be fully replicated with incremental updates. The People object contains only CreatedDate , so Skyvia detects only new records in it. DML Operations Support Operation Object INSERT, UPDATE, DELETE EmailAutopilotPeople , SMSAutopilotPeople INSERT, DELETE People INSERT Responses , UnsubscribedPeople Supported Actions Skyvia supports all the common actions for Delighted." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/digitalocean_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources DigitalOcean [DigitalOcean](https://www.digitalocean.com/) is an infrastructure as a service (IaaS) provider that simplifies cloud computing for developers. Data integration : Skyvia supports importing data to and from DigitalOcean, exporting DigitalOcean data to CSV files, and replicating DigitalOcean data to relational databases. Backup : Skyvia Backup does not support DigitalOcean. Query : Skyvia Query supports DigitalOcean. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) authorize with API token. Getting Credentials To locate the API token, do the following. Go to DigitalOcean Click API on the left and click Generate New Token . Name your token, set the expiration time and select the scopes. Copy the token and save it somewhere. The API token is available only once during creation. Creating Connection Enter the obtained API token into the box in the Connection Editor. 
Connector Specifics Object Peculiarities FloatingIPs and ReservedIPs Either the Droplet_Id or Region_Slug + ProjectId fields are required for mapping while performing the Insert operation. FloatingIPActions and ReservedIPActions The Select operation over the Type field returns the assign_ip and reserveip values. However, when you Insert data to the FloatingIPActions or ReservedIPActions objects, the Type field must contain either the assign or unassign value. Otherwise, you will receive an error. If the field in Source that you map to the Type field in Target contains other or no values, you can either use an expression or constant mapping. Firewalls In addition to the required Name field, at least the InboundRules or OutboundRules field is required for mapping while performing the Insert operation. Nested Types The following DigitalOcean objects store the nested JSON objects: Object Field Apps Spec_Services, Spec_Domains, Spec_StaticSites, Spec_Jobs, Spec_Workers, Spec_Functions, Spec_Databases Firewalls InboundRules, OutboundRules Stored Procedures AddDropletsToFirewall To add droplets to a firewall, use the following command: call AddDropletsToFirewall(:firewall_id,:droplet_ids) RemoveDropletsFromFirewall To remove droplets from a firewall, use the following command: call RemoveDropletsFromFirewall(:firewall_id,:droplet_ids) AddTagsToFirewall To add tags to a firewall, use the following command: call AddTagsToFirewall(:firewall_id,:tags) RemoveTagsFromFirewall To remove tags from a firewall, use the following command: call RemoveTagsFromFirewall(:firewall_id,:tags) Incremental Replication and Synchronization Replication with Incremental Updates is fully supported for the Apps and Projects objects. For the Droplets, DropletBackups, DropletSnapshots, DropletFirewalls, Firewalls , and Snapshots objects, incremental replication only checks for new records, as these objects lack the UpdatedDate column. Synchronization is supported for the Apps and Projects objects. 
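The expression-mapping workaround suggested above for the Type field can be sketched as a normalization function: values read via Select (such as assign_ip) are mapped to the assign/unassign values that Insert accepts, with a constant fallback for empty or unknown values. The mapping logic itself is an illustrative assumption, not Skyvia's built-in behavior:

```python
# FloatingIPActions/ReservedIPActions accept only "assign" or "unassign"
# in the Type field on Insert, while Select may return other values.

VALID = {"assign", "unassign"}

def map_type(source_value, default="assign"):
    """Normalize a source Type value to one the Insert operation accepts."""
    value = (source_value or "").lower()
    if value in VALID:
        return value
    if value.startswith("unassign"):
        return "unassign"
    if value.startswith("assign"):   # e.g. "assign_ip"
        return "assign"
    return default  # constant mapping for empty or unknown values

print(map_type("assign_ip"), map_type(None))  # assign assign
```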
DML Operations Skyvia supports DML operations for the following DigitalOcean objects: Operation Object INSERT, UPDATE, DELETE Apps, Firewalls, Projects, SSHKeys INSERT, DELETE Domains, Droplets, FloatingIPs, ReservedIPs, Tags INSERT FloatingIPActions, ReservedIPActions DELETE Snapshots Supported Actions Skyvia supports all the common actions for DigitalOcean." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/discourse_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Discourse [Discourse](https://www.discourse.org/) is a powerful open-source forum software. Data integration : Skyvia supports importing data to and from Discourse, exporting Discourse data to CSV files, replicating Discourse data to relational databases, and synchronizing Discourse data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Discourse. Query : Skyvia Query supports Discourse. Establishing Connection To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Discourse, you must provide the Host URL, API Key, and User Name. Getting Credentials Host URL \u2014 Log in to Discourse and copy the URL of your Discourse instance. Note that it must include the protocol part (\u2018https://\u2019) and must not include the slash (/) at the end. For example: https://mycorp.discourse.group User Name \u2014 the name of the user you log in to Discourse with. You can get it on the User tab of the Admin panel. API Key \u2014 To get the API Key, perform the following steps: Sign in to Discourse. Click the button with three stripes, then select Admin , and switch to the API tab. Click +New API Key . Specify the Description , the User level , and Scope . We recommend selecting the most comprehensive possible scopes. Click Save . Copy the displayed Key . Note that this is the only time it is displayed. 
If you ever need an API key again, you will need to create a new one. Creating Connection Copy the obtained credentials to the corresponding boxes in the Skyvia connection editor. Connector Specifics Object peculiarities The DELETE operation against the Posts and LatestTopics tables performs a soft delete. The deleted records remain in the table with the filled DeletedAt value.\nWhen selecting all records from these tables, the records with a non-empty DeletedAt value are not displayed.\nWhen selecting specific records by their ids, the deleted records are displayed. Incremental Replication and Synchronization Replication with Incremental Updates is supported for the following objects: Backups , Notifications , Posts , RepliestToPost , LatestTopics , ActiveUsers , UserActions . Note that for the Notifications , LatestTopics , UserActions , and ActiveUsers objects, only newly created records are detected, because these objects have only the CreatedDate field and don\u2019t have the UpdatedDate field. Only the Posts object supports synchronization. DML Operations support Operation Object INSERT, UPDATE, DELETE Badges, Groups, LatestTopics, Posts INSERT, DELETE ActiveUsers INSERT, UPDATE Categories, TagGroups INSERT PrivateMessagesSentForUser Stored procedures Skyvia represents part of the supported Discourse features as stored procedures.\nYou can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . SuspendUser To suspend the user activity, use the command call SuspendUser(:UserId,:SuspendUntil,:Reason,:Message,:PostAction) After executing the command, the suspended user records are stored in the ActiveUsers and the SuspendedUsers objects. 
PARAMETER NAME DESCRIPTION UserId The Id of the user whose activity has to be suspended SuspendUntil The date the user is suspended until Reason The explanation why the user is suspended Message The message to send to the suspended user PostAction String For example, call SuspendUser(7,'2022-12-15','some reason','some message','delete') SilenceUser To change the user status to Silenced , use the following command call SilenceUser(:UserId,:SilencedTill,:Reason,:Message,:PostAction) PARAMETER NAME DESCRIPTION UserId The Id of the user whose activity has to be silenced SilencedTill The date the user is silenced until Reason The explanation for the action Message The message to send to the silenced user PostAction String AnonymizeUser To change the user status to Anonymous , use the command call AnonymizeUser(:UserId) UpdateUsername To update the user name, use the command call UpdateUsername(:Username,:NewUsername) The command updates the Username value. PARAMETER NAME DESCRIPTION Username Current Username value NewUsername New Username value AddGroupMembers To add the user to the specific group, use the command call AddGroupMembers(:groupId,:username) RemoveGroupMembers To remove the user from the specific group, use the command call RemoveGroupMembers(:groupId,:username) MarkNotificationsAsRead To mark an existing notification as read, use the command call MarkNotificationsAsRead(:NotificationId) Supported Actions Skyvia supports all the common actions for Discourse." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/drip_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Drip [Drip](https://www.drip.com/) is a marketing automation platform built for e-commerce. Drip helps you gather data, create better customer experiences, bump up long-term loyalty, and drive more revenue. 
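The soft-delete behavior described for the Discourse Posts and LatestTopics tables (deleted rows keep a DeletedAt value, a select-all hides them, selecting by id still returns them) can be modeled in memory. A purely illustrative sketch with hypothetical row data:

```python
# Sketch of the soft-delete select semantics described above.
posts = [
    {"Id": 1, "Raw": "hello", "DeletedAt": None},
    {"Id": 2, "Raw": "bye", "DeletedAt": "2024-01-01T00:00:00Z"},
]

def select_all(table):
    """Select all records: rows with a non-empty DeletedAt are hidden."""
    return [row for row in table if not row["DeletedAt"]]

def select_by_id(table, row_id):
    """Select by id: soft-deleted rows are still returned."""
    return [row for row in table if row["Id"] == row_id]

print(len(select_all(posts)), len(select_by_id(posts, 2)))  # 1 1
```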
Data integration : Skyvia supports importing data to and from Drip, exporting Drip data to CSV files, and replicating Drip data to relational databases. Backup : Skyvia Backup does not support Drip backup. Query : Skyvia Query supports Drip. Establishing Connection To create connection with Drip, log in to your Drip account. Creating Connection To create a Drip connection, perform the following steps: Click Sign In with Drip . Enter the email address used when registering in Drip and click the Continue button. In the next window, enter your password and click Submit . Skyvia will request permission to access your Drip account. Click Authorize to give permissions. Connector Specifics Object Peculiarities When you import records into the CampaignSubscribers object, records are actually inserted into the Subscribers object, i.e. a new subscriber is added, but he/she is not connected to the campaign. That happens due to Drip API limitations, which is why you have to add subscribers to the campaign separately. Not Supported Objects Skyvia does not support the following Drip objects: CustomFields , Events , Tags . DML Operations Support Skyvia supports the following DML operations for Drip objects: Operations Objects INSERT, UPDATE, DELETE Subscribers INSERT, DELETE Webhooks INSERT CampaignSubscribers Incremental Replication and Synchronization Synchronization is not supported for Drip. Skyvia supports replication with Incremental Updates for the following Drip objects: Accounts , Broadcasts , Campaigns , CampaignSubscribers , Conversions , Forms , Subscribers , Workflows , Webhooks . Note that updates of the existing records are not detected because Drip objects don\u2019t have the UpdatedDate field. Only the new records are detected after the first replication. Supported Actions Skyvia supports all the common actions for Drip." 
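Because an import into CampaignSubscribers only creates the subscriber without linking it to the campaign, a load can be planned as two passes: one for subscriber records and one for the campaign links that must be applied separately. A minimal sketch of that split follows; the record shape is an assumption for illustration, not a Skyvia or Drip schema.

```python
# Sketch: planning a CampaignSubscribers load as two passes, since the
# Drip API inserts the subscriber but does not attach it to the campaign.
# split_campaign_import is an illustrative helper, not part of Skyvia.

def split_campaign_import(records):
    """Split records into deduplicated subscriber inserts and campaign links."""
    emails = []
    for r in records:
        if r["email"] not in emails:
            emails.append(r["email"])
    subscribers = [{"email": e} for e in emails]            # pass 1: create subscribers
    links = [{"email": r["email"], "campaign_id": r["campaign_id"]}
             for r in records]                              # pass 2: attach to campaigns
    return subscribers, links

subs, links = split_campaign_import([
    {"email": "a@example.com", "campaign_id": "c1"},
    {"email": "a@example.com", "campaign_id": "c2"},
])
print(len(subs), len(links))  # 1 2
```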
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/dynamics_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Dynamics 365 [Dynamics 365](https://www.microsoft.com/en-us/dynamics-365) is a product line of ERP and CRM applications from Microsoft. Skyvia supports the CRM part of Dynamics 365. Data integration : Skyvia supports importing data to and from Dynamics 365, exporting Dynamics 365 data to CSV files, replicating Dynamics 365 data to relational databases, and synchronizing Dynamics 365 data with other cloud apps and relational databases. Backup : Skyvia Backup supports Dynamics 365. Query : Skyvia Query supports Dynamics 365. Establishing Connection To create a connection to Dynamics 365, authenticate with one of the options: OAuth 2.0 or User name & Password . Creating OAuth Connection To connect to Dynamics 365 using OAuth 2.0, perform the following steps: Enter the URL of your Dynamics CRM. Click Sign In with Dynamics CRM . Sign in to your Dynamics CRM account. Click Accept on the Permission Access window. Creating Username & Password Connection To connect to Dynamics 365 using User name & Password, specify the instance URL , Username and Password . Additional Connection Parameters Metadata Cache You can specify the period after which [Metadata Cache](https://docs.skyvia.com/connections/metadata-cache.html) expires. Batch Size The Batch Size specifies the max number of records to send to Dynamics 365 in one batch. Default value is set to 500. Max possible value supported by Dynamics 365 is 1000. If you experience timeout errors when loading data to Dynamics 365, decrease the Batch Size value. Dynamics 365 may need time to process records depending on the number of fields in a record, triggers, etc. 
Connector Specifics Filtering Specifics When you use Not equals operator in query or data integrations, the result includes records matching the filter conditions and the records with null values. Object Relations In Dynamics 365, certain fields can reference different types of objects. Skyvia displays an additional field with the same name as the reference field, followed by a $type suffix to specify the object type. For example, the parentcustomerid field in the Contact object can link either to another Contact or to an Account : parentcustomerid = '{FBC26792-C828-482F-86D9-DAC2814918FA}'\nparentcustomerid$type = 'Contact' If the reference field is marked as required, map the *$type field too. Skyvia detects related fields for polymorphic relations, but has certain limitations when working with them. You can\u2019t join related objects in a single export task. You can\u2019t add an object referenced by such field when performing one-to-many import or synchronization. Skyvia can restore such relations only when the reference field is not required and is updatable. \nYou can use Lookup component in Data Flow to load data from the referenced objects. Object Peculiarities Connections To avoid creating duplicates, restore only records where IsMaster field equals true . Alternatively, filter out records without IsMaster set to true in the backup settings. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for objects that have UpdatedDate or CreatedDate fields. Skyvia supports Synchronization for objects that support the INSERT and UPDATE operations and have the UpdatedDate or CreatedDate fields. Skyvia will remove records deleted in Dynamics 365 from the other source only if auditing is enabled in Dynamics 365. To enable auditing, follow the steps in the [Dynamics 365 documentation](https://learn.microsoft.com/en-us/power-platform/admin/enable-use-comprehensive-auditing#enable-auditing) .
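The polymorphic reference mapping described above means every required lookup produces two mapped fields: the reference value and its companion $type field. A minimal sketch of emitting that pair (the helper is illustrative, not part of Skyvia):

```python
# Sketch: producing both the reference field and its *$type companion for a
# polymorphic Dynamics 365 lookup such as parentcustomerid on Contact.
# map_polymorphic_reference is an illustrative helper.

def map_polymorphic_reference(field, target_id, target_type):
    """Return both the reference field and its companion $type field."""
    return {
        field: target_id,
        field + "$type": target_type,  # e.g. 'Contact' or 'Account'
    }

row = map_polymorphic_reference(
    "parentcustomerid", "{FBC26792-C828-482F-86D9-DAC2814918FA}", "Contact"
)
print(row["parentcustomerid$type"])  # Contact
```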
Supported Actions Skyvia supports all the common actions for Dynamics 365." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/elasticsearch_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Elasticsearch [Elasticsearch](https://www.elastic.co/) is a powerful search and analytics engine for all types of structured and unstructured data. Data integration : Skyvia supports importing data to and from Elasticsearch, exporting Elasticsearch data to CSV files, replicating Elasticsearch data to relational databases and replicating other cloud apps\u2019 data to Elasticsearch. Backup : Skyvia Backup does not support Elasticsearch. Query : Skyvia Query supports Elasticsearch. Establishing Connection You can [create a connection](https://docs.skyvia.com/connections/#creating-connections) to both cloud and on-premises Elasticsearch servers using API Key or Basic authentication. Use API Key authentication for cloud Elasticsearch servers and on-premise servers with Kibana. If there is no Kibana on your on-premise server, connect using basic authentication. Make sure your server is available for external connections. Skyvia will access your server from the following IPs: 40.118.246.204, 13.86.253.112, and 52.190.252.0. Getting Credentials Domain The Elasticsearch domain is the base address of the server where your Elasticsearch is deployed. For the on-premises servers, domain means a server domain name or IP address. To get Domain value for the cloud Elasticsearch server, do the following: Log in to Elasticsearch Cloud. Select the needed Deployment from the Deployments list. Click Copy endpoint near the Elasticsearch label. API Key To obtain the API Key, perform the following actions: For the cloud server, log in to the Elasticsearch Cloud server and select the needed Deployment. For the on-premise server, log in to your Kibana. 
Click on the menu icon on the left and select Stack Management in the Management drop-down list. In the Security block select API Keys . Click + Create API Key on the right. Name the API Key and click Create API Key . Copy the created API Key. The API key is visible only once when you create it. Keep it somewhere safe to be able to access it later. Creating Connection Connection Using API Key To create a connection with Elasticsearch using API Key authentication, perform the following steps: Insert your Elasticsearch server address into the Domain box. Select API Key authentication type. Insert API Key Connection Using Basic Authentication To create a connection with Elasticsearch using a basic authentication type, perform the following steps: Insert your Elasticsearch server address into the Domain box. Select Basic authentication type. Enter your Elasticsearch Username and Password . Additional Connection Parameters Port By default, the port value for the HTTP connection is 9200, and 9243 for the HTTPS connection. Specify your custom port value if it differs from the default value. SearchKeepAliveTimeout Number of seconds during which the source object state is \u201cfrozen\u201d so that query results stay consistent if someone adds/deletes data in the source object. \nWhen the value equals \u20180\u2019, possible changes made to the source data may be included in the search results. \nBy default, the value is set to \u20180\u2019. ExtraArrayFields Enable this parameter when your data contain values with nested arrays and objects. \nThis allows reading fields of basic data types and complex array fields. When this checkbox is selected, Skyvia creates additional JSON array fields with the -array suffix in the name for each existing field. When this checkbox is not selected, Skyvia reads all the array fields the same way as basic data type fields. 
\nIf you query Elasticsearch data containing arrays with ExtraArrayFields disabled, the integration or query fails with an error. By default, this checkbox value is set to true . For example, we have a field f_long , whose data type is long . When we set the ExtraArrayFields parameter to true , two fields are created in the connector: f_long (long) and f_long_array (JsonArray). Case 1: The f_long field of a record contains a single value: ExtraArrayFields = true : the f_long field returns the single long value, and the f_long_array field returns an array with a single value. ExtraArrayFields = false : the f_long field returns its long value. Case 2: The field f_long contains an array of two or more values. ExtraArrayFields = true : the f_long field returns null, and the f_long_array returns the array. ExtraArrayFields = false : the f_long field returns an error because it cannot convert an array to a number. Connector Specifics Replication Target You can replicate cloud apps data to Elasticsearch. If you try to replicate objects with the composite primary key, the integration will return an error. The primary key field in a target object always has the name \u201cId\u201d after replication, even if it had another name in the source object (for example, Account Id or Internal Id). Object Peculiarities All Elasticsearch objects are custom. DML Operations Support All Elasticsearch objects support INSERT, UPDATE and DELETE operations. Incremental Replication and Synchronization Skyvia does not support Incremental Replication (as a Source) and Synchronization because none of the Elasticsearch objects has creation and modification timestamp fields. Skyvia supports Incremental Replication with Elasticsearch as a Target. Supported Actions Skyvia supports all the common actions for Elasticsearch."
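The two ExtraArrayFields cases for the f_long example can be modeled as a small decision function. This only simulates the behavior described above for clarity; it is not Skyvia code.

```python
# Sketch: what the f_long / f_long_array pair returns depending on the
# ExtraArrayFields setting, modeling the two cases described above.

def read_field(value, extra_array_fields):
    """Return (plain_field, array_field) as the connector would expose them."""
    is_array = isinstance(value, list)
    if extra_array_fields:
        if is_array:
            return None, value        # plain field is null, array field holds values
        return value, [value]         # both fields populated
    if is_array:
        # ExtraArrayFields disabled + array data: integration/query error
        raise ValueError("cannot convert an array to a number")
    return value, None                # array field does not exist

print(read_field(5, True))      # (5, [5])
print(read_field([1, 2], True))  # (None, [1, 2])
```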
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/emailoctopus_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources EmailOctopus [EmailOctopus](https://www.emailoctopus.com) is an easy-to-use and inexpensive email marketing platform for anyone. Data integration : Skyvia supports importing data to and from EmailOctopus, exporting EmailOctopus data to CSV files, and replicating EmailOctopus data to relational databases. Backup : Skyvia Backup supports EmailOctopus backup. Query : Skyvia Query supports EmailOctopus. Establishing Connection To create a connection with EmailOctopus, you need to specify an API Key. Getting Credentials To locate the API Key, perform the following steps: Sign in with your EmailOctopus account. Click the user icon and select Integrations & API . Click Manage near the API. Click Create key . Enter the API Key name. Copy or download the API Key. The API Key is available only once when you create it. We recommend downloading the API Key and keeping it somewhere safe. Creating Connection Specify the obtained API Key in the corresponding box in the Connection Editor. Connector Specifics Objects Peculiarities Contacts EmailOctopus Contacts object is related to the Lists object. You can query EmailOctopus contacts only by specifying the list that stores these contacts. EmailOctopus API requires a ListID value to query contacts from a list, for example, using a filter by ListID . Skyvia does not require specifying ListID values for querying contacts. \nIf you don\u2019t specify ListID values, Skyvia queries all the Lists records first, takes their IDs, and then queries Contacts records for each Lists record. \nThis method takes more time and consumes more API calls. It uses at least one API call for every Lists record. ListFields Skyvia does not support the ListFields object. Lists fields are stored as a JSON arrays in the Fields column of the List object. 
Read-Only Objects You can only import data into the Lists , ListTags and Contacts objects. Other objects are read-only. You cannot import data to them or sync or restore their data from the backup. Custom Fields The Contacts object stores custom field values in the Fields field in JSON object format. You can insert and update data in this object. \nFor user convenience, the FirstName and LastName values are also available in the separate corresponding fields. Each contact list may contain custom fields particular to this list only. Contacts from different lists may have separate sets of custom fields. Incremental Replication and Synchronization Skyvia does not support Synchronization for EmailOctopus. Since none of the EmailOctopus objects has the UpdatedDate field, Replication with Incremental Updates cannot detect the updated records. \nOnly Lists and Contacts objects support Incremental Replication. These objects have the CreatedDate field. Thus Incremental Replication can detect only new records for these objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Contact, Lists INSERT ListTags Supported Actions Skyvia supports all the common actions for EmailOctopus." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/everhour_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Everhour [Everhour](https://everhour.com/) is a powerful time-tracking software with hassle-free integrations. It is an easy and accurate time tracker for budgeting, client invoicing, and painless payroll. Data integration : Skyvia supports importing data to and from Everhour, exporting Everhour data to CSV files, and replicating Everhour data to relational databases. Backup : Skyvia Backup does not support Everhour. Query : Skyvia Query supports Everhour. Establishing Connection To create a connection to Everhour, you need to enter your API Token. 
Getting Credentials To locate the API Token, perform the following steps: Sign in to Everhour. Click Settings in the left menu. Click My Profile . Copy your Api Token under the Application Access section. Creating Connection To create a connection, insert the obtained API Token into the corresponding box in the Connection Editor. Connector Specifics Object Peculiarities Invoices When performing the Insert operation to the Invoice object, in addition to the client_id field, at least one more field from the available ones must be mapped. DML Operations Support Operation Object INSERT, UPDATE, DELETE Expenses, ExpensesCategories, Invoices, Projects, ProjectSections, ProjectTasks, Schedule, TimeRecords INSERT, UPDATE Clients Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for such Everhour objects: Clients, InvoiceItems, Invoices, ProjectTasks, Schedule, TimeRecords , and Users . Skyvia does not support Synchronization for Everhour. Stored Procedures Skyvia represents part of the supported Everhour features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UpdateClientBudget call UpdateClientBudget(:client_id, :type, :budget, :period, :appliedFrom, :disallowOverbudget, :excludeUnbillableTime, :excludeExpenses,:threshold) PARAMETER NAME DESCRIPTION Client_id The Id of the client whose budget you update Type Budget type, valid values: money or time Budget Budget value in cents for money or seconds for time Period Budget periodicity, valid values: general, monthly, weekly, daily AppliedFrom The date from which the budget is active in the \u2018Y-m-d\u2019 format DisallowOverbudget Boolean ExcludeUnbillableTime Boolean ExcludeExpenses Boolean Threshold Number 0 - 100. 
Email admins when threshold reached For example, the command to update the client budget may look like this: call UpdateClientBudget(7320280,'money',100,'monthly','2023-4-1',false,false,false,0) DeleteClientBudget Resets the field values in the Budget block call DeleteClientBudget(:clientId) UpdateProjectBilling The following command updates a set of Budget fields in the Projects object. call UpdateProjectBilling(:project_id, :Billing_Type, :Billing_Fee, :Rate_Type, :Rate_Rate, :Rate_UserRateOverrides, :Rate_UserCostOverrides, :Budget_Type,:Budget_Budget, :Budget_Period, :Budget_AppliedFrom, :Budget_DisallowOverbudget, :Budget_ExcludeUnbillableTime,:Budget_ExcludeExpenses, :Budget_ShowToUsers,:Budget_Threshold) PARAMETER NAME DESCRIPTION Project_id The id of the project whose billing you update Billing_Type Project type, valid values: non_billable, hourly, fixed_fee Billing_Fee Project fixed fee in cents. Required only for the fixed_fee billing type Rate_Type Rates configuration for billing or budget progress, valid values: project_rate, user_rate, user_cost Rate_Rate Flat rate in cents (e.g. 10000 means $100.00). Available only for the project_rate type Rate_UserRateOverrides Override user rates in JSON object format where you specify user_Id and rate in cents. Available only for the user_rate type Rate_UserCostOverrides Override user cost rates in JSON object format where you specify user_Id and rate in cents. Available only for the user_cost type Budget_Type Budget type, valid values: money or time Budget_Budget Budget value in cents for money or seconds for time Budget_Period Budget periodicity, valid values: general, monthly, weekly, daily Budget_AppliedFrom The date from which the budget is active in the \u2018Y-m-d\u2019 format Budget_DisallowOverbudget Boolean Budget_ExcludeUnbillableTime Boolean Budget_ExcludeExpenses Boolean Budget_ShowToUsers Boolean Budget_Threshold Number 0-100.
Email admins when threshold reached For example, the command to update the project billing starting from May 1 may look like this: call UpdateProjectBilling('ev:179497308508805', 'fixed_fee', 10000, 'user_rate', 10000, '{\"1247853\":10000}', '{\"1247853\":10000}', 'money', 10000, 'general', '2023-5-1',false,false,false,true,0) UpdateTaskEstimate To update the field values in the Estimate block in the ProjectTasks object, use the command call UpdateTaskEstimate(:task_id,:Total,:Type,:Estimate_Users) DeleteTaskEstimate To reset the field values in the Estimate block in the ProjectTasks object, use the command call DeleteTaskEstimate(:task_id) Supported Actions Skyvia supports all the common actions for Everhour." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/exactonline_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Exact Online [Exact Online](https://www.exact.com/us/software/exact-online) is a cloud-based platform that supports all major business processes such as production, logistics, finance & administration, HR, sales and marketing. Data integration : Skyvia supports importing data to and from Exact Online, exporting Exact Online data to CSV files, replicating Exact Online data to relational databases, and synchronizing Exact Online data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Exact Online. Query : Skyvia Query supports Exact Online. Establishing Connection To create a connection to Exact Online, select the region and sign in with Exact Online. Creating Connection To connect to Exact Online, perform the following steps: Select the region from the list. Click Sign in with Exact Online . Enter the user name and password. Additional Connection Parameters Division The name of the company you want to access. Metadata Cache You can specify the time when the Metadata Cache expires.
Exact Online provides only one refresh token at a time. If someone logs in with the same credentials and gets their own set of tokens, the existing ones will be invalid. Connector Specifics API Limits Consider the following API limits when querying Exact Online objects: Minutely limit: 60 API calls, per company, per minute. Daily limit: 5,000 API calls, per company, per day. No more than 10 errors per API key (code 400, 401, 403, and 404) - The block will automatically be lifted after one hour and will gradually increase when you continue making these errors. A single HTTP POST request to the Exact Online API has a size limit of 10 MB. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for objects that have UpdatedDate or CreatedDate fields. Skyvia supports Synchronization for objects that support the INSERT and UPDATE operations and have the UpdatedDate or CreatedDate fields. DML Operations Support Operation Object INSERT, UPDATE, DELETE AccountInvolvedAccounts, AccountOwners, InvolvedUserRoles, InvolvedUsers, SolutionLinks, TaskTypes, DepreciationMethods, DirectDebitMandates, Accounts, Addresses, BankAccounts, Contacts, Opportunities, QuotationLines, QuotationOrderChargeLines, Quotations, DocumentFolders, Documents, DocumentTypeFolders, ExchangeRates, AccountClassificationMappings, GLAccounts, Journals, Costcenters, Costunits, AssemblyBillOfMaterialHeader, AssemblyBillOfMaterialMaterials, BatchQuantitiesPerWarehouses, ItemWarehouses, StockBatchNumbers, StockCountLines, StockCounts, StockSerialNumbers, Warehouses, WarehouseTransferLines, WarehouseTransfers, CustomerItems, Items, SalesItemPrices, SupplierItem, Mailboxes, BillOfMaterialMaterials, BillOfMaterialRoutings, BillOfMaterialVersions, EmployeeWorkcenters, OperationResources, Operations, ProductionAreas, ShopOrderMaterialPlans, ShopOrderRoutingStepPlans, ShopOrders, TimedTimeTransactions, ManufacturingTimeTransactions, Workcenters, CostTransactions, 
EmployeeRestrictionItems, InvoiceTerms, ProjectAccountMutations, ProjectClassifications, ProjectHourBudgets, ProjectPlanning, ProjectPlanningRecurring, ProjectRestrictionEmployeeItems, ProjectRestrictionEmployees, ProjectRestrictionItems, ProjectRestrictionRebillings, Projects, TimeCorrections, TimeTransactions, WBSActivities, WBSDeliverables, WBSExpenses, PurchaseInvoices, PurchaseEntries, PurchaseEntryLines, PurchaseOrderLines, PurchaseOrders, SalesChannels, SalesEntries, SalesEntryLines, SalesInvoiceLines, SalesInvoiceOrderChargeLines, SalesInvoices, PlannedSalesReturns, SalesOrders, SalesOrderLines, SalesOrderOrderChargeLines, Subscriptions, SubscriptionLines, VATCodes, WebhookSubscriptions INSERT, UPDATE MailMessages, VariableMutations, GoodsReceipts, PurchaseReturns, DropShipments, GoodsDeliveries UPDATE, DELETE PlannedSalesReturnLines INSERT CommunicationNotes, Complaints, Events, ServiceRequests, Tasks, PaymentConditions, OfficialReturns, BankEntryLines, CashEntryLines, GeneralJournalEntryLines, MailMessageAttachments, MailMessagesSent, ByProductReceipts, ByProductReversals, MaterialIssues, MaterialReversals, ShopOrderReceipts, ShopOrderReversals, StageForDeliveryReceipts, StageForDeliveryReversals, SubOrderReceipts, SubOrderReversals UPDATE Payments, Receivables, ShopOrderPriorities, PurchaseReturnLines, DropShipmentLines, GoodsDeliveryLines DELETE Divisions Stored Procedures Skyvia represents part of the supported Exact Online features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UserHasRights To view what actions a user can perform against the object, use the command call UserHasRights(:object) For example, the call UserHasRights('SalesOrders') displays the operations, the user can run against the SalesOrder object. 
To see if a particular action is available for a user, use the command call UserHasRights(:object, :action) For example, the call UserHasRights('SalesOrders', 'POST') displays if the user can run the POST action against the SalesOrder object. GetTablesInfo The following command displays the general information about all Exact Online objects: object name, supported operations, endpoint. call GetTablesInfo The following command displays the general information about the specific object. call GetTablesInfo(:object) Supported Actions Skyvia supports all the common actions for Exact Online." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/exacttarget_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Marketing Cloud [Salesforce Marketing Cloud](https://www.salesforce.com/products/engagement-marketing/?d=www.exacttarget.com/&internal=true) (formerly ExactTarget) is a provider of digital marketing automation and analytics software and services. Data integration : Skyvia supports importing data to and from Salesforce Marketing Cloud, exporting Salesforce Marketing Cloud data to CSV files, replicating Salesforce Marketing Cloud data to relational databases, and synchronizing Salesforce Marketing Cloud data with other cloud apps and relational databases. Backup : Skyvia Backup supports Salesforce Marketing Cloud backup except for objects with composite primary keys. Query : Skyvia Query supports Salesforce Marketing Cloud. Establishing Connection To create a connection to Salesforce Marketing Cloud, select the authentication type and provide the credentials accordingly. Skyvia supports three authentication types for Salesforce Marketing Cloud: User Name & Password , Legacy App Center Client, and Server-to-Server . The Legacy App Center Client authentication is deprecated. It is available for compatibility purposes. 
Creating User Name & Password Connection For User Name & Password authentication, do the following. Specify the User . Enter your Password . Specify the Marketing Cloud URL. For example, https://YOUR_SUBDOMAIN.soap.marketingcloudapis.com/Service.asmx . Creating Server-to-Server Connection For Server-to-Server authentication, do the following. Enter your Marketing Cloud subdomain. Your subdomain is a 28-character string starting with the letters \u201cmc,\u201d for example, mc563885gzs27c5t9-63k636ttgm . You can find your subdomain in your browser\u2019s URL after logging in to Salesforce Marketing Cloud. Specify the App Client Id . Specify the App Client Secret . Creating Legacy App Center Client Connection You can use this authentication type only if a legacy package was created before August 1, 2019. Legacy App Center Client authentication is deprecated and available only for legacy packages. Since August 1, 2019, Marketing Cloud has removed the ability to create legacy packages, so any new packages are enhanced packages, not legacy packages, and they cannot use the Legacy App Center Client authentication. For Legacy App Center Client authentication, do the following. Select the Environment : Production or Sandbox . Enter the App Client Id . Enter the App Client Secret . Additional Parameters You also may set the following additional parameters for your Salesforce Marketing Cloud connections: Partner IDs A list of specific partner accounts or business units for retrieving requests. Use Extension Objects This parameter determines whether Salesforce Marketing Cloud Data Extensions objects are processed as user-defined Salesforce Marketing Cloud objects, allowing Skyvia to read and edit their data. Metadata Cache You can specify the time when the Metadata Cache expires. 
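The Server-to-Server subdomain described above has a recognizable shape: a 28-character string starting with "mc". A quick sanity check for that format can catch copy-paste mistakes before creating the connection; the helper below is purely illustrative and assumes the subdomain uses lowercase letters, digits, and hyphens.

```python
# Sketch: sanity-checking the Marketing Cloud subdomain format described
# above (28 characters, starting with "mc"). Illustrative only.
import re

def looks_like_mc_subdomain(value):
    # "mc" followed by 26 characters; character class is an assumption.
    return re.fullmatch(r"mc[0-9a-z-]{26}", value) is not None

print(looks_like_mc_subdomain("mc563885gzs27c5t9-63k636ttgm"))  # True
print(looks_like_mc_subdomain("example"))                       # False
```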
Connector Specifics Salesforce Marketing Cloud Data Extensions Since Salesforce Marketing Cloud Data Extensions have neither autogenerated key fields nor fields storing record creation or modification time, you can\u2019t use Data Extensions in Synchronization, Incremental Replication, and as a target in Import with the Upsert operation. Incremental Replication and Synchronization Skyvia supports Synchronization for Marketing Cloud objects, which support the INSERT and UPDATE operation and have either creation or modification timestamp fields. Skyvia supports Incremental Replication for objects with creation or modification timestamp fields. Supported Actions Skyvia supports all the common actions for Salesforce Marketing Cloud." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/excelonline_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Excel Online [Excel Online](https://www.microsoft.com/en-us/microsoft-365/excel) is a cloud version of a robust spreadsheet editor for various tasks, from simple data entry and basic calculations to complex financial modeling and statistical analysis. Data integration : Skyvia supports importing data to and from Excel Online, exporting Excel Online data to CSV files, and replicating Excel Online data to relational databases. Backup : Skyvia Backup does not support Excel Online. Query : Skyvia Query supports Excel Online. Establishing Connection To create a connection to Excel Online, sign in with your Microsoft account and select the workbook. Creating Connection To connect to Excel Online, perform the following steps: In the Connection Editor, click Sign In with Microsoft . Enter your Microsoft credentials. Allow Skyvia to access your data. Additional Connection Parameters Use Sheet Tables This parameter determines whether to use tables on worksheets as separate objects in the connector. 
Use Header To Detect Column Names This parameter determines how to read the first row of a sheet. Skyvia detects the data in the first row as column headers if enabled. \nIf the spreadsheet does not have column names, Skyvia displays standard Excel column names (A-Z). Cell Max Length This parameter defines the maximum text length in a cell. Minimum Column Count This parameter determines the minimum count of columns with data on a sheet. It works when the Use Header To Detect Column Names parameter is disabled, or the sheet has no headers. The default value is 26 (A-Z range). Connector Specifics Data Structure Each Excel Online connection represents data from a single workbook. You must select a specific workbook in the Connection Editor. \nThe Excel Online connector has two types of objects: worksheets and tables.\nThe primary key is the RowNo field. It stores a number of a row on a sheet. Sheets A sheet object is an active range of rows and columns that contain data. The active range determines the number of rows on a sheet.\nInsert operation adds new records below the existing records.\nDelete operation clears data in a row. It doesn\u2019t remove the row itself. Tables Table objects are available only with the Use Sheet Tables parameter enabled in the connection. \nInsert operation adds new records below the existing records. \nTable objects don\u2019t support the Delete operation. Incremental Replication and Synchronization Skyvia does not support Synchronization and Incremental Replication for Excel Online. Supported Actions Skyvia supports all the common actions for Excel Online." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/facebookads_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Facebook Ads [Facebook Ads](https://www.facebook.com/business/ads) is a service promoting customer products, services, websites, etc. from Facebook. 
Data integration : Skyvia supports importing data to and from Facebook Ads, exporting Facebook Ads data to CSV files, replicating Facebook Ads data to relational databases, and synchronizing Facebook Ads data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Facebook Ads backup. Query : Skyvia Query supports Facebook Ads. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) with Facebook Ads, sign in with Facebook Ads and specify the Ad Account Id. Getting Credentials You can find the Ad Account Id in your Facebook Ads Manager. The Ad Account Id value is located above the search and filter bar, in the account dropdown menu or in the page URL. Creating Connection To connect to Facebook Ads, perform the following steps: Click Sign In with Facebook . Enter your Facebook credentials and click Log In . Enter your Ad Account Id . Connector Specifics Object Peculiarities There are two types of objects in Facebook Ads: standard objects and *Insights objects. Insights Objects Objects with the *Insights suffix in their names are the metrics for an arbitrary period from DateStart to DateStop .\nObjects with Monthly*\u2026*Insights names are the metrics for the completed calendar months.\nObjects with Daily*\u2026*Insights names are the metrics for the completed days. All these objects are read-only. By default, they include only the data for the completed periods in the query results. If you query the Daily*\u2026*Insights objects, current date data is not included in the query results, because the day has not ended yet. When you query Monthly*\u2026*Insights objects, current month data is not included in the query results, because the current month has not ended yet. Campaigns Campaigns in the Draft status are not displayed in the query results by default.
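The completed-period rule for Insights objects can be pictured with a couple of hypothetical helpers (not part of Skyvia or the Facebook API) that decide whether a row's DateStop falls into a finished day or calendar month:

```python
from datetime import date

def is_completed_day(date_stop: date, today: date) -> bool:
    # Daily*...*Insights rows are returned only for days that have ended.
    return date_stop < today

def is_completed_month(date_stop: date, today: date) -> bool:
    # Monthly*...*Insights rows are returned only for months that have ended.
    return (date_stop.year, date_stop.month) < (today.year, today.month)
```

This is why querying a Daily or Monthly Insights object never returns the current day or month: its period is still open.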
Filtering Specifics The *Insights objects natively support the following logical operators for filtering: = , != , > , >= , < , <= , IN , NOT_IN , IN_RANGE , NOT_IN_RANGE . Facebook Ads API supports native filters by the following fields: Object Field Campaigns Id, Name, Objective, BuyingType, SpendCap, CreatedTime, EffectiveStatus, UpdatedTime AdSets Id, Name, CampaignId, BillingEvent, OptimizationGoal, BidAmount, LifetimeBudget, DailyBudget, CreatedTime, UpdatedTime Ads Id, Name, CampaignId, AdSetId, BidAmount, CreatedTime, UpdatedTime AdCreatives, AdVideos, AdImages Id DML Operations Support Operation Object INSERT, UPDATE, DELETE Campaigns, AdSets, Ads INSERT, DELETE AdImages, AdVideos INSERT LeadgenForms Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Facebook Ads objects: AccountInsights, AccountInsightsByCountry, AdImages, AdInsights, AdInsightsByCountry, Ads, AdSetInsights, AdSetInsightsByCountry, AdSets, AdVideos, CampaignInsights, CampaignInsightsByCountry, Campaigns, DailyAccountInsights, DailyAccountInsightsByCountry, DailyAdInsights, DailyAdInsightsByCountry, DailyAdSetInsights, DailyAdSetInsightsByCountry, DailyCampaignInsights, DailyCampaignInsightsByCountry, LeadgenForms, MonthlyAccountInsights, MonthlyAccountInsightsByCountry, MonthlyAdInsights, MonthlyAdInsightsByCountry, MonthlyAdSetInsights, MonthlyAdSetInsightsByCountry, MonthlyCampaignInsights, MonthlyCampaignInsightsByCountry . Skyvia detects only new records and doesn\u2019t track the updated records for the LeadgenForms object. Skyvia tracks the object changes using the DateStart and DateStop fields. By default, Skyvia replicates Monthly*\u2026*Insights data for the last 36 months and Daily*\u2026*Insights data \u2014 for the last 6 months during the initial replication run. \nTo replicate reports for earlier periods, you can set the LastSyncTime parameter to the needed start date before the initial replication run. 
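As an illustration of the default initial-replication windows above (36 months of Monthly*…*Insights data, 6 months of Daily*…*Insights data), a small date helper can show roughly where an initial run would start; the function below is illustrative only, not Skyvia code:

```python
from datetime import date

def months_back(today: date, months: int) -> date:
    # Step back a whole number of calendar months, clamped to the 1st.
    total = today.year * 12 + (today.month - 1) - months
    return date(total // 12, total % 12 + 1, 1)

# With today = 2024-03-15, monthly data would start around
# months_back(today, 36) = 2021-03-01, and daily data around
# months_back(today, 6) = 2023-09-01.
```

Setting LastSyncTime to an earlier date before the first run moves that starting point further back.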
Skyvia supports Synchronization for the following Facebook Ads objects: Ads, AdSets, Campaigns . Supported Actions Skyvia supports all the common actions for Facebook Ads." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/facebookpages_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Facebook Pages [Facebook Pages](https://www.facebook.com/pages) is a Facebook account for a business, organization, or institution that allows users to advertise and track performance. Data integration : Skyvia supports importing data from Facebook Pages, exporting Facebook Pages data to CSV files, and replicating Facebook Pages data to relational databases. Backup : Skyvia Backup does not support Facebook Pages. Query : Skyvia Query supports Facebook Pages. Establishing Connection To create a connection to Facebook Pages, sign in with Facebook. Creating Connection To connect to Facebook Pages, perform the following steps: Click Sign In with Facebook in the Connection Editor. Enter your email and password. Connector Specifics Skyvia cannot write data to Facebook Pages; its data is read-only. Object Peculiarities Object Relations All Facebook Pages objects are related. Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record. This allows querying child objects without knowing their parents, but this method takes a lot of time and consumes many API calls. It uses at least one API call for every parent object record and can be slow. The Pages object is the parent of all objects, and its child objects are parents of other objects. Pages PageDailyInsights PostAttachments Posts Comments PostLifetimeInsights Videos VideoLifetimeInsights We recommend using filters on the parent object fields when querying data from such child objects. This reduces the number of parent object records for which child object data must be queried.
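The parent-first query pattern described above can be sketched as follows; fetch_parents and fetch_children_of are hypothetical stand-ins for the actual Facebook API calls:

```python
def query_child_object(fetch_parents, fetch_children_of, parent_filter=None):
    # Query parent records first (optionally filtered), then issue
    # one request per parent record to collect the child records.
    parents = [p for p in fetch_parents()
               if parent_filter is None or parent_filter(p)]
    api_calls = 1 + len(parents)  # one list request + one per parent
    rows = []
    for parent in parents:
        rows.extend(fetch_children_of(parent["Id"]))
    return rows, api_calls
```

Filtering on parent object fields shrinks the parents list and therefore the per-parent call count, which is exactly why the documentation recommends such filters.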
PostAttachments The MediaImageSrc field contains a link to the attached image for image attachments. For video attachments, this field contains the autogenerated video frames. The MediaSouce field contains links to video files. When you query the Content field, Skyvia performs additional requests to the Facebook API to read the binary content. This affects query performance and API call usage. Due to Facebook Pages API specifics, the query result doesn\u2019t contain binary content if you query the shared attachments. The MediaSouce field may contain a link to the YouTube video. The MediaImageSrc field contains the shared video preview. *Insights Objects The PageDailyInsights, PostLifetimeInsights , and VideoLifetimeInsights objects contain the metrics for the last two years. \nThe PageDailyInsights object contains daily metrics. The PostLifetimeInsights and VideoLifetimeInsights objects contain metrics for the last two years with a breakdown by posts and videos respectively. The Date field in the PageDailyInsights contains the date (from 00:00 Pacific Time (UTC-7) of the current day till 00:00 Pacific Time (UTC-7) of the next day). You can apply filters to the Date field to select data for less than two years. The report doesn\u2019t include data for the current date. Data for the previous date is incomplete. Filtering Specifics Facebook Pages API supports the following native filters: Object Fields and Operators Comments PageId ( = ), PostId ( = ) PageDailyInsights PageId ( = ), Date ( >= , > , <= , < , = ) Pages Id ( = ) PostAttachments Id ( = ), PageId ( = ) PostLifetimeInsights PageId ( = ), PostId ( = ) Posts Id ( = ), PageId ( = ) VideoLifetimeInsights PageId ( = ) Videos PageId ( = ) Use these filters to improve performance and save API calls. You can use filters with other fields or operators, which may increase API call usage. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Comments, PageDailyInsights, Posts, and Videos objects.
Replication only tracks new records for the Comments and PageDailyInsights objects. Skyvia doesn\u2019t support Synchronization for Facebook Pages objects. Due to Facebook Pages API specifics, the data for the previous date in the PageDailyInsights object may be incomplete. The API updates this data within 24 hours. To prevent data loss during replication, we enabled retrospective updates with the AttributionWindow parameter set to 1. Stored Procedures Skyvia represents part of the supported Facebook Pages features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . Nested Objects Some Facebook Pages fields store complex structured data in JSON format. You can use our Nested Objects mapping feature in the Import integrations to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables.
Object Field Nested Object Pages CategoryList IdNameType DeliveryAndPickupOptionInfo, Emails, MessengerAdsDefaultIcebreakers, MessengerAdsDefaultQuickReplies ValueType Posts Actions PostActionsType Attachments StoryAttachmentType MessageTags, StoryTags TagsType To PageFeedToType Comments MessageTags, AttachmentDescriptionTags TagsType Videos Format VideoFormatType AdBreaks, CustomLabels ValueType ContentTags IdType PageDailyInsights PageFansLocale, PageFansCity, PageFansCountry KeyValueType PostLifetimeInsights PostVideoRetentionGraph, PostVideoRetentionGraphClickedToPlay, PostVideoRetentionGraphAutoplayed, PostVideoViewTimeByAgeBucketAndGender, PostVideoViewTimeByRegionId, PostVideoViewTimeByCountryId KeyValueType VideoLifetimeInsights TotalVideoRetentionGraph, TotalVideoRetentionGraphAutoplayed, TotalVideoRetentionGraphClickedToPlay, TotalVideoViewTimeByAgeBucketAndGender, TotalVideoViewTimeByRegionId KeyValueType StoryAttachmentType DescriptionTags TagsType SubattachmentsData StorySubattachmentType Supported Actions Skyvia supports all the common actions for Facebook Pages." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/firefliesai_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Fireflies.ai [Fireflies.ai](https://fireflies.ai/) is an AI tool that helps to transcribe, summarize, search, and analyze team conversations. Data integration : Skyvia supports importing data to and from Fireflies.ai, exporting Fireflies.ai data to CSV files, and replicating Fireflies.ai data to relational databases. Backup : Skyvia Backup does not support Fireflies.ai. Query : Skyvia Query supports Fireflies.ai. Establishing Connection To create a connection to Fireflies.ai, specify the API Key. Getting Credentials To obtain the API Key, perform the following steps: Go to Fireflies.ai. Click Settings on the left. Select Developer Settings . 
Creating Connection To connect to Fireflies.ai, enter the obtained API Key in the corresponding box in the Connection Editor. Connector Specifics Object Peculiarities Transcripts You can significantly increase query performance if you exclude the Sentences field from the query. TranscriptSentences This object represents the complex structured Sentences field of the Transcripts object as a separate object. Filtering Specifics Fireflies.ai API supports the following native filters: Object Fields and Operators AppOutputs Id ( = ), TranscriptId ( = , IN) TeamBites Id ( = ), TranscriptId ( = ) Transcripts Id ( = ), Title ( = ), Date ( <= , >= , > , < , = ), OrganizerEmail ( = ), UserId ( = ) Users Id ( = ) Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Nested Objects Some Fireflies.ai fields store complex structured data in JSON format. You can use our Nested Objects mapping feature in the Import integrations to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. Object Field Nested Object Transcripts Speakers SpeakerType Participants ParticipantType MeetingAttendees MeetingAttendeeType FirefliesUsers ValueType Sentences SentenceType AppsPreview AppOutputType TeamBites Sources MediaSourceType Captions BiteCaptionType Bites Sources MediaSourceType Captions BiteCaptionType Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the AppOutputs and TeamBites objects. It tracks only new records for these objects. Skyvia doesn\u2019t support Synchronization for Fireflies.ai objects. DML Operations Support Operation Object INSERT TeamBites DELETE Transcripts Supported Actions Skyvia supports the common actions and custom actions for Fireflies.ai."
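The "Separate Tables" unwinding of nested fields can be pictured with a small sketch; the field names (Id, Sentences) follow the Transcripts example above, and the helper itself is hypothetical, not Skyvia code:

```python
def unwind(records, nested_field, parent_key="Id"):
    # Split a nested JSON array field into rows of a separate child
    # table, linking each child row back to its parent record.
    parents, children = [], []
    for rec in records:
        rec = dict(rec)  # keep the caller's data intact
        for index, item in enumerate(rec.pop(nested_field, None) or []):
            children.append({"ParentId": rec[parent_key], "Index": index, **item})
        parents.append(rec)
    return parents, children
```

Each Transcripts record then lands in the parent table, while its Sentences array becomes rows of a child table keyed by the transcript ID.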
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/firefliesai_connections/add-to-live-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Fireflies.ai AddToLive This action adds the Fireflies.ai bot to an ongoing meeting. Its API rate limit is 3 requests per 20 minutes. Action Settings Setting Description Meeting_link String. A valid http URL for the meeting link, e.g. Google Meet, Zoom, etc. Title String, optional. Title or name of the meeting to identify the transcribed file. If the title is not provided, a default title is set automatically. Meeting_password String, optional. Password for the meeting, if applicable. Duration Int32, optional. Meeting duration in minutes. Valid values are more than 15 and less than 120 minutes. If omitted, it is set to 60 minutes by default. Language String, optional. The meeting language code. If omitted, it is set to English by default. For a complete list of language codes, please view [Language Codes](https://docs.fireflies.ai/miscellaneous/language-codes) Attendees String, optional. The array of [Attendees](https://docs.fireflies.ai/schema/input/attendee#param-display-name) for expected meeting participants. Action Parameters AddToLive action parameters correspond to the fields of the target object. You must map at least the required parameters. Result The Fireflies.ai bot is added to the ongoing meeting." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/firefliesai_connections/set-user-role-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Fireflies.ai SetUserRole This action updates a user\u2019s role within a team. Action Settings Setting Description User_id The identifier of the user Role The Role to be assigned to the user. Valid roles are admin and user . Action Parameters SetUserRole action parameters correspond to the fields of the target object.
You must map both parameters. Result A new role is assigned to the specified user in the target." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/firefliesai_connections/upload-audio-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Fireflies.ai UploadAudio This action uploads audio files to Fireflies.ai for transcription. Action Settings Setting Description Url String. The URL of the media file to be transcribed. Make sure the URL is a valid https string and that the file is publicly accessible. The valid media file formats are mp3, mp4, wav, m4a, and ogg. Title String, optional. Title or name of the meeting to identify the transcribed file. Webhook String, optional. URL for the webhook that receives notifications when transcription is completed. Custom_language String, optional. The meeting language code. If omitted, it is set to English by default. For a complete list of language codes, please view [Language Codes](https://docs.fireflies.ai/miscellaneous/language-codes) . Save_video Boolean, optional. Specifies whether the video should be saved or not. Attendees String, optional. The array of [Attendees](https://docs.fireflies.ai/schema/input/attendee#param-display-name) for expected meeting participants. Client_reference_id String, optional. A custom identifier set by the user during upload, used to identify your uploads in your webhook events. Action Parameters UploadAudio action parameters correspond to the fields of the target object. You must map at least the required parameters. Result A file is uploaded to Fireflies.ai for transcription." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/float_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Float [Float](https://www.float.com/) is a resource management, planning, and scheduling software platform to plan a team\u2019s work.
Data integration : Skyvia supports importing data to and from Float, exporting Float data to CSV files, replicating Float data to relational databases, and synchronizing Float data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Float. Query : Skyvia Query supports Float. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Float, you must enter your API Key. Getting Credentials To obtain the API Key, do the following: Log in to Float. Click the gear icon on the left and select Team Settings . Click Integrations and scroll the screen down to find the API Key. Copy the API Key. Creating Connection To connect to Float, insert the obtained API Key to the corresponding box in the Connection Editor. Connector Specifics Object Peculiarities Tasks You must map either the PeopleId or PeopleIds field in addition to the required fields to import data to the Tasks object successfully. The Tasks object has the TaskDays field, returning the JSON array of all the dates for recurring tasks, like the following: [\"2025-01-02\",\"2025-01-09\",\"2025-01-16\",\"2025-01-23\"] . By default, it returns no data. To obtain data for this field, you need to use filters on all of the following fields: FilterStartDate , FilterEndDate , and FilterExpand . Use FilterStartDate and FilterEndDate to define the period for which to obtain data; the FilterExpand field must equal \u2018TaskDays\u2019. PeopleReports and ProjectReports To get records from the PeopleReports and ProjectReports objects, you need to filter data by the StartDate and EndDate fields. The received data will correspond to the period between the specified dates. PublicHolidays This object displays records for the current year by default when querying. To get data for another period, use filters by Region, Year, StartDate , and EndDate with the = operator. The Year, StartDate, and EndDate fields return null results when querying.
These fields are designed for filtering only and don\u2019t store any data. Filtering Specifics Float API supports the following native filters: Object Operator Field Tasks = ProjectId, RepeatState, Status, TaskMetaId, PeopleId, FilterStartDate, FilterEndDate, FilterExpand Tasks = , >= UpdatedDate PublicHolidays = Region, Year, StartDate, EndDate ProjectReports = ProjectId, StartDate, EndDate PeopleReports = PeopleId, StartDate, EndDate LoggedTime = PeopleId, ProjectId, StartDate, EndDate TimeOffs = Status, FullDay Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. DML Operations Support Operation Object INSERT, UPDATE, DELETE Clients, Departments, LoggedTime, Milestones, People, Phases, Projects, ProjectTasks, Tasks, TeamHolidays, TimeOffs INSERT, UPDATE TimeOffTypes Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Float objects: Accounts, LoggedTime, People, Phases, Projects, ProjectTasks, Tasks, and TimeOffs . Skyvia supports Synchronization for the following Float objects: LoggedTime, People, Phases, Projects, ProjectTasks, Tasks, and TimeOffs . Supported Actions Skyvia supports all the common actions for Float." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/followupboss_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Follow Up Boss [Follow Up Boss](https://www.followupboss.com/) is a CRM platform designed for real estate business management. Data integration : Skyvia supports importing data to and from Follow Up Boss, exporting Follow Up Boss data to CSV files, replicating Follow Up Boss data to relational databases, and synchronizing Follow Up Boss data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Follow Up Boss. Query : Skyvia Query supports Follow Up Boss.
Establishing Connection To create a connection to Follow Up Boss, specify the API Key. Getting Credentials To locate the API Key, do the following: Go to Follow Up Boss. Click Admin at the top, select API , and click Create API Key . Name your API Key and click Create API Key . Copy the API Key and save it in a safe place. The API Key is displayed only once, during creation. Save it to be able to access it later. Creating Connection To connect to Follow Up Boss, enter the obtained API Key in the API Key box. Additional Connection Parameters Use Custom Fields Enable this parameter to make Follow Up Boss custom fields available in Skyvia. Connector Specifics Object Peculiarities Deals The IncludeDeleted and IncludeArchived fields return null values when querying. These fields are designed for filtering only. Events Due to Follow Up Boss API specifics, when you import data to the Events object, the Skyvia log displays the ID of a person, not of the event. \nDepending on the data specified in the import, it may be the ID of a new person or an existing one. Filtering Specifics Use the following filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. The CreatedDate and UpdatedDate fields support the following operators: > , >= , < , <= . Object Field Operator Deals Status, PipelineId, IncludeDeleted, IncludeArchived = EmailMarketingCampaigns Origin, OriginId = EmailMarketingEvents Type, PersonId = People Name, FirstName, LastName, Stage, Source, AssignedLenderId, AssignedLenderName, AssignedUserId, AssignedTo, Contacted = PeopleRelationships PersonId, Name, FirstName, LastName = Tasks PersonId, AssignedTo, AssignedUserId, Name, Type, IsCompleted = Custom Fields Custom fields are available for the People and Deals objects. Custom fields support the INSERT and UPDATE operations.
Follow Up Boss supports the following types of custom fields: Follow Up Boss Data Type DbType Text Boolean Date Date Dropdown String Number Int64 Nested Objects The Emails, Addresses, and Phones fields in the People and PeopleRelationships objects store complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the ActionPlans, ActionPlansPeople, Appointments, Calls, Deals, EmailMarketingEvents, Events, People, PeopleRelationships, SmartLists, Tasks, Templates, and Users objects. Skyvia supports Synchronization for the ActionPlansPeople, Appointments, Calls, Deals, People, PeopleRelationships, Tasks, and Templates objects. Incremental Replication and Synchronization track only new records for the Deals object. DML Operations Support Operation Object INSERT, UPDATE, DELETE Appointments, Deals, Groups, People, PeopleRelationships, Pipelines, Tasks, Teams, Templates, TextMessageTemplates INSERT, UPDATE ActionPlansPeople, AppointmentOutcomes, AppointmentTypes, Calls, EmailMarketingCampaigns, Stages INSERT EmailMarketingEvents, Events Stored Procedures Skyvia represents part of the supported Follow Up Boss features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . DeleteUser To delete a user, use the following command. call DeleteUser(:id, :assignTo) PARAMETER NAME DESCRIPTION Id The ID of the user to be deleted AssignTo The ID of another user to reassign the deleted user\u2019s leads to MergeEmailTemplate To merge an email template with multiple recipients, use the following command.
call MergeEmailTemplate(:templateId, :mergePersonId, :recipients) PARAMETER NAME DESCRIPTION TemplateId The ID of the email template MergePersonId Person ID to use for merge fields like %contact_name% Recipients List of recipients in the To: field of this email in JSON format For example, the recipients parameter value may look like this: {\n\"to\": [\n{\n\"name\": \"Bob Alvarez\",\n\"phone\": \"+14075558075\"\n},\n{\n\"name\": \"Alice Alvarez\",\n\"phone\": \"+14075558710\"\n},\n{\n\"name\": \"Carol Alvarez\",\n\"phone\": \"+14075551643\"\n},\n{\n\"phone\": \"+14075556097\"\n}\n]\n} MergeTextMessageTemplate To merge a text message template with multiple recipients, use the following command. call MergeTextMessageTemplate(:templateId, :personId, :recipients) PARAMETER NAME DESCRIPTION TemplateId The ID of the text message template PersonId Person ID to use for merge fields like %contact_name% Recipients List of recipients of this text message in JSON format For example, the recipients parameter value may look like this: {\n\"to\": [\n{\n\"name\": \"Bob Alvarez\",\n\"phone\": \"+14075558075\"\n},\n{\n\"name\": \"Alice Alvarez\",\n\"phone\": \"+14075558710\"\n},\n{\n\"name\": \"Carol Alvarez\",\n\"phone\": \"+14075551643\"\n},\n{\n\"phone\": \"+14075556097\"\n}\n]\n} DeleteStage To delete a stage, use the command call DeleteStage(:id, :assignStageId) PARAMETER NAME DESCRIPTION Id The ID of the stage to delete AssignStageId The stage ID to assign to action plans referencing the deleted stage DeleteAppointmentType Use the following command to delete the appointment type.
call DeleteAppointmentType(:id, :assignTypeId) PARAMETER NAME DESCRIPTION Id The ID of the appointment type AssignTypeId The ID of the type to reassign the existing appointments to DeleteAppointmentOutcome Use the following command to delete an appointment outcome. call DeleteAppointmentOutcome(:id, :assignOutcomeId) PARAMETER NAME DESCRIPTION Id The ID of the appointment outcome to delete AssignOutcomeId The ID of the outcome to reassign existing appointments to Supported Actions Skyvia supports all the common actions for Follow Up Boss." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/formcrafts_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources FormCrafts [FormCrafts](https://formcrafts.com/) is a robust and simple online form builder. Data integration : Skyvia supports importing data from FormCrafts, exporting FormCrafts data to CSV files, and replicating FormCrafts data to relational databases. Backup : Skyvia Backup does not support FormCrafts. Query : Skyvia Query supports FormCrafts. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to FormCrafts, specify the API Key. Getting Credentials API Key To obtain the FormCrafts API Key, perform the following steps: Log in to FormCrafts. Click Account on the left and select API Keys in the menu that appears. Create a new API Key or copy the existing one. Creating Connection To connect to FormCrafts, paste the obtained API Key to the corresponding box in the Connection Editor. Connector Specifics Skyvia cannot write data to FormCrafts; its data is read-only. Object Peculiarities Nested Objects The Content field in the Responses object stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import. Select the Nested Objects checkbox in import to enable this feature.
Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Forms and Responses objects. Skyvia doesn\u2019t support Synchronization for FormCrafts. Supported Actions Skyvia supports the following actions for the FormCrafts connector: Execute Command in Source and Lookup Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/formstack-documents_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Formstack Documents [Formstack](https://www.formstack.com/) is a workplace productivity platform providing no-code online forms, documents, and signatures. This connector works with the Documents feature of Formstack. Use the Formstack connector for working with the Forms feature. Data integration : Skyvia supports importing data to and from Formstack Documents, exporting Formstack Documents data to CSV files, and replicating Formstack Documents data to relational databases. Backup : Skyvia Backup does not support Formstack Documents. Query : Skyvia Query supports Formstack Documents. Establishing Connection To create a connection to Formstack Documents, you need to specify the API Key and API Secret . Getting Credentials To get the API Key and API Secret, perform the following actions: Log in to Formstack Documents. Click on the user name in the top right and select API Access . Create the API Key. Copy the API Key and API Secret. Creating Connection Enter the obtained API Key and API Secret into the corresponding boxes in the Connection Editor.
Connector Specifics Object Peculiarities To successfully import data to the Document object, you must map the required Name and Type fields and additional fields.\nThe additional fields needed for mapping are determined by the Type value. \nFor an HTML document, you must map the Html field and pass the document body to it. You can also map the SizeWidth and SizeHeight fields. For other document types, you must map either the FileContents or FileUrl column. DML Operations Support Operation Object INSERT, UPDATE, DELETE Documents, DataRoutes INSERT DataRouteDeliveries, DocumentDeliveries Incremental Updates Skyvia does not support Synchronization and Replication with Incremental Updates for Formstack Documents. Stored Procedures Skyvia represents part of the supported Formstack Documents features as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . CopyDocument You can use the following command to copy a specific document: call CopyDocument(:document_id, :name) Supported Actions Skyvia supports all the common actions for Formstack Documents." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/formstack_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Formstack [Formstack](https://formstack.com/) is a workplace productivity platform providing no-code online forms, documents, and signatures. This connector works with the Forms feature of Formstack. Use the Formstack Documents connector for working with the Documents feature. Data integration : Skyvia supports importing data to and from Formstack, exporting Formstack data to CSV files, replicating Formstack data to relational databases, and synchronizing Formstack data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Formstack. Query : Skyvia Query supports Formstack.
Establishing Connection Creating Connection To connect to Formstack, perform the following steps: In the connection editor, click Sign In with Formstack . Enter your Formstack credentials. Click Authorise to approve the access request. Additional Connection Parameters Suppress Extended Requests For the Forms and PartialSubmissions objects, the Formstack API returns only a part of the fields when querying multiple records. To query values of additional fields, Skyvia performs additional extended requests. \nSuch API requests may be performed for each record of these objects. However, this can decrease performance and significantly increase the number of API calls used. The list of additional fields is the following: OBJECT FIELD Forms DB, Deleted, Folder, Language, Viewkey, SubmissionsToday, Encrypted, ThumbnailUrl, SubmitButtonTitle, Inactive, NumColumns, ProgressMeter, ShouldDisplayOneQuestionAtATime, CanAccess1qFeature, IsWorkflowForm, IsWorkflowPublished, Javascript, Html, Fields, HasApprovers, EditUrl, Permissions, CanEdit PartialSubmissions Data To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Fields To successfully import data to the Fields object, you must map the required FormId and Type fields and additional fields.\nThe additional fields needed for mapping are determined by the Type value. \nYou can refer to [Formstack documentation](https://developers.formstack.com/reference/field-types) to find out the list of fields needed for mapping for each field type. SCIM API Skyvia does not support Formstack SCIM API objects. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following objects: Forms , Portals , Smartlists , and SmartlistOptions . Skyvia supports Synchronization for the Forms , Smartlists , and SmartlistOptions objects.
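To see why the Suppress Extended Requests option described above matters, here is a rough, hypothetical cost model of the API calls involved; the page size and record counts are assumptions for illustration, not documented values:

```python
def estimated_api_calls(record_count, page_size, extended_requests):
    # Paged list requests, plus one extended request per record when
    # the additional fields (DB, Deleted, Folder, ...) must be fetched.
    pages = -(-record_count // page_size)  # ceiling division
    return pages + (record_count if extended_requests else 0)

# 500 forms with an assumed page size of 100: 5 calls with extended
# requests suppressed, 505 calls with extended requests enabled.
```

The per-record term dominates quickly, which is why suppressing extended requests can cut API call usage by orders of magnitude when the additional fields are not needed.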
DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | ConfirmationEmails, Fields, Folders, Forms, NotificationEmails, SmartlistOptions, Smartlists, Subfolders, Submissions, Webhooks |
| UPDATE, DELETE | Portals |
| DELETE | PartialSubmissions |

Stored Procedures

Skyvia represents part of the supported Formstack features as stored procedures. You can call a stored procedure, for example, as the command text in the ExecuteCommand action of a Target component in a Data Flow or in Query.

CopyPortal

To copy an existing form portal, use the command:

call CopyPortal(:portal_id)

InviteUserForPortal

To invite participants to the specified portal, use the command:

call InviteUserForPortal(:portal_id, :emails)

Portal_id: The identifier of the portal to add users to.
Emails: The list of participants' emails to invite, in the array format.

ModifyPortalParticipantName

To modify the name of a portal participant, use the command:

call ModifyPortalParticipantName(:portal_id, :name)

Portal_id: The identifier of the portal whose participant name you modify.
Name: The new name of the portal participant.

AddFormToSpecifiedPortal

To add an existing form to the specified portal, use the command:

call AddFormToSpecifiedPortal(:portal_id, :name, :formId, :deadline, :description, :notifyUsers)

The portal_id, name, and formId parameters are required.

Portal_id: The identifier of the portal to add the form to.
Name: The form name.
FormId: The identifier of the form.
Deadline: Valid values are null, PortalFormDeadlineDaily, PortalFormDeadlineWeekly, PortalFormDeadlineMonthly.
More details are available in the [Formstack API documentation](https://developers.formstack.com/reference/portal-add-form).
NotifyUsers: Valid values are true and false.

ModifySpecifiedPortalForm

To modify a form in the portal, use the command:

call ModifySpecifiedPortalForm(:portal_id, :portalFormId, :name, :formId, :deadline, :description, :notifyUsers)

Portal_id: The identifier of the portal containing the form.
PortalFormId: String.
Name: The form name.
FormId: The identifier of the form.
Deadline: Valid values are null, PortalFormDeadlineDaily, PortalFormDeadlineWeekly, PortalFormDeadlineMonthly.
NotifyUsers: Valid values are true and false.

DeletePortalFromForm

To remove a form from a portal, use the command:

call DeletePortalFromForm(:portal_id, :portalFormId)

Supported Actions

Skyvia supports all the common actions for Formstack.

FreshBooks

[FreshBooks](https://www.freshbooks.com) is the #1 accounting software in the cloud, designed exclusively for service-based small business owners and independent professionals. The company has helped over 10 million people worldwide process billions of dollars with its ridiculously easy-to-use invoicing, time tracking, and expense management features. Based in Toronto, Canada, FreshBooks serves paying customers in 160 countries.

Data integration: Skyvia supports importing data to and from FreshBooks, exporting FreshBooks data to CSV files, replicating FreshBooks data to relational databases, and synchronizing FreshBooks data with other cloud apps and relational databases.

Backup: Skyvia Backup supports FreshBooks backup.

Query: Skyvia Query supports FreshBooks.

Establishing Connection

To create a connection with FreshBooks, you need to select the Alpha API version, enter your FreshBooks company name, and sign in.
FreshBooks Classic API is deprecated. However, Skyvia supports FreshBooks connections with the Classic API version selected for compatibility.

Creating Connection

To connect to FreshBooks via the Alpha API, perform the following steps:

1. Select Alpha in the API Version list on the Connection Editor page.
2. Specify your Company Name.
3. Click Sign In with FreshBooks.
4. In the opened window, enter your FreshBooks credentials and click Log in.

Connector Specifics

Object Peculiarities

Bills

The Bills object stores complex structured fields in JSON format: Lines and BillPayments. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can also replicate the nested objects using the Unwind Nested Objects option. Select JSON Columns to replicate nested object fields as columns with JSON data into the target table, or select Separate Tables to replicate nested object fields into additional tables in the database.

When you modify the Lines field, Skyvia rewrites its value. For example, if a Bill has two lines and you want to add a third one, you have to list all the existing lines together with the new one in the mapping.

Project

The Services field stores a collection of JSON Services objects. When loading data to the Services field, you can either specify the Id of an existing service (a record in the Services object) or specify a Name to create a new service with this name, for example:

[{"Id":24327481},{"Name":"Support"}]

In this example, two services are added to the project: one existing service with the corresponding Id, and one new 'Support' service that is created.

Services

If you work with the Rate field of the Services object, an additional API call is used for each record, both when selecting and when modifying this field. If you don't need the Rate values, don't query this field.
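Because Skyvia rewrites the whole Lines value on update, a mapping that adds a line must include the existing lines as well. A minimal sketch of merging them; the field names inside each line dictionary are placeholders, not the actual FreshBooks line schema:

```python
import json

# Append a new line to an existing Lines JSON value so that the rewrite
# keeps the old lines. The existing value comes from a prior read of the Bill.
def lines_with_new_line(existing_lines_json, new_line):
    lines = json.loads(existing_lines_json) if existing_lines_json else []
    lines.append(new_line)
    return json.dumps(lines)

updated = lines_with_new_line('[{"name": "Hosting"}, {"name": "Design"}]',
                              {"name": "Support"})
# updated now holds a three-element JSON array for the Lines mapping.
```

The same read-merge-write pattern applies to any field that Skyvia rewrites as a whole rather than patching.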
Filtering Specifics

FreshBooks supports the following native filters:

| Object | Operator | Field |
| --- | --- | --- |
| Project | = | Id, Active, Complete |
| Project | >, >= | Updated |
| Service | = | Id |
| BillVendor | = | Id, State |
| Bill | = | Id, VendorId, CurrencyCode, State |
| Bill | =, IN | Status |
| BillLine | = | Id, LineId, BillId |
| BillPayment | = | Id, State |
| OtherIncome | = | Id, State, Category |

Incremental Replication and Synchronization

Skyvia does not support Synchronization and Replication with Incremental Updates for the ExpenseCategory, GateWay, System, and TimeEntry objects.

Supported Actions

Skyvia supports all the common actions for FreshBooks.

Freshdesk

[Freshdesk](https://www.freshworks.com/freshdesk/) is a cloud customer support ticketing system.

Data integration: Skyvia supports importing data to and from Freshdesk, exporting Freshdesk data to CSV files, replicating Freshdesk data to relational databases, and synchronizing Freshdesk data with other cloud apps and relational databases.

Backup: Skyvia Backup supports Freshdesk backup.

Query: Skyvia Query supports Freshdesk.

Establishing Connection

To create a connection to Freshdesk, you need to specify the API version to use, the URL to connect to, and the API key.

Getting Credentials

To find your API Key, perform the following actions:

1. Sign in to your Freshdesk Support Portal.
2. Click your profile picture in the top right corner of your portal page.
3. Go to the Profile settings page.
4. Click View API Key on the right.
5. Copy the API Key.

Creating Connection

To connect to Freshdesk, select the API Version to use (v1 or v2). API v2 provides access to more Freshdesk objects than API v1, and the structure of common objects may differ between API versions. Specify the URL (the address of your Freshdesk subdomain).
Enter the obtained API Key.

Additional Connection Parameters

Use Custom Objects

Select this checkbox to make Freshdesk custom objects available in Skyvia.

Connector Specifics

Object Peculiarities

Ticket

When querying Freshdesk tickets via API v2, the number of required Freshdesk API calls doubles if you query the ticket Description field. It's better not to query it if you don't need it.

Attachments

Information about ticket attachments is stored in the TicketAttachment object. Each record in this table corresponds to a single attachment existing in the Ticket object and contains the attachment body binary data. The TicketAttachment object supports the INSERT and DELETE DML operations. To successfully insert data to the TicketAttachment table, you must map the TicketId, FileName, and Body fields. You can pass the Body values in base64 format.

Custom Fields

Skyvia does not support custom Freshdesk fields that have double quotation marks in their names.

Custom Objects

Skyvia supports working with Freshdesk custom objects. It supports native sorting and filtering of the custom object records via the Freshdesk API. It supports the following operators for filtering via the Freshdesk API: =, >, <, >=, <=. Note that the corresponding fields of the custom object must be filterable. You can also natively sort records by one of the fields if the field is defined as sortable. If you try filtering by non-filterable fields or sorting by more than one field, Skyvia requests all the records from the queried custom object and sorts/filters them in the cache. The Freshdesk API also supports the COUNT(*) operation natively for custom objects. When you update custom object records, Skyvia uses two Freshdesk API calls per record.

Supported Actions

Skyvia supports all the common actions for Freshdesk.
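Since the TicketAttachment Body field described above accepts base64, preparing an import row can be sketched as follows. The dictionary shape mirrors the three required field names; the sample values are illustrative:

```python
import base64

# Build a TicketAttachment row with the three required fields; the Body
# holds the base64-encoded binary content of the attachment.
def ticket_attachment_row(ticket_id, file_name, raw_bytes):
    return {
        "TicketId": ticket_id,
        "FileName": file_name,
        "Body": base64.b64encode(raw_bytes).decode("ascii"),
    }

row = ticket_attachment_row(42, "log.txt", b"hello")
# row["Body"] == "aGVsbG8="
```

In practice the raw bytes would come from reading the attachment file in binary mode before the import runs.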
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/freshsales_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Freshsales Classic [Freshsales Classic](https://www.freshworks.com/crm/sales) is a sales CRM from Freshworks with AI-based lead scoring, phone, email, activity capture, and more. Data integration : Skyvia supports importing data to and from Freshsales Classic, exporting Freshsales Classic data to CSV files, replicating Freshsales Classic data to relational databases, and synchronizing Freshsales Classic data with other cloud apps and relational databases. Backup : Skyvia Backup supports Freshsales Classic backup. Query : Skyvia Query supports Freshsales Classic. Establishing Connection To create a connection to Freshsales Classic, specify the following connection parameters: domain, API Key, and Ids of Leads, Contacts, Accounts, and Deals Views. Getting Credentials API Key To locate your API Key, perform the following steps: Go to Freshsales Classic and sign in using your account. Click the user icon in the top right corner and select Settings . Go to the API SETTINGS tab and copy the API Key. View Id To find the view Ids for the Leads, Contacts, Accounts, and Deals in Freshsales Classic, perform the following actions: Click the icon of the needed object in the left menu. Select the needed view. The view Id is located in the page URL string. This method is relevant for all needed views. For example, you need to obtain the Accounts view Id. To do this, you go to CONTACTS AND ACCOUNTS -> Accounts , select the All accounts view tab, and copy the numeric part of the URL string. Creating Connection To connect to Freshsales Classic, paste the domain, API Key, and Ids of Leads, Contacts, Accounts, and Deals views to the corresponding boxes in the Connection Editor. 
Connector Specifics

Tasks

All the tasks are stored in separate objects divided according to the status and due date: OpenTasks, DueTodayTasks, DueTomorrowTasks, OverdueTasks, and CompletedTasks. You can import data to any of these objects. When you import tasks to Freshsales Classic, it doesn't matter which object you set as a target. The record will appear in the object corresponding to the task status and due date.

Appointments

All the appointments are stored in separate objects divided according to the date: PastAppointments and UpcomingAppointments. You can import data to both of these objects. When you import appointments to Freshsales Classic, it doesn't matter which object you set as a target. The record will appear in the object corresponding to the appointment date.

Incremental Replication and Synchronization

Skyvia supports Synchronization for the following objects: Accounts, Contacts, Deals, Leads, OpenTasks, DueTodayTasks, DueTomorrowTasks, OverdueTasks, CompletedTasks, PastAppointments, UpcomingAppointments, SalesActivities.

Skyvia supports Incremental Replication for Freshsales Classic objects containing the CreatedDate or UpdatedDate field.

Supported Actions

Skyvia supports all the common actions for Freshsales Classic.

Freshsales Suite

[Freshsales Suite](https://www.freshworks.com/crm/suite/) (formerly Freshworks CRM) is a cloud-based CRM (customer relationship management) solution that helps businesses across different industry verticals to manage their interactions with existing and potential customers. The CRM includes sales force automation, marketing automation, chat, and telephony, all in one solution.
Data integration: Skyvia supports importing data to and from Freshsales Suite, exporting Freshsales Suite data to CSV files, replicating Freshsales Suite data to relational databases, and synchronizing Freshsales Suite data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Freshsales Suite.

Query: Skyvia Query supports Freshsales Suite.

Establishing Connection

To create a connection to Freshsales Suite, you need to specify the URL, API Key, and the Contacts, Accounts, and Deals view Ids.

Getting Credentials

URL

Log in to your Freshsales Suite account and copy the URL without the slash character at the end.

API Key

To get the API Key in your Freshsales Suite account, perform the following steps:

1. Log in to your Freshsales Suite account.
2. Click the profile icon in the top right corner of the page.
3. In the drop-down list, click Settings.
4. Switch from Email Settings to API Settings.

View Ids for Accounts, Contacts, and Deals

You can find the View Id value for each of the three objects in your Freshsales Suite account. For this, perform the following steps:

1. Click the menu on the left in your Freshsales Suite account.
2. Select the object to switch to. For example, select Accounts. By default, records are filtered by the All Accounts view.
3. Copy the Accounts View Id from the URL.
4. Perform the same steps for the Contacts and Deals View Ids.

To display all Accounts in Skyvia, select the All Accounts view in your Freshsales Suite account and copy the View Id from the URL. To display only your Accounts, click the Open all views icon, select the My Accounts view, and copy its Id from the URL. To get all records from the Accounts, Contacts, and Deals tables, copy the View Ids from the URLs that correspond to All Accounts, All Contacts, and All Deals.

Creating Connection

To connect to Freshsales Suite, enter the obtained connection parameter values into the corresponding boxes in the Connection Editor.
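The URL step above (copy the account URL without the trailing slash) is trivial but easy to get wrong when scripting the setup. A one-line sketch; the sample domain is illustrative:

```python
# Normalize a Freshsales Suite account URL by dropping any trailing slash,
# as the credentials step requires.
def normalize_suite_url(url):
    return url.rstrip("/")

# normalize_suite_url("https://mycompany.myfreshworks.com/")
# -> "https://mycompany.myfreshworks.com"
```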
Additional Connection Parameters

Use Custom Fields

Select this checkbox to make Freshsales Suite custom fields available in Skyvia.

Connector Specifics

Object Peculiarities

Contacts

To successfully import data to the Contacts object, you must map at least one of the following fields: Emails, MobileNumber, or ExternalId.

Tasks

All the tasks are stored in separate objects divided according to the status and due date: OpenTasks, DueTodayTasks, DueTomorrowTasks, OverdueTasks, and CompletedTasks. When you insert tasks to Freshsales Suite, you can select any of the mentioned -Tasks objects as the target. However, regardless of the selected object, new records appear in the OpenTasks object and in one of the objects corresponding to the inserted record's due date.

Appointments

All the appointments are stored in separate objects divided according to the status: PastAppointments and UpcomingAppointments. When you insert appointments to Freshsales Suite, you can select any of the mentioned objects as the target. However, regardless of the selected object, new records appear in the object that corresponds to the appointment status.

Custom Fields

The following Freshsales Suite objects support custom fields: Contacts, Accounts, Deals. Custom fields support the INSERT and UPDATE operations.
Freshsales Suite supports the following custom field types:

| Freshsales Suite Type | DBType |
| --- | --- |
| Text field | String |
| Text area | String |
| Number | Double |
| Dropdown | String |
| Multiselect | String |
| Radio button | String |
| Checkbox | Boolean |
| Date picker | Date |
| Lookup | String |
| Formula | Double, Boolean, or String |
| Auto number | String |

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | AccountNotes, Accounts, CompletedTasks, ContactNotes, Contacts, DealNotes, Deals, DueTodayTasks, DueTomorrowTasks, OpenTasks, OverdueTasks, PastAppointments, SalesActivities, UpcomingAppointments |

Incremental Replication and Synchronization

The following Freshsales Suite objects support Replication with Incremental Updates: AccountFiles, AccountNotes, Accounts, CompletedTasks, ContactActivities, ContactFiles, ContactNotes, Contacts, DealFiles, DealNotes, Deals, DueTodayTasks, DueTomorrowTasks, OpenTasks, OverdueTasks, PastAppointments, SalesActivities, UpcomingAppointments.

The following objects support Synchronization: Accounts, Contacts, Deals, AccountNotes, ContactNotes, DealNotes, SalesActivities, and objects having Tasks and Appointments in their names.

Supported Actions and Actions Specifics

Skyvia supports all the common actions for Freshsales Suite.

Freshservice

[Freshservice](https://www.freshworks.com/freshservice/) is a cloud-based IT Service Management solution. Freshservice helps IT organizations streamline their service delivery processes with a strong focus on user experience and employee happiness.

Data integration: Skyvia supports importing data to and from Freshservice, exporting Freshservice data to CSV files, replicating Freshservice data to relational databases, and synchronizing Freshservice data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Freshservice backup.
Query: Skyvia Query supports Freshservice.

Establishing Connection

To create a connection to Freshservice, you need to specify the Domain and API Key.

Getting Credentials

To obtain the API Key, do the following:

1. Go to your Freshservice account.
2. Click the user icon and select Profile Settings.
3. Copy the API Key displayed on the right.

Creating Connection

Enter your Freshservice domain and paste the obtained API Key to the corresponding box in the Connection Editor.

Additional Connection Parameters

Suppress Extended Requests

For some objects, the Freshservice API does not return all the fields when querying. To query these fields, Skyvia performs additional requests for each record of such objects. This can decrease performance and significantly increase the number of API calls used. The list of such fields is the following:

| Object | Field |
| --- | --- |
| Tickets | Email, Name, Phone, Attachments, Urgency, Impact, Assets, Problem |
| Changes | All custom fields |
| PurchaseOrders | ShippingAddress, BillingSameAsShipping, BillingAddress, CurrencyCode, DiscountPercentage, TaxPercentage, ShippingCost, Currency_ConversionRate, Currency_Id, Currency_Name, Currency_Symbol, PurchaseItems |
| Problems | All custom fields |
| Releases | All custom fields |
| SolutionArticleAttachments | Content |
| SolutionArticleImages | Content |

To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.

Metadata Cache

You can specify the period after which the Metadata Cache expires.

Connector Specifics

Object Peculiarities

Foreign Keys

Some objects have fields with polymorphic relations. For example, the UserId field can refer to both the requester and the agent at the same time. Therefore, Skyvia doesn't support foreign keys for fields that cannot uniquely define the object they refer to.

TicketConversations

Skyvia supports importing ticket notes, but not ticket replies, into the TicketConversations object.
Departments

Since the type of custom fields is not specified in the metadata, Skyvia treats such fields as read-only string fields.

Agents

When performing import to the Agents object, you must map the required Roles field to a value in the array format. See the example value for this field below:

[{"role_id": 51000087054, "assignment_scope": "entire_helpdesk"}, {"role_id": 51000087055, "assignment_scope": "specified_groups", "groups": [51000110425]}]

Assets

Trashed assets are not displayed in query results by default. To get the trashed assets when querying, use the filter Trashed = true. When importing data to the Assets object, the AssetTypeId field value determines the required fields for mapping in the nested TypeFields object. Here is an example value for this field:

{"product_51000020554": 51000001690, "asset_state_51000020554": "In Use"}

SoftwareUsers

When you load data into the SoftwareUsers object, the IDs of imported records are not displayed in the Log or in the Returning feature.

Tickets, Problems, Changes

To import data into these objects, map the RequesterId (the Id of the person who creates the ticket) field. There are two ways to do it:

1. Specify the existing requester Id.
2. Specify the requester Email, if such a contact doesn't exist yet. In this case, this email will be automatically added to the Requesters object, and the corresponding Id for the new record will be inserted into the RequesterId field of the Tickets object.

ServiceItems

Freshservice returns values of the CustomFields, ShortDescription, and Description fields of the ServiceItems object only when the equals filter on the DisplayID field is applied. When no such filter is applied (for example, when all ServiceItems records are read), empty values are returned for these fields.

Custom Objects

Skyvia supports custom objects for all existing workspaces that belong to the Freshservice account.
If the custom object belongs to the primary workspace, its name equals the name in the UI. If the custom object belongs to another workspace, its name is a concatenation of the workspace name and the object name. For example, an object named CustomObject1 in the UI belongs to Workspace2, which is not the primary workspace. Skyvia will display this object with the name Workspace2_CustomObject1.

Skyvia supports the INSERT, UPDATE, and DELETE operations for custom objects.

Custom Fields

Standard Freshservice objects (such as Tickets, Problems, Changes, Agents, Departments, Releases, Requesters, Vendors) may have custom fields. Custom Freshservice objects may have custom fields as well. You can add custom fields of the following types: Text, Phone_number, Dropdown, Checkbox, Paragraph, Url, Decimal, Date, Number, Multi_select_dropdown, Lookup, Date_Time, Enum, Planning_field, Nested_field, Content.

If the field type is not specified in the metadata or there is no way to determine it, Skyvia treats such fields as read-only string fields. The Planning field type custom fields are read-only.

Custom Fields Support for Multiple Workspaces

Skyvia creates a separate object containing a workspace name prefix in its name for each IT workspace. These objects include the custom fields of the main object. For example, the objects _Tickets, _Changes, _Releases, _Problems include the custom fields of the main objects Tickets, Changes, Releases, Problems. Business workspaces support the Tickets object custom fields only. Skyvia creates a separate _Tickets object for such workspaces.

Global custom fields are supported for the Tickets, Changes, Releases, and Problems objects. They are available for all workspaces. The Tickets object contains tickets from all workspaces, including global custom fields. The Assets, AgentGroups, Changes, Releases, and Problems objects contain records from all IT workspaces, including global custom fields.
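The workspace naming rule for custom objects described above can be sketched as a small helper (the helper itself is illustrative, not part of Skyvia):

```python
# Compute the name Skyvia displays for a Freshservice custom object:
# primary-workspace objects keep their UI name, objects from any other
# workspace get a workspace-name prefix.
def skyvia_object_name(workspace_name, ui_object_name, is_primary_workspace):
    if is_primary_workspace:
        return ui_object_name
    return f"{workspace_name}_{ui_object_name}"

# skyvia_object_name("Workspace2", "CustomObject1", False)
# -> "Workspace2_CustomObject1"
```

Knowing this rule helps when writing queries or mappings against custom objects that live outside the primary workspace.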
If you had a single (primary) workspace and added a new IT workspace, Skyvia will add the new _Tickets, _Changes, _Releases, _Problems objects including custom fields. The main objects Tickets, Problems, Changes, Releases then don't include custom fields. If you had more than one workspace and then deleted the second workspace, the objects _Tickets, _Changes, _Releases, _Problems are not displayed in the connector, and the custom fields are included in the main objects Tickets, Problems, Changes, Releases. When you delete a workspace, all its objects are also deleted.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | AgentGroups, Agents, Announcements, Assets, AssetTypes, ChangeNotes, Changes, ChangeTasks, ChangeTimeEntries, Departments, Locations, ProblemNotes, Problems, ProblemTasks, ProblemTimeEntries, Products, Projects, ProjectTasks, PurchaseOrders, ReleaseNotes, Releases, ReleaseTasks, ReleaseTimeEntries, RequesterGroups, Requesters, Software, SolutionArticles, SolutionCategories, SolutionFolders, TicketConversations, Tickets, TicketTasks, TicketTimeEntries, Vendors |
| INSERT, DELETE | RequesterGroupMembers |
| INSERT | OnboardingRequests, SoftwareInstallation, SoftwareUsers |

Incremental Replication and Synchronization

Skyvia supports Synchronization and Replication with Incremental Updates for the following Freshservice objects: AgentGroups, AgentRoles, Agents, Announcements, AssetAssociatedRequests, AssetComponents, Assets, AssetTypes, BusinessHours, CannedResponseFolders, CannedResponses, ChangeFields, ChangeNotes, Changes, ChangeTasks, ChangeTimeEntries, DepartmentFields, Departments, Licenses, Locations, OnboardingFormFields, OnboardingRequests, OnboardingTickets, ProblemNotes, Problems, ProblemTasks, ProblemTimeEntries, Products, PurchaseOrders, Projects, ProjectTasks, ReleaseFields, ReleaseNotes, Releases, ReleaseTasks, ReleaseTimeEntries, Requesters, ServiceCategories, ServiceItems, ServiceRequest_RequestedItems, SLAPolicies, Software, SoftwareInstallations,
SoftwareUsers, SolutionArticles, SolutionCategories, SolutionFolders, TicketActivities, TicketConversations, TicketCSATResponses, TicketFields, Tickets, TicketTasks, TicketTimeEntries, Vendors.

Stored Procedures

Skyvia represents part of the supported Freshservice features as stored procedures. You can [call a stored procedure](https://docs.skyvia.com/supported-sql-for-cloud-sources/call-statements-and-stored-procedures.html), for example, as the command text in the ExecuteCommand action of a Target component in a Data Flow or in Query.

AddRequesterToRequesterGroup

To add a member to the requester group, use the command:

call AddRequesterToRequesterGroup(:group_id, :requester_id)

RemoveDevicesFromSoftware

To delete records from the SoftwareInstallations object, use the command:

call RemoveDevicesFromSoftware(:app_id, :device_ids)

App_id: The Id of the application to be removed.
Device_ids: The InstallationMachineId field values for the corresponding AppId. If you need to delete several records with different InstallationMachineId values for one AppId, specify all the device Ids separated by commas.

CreateChildTicket

To create a child ticket for an existing ticket, use the following command:

call CreateChildTicket(:parent_ticket_id, :status, :priority, :source, :subject, :description, :requester_id, :email, :phone, :workspace_id, :name, :type, :responder_id, :attachments, :cc_emails, :custom_fields, :due_by, :email_config_id, :fr_due_by, :group_id, :tags, :department_id, :category, :sub_category, :item_category, :associate_ci, :urgency, :impact)

The Parent_ticket_id, Status, Priority, Source, Subject, and Description parameters are required. Three parameters identify the contact of the ticket creator: Requester_id, Email, or Phone. You must provide a value for at least one of them. If you specified one of Requester_id, Email, or Phone, you can omit the other two, specifying '' or null instead. All other parameters are optional.
See the example of the command with a minimal set of parameters:

call CreateChildTicket(4, 'Open', 'Medium', 'Portal', 'ChildTicket1_subject', 'ChildTicket1_description', null, 'tom@outerspace.com')

Parent_ticket_id: Required. The existing parent ticket identifier.
Status: Required. The status of the ticket. Valid values are Open, Pending, Resolved, Closed.
Priority: Required. The priority of the ticket. Valid values are Low, Medium, High, Urgent.
Source: Required. The channel through which the ticket was created. Valid values are Email, Portal, Phone, Chat, Feedback widget, Yammer, AWS Cloudwatch, Pagerduty, Walkup, Slack.
Subject: Required. The child ticket subject.
Description: Required. The child ticket description.
Requester_id: The user ID of the requester.
Email: The email address of the requester.
Phone: The phone number of the requester.
Workspace_id: The ID of the workspace to which this ticket belongs.
Name: The name of the requester.
Type: The type of the ticket.
Responder_id: The ID of the agent to whom the ticket has been assigned.
Attachments: Ticket attachments. The total size of these attachments cannot exceed 40 MB.
Cc_emails: Email addresses added in the 'cc' field of the incoming ticket email.
Custom_fields: Key-value pairs containing the names and values of custom fields.
Due_by: The timestamp that denotes when the ticket is due to be resolved.
Email_config_id: The ID of the email config which is used for this ticket.
Fr_due_by: The timestamp that denotes when the first response is due.
Group_id: The ID of the group to which the ticket has been assigned.
Department_id: The department ID of the requester.
Category: The ticket category.
Sub_category: The ticket sub-category.
Item_category: The ticket item category.
Associate_ci: Search for an asset and associate it with the ticket.
Urgency: The ticket urgency.
Impact: The ticket impact.

Supported Actions

Skyvia supports all the common actions for Freshservice.
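The CreateChildTicket requester rule above (at least one of Requester_id, Email, or Phone, with the rest passed as null) can be captured in a small guard before composing the command. A sketch; the helper name is illustrative and not part of Skyvia:

```python
# Validate the requester triple for CreateChildTicket and return the three
# positional values, with None standing in for null in the command text.
def requester_parameters(requester_id=None, email=None, phone=None):
    if not any((requester_id, email, phone)):
        raise ValueError("Provide at least one of Requester_id, Email, or Phone")
    return [requester_id, email, phone]

# requester_parameters(email="tom@outerspace.com")
# -> [None, "tom@outerspace.com", None]
```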
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/front_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Front [Front](https://front.com/) is a cloud-based customer operations platform that enables support, sales, and account management teams to streamline communication and deliver service at scale. Data integration : Skyvia supports importing data to and from Front, exporting Front data to CSV files, replicating Front data to relational databases, and synchronizing Front data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Front. Query : Skyvia Query supports Front. Establishing Connection To create a connection to Front, sign in with Front using your credentials. Creating Connection To connect to Front, perform the following steps: click Sign In with Front in the Connection Editor and enter your Front credentials. Click Authorize to give Skyvia your permission to access your Front data. Additional Connection Parameters Suppress Extended Requests For the Teams object, Front API returns only part of the fields when querying multiple records. To query values of the Members field, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Use Custom Fields Select this checkbox to make Front custom fields available in Skyvia. Connector Specifics Object Peculiarities Conversations When inserting or updating data in this object, map either the TeammateIds or InboxId field in addition to the required fields. Front API supports importing conversations with the Type = discussion only. The Status field valid values for the UPDATE operation are Archived, Open, Deleted, or Spam . 
Drafts

This object requires mapping the ChannelId field for the UPDATE operation.

MessageTemplates

When you query this object, the values in the Links_Related_ParentFolder field are returned as URLs in the results, for example, https://entecheco.api.frontapp.com/message_template_folders/rsf_fqfq. When you insert data or update this field, you must specify only the message folder ID part, for instance, rsf_fqfq.

Channels

The Type field valid values for the INSERT operation are custom, smtp, and twilio.

Filtering Specifics

Skyvia supports the following native filters for Front objects:

| Object | Operator | Field |
| --- | --- | --- |
| Contacts | <, <=, >, >= | UpdatedDate |
| Conversations | = | Status |
| Events | = | Type |
| Events | <, <=, >, >= | EmittedAt |

Use only these filters to improve performance and save API calls. You can use filters with other fields or operators, but this may increase API call usage.

Sorting Specifics

The Front API supports sorting for the following objects and fields:

| Object | Fields |
| --- | --- |
| Contacts | UpdatedDate |
| Tags | Id |
| Events | EmittedAt |
| MessageTemplateFolders | CreatedDate or UpdatedDate (only one field at a time) |
| MessageTemplates | CreatedDate or UpdatedDate (only one field at a time) |

Custom Fields

The following objects support custom fields: Contacts, Teammates, Inboxes, Conversations, ConversationFollowers, ConversationInboxes, InboxTeammates. Custom fields support the INSERT and UPDATE operations. Front supports custom fields of the following types:

| Front Data Type | DbType |
| --- | --- |
| String | String |
| Inbox | String |
| Teammate | String |
| Enum | String |
| DateTime | DateTime |
| Number | Double |
| Boolean | Boolean |

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the Tags, ChildrenTags, MessageTemplateFolders, MessageTemplates, and Contacts objects. Skyvia tracks only new records for ContactNotes, Conversations, ConversationMessages, and Drafts. Skyvia supports Synchronization for the Tags, MessageTemplateFolders, and MessageTemplates objects.
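When writing to the MessageTemplates Links_Related_ParentFolder field described above, only the folder ID part of the URL is expected. A sketch of stripping it out (the helper is illustrative):

```python
# Take the last path segment of a message template folder URL,
# e.g. ".../message_template_folders/rsf_fqfq" -> "rsf_fqfq".
def folder_id_from_url(url):
    return url.rstrip("/").rsplit("/", 1)[-1]

# folder_id_from_url("https://entecheco.api.frontapp.com/message_template_folders/rsf_fqfq")
# -> "rsf_fqfq"
```

This is handy in a mapping expression when the folder URL was obtained from a prior query of the same object.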
DML Operations Support Operation Object INSERT, UPDATE, DELETE Contacts, Drafts, MessageTemplateFolders, MessageTemplates, Tags INSERT ChildrenTags, ContactNotes, ConversationComments, ConversationMessages, Inboxes INSERT, UPDATE Channels, Conversations INSERT, DELETE ContactGroups UPDATE Teammates Stored Procedures Skyvia represents part of the supported Front features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query. ValidateChannel To validate a specific channel, use the following command. call ValidateChannel(:channelId) AddConversationFollowers and DeleteConversationFollowers The following command adds teammates to the list of followers of a conversation. call AddConversationFollowers(:conversation_id,:teammate_ids) The DeleteConversationFollowers command with the same parameters removes teammates from the list of followers of a conversation. Parameter Description Conversation_id The identifier of a conversation. Teammate_ids IDs of the teammates to add to or remove from the followers list. You can supply the teammates’ emails alternatively. For example, the email skyvia@example.com must be provided in the format [“alt:email:skyvia@example.com”]. AddTeammatesInbox To give one or more teammates access to an inbox, use the following command. call AddTeammatesInbox(:inbox_id,:teammate_ids) Parameter Description Inbox_id The identifier of the inbox. Teammate_ids IDs of the teammates to give access to the inbox. You can supply the teammates’ emails alternatively. For example, the email skyvia@example.com must be provided in the format [“alt:email:skyvia@example.com”]. AddTeammatesToTeam and RemoveTeammatesFromTeam To add teammates to a team or remove teammates from a team, use the corresponding command. call AddTeammatesToTeam(:team_id,:teammate_ids) or call RemoveTeammatesFromTeam(:team_id,:teammate_ids) Parameter Description Team_id The identifier of the team.
Teammate_ids IDs of the teammates to add to or remove from the team. You can supply the teammates’ emails alternatively. For example, the email skyvia@example.com must be provided in the format [“alt:email:skyvia@example.com”]. UpdateConversationAssignee The following command assigns or unassigns a conversation. To unassign a conversation, set Assignee_id to null. call UpdateConversationAssignee(:conversation_id,:assignee_id) CreateConversation The following command sends a new message from a channel. call CreateConversation(:channel_id,:to,:body,:cc,:bcc,:sender_name,:subject,:author_id,:text,:options_archive,:options_tagids,:signature_id,:should_add_default_signature) You can omit unnecessary parameters if needed. Parameter Description Channel_id Required. The sending channel ID. Alternatively, you can supply the sending channel address. To Required. The list of the message recipients. Body Required. The message text. Cc The list of the recipients who get a message copy. Bcc The list of the recipients who get a blind copy of this message. Sender_name Name used for the sender info of the message. Subject Subject of the message for an email message. Author_id ID of the teammate on whose behalf the answer is sent. Text Text version of the body for email messages. Options_archive Boolean. If true, the conversation is archived right when the message is sent. Options_tagids List of tag IDs to add to the conversation. Signature_id ID of the signature to attach to this draft. If null, no signature is attached. Should_add_default_signature Boolean. Whether or not Front should try to resolve a signature for the message. It is ignored if Signature_id is specified. The default value is false. ReceiveCustomMessages Use this command to receive a custom message in Front. This command works for custom channels only. You can omit unnecessary parameters if needed.
call ReceiveCustomMessages(:channel_id, :sender_handle, :body, :sender_contactid, :sender_name, :subject, :body_format, :attachments, :metadata_headers, :metadata_threadref) Parameter Description Channel_id Required. The sending channel ID. Alternatively, you can supply the sending channel address. Sender_handle Required. Handle of the sender. It can be any string used to identify the sender uniquely. Body Required. The message text. Sender_contactid ID of the contact in Front corresponding to the sender. Sender_name Name of the sender. Subject Subject of the message. Body_format Format of the message body. It can be markdown (default) or html . Attachments Binary data of attached files. Must use Content-Type: multipart/form-data if specified. Metadata_headers Custom object where any internal information can be stored Metadata_threadref Reference, which will be used to thread messages. If omitted, Front threads by sender instead. ImportMessage The following command imports a new message in an inbox. You can omit the unnecessary parameters if needed. call ImportMessage(:inbox_id, :sender_handle, :sender_authorid, :sender_name, :to, :cc, :bcc, :subject, :body, :body_format, :external_id, :created_at, :type, :assignee_id, :tags, :conversation_id, :attachments, :metadata_isinbound, :metadata_isarchived, :metadata_shouldskiprules, :metadata_threadref) Parameter Description Inbox_id Required. The identifier of the inbox. Sender_handle Required. Handle of the sender. It can be any string used to identify the sender uniquely. To Required. The list of the message recipients. Body Required. The message text. External_id Required. External identifier of the message. Front won\u2019t import two messages with the same external ID. Created_at Required. Date at which the message has been sent or received. Sender_authorid ID of the teammate who is the author of the message. Ignored if the message is inbound. Sender_name Name of the sender. 
Cc The list of the recipients who get a message copy. Bcc The list of the recipients who get a blind copy of this message. Subject Subject of the message. Body_format Format of the message body. It can be markdown (default) or html . Works for the email message type. Type Type of the message to import. Assignee_id ID of the teammate who will be assigned to the conversation. Tags List of tag names to add to the conversation. Conversation_id If supplied, Front will thread this message into conversation with the given ID. Note that including this parameter nullifies the Metadata_threadref parameter completely. Attachments Binary data of attached files. Must use Content-Type: multipart/form-data if specified. Metadata_isinbound Required. Boolean. Determines if a message is received (inbound) or sent (outbound) by you. Metadata_isarchived Boolean. Determines if a message is archived after import. Metadata_shouldskiprules Determines if rules should be skipped. True by default. Metadata_threadref Reference, which will be used to thread messages. If omitted, Front threads by sender instead Supported Actions Skyvia supports all the common actions for Front." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/fullstory_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources FullStory [FullStory](https://www.fullstory.com/) is a web-based digital intelligence system. It enables users to track each customer activity and use this data afterwards to optimize the customer experience. Data integration : Skyvia supports importing data to and from FullStory, exporting FullStory data to CSV files, and replicating FullStory data to relational databases. Backup : Skyvia Backup does not support FullStory backup. Query : Skyvia Query supports FullStory. Establishing Connection You need to enter an API key to create a connection between Skyvia and FullStory. 
Getting Credentials To get a FullStory API key, do the following: Log in to your [FullStory account](https://app.fullstory.com/login/). Click your account name on the upper left and choose Settings. Scroll down to the Integrations section and click API Keys. Click Create Key. Name your key and select the Key Permission type. The Admin option provides a wider set of permissions. Click Save API Key. Click Copy to Clipboard. Creating Connection To create a connection between Skyvia and FullStory, simply enter your API Key. Connector Specifics Not Supported Objects Skyvia does not support the Webhooks object and objects that operate with UserId, such as UserEvents, UserPages, and Sessions. Endpoints If Url is updated for a record in the Endpoints object, its Id also changes. Thus, incremental replication loads such a record into the database as a new record instead of updating the existing one. Incremental Replication and Synchronization Incremental replication is supported for the Endpoints and Operations objects. The Operations object has only the CreateDate field and does not have the UpdatedDate field. Thus, replication with incremental updates detects only new records, but not updates to existing records for this object. Skyvia does not support synchronization for FullStory. DML Operations Skyvia supports DML operations for the following FullStory objects: Operation Object INSERT, UPDATE, DELETE Endpoints Stored Procedures Skyvia represents part of the supported FullStory features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query. CancelOperation Use the following command to cancel an in-progress operation. call CancelOperation(:operationId) CreateSegmentExport Use the following command to execute the Create Segment Export action.
call CreateSegmentExport (:segmentId, :type, :format, :timeRange_start, :timeRange_end, :timezone, :fields, :eventDetails_scope) PARAMETER NAME DESCRIPTION SegmentId A segment identifier Type A type of exported data. Valid values: Event , Individual Format Defines the data format. Valid values: JSON , CSV , NDJSON TimeRange_start, timeRange_end Defines a time range for which data will be exported Timezone A time zone value. It is set to UTC by default Fields Defines the array with columns that will be exported EventDetails_scope Defines the scope of the exported events. Possible values: Events, Individuals, Sessions, Pages Supported Actions Skyvia supports all the common actions for FullStory." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/getresponse_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources GetResponse [GetResponse](https://www.getresponse.com/) is an email marketing tool that helps manage contacts, plan marketing campaigns and analyze results, and prepare new marketing strategies. Data integration: Skyvia supports importing data to and from GetResponse, exporting GetResponse data to CSV files, replicating GetResponse data to relational databases, and synchronizing GetResponse data with other cloud apps and relational databases. Backup: Skyvia Backup does not support GetResponse backup. Query: Skyvia Query supports GetResponse. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to GetResponse, you must enter your API Key. Getting Credentials Go to [https://app.getresponse.com/](https://app.getresponse.com/) and sign in to your Account. Click Tools -> Integrations and API . Select the API tab and click Generate . Name your API Key and copy it into Clipboard Creating Connection Paste the API Key you generated in the GetResponse UI into the API Key field in Skyvia. 
Additional Connection Parameters Suppress Extended Requests GetResponse API returns only part of the fields for some objects when querying multiple records. To query the lacking fields, Skyvia performs additional extended requests. Skyvia can perform such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Campaigns Postal_AddPostalToMessages, Postal_City, Postal_CompanyName, Postal_Country, Postal_Design, Postal_State, Postal_Street, Postal_ZipCode, Confirmation_FromField_FromFieldId, Confirmation_FromField_Href, Confirmation_RedirectType, Confirmation_MimeType, Confirmation_RedirectUrl, Confirmation_ReplyTo_FromFieldId, Confirmation_ReplyTo_Href, Confirmation_SubscriptionConfirmationBodyId, Confirmation_SubscriptionConfirmationSubjectId, OptinTypes_Email, OptinTypes_Api, OptinTypes_Import, OptinTypes_Webform, SubscriptionNotifications_Status, SubscriptionNotifications_Recipients, Profile_Description, Profile_IndustryTagId, Profile_Logo, Profile_LogoLinkUrl, Profile_Title Contacts Geolocation_Latitude, Geolocation_Longitude, Geolocation_ContinentCode, Geolocation_CountryCode, Geolocation_Region, Geolocation_PostalCode, Geolocation_DmaCode, Geolocation_City, Tags, CustomFieldValues Autoresponders Content_Html, Content_Plain Suppressions Masks Newsletters FromField_FromFieldId, FromField_Href, ReplyTo_FromFieldId, ReplyTo_Href, ClickTracks, Content_Html, Content_Plain To reduce the number of API calls, select the Suppress Extended Requests checkbox. However, please note that some of the fields in such objects will not be available in Skyvia (will return empty values) even if they have values in GetResponse because its API does not return them without extended requests. 
Connector Specifics Object Peculiarities Contacts If you modify contact records in GetResponse UI, their UpdatedDate field values aren\u2019t updated immediately due to internal processes or optimizations within GetResponse.\nSkyvia can\u2019t track changes for such records in Incremental Replication or Synchronization. ContactActivities By default, ContactActivities shows logs only from the last 14 days. To retrieve records from a different period, use a filter on the FilterCreatedOn field. Newsletters The object displays only the records with the Broadcast type when querying. \nTo retrieve the records of Draft, Splittest, and Automation types, use a filter by the Type field. Autoresponders To insert data into the Autoresponders object, you must map either the Content_Html or the Content_Plain field additionally to other required fields. NewsLetterStatistics and AutoresponderStatistics When you select data from these objects, GetResponse API returns data from 2000-01-01 to the query execution date and time value and groups the results of the Total type (by time interval). You can add filters by the From, To or Group columns fields to redetermine the time interval for the selected data. From and To fields have the Date data type. Group field has the String data type. Valid values are total, hour, day, month. You can use the = (equal to) operator when filtering by these fields. Custom Fields GetResponse connector supports custom fields for the Contacts object. Custom fields support the following data types. GetResponse Type DbType Description Country String Enum field of Single and Multiple types Currency String Enum field of Single and Multiple types Date String or Date Enum field of Single (String) or Multiple (String) type. Or regular field of Line of text (Date) type. Date and time String or DateTime Enum field of Single (String) or Multiple (String) type. Or regular field of Line of text (DateTime) type. 
Gender String Enum field Ip address String Number String or Decimal Enum field of Single (String) or Multiple (String) type. Or regular field of Line of text (Decimal), and Paragraph (Decimal) types. Phone String Text String Enum field of Single and Multiple types. Or regular field of Line of text and Paragraph types. Url String Filtering Specifics GetResponse supports the following native filters: Object Field and Operator ABTest Name, Type, Status ( = ) Campaigns Name, isDefault ( = ) Carts ExternalId ( = ) ContactActivities FilterCreatedOn ( >= , <= ) Contacts Name, Email, CampaignId, Origin ( = ) Folders, Shops, Suppressions, Taxes, Webinars Name ( = ) Forms Name, CampaignId ( = ) MetaFields Name, Description, Value ( = ) Newsletters Type, Subject, Name, Status, CampaignId ( = ) Orders Description, Status, ExternalId ( = ) Products Name, Vendor, ExternalId ( = ) DML Operations Support Operations Objects INSERT, UPDATE, DELETE Addresses, Autoresponders, Categories, Contacts, MetaFields, Orders, PredefinedFields, Products, Shops, Suppressions INSERT, UPDATE Campaigns UPDATE, DELETE CustomFields INSERT, DELETE Files, Folders, NewsLetters, Taxes INSERT TransactionalEmails Synchronization and Incremental Replication Skyvia supports Synchronization for the following objects: Addresses, Carts, Categories,Contacts, MetaFields, Products, Shops, Taxes . Skyvia does NOT support Incremental Replication for the following objects: Accounts, AccountBilling, AccountLoginHistory, AccountBadge, AccountIndustryTags, AccountTimezones, Multimedia, StorageSpace, ContactCustomFields, CustomFields, PredefinedFields, NewsLetterStatistics, AutoresponderStatistics, Orders, TransactionalEmails, ClickTracks Stored Procedures Skyvia represents part of the supported GetResponse features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . 
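The contact-creation procedures below expect the Tags and CustomFieldValues parameters as arrays of JSON objects, as shown in the documented example. A minimal sketch of composing those strings in Python (helper names are illustrative, not part of Skyvia):

```python
import json

def tags_param(tag_ids):
    # Array of JSON objects, e.g. [{"tagId": "wAS4"}]
    return json.dumps([{"tagId": t} for t in tag_ids])

def custom_fields_param(values_by_id):
    # e.g. [{"customFieldId": "pr814L", "value": ["option 1"]}]
    return json.dumps(
        [{"customFieldId": k, "value": list(v)} for k, v in values_by_id.items()]
    )

print(tags_param(["wAS4"]))
# [{"tagId": "wAS4"}]
print(custom_fields_param({"pr814L": ["option 1"]}))
# [{"customFieldId": "pr814L", "value": ["option 1"]}]
```

The resulting strings can be pasted as the :Tags and :CustomFieldValues arguments of the procedure call.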
CreateNewContact Use the following command to create a new contact with minimal parameters. call CreateNewContact(:Email, :CampaignId) CreateNewContactWithParameters Use the following command to create a new contact with a full set of parameters. call CreateNewContactWithParameters(:Name, :Email, :CampaignId, :DayOfCycle, :Scoring, :IpAddress, :Tags, :CustomFieldValues) PARAMETER NAME DESCRIPTION Name Contact name Email Contact email CampaignId Campaign identifier DayOfCycle The day on which the contact is in the Autoresponder cycle. Null indicates the contact is not in the cycle Scoring Contact scoring. Null removes the score from a contact IpAddress The contact\u2019s IP address Tags Array of JSON objects CustomFieldValues Array of JSON objects Example of the procedure command: call CreateNewContact('NewContact', 'some_mail@gmail.com', 'r3yDN', '4',8,'1.2.3.4', '[{\"tagId\": \"wAS4\"}]', '[{\"customFieldId\": \"pr814L\",\"value\": [\"option 1\"]}]') It takes a few minutes for a new contact to appear in the Contacts object. If the added contact didn\u2019t appear in the list, there\u2019s a possibility that it was rejected at a later stage of processing. Campaigns can be set to double opt-in. It means that the contact has to click a link in a confirmation message before they can be added to your list. Unconfirmed contacts are not returned by the API and can only be found using Search Contacts. OnOffAccountBadge Use the following command to turn on/off the GetResponse badge: call OnOffAccountBadge(:status) PARAMETER NAME DESCRIPTION Status Valid value is enabled or disabled Supported Actions Skyvia supports all the common actions for GetResponse." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/github_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources GitHub [GitHub](https://github.com/) is a hosting service for software development and version control using Git. 
Data integration : Skyvia supports importing data to and from GitHub, exporting GitHub data to CSV files, replicating GitHub data to relational databases, and synchronizing GitHub data with other cloud apps and relational databases. Backup : Skyvia Backup does not support GitHub. Query : Skyvia Query supports GitHub. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to GitHub, sign in to GitHub via OAuth 2.0. Creating Connection Click Sign In with GitHub in the Connection Editor. In the opened window, enter your GitHub credentials. Click Authorize SkyviaApp to continue. Enter the Username . Additional Connection Parameters Suppress Extended Requests For some GitHub objects, Skyvia performs additional extended requests. Skyvia performs such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Organizations Name, Company, Blog, Location, Type, Email, TwitterUsername, TotalPrivateRepos, OwnedPrivateRepos, PrivateGists, DiskUsage, Collaborators, IsVerified, HasOrganizationProjects, HasRepositoryProjects, PublicRepos, PublicGists, BillingEmail, Plan_Name, Plan_Space, Plan_PrivateRepos, Plan_FilledSeats, Plan_Seats, DefaultRepositoryPermission, MembersAllowedRepositoryCreationType, MembersCanCreateRepositories, TwoFactorRequirementEnabled, MembersCanCreatePublicRepositories, MembersCanCreatePrivateRepositories, MembersCanCreateInternalRepositories, MembersCanCreatePages, MembersCanForkPrivateRepositories You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Connector Specifics Object Peculiarities OrganizationMembers To get data from this object, use filter by the Org field and specify the organization name.\nThe Org field returns empty values when querying. Use this field for filtering only. 
Repositories The Repositories object returns only the repositories of the user specified in the connection settings. RepositoryLabels The Name field of the RepositoryLabels object is its primary key. Nevertheless, you can update the label name: when mapping, specify the new name value in the NewName field. RepositoryMilestones The RepositoryMilestones object has a composite primary key consisting of the RepositoryName and Number fields. You must map both of these fields when performing UPDATE and DELETE operations. When you insert records into the RepositoryMilestones object, the RepositoryName field values are not returned in the integration run log. RepositoryContents Use this object to view the list of files added to a repository in a branch. To get a list of files from nested folders, filter by the PathFilter or Path field. DML Operations Support Operation Object INSERT, UPDATE, DELETE Gists, Releases, Repositories, RepositoryIssueComments, RepositoryLabels, RepositoryMilestones, RepositoryProjects, RepositoryPullRequestReviews INSERT, UPDATE RepositoryIssues, RepositoryPullRequests UPDATE, DELETE Invitations, RepositoryComments, UserProjects INSERT ReleaseAssets, RepositoryFork UPDATE Branches, RepositoryTopics Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Gists, RepositoryIssues, RepositoryIssueComments, Repositories, RepositoryPullRequests, RepositoryMilestones, RepositoryIssueEvents, RepositoryFork, RepositoryComments, UserAuthenticated, UserProjects, Invitations, ReleaseAssets, and RepositoryProjects objects. Skyvia supports Synchronization for the Gists, RepositoryIssueComments, RepositoryProjects, and UserProjects objects. Stored Procedures Skyvia represents part of the supported GitHub features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.
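The UpdateFile and DeleteFile procedures below require a Sha parameter, the blob SHA of the file being replaced or deleted. If you have the current file content locally, you can compute that value the way git does, as SHA-1 over a "blob <size>\0" header plus the content (a sketch, not a Skyvia API):

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    # git hashes a blob as sha1(b"blob <size>\0" + content)
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_sha(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a (same as `git hash-object`)
```

Note that the hash covers the exact bytes of the file as stored, so line endings and trailing newlines matter.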
InsertFile To insert file, use the command call InsertFile(:reponame,:path,:message,:content,:branch) PARAMETER NAME DESCRIPTION Reponame Repository name Path Name of the file to be added Message text of the commit message Content File body Branch Default value is main . Branch to add a file into UpdateFile To update the file, use the command call UpdateFile(:reponame,:path,:message,:content,:branch,:sha) PARAMETER NAME DESCRIPTION Reponame Repository name Path Name of the file to be updated Message Text of the commit message Content File body Branch Default value is main . The branch where you update a file Sha The blob SHA of the file being replaced Deletefile To delete the file, use the command call DeleteFile(:reponame,:path,:message,:branch,:sha) PARAMETER NAME DESCRIPTION Reponame Repository name Path Name of the file to be deleted Message Text of the commit message Branch Default value is main . Branch where you delete a file Sha The blob SHA of the file being deleted Supported Actions Skyvia supports all the common actions for GitHub." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/gmail_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Gmail [Gmail](https://www.gmail.com/) is a free cloud mailing service provided by Google. Data integration : Skyvia supports importing data to and from Gmail, exporting Gmail data to CSV files, replicating Gmail data to relational databases, and synchronizing Gmail data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Gmail backup. Query : Skyvia Query supports Gmail. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) , log in with Gmail account. Creating Connection To connect to Gmail, perform the following steps: Click Sign In with Google in the connection editor in Skyvia. Enter your Gmail credentials to log in. 
Allow Skyvia to perform actions in Gmail. Additional Connection Parameters Suppress Extended Requests When querying multiple records, the Gmail API returns only part of the fields for some objects. Skyvia performs additional extended requests to query values of the missing fields. Skyvia performs such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: Object Field Drafts LabelIds, Snippet, SizeEstimate, HistoryId, UpdatedDate, Raw Labels MessagesTotal, MessagesUnread, ThreadsTotal, ThreadsUnread Messages LabelIds, Snippet, SizeEstimate, HistoryId, UpdatedDate, Raw Threads Messages You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Auto Send Messages This option defines Gmail behavior after message creation. If the option is set to True, a message is created and automatically sent. If the option is set to False, a message is created but not sent. If you need to create a message in a particular folder, map the LabelIds field to the needed value. For example, to create a message in the Sent folder, map LabelIds to the [“SENT”] value. Connector Specifics Stored Procedures Skyvia represents part of the supported Gmail features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query. UserStopReceivingPushNotifications Use the following command to stop push notifications for the given user mailbox. call UserStopReceivingPushNotifications(:UserId) UserPushNotificationsWatch Set up or update a push notification watch on the given user mailbox. call UserPushNotificationsWatch(:UserId, :LabelIds, :LabelFilterAction, :TopicName) Parameter Description UserId The user’s email address. LabelIds List of labelIds to restrict notifications about.
The valid [values](https://developers.google.com/gmail/api/guides/labels) are: INBOX, SPAM, TRASH, UNREAD, STARRED, IMPORTANT, CATEGORY_PERSONAL, CATEGORY_SOCIAL, CATEGORY_PROMOTIONS, CATEGORY_UPDATES, CATEGORY_FORUMS LabelFilterAction Filtering behavior of labelIds list specified. Valid [values](https://developers.google.com/gmail/api/reference/rest/v1/users/watch#labelfilteraction) are Include , Exclude TopicName Google Cloud Pub/Sub API topic name that the Gmail API should send notifications to. MoveMessageToTrash The following command moves the specified message to the trash. call MoveMessageToTrash(:MessageId) UntrashMessage To remove the specified message from the trash, use the command call UntrashMessage(:MessageId) SendMessage Use the following command to send a message and add it to the Messages object. call SendMessage(:To, :From, :Subject, :Cc, :Bcc, :ReplyTo, :Text, :Html, :Attachments, :ThreadId) SendRawMessage The following command sends a message in the raw format (RFC-2822) and adds it to the Messages object. call SendRawMessage(:Raw, :ThreadId) VerifySendAsEmailAddress Use the following command to send a verification email to the specified send-as alias address. The verification status must be pending. call VerifySendAsEmailAddress(:SendAsEmail) MoveThreadToTrash The following command moves the specified thread to the trash. Any messages that belong to the thread are also moved to the trash. call MoveThreadToTrash(:ThreadId) RemoveThreadFromTrash To remove the specified thread from the trash, use the following command. call RemoveThreadFromTrash(:ThreadId) CreateRawMessage To create a raw (RFC-2822) message in the current user\u2019s mailbox without sending, use the following command. call CreateRawMessage(:Raw, :ThreadId) CreateMessage To create a message in the current user\u2019s mailbox without sending, use the following command. 
call CreateMessage(:To, :From, :Subject, :Cc, :Bcc, :ReplyTo, :Text, :Html, :Attachments, :ThreadId) Object peculiarities To insert or update data in the Messages and Drafts tables, \u00a0map either the Raw field or one of the following fields: Subject, From, To, Cc, Bcc, ReplyTo, Text, Html, Attachments . Skyvia does not support import for the SettingsSendAs and SettingsForwardingAddresses tables. DML Operations Skyvia supports the following DML operations for Gmail objects: Action Object INSERT, UPDATE, DELETE Drafts, Labels INSERT, DELETE Messages, SettingsFilters DELETE Threads Incremental Replication and Synchronization Replication with Incremental Updates is supported for the Messages and the Drafts tables. The Messages table does not support the UPDATE operation, thus only the newly created records are considered in the replication. Synchronization is supported for the Drafts table only. Supported Actions Skyvia supports all the common actions for Gmail." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/googleads_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Google Ads [Google Ads](https://ads.google.com/) is an online advertising platform developed by Google. Using Google Ads, you can create online ads to reach people exactly when they are interested in your products and services. Data integration : Skyvia supports importing data from Google Ads to other applications, exporting its data to CSV files, and replicating Google Ads data to relational databases. Backup : Skyvia Backup does not support Google Ads. Query : Skyvia Query supports Google Ads. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Google Ads, you need to sign in with your Google credentials and select the Customer. 
Creating Connection To create a Google Ads connection, perform the following steps: Click Sign In with Google in the Connection Editor. Enter your Google email and password and click Sign in. In the next window, click the Allow button to allow Skyvia to access your Google account. Additional Connection Parameters Click Advanced Settings in the Connection Editor to enable additional connection parameters. Login Customer Select this parameter when your Google Ads connection uses a manager account to access an operating customer account, to avoid ambiguity. Reports Period This parameter determines the period for data queried from the objects with the -Report suffix. Connector Specifics Data Structure Skyvia represents Google Ads data as objects. The objects containing Report and View in their names are read-only. They provide statistical and analytical data about all other objects in Google Ads. For example, CampaignsReport is an object that contains all the fields of the Campaigns object together with additional fields like segments and metrics. Segments are attributes of your data. Metrics are quantitative measurements. Some Google Ads objects have the Segments_Date field. If you include this field in your query, the Google Ads API requires specifying a date range value for this field. You can specify the date range using a filter by the Segments_Date field. If you don’t use such a filter, Skyvia applies the default date range: from 23 October 2000 until now. Consider segment and metric compatibility: you may receive an error if you query incompatible segments and metrics.
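Because Segments_Date stores dates without a time part and changes made on the current date are not tracked, a daily incremental query naturally ends at yesterday. A sketch of computing such a range for a Segments_Date filter (the helper is illustrative, not a Skyvia API):

```python
from datetime import date, timedelta

def segments_date_range(days, today=None):
    # Returns (start, end) ISO dates ending at yesterday, since updates
    # made on the current date are not tracked.
    today = today or date.today()
    end = today - timedelta(days=1)
    start = end - timedelta(days=days - 1)
    return start.isoformat(), end.isoformat()

print(segments_date_range(7, today=date(2024, 2, 7)))
# ('2024-01-31', '2024-02-06')
```

The two values map onto a Segments_Date >= start and Segments_Date <= end filter pair in a report query.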
Incremental Replication and Synchronization The following Google Ads objects support Incremental Replication: AdGroupsReport, AdGroupAdsReport, AdGroupAdAssetView, AdGroupAudienceView, AdScheduleView, AgeRangeView, BiddingStrategiesReport, CampaignsReport, CampaignAudienceView, CampaignBudgetsReport, ClickView, CustomersReport, DetailPlacementView, DisplayKeywordView, DistanceView, DynamicSearchAdsSearchTermView, ExpandedLandingPageView, GenderView, GeographicView, GroupPlacementView, HotelGroupView, HotelPerformanceView, IncomeRangeView, KeywordView, LandingPageView, ManagedPlacementView, PaidOrganicSearchTermView, ParentalStatusView, ProductGroupView, SearchTermView, TopicView, UserLocationView, VideosReport, AssetGroupProductGroupView . Skyvia supports Incremental Replication for objects having the Segments_Date field. This field stores dates without the time part. When performing replication with Incremental Updates, Skyvia only tracks updates up to the last day. Skyvia doesn\u2019t track the updates made on the current date. It means there is no point in scheduling replication with Incremental Updates more than once daily. Skyvia doesn\u2019t support Synchronization for Google Ads. DML Operations Support Operation Object INSERT, UPDATE, DELETE AdGroups, AdGroupAds, BiddingStrategies, Campaigns, CampaignBudgets, ConversionActions Supported Actions Skyvia supports all the common actions for Google Ads." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/googleanalytics_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Google Analytics [Google Analytics](https://analytics.google.com/analytics/web/) is a free web analytics service by Google that tracks and reports website traffic. Data integration : Skyvia supports importing data from Google Analytics to other applications, exporting its data to CSV files, and replicating Google Analytics data to relational databases. 
Backup : Skyvia Backup DOES NOT support Google Analytics backup. Query : Skyvia Query supports Google Analytics. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Google Analytics, sign in with your Google credentials and specify the View Id. Getting Credentials Obtaining View Id To obtain the View Id, perform the following steps: Sign in to Google Analytics and click Admin . Select the required account, property, and view that you want to connect to. Click View Settings . Under Basic Settings , copy the View Id . Creating Connection To create a Google Analytics connection, perform the following steps: Click Sign In with Google in the Connection Editor. Enter your Google credentials and click Sign in . Click Allow . Specify the obtained View Id of the Google Analytics view that you want to work with. Additional Connection Parameters Metadata Cache You can specify the period after which the Metadata cache is considered expired. Users Start Date This parameter determines the start date to query the Users metric from. By default, it is set to 08/22/2016. If you want to query user information from earlier dates, you also need to turn off the new calculation method in your Google Analytics. You can find more information on this [here](https://support.google.com/analytics/answer/2992042?hl=en) . To enable this connection parameter, click Advanced Settings in the Connection Editor and specify the Users Start Date. Connector Specifics Skyvia cannot write data to Google Analytics; its data is read-only. Data Structure Each Google Analytics connection works with data from a single Google Analytics View . That's why you must specify the View Id when creating a Google Analytics connection. Skyvia represents Google Analytics data as a single object CompleteAnalytics with a number of fields representing Google Analytics metrics and dimensions.
Dimensions are attributes of your data, and their values are organized in rows. Metrics are quantitative measurements, and measurement values are organized into columns. The returned data represents the metrics values, calculated for existing combinations of dimension values. For example, if you query the City dimension and some metrics, like Sessions and PageviewsPerSession , you will get rows for every city, from which there were website visits, with the corresponding metrics values: City Sessions PageviewsPerSession San Francisco 5,000 3.74 Berlin 4,000 4.55 If you add another dimension to the query, for example, Browser , the results for cities will be split per browser, and you will have rows for every city and browser combination, for which there were website visits: City Browser Sessions PageviewsPerSession San Francisco Chrome 3,000 3.5 San Francisco Firefox 2,000 4.1 Berlin Chrome 2,000 5.5 Berlin Safari 1,000 2.5 Berlin Firefox 1,000 4.7 Skyvia can query up to 7 dimensions and up to 10 metrics in a single request, due to Google Analytics API specifics. Thus, you can select only a limited number of metrics and dimensions in your integration tasks or queries. Consider metrics and dimensions compatibility when querying. You may receive an error if you query incompatible metrics and dimensions. Querying Specifics Skyvia supports using the following operators for filtering: > , >= , < , <= , = , != , BETWEEN, IN. You can use multiple filter conditions, united with either AND or OR logical operators. You cannot use both AND and OR, and all these operators must be on the same level. If you configure a query visually or configure filtering in data integration , don't add condition groups, except the main one. However, if you perform filtering by both metrics and dimensions, you cannot use the OR operator. You can only use the AND operator for uniting conditions on metrics and on dimensions.
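The filtering rules above can be sketched as a small checker. `validate_filter` below is an illustrative function, not Skyvia's actual validation logic; it only encodes the documented constraints: all conditions must be united with a single connector on one level, and OR is forbidden when metric and dimension conditions are mixed.

```python
def validate_filter(conditions, connector):
    """Check a flat list of filter conditions against the documented rules.

    Each condition is a (field_kind, field, op, value) tuple, where
    field_kind is "metric" or "dimension". Illustrative sketch only;
    not part of the Skyvia API.
    """
    # All conditions live on one level, united with a single connector.
    if connector not in ("AND", "OR"):
        raise ValueError("conditions must be united with AND or OR only")
    kinds = {kind for kind, *_ in conditions}
    # Mixing metric and dimension conditions forbids the OR connector.
    if kinds == {"metric", "dimension"} and connector == "OR":
        raise ValueError(
            "use AND when filtering by both metrics and dimensions")
    return True

# Allowed: dimension-only conditions united with OR.
ok = validate_filter(
    [("dimension", "City", "=", "Berlin"),
     ("dimension", "City", "=", "San Francisco")], "OR")
print(ok)  # True
```

A query mixing a metric condition (e.g. Sessions > 100) and a dimension condition under OR would raise a ValueError in this sketch, mirroring the error the connector returns.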
The Date field is treated specially in filters - you can either use it once with a BETWEEN condition, or use > , >= and < , <= operators once. Sorting is fully supported for Google Analytics. Segments Support Skyvia supports Google Analytics segments via the Segment field. You can use it in filters to query data from specific segments. You can specify multiple segments in filters in the following way: Segment = ['ALL_USERS','NEW_USERS'] Cohorts Support A cohort is a group of users that has a common characteristic, for example, all users with the same Acquisition Date. Skyvia allows getting cohort data from Google Analytics via the corresponding metric and dimension fields with the Cohort_ prefix in their names. Note that if you query cohort data, you need to add a filter by the Cohort_Type field. Usually, the filter Cohort_Type = 'FIRST_VISIT_DATE' is used. Some Cohort fields require additional filters. Incremental Replication and Synchronization Skyvia queries changed data from Google Analytics by the Date field. This field stores dates without the time part. When performing replication with Incremental Updates, Skyvia only tracks updates up to the last day. Skyvia doesn't track the updates made on the current date. It means there is no point in scheduling replication with Incremental Updates more than once daily. Skyvia doesn't support Synchronization for Google Analytics. Supported Actions Google Analytics connector supports the following actions : Execute Command in Source and Lookup Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components."
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/googleanalytics_v4_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Google Analytics 4 [Google Analytics 4](https://analytics.google.com/) is a free web analytics service by Google that tracks and reports website traffic. This connection uses the Google Analytics Data API v1, which gives access to Google Analytics 4 (GA4) report data. Use this connector to access Google Analytics 4 properties only. It is not compatible with Universal Analytics. For more details, see the API [documentation](https://developers.google.com/analytics/devguides/reporting/data/v1) . Data integration : Skyvia supports importing data from Google Analytics 4 to other applications, exporting its data to CSV files, and replicating Google Analytics 4 data to relational databases. Backup : Skyvia Backup DOES NOT support Google Analytics 4 backup. Query : Skyvia Query supports Google Analytics 4. Establishing Connection To create a connection to Google Analytics 4, sign in to Google Analytics 4 with your credentials and select Account and Property. Creating Connection To connect to Google Analytics 4, perform the following steps: Click Sign In with Google in the Connection Editor. Enter your Google credentials and click Sign in . Click Allow . Select Account and Property from the corresponding drop-down lists. Additional Connection Parameters Metadata Cache You can specify the period after which Metadata Cache expires. Analytics Start Date This parameter defines the start date for reading data from the CompleteAnalytics object. The default value is August 14, 2015. Use Display Names This parameter determines whether to use the display or API names for the objects in the connector. Skyvia uses API names by default. Display names may contain spaces or other special characters. API names capitalize the first letter in the object name.
For example, the audienceName field display name is Audience Name , and its API name is AudienceName . Connector Specifics Skyvia cannot write data to Google Analytics 4. Its data is read-only. Data Structure Skyvia represents Google Analytics 4 data as two objects, CompleteAnalytics and RealtimeAnalytics , with a number of fields representing Google Analytics 4 metrics and dimensions. Dimensions are attributes of your data. Dimension values in report responses are strings. For example, the dimension City indicates the city from which an event originates. It can be San Francisco, Berlin, or another city. Metrics are the quantitative measurements of a report. For example, the metric eventCount is the total number of events. The returned data represents the metrics values calculated for existing combinations of dimension values. For example, if you query the City dimension and some metrics, like Sessions , you will get rows for every city from which there were website visits, with the corresponding metrics values: City Sessions San Francisco 5,000 Berlin 4,000 If you add another dimension to the query, for example, Browser , the results for cities will be split per browser, and you will have rows for every city and browser combination for which there were website visits: City Browser Sessions San Francisco Chrome 3,000 San Francisco Firefox 2,000 Berlin Chrome 2,000 Berlin Safari 1,000 Berlin Firefox 1,000 Due to Google Analytics 4 API specifics, you can query at most 9 dimensions and 10 metrics in a single request to the CompleteAnalytics object, and at most 4 dimensions and 4 metrics in a single request to the RealtimeAnalytics object. Consider metrics and dimensions compatibility when querying. You may receive an error if you query incompatible metrics and dimensions. Cohorts Support A cohort is a group of users that has a common characteristic. The only supported characteristic is the first session date (users with the same FirstSessionDate field value).
Skyvia allows getting cohort data from Google Analytics 4 via the corresponding metric and dimension fields with the Cohort_ prefix in their names. Cohort dimensions: Cohort, CohortNthDay, CohortNthWeek, CohortNthMonth . Cohort metrics: CohortActiveUsers, CohortTotalUsers . Cohort parameters: CohortGranularity, CohortStartOffset, CohortEndOffset . When you use cohort dimensions or metrics in your query, the\ncohort parameters are applied by default. You can redefine their values by adding them to your query. The CohortGranularity valid values are Daily (default), Weekly, Monthly . The default value for CohortStartOffset is 0. The default value for CohortEndOffset is 10. More details about cohorts are available [here](https://developers.google.com/analytics/devguides/reporting/data/v1/rest/v1beta/CohortSpec#cohortsrange) . Filtering Specifics Skyvia supports using the > , >= , < , <= , = , != , BETWEEN, and IN operators for numeric and date fields (all metrics and some dimensions). String fields (dimensions) support the following operators: = , != , LIKE, IN, IS NULL, IS NOT NULL. The Date field in regular reports, the FirstSessionDate in cohort reports, and the MinutesAgo field in the RealtimeAnalytics object support the > , >= , < , <= , = , and BETWEEN operators. If the FirstSessionDate is used beyond cohort reports, it supports the = , != , LIKE, IN, IS NULL, IS NOT NULL operators. Cohort parameters fields support only the = operator. You can use multiple filter conditions and combine different operators. Skyvia supports sorting for all fields except the cohort parameters. Sorting by multiple fields is also supported. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the CompleteAnalytics object. Skyvia queries changed data from Google Analytics 4 by the Date field of the CompleteAnalytics object. \nThis field stores dates without the time part. 
When performing replication with Incremental Updates, Skyvia only tracks updates up to the last day. Skyvia doesn\u2019t track the updates made on the current date. It means there is no point in scheduling replication with Incremental Updates more than once daily. Skyvia doesn\u2019t support Synchronization for Google Analytics 4. Supported Actions Google Analytics 4 connector supports the following actions : Execute Command in Source and Lookup Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/googleapps_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources G Suite Skyvia supports the following G Suite applications: Google Contacts \u2014 an online address book integrated with Gmail and other Google applications. Google Calendar \u2014 a free online calendar to track your events from Google. Google Tasks \u2014 application from Google to manage to-do lists. Data integration : Skyvia supports importing data to and from G Suite, exporting their data to CSV files, replicating G Suite data to relational databases, and synchronizing G Suite data with other cloud apps and relational databases. Backup : Skyvia Backup supports G Suite backup except for objects with composite primary keys. Query : Skyvia Query supports G Suite. Establishing Connection When [creating a connection](https://docs.skyvia.com/connections/#creating-connections) to G Suite, you have to log in to your Google account. Creating Connection To create a G Suite connection, perform the following steps: Click Sign In with Google . In the opened window, enter your Google credentials and click Sign in . Click Allow and save connection. 
Connector Specifics Incremental Replication and Synchronization Synchronization and Replication with Incremental Updates enabled are not supported for objects without the Updated field. Supported Actions Skyvia supports all the common actions for G Suite." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/googlesheets_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Google Sheets [Google Sheets](https://www.google.com/sheets/about/) is a cloud-based solution that allows users to edit, organize, and analyze different types of information online. It allows collaboration among multiple users, who can edit and format files in real time and track any changes made to the spreadsheet with a revision history. Data integration : Skyvia supports importing data to and from Google Sheets, exporting Google Sheets data to CSV files, and replicating Google Sheets data to relational databases. Backup : Skyvia Backup does not support Google Sheets. Query : Skyvia Query supports Google Sheets. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Google Sheets, sign in with Google and specify the spreadsheet name. Creating Connection Click Sign In with Google in the Connection Editor. Enter your Google credentials and click Sign in . Click Allow to grant access to your Google Account. Select the needed spreadsheet from the Spreadsheet drop-down list. Spreadsheet is a required parameter. You can select only one spreadsheet per connection and read sheets from this particular spreadsheet. The sheets will be created as tables in the connector. Additional Connection Parameters Use Header To Detect Column Names This parameter specifies whether to use the columns from the first row of the table in Google Sheets.
If there are no column names in the spreadsheet, then standard Excel column names will be displayed. Cell Max Length This parameter defines the maximum text length in a cell. Connector Specifics Data Structure Each Google Sheets connection represents data from a single spreadsheet. You must select a specific spreadsheet in the Connection Editor. The spreadsheet sheets are represented as separate tables. You can determine the column names using the headers from the first row. Mark the Use Header To Detect Column Names checkbox in the Connection Editor. In this case, Skyvia displays all the columns up to the last one with a non-empty cell in the first row. If a column has an empty value in the first row, but there are columns with non-empty first row values further, the column is available and has the default Google Sheets name (A, B, C, \u2026, AA, AB, AC, \u2026). If the Use Header To Detect Column Names checkbox is NOT selected, all the sheet columns are available in Skyvia with Google Sheets names (A, B, C, \u2026, AA, AB, AC, \u2026). Besides, Skyvia adds the RowNo column, containing row numbers. Incremental Replication and Synchronization Skyvia does not support Synchronization and Incremental Replication for Google Sheets. Supported Actions Skyvia supports all the [common actions](https://docs.skyvia.com/connectors/actions/#common-actions) for Google Sheets." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/greenhouse_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Greenhouse [Greenhouse](https://www.greenhouse.com/) is a recruitment and onboarding platform which optimizes and automates the hiring process. Data integration : Skyvia supports importing data to and from Greenhouse, exporting Greenhouse data to CSV files, replicating Greenhouse data to relational databases and synchronizing Greenhouse data with other cloud apps and relational databases. 
Backup : Skyvia Backup does not support Greenhouse. Query : Skyvia Query supports Greenhouse. Establishing Connection To create a connection to Greenhouse, enter your API Key. If you want to load data to Greenhouse, and not just read data from Greenhouse, you also need to specify your User Id. Getting Credentials Getting the API Key API Key is the Greenhouse API token. You can read more about creating a Greenhouse API Key [here](https://developers.greenhouse.io/harvest.html#authentication) . You need to sign in as a user that has the "Can manage ALL organization's API Credentials" permission in the "Developer permission" section to create the API key. To get a new Greenhouse API Key, perform the following steps: Sign in to Greenhouse. Click the gear icon in the top menu. Select Dev Center and then API Credential Management . Click New API Key . In the API Type list, select Harvest . Enter an optional key description, and click Manage Permissions . Copy the API Key and store it in a safe place. The API Key is available for full copying only once, when it is created. Later on, it will be half hidden, so paste your API Key to a safe place right away. Select the Select All checkbox and click Save at the bottom of the page. Getting the User Id If you don't know your User Id, you can omit it for now: create a read-only connection first, obtain your User Id as described below, then edit the connection, specify the Id, and save the connection. The easiest way to obtain your Greenhouse user Id is via one of the Skyvia tools, Query or Export . Both of these tools can be used even within the limits of the free pricing plan. To get user Ids via Query, perform the following steps: Click +NEW in the top menu. Under Query , click Builder . Click Select connection on the left and select your Greenhouse connection. Scroll down the object list on the left till you see the Users object.
Drag it to the Results pane. After this, you can see your Greenhouse users' data. Find the corresponding user in the table and copy the value from the Id column of the corresponding row. Alternatively, you can use Skyvia Export to get the contents of the Users object to a CSV file and copy the User Id from this file. Creating Connection To start creating a connection, follow the below steps: In the Connection Editor, enter your API Key. If you know your User Id (an integer value identifying the Greenhouse user) and want a read-write connection to Greenhouse, enter your User Id. Otherwise, the connection will be read-only. Connector Specifics Object Peculiarities Candidates When importing data to the Candidates object, specify the Applications field as a JSON array, as in the example: [{"job_id": 215725}, {"job_id": 185289}] UserJobPermissions and UserFutureJobPermissions UserJobPermissions and UserFutureJobPermissions objects do not support import of records with UserId containing Ids of a user with the job admin role. CandidateTags When importing data to the CandidateTags object, the TagId and CandidateId fields are not displayed in the integration logs after execution. Applications The Greenhouse API supports several options of creating applications for candidates. Every option may have a different set of fields required for mapping. Thus, only the CandidateId field is defined as required for mapping. Other fields may be required depending on the option of record creation.
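A minimal sketch of producing the Applications value in the documented JSON-array format. The `applications_field` helper is hypothetical and exists only for illustration; the output format matches the example above.

```python
import json

def applications_field(job_ids):
    """Serialize job Ids into the JSON array expected by the
    Applications field of the Candidates object.

    Illustrative helper only; not part of Skyvia or Greenhouse.
    """
    return json.dumps([{"job_id": jid} for jid in job_ids])

value = applications_field([215725, 185289])
print(value)  # [{"job_id": 215725}, {"job_id": 185289}]
```

Generating the field value programmatically avoids hand-editing JSON in a mapping expression and guarantees valid syntax.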
DML Operations Support Skyvia supports DML operations for the following Greenhouse objects: Operation Object INSERT, UPDATE, DELETE Applications, Candidates, ScheduledInterviews INSERT, UPDATE Departments, Jobs, Offices, Users INSERT, DELETE CandidateTags, Tags, UserFutureJobPermissions, UserJobPermissions INSERT JobOpenings UPDATE JobPosts Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Greenhouse objects: Applications, Candidates, EmailTemplates, JobPosts, Jobs, JobStages, Offers, ScheduledInterviews, ScoreCards, Users . Skyvia supports Synchronization for the following Greenhouse objects: Applications, Candidates, Jobs, ScheduledInterviews, Users . Supported Actions Skyvia supports all the common actions for Greenhouse." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/harvest_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Harvest [Harvest](https://www.getharvest.com/) is a convenient cloud-based time tracking and invoicing tool designed for businesses of all sizes. Data integration : Skyvia supports importing data to and from Harvest, exporting Harvest data to CSV files, replicating Harvest data to relational databases, and synchronizing Harvest data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Harvest. Query : Skyvia Query supports Harvest. Establishing Connection To create a connection with Harvest, log in to Harvest via OAuth authentication in Skyvia. Creating Connection To create a Harvest connection, perform the following steps: Click Sign In with Harvest . In the opened window, sign in with your Harvest credentials or sign in with Google. In the next window, click Authorize App to allow Skyvia to access your Harvest account. The authentication token is generated.
Connector Specifics Object Peculiarities Reports Objects The objects with the Reports part in their names, such as ExpenseReports_Categories, ExpenseReports_Clients, ExpenseReports_Projects, ExpenseReports_Teams, TimeReports_Clients, TimeReports_Projects, TimeReports_Tasks, TimeReports_Teams, and UninvoicedReport , are read-only. To get data from these objects, you must set filters by the from and to fields. The specified time span must not exceed 365 days. Invoices and Estimates The LineItems field in the Invoices and Estimates objects stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. InvoiceMessages and EstimateMessages The Recipients field in the InvoiceMessages and EstimateMessages objects stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. Filtering Specifics Harvest supports the following native filters: Object Operator Field Contacts = ClientId > , >= UpdatedDate Clients = IsActive > , >= UpdatedDate Invoices = State, ClientId > , >= UpdatedDate Projects = ClientId, IsActive > , >= UpdatedDate Estimates = ClientId, State > , >= UpdatedDate Tasks = IsActive > , >= UpdatedDate Expenses = ClientId , UserId , ProjectId , IsBilled > , >= UpdatedDate > , >= , < , <= SpentDate TimeEntries = ClientId , UserId , ProjectId , TaskId , ExternalReferenceId , IsBilled , IsRunning > , >= , < , <= SpentDate > , >= UpdatedDate InvoiceItemCategories > , >= UpdatedDate Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.
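The native-filter table above can be encoded as a lookup to check whether a planned filter will be applied natively (saving API calls). The `NATIVE_FILTERS` mapping below covers only a few objects from the table, and `is_native_filter` is an illustrative helper, not a Skyvia API.

```python
# Partial mapping of Harvest objects to natively filterable fields and
# operators, transcribed from the table above for a few objects.
# Illustrative sketch only; not part of Skyvia.
NATIVE_FILTERS = {
    "Contacts": {"ClientId": {"="}, "UpdatedDate": {">", ">="}},
    "Clients": {"IsActive": {"="}, "UpdatedDate": {">", ">="}},
    "Expenses": {
        "ClientId": {"="}, "UserId": {"="}, "ProjectId": {"="},
        "IsBilled": {"="}, "UpdatedDate": {">", ">="},
        "SpentDate": {">", ">=", "<", "<="},
    },
}

def is_native_filter(obj: str, field: str, op: str) -> bool:
    """Return True if Harvest applies the filter natively (cheap),
    False if Skyvia would have to filter after fetching (more API calls)."""
    return op in NATIVE_FILTERS.get(obj, {}).get(field, set())

print(is_native_filter("Expenses", "SpentDate", "<="))  # True
print(is_native_filter("Contacts", "ClientId", ">"))    # False
```

A pipeline could consult such a lookup when building filters, preferring native ones to keep API call usage low.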
Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following Harvest objects: Clients, Contacts, EstimateItemCategories, EstimateMessages, Estimates, ExpensesCategories, Expenses, InvoiceItemCategories, InvoiceMessages, InvoicePayments, Invoices, Projects, ProjectTaskAssigment, ProjectUserAssigment, Roles, Tasks, TimeEntries, UserBillableRates, UserCostRates, Users . Skyvia supports Synchronization for the following Harvest objects: Clients, Contacts, EstimateItemCategories, Estimates, ExpensesCategories, Expenses, Invoices, InvoiceItemCategories, Projects, ProjectTaskAssigment, ProjectUserAssigment, Roles, Tasks, TimeEntries, Users . DML Operations Support Operation Object INSERT, UPDATE, DELETE Clients, Contacts, EstimateItemCategories, Estimates, ExpensesCategories, Expenses, InvoiceItemCategories, Invoices, Projects, ProjectTaskAssigment, ProjectUserAssigment, Roles, Tasks, TimeEntries, Users INSERT, DELETE InvoiceMessages, InvoicePayments, EstimateMessages INSERT UserBillableRates, UserCostRates Stored Procedures Skyvia represents part of the supported Harvest features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . ChangeEventTypeInvoice To change an invoice message event type, use the command call ChangeEventTypeInvoice(:invoice_id,:event_type) PARAMETER NAME DESCRIPTION Invoice_Id The invoice identifier Event_type The type of invoice event that occurred with the message. 
Valid values are draft, send, close, re-open . ChangeEventTypeEstimate To change an estimate event type, use the command call ChangeEventTypeEstimate(:estimate_id,:event_type) PARAMETER NAME DESCRIPTION Estimate_Id The estimate identifier Event_type The type of estimate event that occurred with the message: send, accept, decline, re-open RestartTimeEntry Use the following command to restart a time entry call RestartTimeEntry(:time_entry_id) StopTimeEntry Use the following command to stop a time entry call StopTimeEntry(:time_entry_id) Supported Actions Skyvia supports all the common actions for Harvest." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/helpdesk_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources HelpDesk [HelpDesk](https://www.helpdesk.com/) is a simple ticketing system for tracking, prioritizing, and resolving customer support tickets, developed by LiveChat Software. Data integration : Skyvia supports importing data to and from HelpDesk, exporting HelpDesk data to CSV files, replicating HelpDesk data to relational databases, and synchronizing HelpDesk data with other cloud apps and relational databases. Backup : Skyvia Backup does not support HelpDesk backup. Query : Skyvia Query supports HelpDesk. Establishing Connection To connect to HelpDesk with Skyvia, specify the Account Id and Personal Access Token. Getting Credentials To obtain your Account Id and Personal Access Token, do the following: Go to the HelpDesk [developer console](https://platform.text.com/console/) . Click Tools on the left and select Personal Access Tokens . Click Create new token + . Name your token and select scopes. We recommend choosing Select All . Copy and save the new credentials somewhere safe. The HelpDesk credentials are available only once, when created. We recommend copying them and keeping them in a safe place.
Creating Connection Paste the obtained Account Id and Token into the corresponding boxes in the Connection Editor. Connector Specifics Synchronization and Replication Limitations Skyvia supports replication with [Incremental Updates](https://docs.skyvia.com/data-integration/replication/#keeping-database-up-to-date) for all the HelpDesk objects except the objects with names starting with Report . The only exception is the Reports_FailedEmails object, which supports replication with incremental updates but doesn't have the UpdatedDate field. Thus, replication with incremental updates detects only new records, not updates to existing records. Skyvia supports synchronization for the following HelpDesk objects: Agents , CannedResponses , Licenses , Mailboxes , Rules , Subscriptions , Tags , Teams , Templates , Tickets , Views , Webhooks . DML Operations Support Skyvia supports the following DML operations for HelpDesk objects: Operations Objects INSERT, UPDATE, DELETE Agents, CannedResponses, Mailboxes, Rules, Tags, Teams, Templates, Tickets, Views, Webhooks INSERT, DELETE BlockedEmails, EmailDomains, ReplyAddresses, TrustedEmails INSERT, UPDATE Licenses, Subscriptions Object Peculiarities Templates When loading data to the Templates object, values for the Content_Text field must include the {{message.text}} parameter, and values for the Content_Html field must include {{{message.html}}}. For example, the value for the Content_Text field can be the following: This is a test template with the following text: {{message.text}} The value for the Content_Html field can be the following: This is a test template with the following text: {{{message.html}}}
Tickets When importing data to the Tickets table, you must map either the Message_Text or Message_RichTextObj field in addition to the required Requester_Email field. Reports* When reading data from the objects with names starting with Reports , you can use the following fields to configure reports: Fields Function Step Aggregation step. Can take the following values: Hour , Day , Month , Year . The default value is Day . From and To These fields specify the period, for which the report is queried. These parameters are not required. You can specify both, one, or none as described [here](https://api.helpdesk.com/docs#section/Common-notions/Relative-dates-and-datetime-ranges) . These fields don't return any data when querying. They are used only to configure reports. You can specify values for them either in the WHERE clause of a query or in filters . You must use the equals operator for these fields. Stored Procedures Skyvia represents part of the supported HelpDesk features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . SendForwardingInstructions The following command sends an email with instructions on setting up email forwarding. call SendForwardingInstructions(:LicenseId, :Email) CancelSubscription Use the following command to cancel a subscription. call CancelSubscription(:SubscriptionId) TerminateSubscription To terminate a subscription, use the command: call TerminateSubscription(:SubscriptionId) MergeTickets The following command merges tickets. call MergeTickets(:TicketId, :ChildTicketId) UnmergeTickets To unmerge tickets, use the following command. call UnmergeTickets(:TicketId, :ChildTicketId) VerifyEmailDomain The following command verifies the email domain.
call VerifyEmailDomain(:EmailDomainId)

Supported Actions

Skyvia supports all the common actions, available for most of the connectors, for HelpDesk.

Help Scout

[Help Scout](https://www.helpscout.com/) is a cloud-based SaaS (software as a service) HIPAA-compliant help desk solution that helps businesses and teams manage their customer relationships with flexibility and affordability.

Data integration: Skyvia supports importing data to and from Help Scout, exporting Help Scout data to CSV files, replicating Help Scout data to relational databases, and synchronizing Help Scout data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Help Scout.

Query: Skyvia Query supports Help Scout.

Establishing Connection

To create a connection to Help Scout, sign in to Help Scout via OAuth 2.0.

Creating Connection

1. Click Sign In with Help Scout in the Connection Editor.
2. In the opened window, enter the email and password you used when signing up to Help Scout. Skyvia will request permission to access your Help Scout account.
3. Click Authorize to grant the permissions.

Connector Specifics

Object Peculiarities

Customers

When importing data to the Customers object, no fields are marked as required in the Import mapping settings, because you can specify any field describing contact details. When inserting data to the Customers object, you must map the Emails, Phones, Chats, SocialProfiles, Websites, Addresses, and Properties fields.
For example, a valid value for the Emails field looks like this:

```
[
  {
    "type": "work",
    "value": "bear@acme.com"
  }
]
```

You can also import data to the above fields via such child objects as CustomerPhones, CustomerEmails, CustomerWebsites, CustomerSocialProfiles, and CustomerChats by mapping the required CustomerId.

CustomerAddresses

Skyvia supports the UPDATE and DELETE operations for this object. The DELETE operation does not physically delete the CustomerAddresses record; instead, it clears the values from the city, state, postalCode, country, and lines fields.

Conversations

When importing data to the Conversations object, the Threads field is required for mapping. You must pass its value in the following format:

```
[
  {
    "type": "customer",
    "customer": {
      "email": "bear@acme.com"
    },
    "text": "Hello, Help Scout?"
  }
]
```

Besides the required Threads field, you must also map the Customer field. You can do it in one of two ways:

- If the customer already exists in Help Scout, map the PrimaryCustomer_Id field value.
- If the customer doesn't exist, specify the customer email as the PrimaryCustomer_Email field value. In this case, a new corresponding record automatically appears in the Customers object, and its Id also appears in the PrimaryCustomer_Id field of the Conversations object.

If you don't map the customer Id or email, you will receive an error from the Help Scout API saying that parameters are missing in the request.

Reports

Skyvia supports the Help Scout reports as separate objects: ReportCompanyCustomersHelped_Currents, ReportChats, ReportCompanyDrillDowns, ReportCompanyOveralls, ReportConversationBusyTimes, ReportConversationDrilldowns, ReportConversationOveralls, ReportEmails, ReportHappinessRatings, ReportHappinessOveralls, ReportProductivityOveralls, ReportPhones, and ReportUserOverall.
To successfully select data from these objects, you must use filters by the Current_StartDate and Current_EndDate fields. You can optionally use filters by the Previous_StartDate and Previous_EndDate fields.

ReportCompanyCustomersHelped_Currents

You can optionally use a filter by the ViewBy field to get data from this report. The valid values for filtering are Day, Week, and Month.

ReportCompanyDrillDowns

You must use a filter by the Range field to get data from this report. The valid values for filtering are Replies, FirstReplyResolved, Resolved, ResponseTime, FirstResponseTime, and HandleTime.

ReportUserOverall

You must use a filter by the User_Id field to get data from this report.

Custom Objects

The Customers object contains custom fields (Properties). Skyvia supports the INSERT operation only; UPDATE operation support is on our roadmap. You can use custom fields of the following types: Number (Int32), Text (String), Url (String, 4000 characters max), Date, and Dropdown (String).

DML Operations Support

Skyvia supports the following DML operations for Help Scout objects:

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | Webhooks
UPDATE, DELETE | CustomerAddresses, CustomerChats, CustomerEmails, CustomerPhones, CustomerSocialProfiles, CustomerWebsites
INSERT, DELETE | Conversations
INSERT, UPDATE | Customers
DELETE | Users

Stored Procedures

Skyvia represents part of the supported Help Scout features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

CreateCustomerPhones

To create a phone number record for an existing customer, use the command

call CreateCustomerPhones(:customerId, :type, :value)

Parameter | Description
--- | ---
Type | Phone number type. Valid values: Fax, Home, Mobile, Other, Pager, Work
Value | The phone number itself (String)

CreateCustomerEmails

To create an email record for an existing customer, use the command

call CreateCustomerEmails(:customerId, :type, :value)

Parameter | Description
--- | ---
Type | Email type. Valid values: Home, Other, Work
Value | The email itself (String)

CreateCustomerWebsites

To create a website for an existing customer, use the command

call CreateCustomerWebsites(:customerId, :value)

Parameter | Description
--- | ---
Value | Website URL

CreateCustomerSocialProfiles

To create a customer's profile on social media, use the command

call CreateCustomerSocialProfiles(:customerId, :type, :value)

Parameter | Description
--- | ---
Type | Social profile type. Valid values: aboutme, facebook, flickr, forsquare, google, googleplus, linkedin, other, quora, tungleme, twitter, youtube
Value | Social profile handle (a URL, for example)

CreateCustomerChats

To create a customer chat, use the command

call CreateCustomerChats(:customerId, :type, :value)

Parameter | Description
--- | ---
Type | Chat type. Valid values: aim, gtalk, icq, msn, other, qq, skype, xmpp, yahoo
Value | Chat handle

CreateThreadChat

To add a chat thread to a conversation, use the command

call CreateThreadChat(:conversationId, :customerId, :text, :imported, :attachments)

Parameter | Description
--- | ---
Text | The chat text
Imported | Boolean. Valid values: true, false
Attachments | Optional. The list of attachments in the array format

CreateThreadCustomer

To add a customer thread to a conversation or reopen the updated conversation (if Imported = true), use the command

call CreateThreadCustomer(:conversationId, :customerId, :text, :imported, :cc, :bcc, :attachments)

Parameter | Description
--- | ---
Text | The chat text
Imported | Boolean. Valid values: true, false
Cc | The list of carbon copy emails in the array format
Bcc | The list of blind carbon copy emails in the array format
Attachments | Optional. The list of attachments in the array format

CreateThreadNote

To add a note to a thread, use the command

call CreateThreadNote(:conversationId, :text, :status, :imported, :attachments)

Parameter | Description
--- | ---
Text | The note text
Status | Valid values: active, all, closed, open, pending, spam
Imported | Boolean. Valid values: true, false
Attachments | Optional. The list of attachments in the array format

CreateThreadPhone

To add a phone thread, use the command

call CreateThreadPhone(:conversationId, :customerId, :text, :imported, :attachments)

Parameter | Description
--- | ---
Text | The phone conversation text
Imported | Boolean. Valid values: true, false
Attachments | Optional. The list of attachments in the array format

CreateThreadReply

To create a reply thread, use the command

call CreateThreadReply(:conversationId, :customerId, :text, :draft, :imported, :cc, :bcc, :attachments)

Parameter | Description
--- | ---
Text | The chat text
Draft | Boolean. Valid values: true, false. Defines whether the reply is a draft
Imported | Boolean. Valid values: true, false
Cc | The list of carbon copy emails in the array format
Bcc | The list of blind carbon copy emails in the array format
Attachments | Optional. The list of attachments in the array format

UpdateConversationTag

To update tags in a conversation, use the command

call UpdateConversationTag(:conversationId, :tags)

Parameter | Description
--- | ---
Tags | The list of tags to modify in the array format, for example [ "tag1", "tag2" ]

To add a new tag to the list, specify the existing values together with the new value. For example, if the current Tags value is [ "tag1", "tag2" ] and you want to add "tag3", specify [ "tag1", "tag2", "tag3" ] in the command. To delete all the tags, specify []. To delete a tag from the list, specify the list without the tags you want to delete. For example, if the current Tags value is [ "tag1", "tag2", "tag3" ] and you want to delete "tag3", specify [ "tag1", "tag2" ].
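Because UpdateConversationTag replaces the entire tag list, adding or removing one tag means computing and sending the full modified list. A minimal sketch of that computation, with illustrative helper names (not part of Skyvia):

```python
# UpdateConversationTag replaces the whole tag list, so the caller must send
# the complete desired list. These helpers (illustrative names, not part of
# Skyvia) compute that list from the current tags.
def tags_after_add(current, tag):
    # Append only if the tag is not already present.
    return current if tag in current else current + [tag]

def tags_after_delete(current, tag):
    # Keep every tag except the one being removed.
    return [t for t in current if t != tag]

print(tags_after_add(["tag1", "tag2"], "tag3"))             # ['tag1', 'tag2', 'tag3']
print(tags_after_delete(["tag1", "tag2", "tag3"], "tag3"))  # ['tag1', 'tag2']
```

The resulting list is what you would pass as the :tags parameter of the command.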
UpdateConversationField

To update custom fields in the Conversations object, use the command

call UpdateConversationField(:conversationId, :fields)

Parameter | Description
--- | ---
Fields | The list of fields to update in the following format: [{"id":"44977","value":"Custom field 108.88"},{"id":"44978","value":"88.88"}]

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the Users, Teams, TeamMembers, Tags, Workflows, MailBoxes, Conversations, and Customers objects. The Conversations object contains only the CreatedDate field, so Skyvia detects only new records for it. Skyvia supports Synchronization only for the Customers object.

Supported Actions

Skyvia supports all the common actions for Help Scout.

Heymarket

[Heymarket](https://www.heymarket.com/) provides conversational, personalized, and secure text messaging that integrates with Salesforce, HubSpot, Microsoft Teams, and other business tools.

Data integration: Skyvia supports importing data to and from Heymarket, exporting Heymarket data to CSV files, replicating Heymarket data to relational databases, and synchronizing Heymarket data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Heymarket.

Query: Skyvia Query supports Heymarket.

Establishing Connection

To create a connection to Heymarket, specify the API key.

Getting Credentials

To get an API key, contact the support service via the Chat with Support section and request access to the Heymarket API.

Creating Connection

To connect to Heymarket, specify the obtained API key in the corresponding box in the Connection Editor.

Additional Connection Parameters

Metadata Cache

You can specify the time when the Metadata Cache expires.
Suppress Extended Requests

Heymarket API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests to query the values of the missing fields. Skyvia performs such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following:

Object | Field
--- | ---
Contacts | AvatarFileName
Messages | MediaFileContent
TemplateFiles | Content

You can select the Suppress Extended Requests checkbox to reduce the number of API calls.

Connector Specifics

Object Peculiarities

Contacts

Due to Heymarket API specifics, the UPDATE operation updates the mapped fields with the provided values and the unmapped fields with the existing values. Thus, integrations or queries that perform the UPDATE operation may affect API call usage and take time. Skyvia performs additional requests to get the existing values for the unmapped Contacts fields in the following cases:

- If one or more standard fields are not mapped.
- If the object has custom fields, and not all of them are mapped.

Lists and Templates

Due to Heymarket API specifics, the UPDATE operation updates the mapped fields with the provided values and the unmapped fields with the existing values. Skyvia performs additional requests to get the existing values for the unmapped Lists and Templates fields. Thus, integrations or queries that perform the UPDATE operation may affect API call usage and take time.

Filtering Specifics

Heymarket API supports the following native filters:

Object | Field | Operator
--- | --- | ---
Contacts | Id | =
Contacts | UpdatedDate | <, <=
Conversations | UpdatedDate | <, <=
Lists | Id | =
Lists | UpdatedDate | <, <=
Messages | CreatedDate | >, >=
Surveys | Id | =
Tags | Id | =
Tags | UpdatedDate | <, <=

Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but this may increase API call usage.

Custom Fields

You can add custom fields to the Contacts object.
All custom fields are strings. Skyvia performs additional requests to the Heymarket API to get the custom field metadata.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the Contacts, Conversations, Inboxes, Lists, Messages, Surveys, Tags, and Templates objects. Skyvia supports Synchronization for the Contacts and Lists objects.

DML Operations Support

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | Contacts, Lists
INSERT, DELETE | Tags
UPDATE, DELETE | Templates
INSERT | Messages
UPDATE | Members

Supported Actions

Skyvia supports all the common actions for Heymarket.

Hive

[Hive](https://hive.com/) is a tool to manage projects, tasks, and dashboards for effective collaboration.

Data integration: Skyvia supports importing data to and from Hive, exporting Hive data to CSV files, replicating Hive data to relational databases, and synchronizing Hive data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Hive backup.

Query: Skyvia Query supports Hive.

Establishing Connection

To create a connection to Hive in Skyvia, you have to specify the API key and User ID.

Getting Credentials

1. Log in to Hive.
2. Click your account avatar and select Edit profile.
3. Click API info.
4. Copy the API key and the User ID.

Creating Connection

Paste the obtained credentials into the corresponding API Key and User ID fields in the Skyvia connection editor.

Connector Specifics

Object Peculiarities

Projects

The Teams field is used for the INSERT and UPDATE operations. When performing SELECT, it returns an empty result. By default, the Projects table displays only the active projects. To retrieve the deleted records, use a filter by the Deleted field. Note that the Deleted field itself returns empty.
You can retrieve only the list of deleted records. If you try to select a single deleted record by a specific Id, you get the error 'Error occurred while reading 'Projects' object: Project has been deleted.'

Actions and related tables

By default, the Actions table displays the active records only. To retrieve the deleted records, use a filter by the Deleted field. Note that the Deleted field itself returns empty.

When performing Import into the Actions table, if you map both the PhaseId and PhaseName fields, only the PhaseId values are inserted into the target.

Use call CreateActionAttachments(:ActionId, :Url) to add new records to the ActionAttachments table.

The ActionComments table displays a maximum of 200 comments for each Action.

Custom Fields

Hive allows adding custom fields to the Projects and Actions tables. The list of custom fields is available in the CustomFields table. You can manually add new custom fields via the UI or via the API using the INSERT operation. Projects custom field values are stored in the ProjectCustomFields field. Actions custom field values are stored in the CustomFields field.

For user convenience, custom fields from the Projects table are also displayed in a separate ProjectCustomFields table. This table supports the INSERT and UPDATE operations. When you perform Import with the INSERT operation to the ProjectCustomFields table, the result log returns the array of added custom columns, not the list of Ids.

Incremental Replication and Synchronization

Replication with Incremental Updates is supported for the following objects: ActionAttachments, ActionComments, Actions, ActionTemplates, CustomFields, CustomTags, Groups, Labels, Projects, ProjectStatuses, RoleTags, Workspaces. Incremental Replication tracks only the new records for the ActionComments and Groups tables.
These tables contain only the CreatedDate field, and there is no UpdatedDate field that would allow tracking the updated records.

Synchronization is supported for the following objects: Actions, CustomTags, Labels, Projects, RoleTags.

DML Operations

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | Actions, CustomTags, Labels, Projects, RoleTags, Teams
INSERT, UPDATE | ProjectCustomFields
INSERT, DELETE | WorkspaceUsers
INSERT | ActionComments, CustomFields, ProjectStatuses
UPDATE | ProjectTags, Users, UserSettings, UserTags

Supported Actions

Skyvia supports all the common actions for Hive.

HubSpot

[HubSpot](https://www.hubspot.com) is an all-in-one cloud CRM, sales, and marketing platform.

Data integration: Skyvia supports importing data to and from HubSpot, exporting HubSpot data to CSV files, replicating HubSpot data to relational databases, and synchronizing HubSpot data with other cloud apps and relational databases.

Backup: Skyvia Backup supports HubSpot backup.

Query: Skyvia Query supports HubSpot.

Establishing Connection

To create a connection with HubSpot, sign in with HubSpot and select the account to use.

Creating Connection

While [creating a connection](https://docs.skyvia.com/connections/#creating-connections), you need to perform the following steps:

1. Click Sign In with HubSpot.
2. In the opened window, enter your HubSpot credentials and click Log in.
3. Select the account to use and click Choose Account.
4. Scroll the opened page to the bottom, select the checkbox, and click Connect app to approve the access request.

Additional Connection Parameters

Metadata Cache

You can specify the period after which the [Metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired.
Batch API

Skyvia supports the HubSpot Batch API to load data into the Contacts, Companies, Tickets, and Products objects and uses it for these objects by default. The Use Batch API checkbox in the Advanced Settings controls whether the connection uses Batch API. Batch API allows much higher performance and lower API call use but provides less information about the success or failure of an operation. For example, Batch API does not return the IDs of inserted records. Thus, when using Batch API, you won't get the IDs of inserted records in [Import](https://docs.skyvia.com/data-integration/import/) success logs and won't be able to use the [Import Returning](https://docs.skyvia.com/data-integration/import/#returning) feature. Additionally, you cannot use Batch API with [synchronization](https://docs.skyvia.com/data-integration/synchronization/) integrations because synchronization relies on obtaining inserted record IDs to establish correspondence between source and target records. Thus, if you plan to use a HubSpot connection for synchronization of HubSpot Contacts, Companies, Tickets, or Products, or to use the Returning import feature for obtaining record IDs when loading data to these objects, you need to turn off Batch API for this connection.

Custom Objects

HubSpot allows enterprise customers to create custom objects and store data in them. Skyvia supports HubSpot custom objects and can load data from and to them. Custom object support is not enabled by default; you need to explicitly select the Use Custom Objects checkbox in the Advanced Settings to work with custom objects. Besides, if you change this setting for an existing connection, you need to clear the connection metadata cache. You can do it by clicking the Clear now link. If changes are made to the custom objects metadata, you also need to clear the metadata cache of the connection so that Skyvia can detect these changes.
Column-wise Chunking

If the Column-wise chunking checkbox is selected, requests to the ContactListContacts, Contacts, and Deals objects are split into multiple ones with different fields queried if the request URI exceeds 8000 characters. HubSpot limits the request URI to 8190 characters by default. In some cases, the ContactListContacts, Contacts, and Deals objects may have so many fields that Skyvia exceeds this limit and receives the 414 Request-URI Too Large error when querying data from all of the fields. Selecting the Column-wise chunking checkbox allows you to avoid such errors. However, in such cases the number of requests increases, so more API calls are consumed and performance decreases. You may consider querying only a part of the fields in such cases to avoid increasing the number of requests and API calls spent.

Customize Associations

With this parameter, you can select only those object associations you will work with. It allows selecting multiple associations. If you don't specify the needed associations, only the static associations will be available by default.

Disconnecting HubSpot from Skyvia

To disconnect HubSpot from Skyvia, you can uninstall Skyvia in your HubSpot settings:

1. Sign in to HubSpot.
2. Select your account in the top right corner of the page.
3. Select Profile & Preferences.
4. In the menu on the left, under Account Setup, select Integrations -> Connected Apps.
5. In the list of connected applications, find Skyvia and click Actions. Then click Uninstall.

This will revoke all the OAuth tokens in connections created on Skyvia. After this, all the Skyvia connections to your HubSpot account will stop working, and all the integrations, backups, automations, endpoints, and queries using these connections will also stop working.
Alternatively, you can disconnect HubSpot from Skyvia by deleting all the HubSpot connections you have created together with all the objects that depend on these connections: integrations, endpoints, backups, automations, and queries. Deleting the connection object and all the related objects on Skyvia ensures that no traces of data are stored on Skyvia, for example, in error or success logs.

Connector Specifics

Object Associations

HubSpot objects may relate to each other by [associated records](https://knowledge.hubspot.com/crm-setup/associate-records). Record associations are always two-way: if record A is associated with record B, record B is associated with record A. For user convenience, Skyvia represents object associations as pairs of objects with reversed names, for example, DealContacts — ContactDeals. The HubSpot object specified in the left part of the name contains less data than the object specified in the right part of the name. If you query data from each object in the pair, you receive the same results, but the query performance and the number of used API calls may differ. This approach allows users to manage query performance. For example, suppose you have many contacts with a few associated deals in HubSpot. Use the DealContacts object instead of ContactDeals to get the list of all related deals and contacts. In this case, Skyvia reads all the Deals IDs first and then retrieves the Contacts IDs. It takes less time and fewer API calls to query this association from DealContacts than from ContactDeals.

Static Association Objects

By default, the most popular associations are available in our HubSpot connector.
Skyvia represents them by the following objects:

- DealContacts — ContactDeals
- DealTickets — TicketDeals
- ContactTickets — TicketContacts
- CompanyTickets — TicketCompanies
- EngagementContacts — ContactEngagements
- EngagementTickets — TicketEngagements
- EngagementCompanies — CompanyEngagements
- EngagementDeals — DealEngagements

If you customize the association list via the Customize Associations parameter, the static association objects won't be available via this connection. Only the dynamic association objects selected in this parameter will be available instead.

Dynamic Association Objects

HubSpot allows associating standard and custom objects. You can create tens of associations between HubSpot objects and work with the specific associations in Skyvia. To do that, specify the associations you want to work with in the Customize Associations parameter in the connection editor. Skyvia will add the specified associations as separate objects in the connector. You can adjust the list of the displayed associations if needed. Associations between standard objects are activated automatically. The associations involving custom objects must be activated in the HubSpot UI. Such associations are available in Skyvia if the Use Custom Objects parameter is enabled. Skyvia represents the dynamic association objects by a combination of object names divided by an underscore, for example, Contacts_To_Deals, Deals_To_Contacts. If the relation includes a custom object, its foreign key field is named Left_ID or Right_ID, depending on the position of the custom object in the relation.

Object Peculiarities

Deals

The AssociatedCompanyIds, AssociatedVids, and AssociatedDealIds fields originate from the [deprecated](https://legacydocs.hubspot.com/docs/methods/deals/deals_overview) HubSpot API version. They are present in the Deals object for compatibility.
Exclude these fields from queries or mapping to improve performance if they are unnecessary in your query or integration. The CompanyId field is an additional virtual field in this object. Due to API specifics, Skyvia must perform additional API calls to read it, which impacts performance when querying the Deals object. Exclude this field from a query when you don't need it.

Actors

This is a read-only object. It allows only querying records by their IDs. When querying data from this object, you must specify a filter by ID with the equals operator.

Property History

Skyvia supports getting a list of changes made to records or fields in the HubSpot Deals, Contacts, Companies, Tickets, and LineItems objects. Skyvia represents history data in separate read-only objects for each of them: DealsHistory, ContactsHistory, CompaniesHistory, TicketsHistory, LineItemsHistory. These objects store large data volumes. To improve query performance, we recommend using filters when querying data from these objects.

Lists

The Lists object stores HubSpot lists - subsets of records from a specific object. Lists can have one of the following types, determined by the ProcessingType field:

- Dynamic - the list is formed automatically by filtering records of the list object. You need to specify the filter conditions in the FilterBranch field, in addition to the Name, ProcessingType, and ObjectTypeId fields, when creating such a list.
- Static - the list is filled manually in the UI or via the API. Only Name, ProcessingType, and ObjectTypeId are required.
- Snapshot - the list is formed automatically by filtering records of the list object at the time of the list creation. After this, records can be added or deleted from the list via the UI or API.
Here is an example value for the FilterBranch field:

```
"filterBranch": {
  "filterBranches": [
    {
      "filterBranches": [],
      "filters": [
        {
          "property": "city",
          "operation": {
            "operator": "CONTAINS",
            "includeObjectsWithNoValueSet": false,
            "values": [
              "test"
            ],
            "operationType": "MULTISTRING"
          },
          "filterType": "PROPERTY"
        }
      ],
      "filterBranchOperator": "AND",
      "filterBranchType": "AND"
    }
  ],
  "filters": [],
  "filterBranchOperator": "OR",
  "filterBranchType": "OR"
}
```

The required ObjectTypeId field determines which object's records constitute the list. You can find the standard object IDs in the [HubSpot documentation](https://developers.hubspot.com/docs/guides/api/crm/using-object-apis). You can also see the ID of the corresponding object in the URL when viewing the object records in the browser.

ListMemberships

This object stores information on which records belong to which list; it stores records for all the lists. The Id field of this object is a virtual field consisting of the list Id and the record Id, separated by an underscore.

*BasedListMemberships

The objects whose names start with the name of a HubSpot object and end with 'BasedListMemberships' store information on which records belong to which list for that object. They have columns both from the ListMemberships object and from the corresponding object.

Field Naming

Skyvia uses property labels for HubSpot field names, not the internal names. For example, the field with the internal name hubspot_owner_id has the label Company owner in the Companies object by default, and the label Activity assigned to in the Emails object.
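A FilterBranch value like the dynamic-list example above can be assembled programmatically before serializing it into the field. A minimal sketch assuming a single CONTAINS condition (the helper name is illustrative, not part of Skyvia or the HubSpot API):

```python
import json

# Builds a filter-branch object matching the dynamic-list example above:
# one PROPERTY filter with a MULTISTRING CONTAINS operation, wrapped in an
# AND branch inside the outer OR branch. The helper name is illustrative.
def contains_filter_branch(prop, value):
    return {
        "filterBranches": [
            {
                "filterBranches": [],
                "filters": [
                    {
                        "property": prop,
                        "operation": {
                            "operator": "CONTAINS",
                            "includeObjectsWithNoValueSet": False,
                            "values": [value],
                            "operationType": "MULTISTRING",
                        },
                        "filterType": "PROPERTY",
                    }
                ],
                "filterBranchOperator": "AND",
                "filterBranchType": "AND",
            }
        ],
        "filters": [],
        "filterBranchOperator": "OR",
        "filterBranchType": "OR",
    }

branch = contains_filter_branch("city", "test")
print(json.dumps(branch, indent=2))
```

The serialized result is what you would map to the FilterBranch field when inserting a Dynamic list record.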
DML Operations Support

Skyvia supports DML operations for the following HubSpot objects:

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | BlogAuthors, BlogPosts, BlogTopics, CallProperties, Calls, CommunicationProperties, Communications, Companies, CompanyProperties, ContactProperties, Contacts, DealPipelines, DealPipelineStages, DealProperties, Deals, EmailProperties, Emails, Engagements, Files, Folders, Forms, LineItems, ListFolders, Lists, MarketingEmails, MeetingProperties, Meetings, NoteProperties, Notes, Orders, Pages, PostalMail, PostalMailProperties, Products, Quotes, QuotesProperties, TaskProperties, Tasks, Templates, TicketPipelines, TicketPipelineStages, TicketProperties, Tickets, UrlMappings
INSERT, UPDATE | Owners
INSERT, DELETE | BroadcastMessages, Comments, CompanyContacts, CompanyDeals, CompanyEngagements, CompanyTickets, ContactCompanies, ContactDeals, ContactEngagements, ContactTickets, DealCompanies, DealContacts, DealEngagements, DealLineItems, DealTickets, EngagementCompanies, EngagementContacts, EngagementDeals, EngagementTickets, InvoiceLineItems, LineItemDeals, LineItemInvoices, ListMemberships, TicketCompanies, TicketContacts, TicketDeals, TicketEngagements, Workflows, *BasedListMemberships
UPDATE, DELETE | Threads
INSERT | Messages

Supported Actions

Skyvia supports all the common actions for HubSpot.

Data Integration Tutorials

You can read the following tutorial on HubSpot - Zoho CRM integration: How to Import Deals and Contacts from Zoho CRM to HubSpot Preserving Relations.

Troubleshooting

Current Oauth-token Does Not Have Proper Permissions!

If you get such an error, the HubSpot account you are using to connect does not have access to that tool.
For example, see the following links at the HubSpot community portal:

- [https://community.hubspot.com/t5/APIs-Integrations/Oauth-Token-does-not-have-proper-permissions-Issue/m-p/248934](https://community.hubspot.com/t5/APIs-Integrations/Oauth-Token-does-not-have-proper-permissions-Issue/m-p/248934)
- [https://community.hubspot.com/t5/APIs-Integrations/Oauth-token-does-not-have-proper-permissions-contact-api/m-p/248560](https://community.hubspot.com/t5/APIs-Integrations/Oauth-token-does-not-have-proper-permissions-contact-api/m-p/248560)

414 Request-URI Too Large

Such an error means there are too many custom fields in the object you are querying. To avoid this error for the ContactListContacts, Contacts, and Deals objects, select the Column-wise chunking checkbox in the connection editor (in Advanced Settings). Alternatively, you can exclude some fields from the query (or replication, backup, or export, depending on where you get this error). If you get such an error for other HubSpot objects, contact our support, and we will extend the effect of this checkbox to cover them.

You Have Reached Your Ten_secondly_rolling Limit

The You have reached your ten_secondly_rolling limit error happens when too many API calls to HubSpot are made within 10 seconds. This is a HubSpot API limit. To avoid such an error, make sure that your HubSpot integrations and backups on Skyvia, as well as other HubSpot integrations, don't run at the same time. If such an error occurs while making a snapshot in the HubSpot backup, you may try to divide your backup into several ones with different objects and launch them at different times.

FAQ

How can I connect Power BI to HubSpot?

You can integrate HubSpot with Power BI through Skyvia Connect. Skyvia Connect is a cloud service that allows you to easily publish your data from various sources via the OData protocol and make it available in JSON or XML format over the web.
For more details, see Connect. You can access the created endpoint directly in Power BI; refer to the [Power BI documentation](https://learn.microsoft.com/en-us/power-bi/desktop-connect-odata).

Another option is to use a replication. You move data for analysis to your relational database and access this database in Power BI. For more details, see [Replication](https://docs.skyvia.com/data-integration/replication/).

### Skyvia logs show that 200 records are not synced to HubSpot because of an error, but I don't see the exact error message in the error logs

You get no error message because Batch API is used. In this case, if one or more records in a batch fail, the whole batch is considered failed, and it is not possible to obtain an exact error message. You can temporarily clear the Use Batch API checkbox in the Advanced Settings of the connection and reimport the failed records. This can help you identify the specific records causing the error and its reason.

# Insightly CRM

[Insightly CRM](https://www.insightly.com) is a CRM platform for small and medium-sized businesses for marketing, sales, and project management automation.

Data integration: Skyvia supports importing data to and from Insightly CRM, exporting Insightly CRM data to CSV files, replicating Insightly CRM data to relational databases, and synchronizing Insightly CRM data with other cloud apps and relational databases.

Backup: Skyvia Backup supports Insightly CRM backup.

Query: Skyvia Query supports Insightly CRM.

## Establishing Connection

To create a connection, you need to specify the API Key, an automatically generated key that is used for connecting to Insightly CRM.
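An OData endpoint published via Skyvia Connect returns result sets wrapped in a top-level `value` array when consumed as JSON, which is what clients like Power BI parse. The sketch below shows that shape with a trimmed, hypothetical payload; the endpoint URL and field names are assumptions for illustration.

```python
import json

# A trimmed example of an OData v4 JSON response body.
# The @odata.context URL and the Contacts fields are hypothetical.
odata_body = '''
{
  "@odata.context": "https://connect.example.com/odata/$metadata#Contacts",
  "value": [
    {"Id": 1, "Email": "a@example.com"},
    {"Id": 2, "Email": "b@example.com"}
  ]
}
'''

def parse_odata_records(body: str) -> list:
    """OData JSON responses wrap the result set in a 'value' array."""
    return json.loads(body)["value"]

records = parse_odata_records(odata_body)
```

Power BI handles this parsing for you through its built-in OData connector; the sketch only illustrates what travels over the wire.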
### Getting Credentials

Your API Key can be found in your Insightly CRM integrations list:

1. Click the profile icon in Insightly CRM and then click User Settings.
2. In the API section of the User Settings page, copy your API Key and paste it to the connection editor.

### Creating Connection

Paste the obtained API key to the API Key box in the connection editor.

### Additional Connection Parameters

Use Custom Fields. Determines how Skyvia presents Insightly CRM custom fields and custom objects.

Suppress Extended Requests. If an object has custom fields of the Image type, Insightly CRM API does not return these field values when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.

## Connector Specifics

### Object Peculiarities

Attachments. Skyvia does not support getting the content of file attachments; it allows getting only their metadata. Skyvia does not display files linked from Dropbox or other sources in the *Attachments objects at all.

Emails. Since Insightly CRM API does not allow getting full Emails data with full email texts, Skyvia also does not allow getting them.

TaskComments. Insightly CRM API does not allow creating task comments with a body of more than 200 characters, including the HTML formatting code. Thus, Skyvia can back up and load large comments from Insightly CRM, but cannot restore or load large comments to Insightly CRM because of this API limitation.

### Backup Peculiarities

Skyvia supports restoring links from a backup only when the target link object still exists in Insightly CRM.
For example, Skyvia can restore a deleted contact with its links to other objects, like organizations, if these organizations were not deleted from Insightly CRM. Since *Links objects use a polymorphic relation to the target object, the foreign key to the target object is not treated as a relation.

Some links cannot be restored at all, for example, links between events and leads, because Insightly CRM API does not return target object information for such links. Skyvia can restore events to the Insightly CRM calendar, but it cannot link them to some of the objects, for example, Leads, if the events were linked to them.

Skyvia cannot back up the Email body, because Insightly CRM API does not return the full email body. Emails cannot be restored.

Skyvia works with Insightly CRM via their API. There are limits on how many Insightly API calls can be performed per day and per second. You can read about these limits in the [Insightly CRM documentation](https://api.insightly.com/v3.1/Help#!/Overview/Technical_Details) (scroll down to the Technical Details -> Rate Limiting / Throttling Requests section). These limitations may cause issues when performing Insightly CRM backup if your Insightly CRM instance contains a lot of data. Such a backup may fail with the Too many requests error. If you experience this error, you can split your backup into multiple ones, backing up different objects, and schedule them to run at different times or even on different days to avoid exceeding API call limits.

### Custom Fields

The following objects in Insightly CRM can have custom fields: Opportunities, Contacts, Leads, OpportunityLineItems, Products, Tasks, and Projects. Skyvia presents Insightly CRM custom fields in two ways: the new one, just like standard object fields, and the legacy one, as a single CustomFields field storing an array of JSON objects.
#### Legacy Custom Fields Representation

Insightly API represents the CustomFields field as an array containing information about custom fields. For example:

[{"FIELD_NAME":"cf_op_number__c","FIELD_VALUE":"5444","CUSTOM_FIELD_ID":"cf_op_number__c"},
 {"FIELD_NAME":"cf_opp_text__c","FIELD_VALUE":"qwerty","CUSTOM_FIELD_ID":"cf_opp_text__c"}]

The CustomFields field supports INSERT and UPDATE operations. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. To replicate such fields as separate tables, set the Unwind Nested Objects option to Separate tables in your Replication.

#### Custom Objects

All records of Insightly CRM custom objects are available via the CustomObjectRecords object. It has several predefined fields, like ObjectName containing the corresponding custom object name, and others. All the specific custom object field values are available via the CustomFields field, storing them as an array of JSON objects, as described above.

With the Use Custom Fields checkbox selected, Skyvia also presents Insightly CRM custom objects as separate objects with the corresponding names ending with the _CustomObjectRecords suffix. These objects have the common fields of the CustomObjectRecords object as well as the specific object fields.

### Incremental Replication and Synchronization

Replication with incremental updates and synchronization are not supported for objects without the CreatedDate and UpdatedDate fields. Replication with incremental updates requires at least one of these fields; both fields must be present for synchronization.
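If you consume the legacy representation downstream, the array can be flattened into a plain field-to-value mapping. This is a minimal sketch assuming the exact array shape shown in the example above; it is not a Skyvia feature.

```python
import json

def flatten_custom_fields(custom_fields_json: str) -> dict:
    """Turn the legacy CustomFields array into a flat
    {FIELD_NAME: FIELD_VALUE} mapping."""
    return {
        item["FIELD_NAME"]: item["FIELD_VALUE"]
        for item in json.loads(custom_fields_json)
    }

raw = ('[{"FIELD_NAME":"cf_op_number__c","FIELD_VALUE":"5444",'
       '"CUSTOM_FIELD_ID":"cf_op_number__c"},'
       '{"FIELD_NAME":"cf_opp_text__c","FIELD_VALUE":"qwerty",'
       '"CUSTOM_FIELD_ID":"cf_opp_text__c"}]')
flat = flatten_custom_fields(raw)
# flat == {"cf_op_number__c": "5444", "cf_opp_text__c": "qwerty"}
```

Skyvia's Nested Objects mapping and the Unwind Nested Objects replication option perform the equivalent transformation inside the service.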
Thus, only the following objects support synchronization: ContactNotes, Contacts, Events, Leads, LeadNotes, Opportunities, OpportunityLineItems, OpportunityNotes, OrganisationNotes, Organisations, PriceBookEntries, PriceBooks, Products, ProjectMilestones, ProjectNotes, Projects, Quotes, QuoteLineItems, Tasks, Teams.

### DML Operations Support

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | ContactLinks, Contacts, CustomObjectRecords, Events, FileCategories, LeadLinks, Leads, LeadSources, LeadStatuses, Opportunities, OpportunityCategories, OpportunityLineItems, OpportunityLinks, OrganisationLinks, Organisations, PriceBookEntries, Products, ProjectCategories, ProjectLinks, ProjectMilestones, Projects, QuoteLineItems, Quotes, TaskCategories, Tasks, Teams |
| INSERT, UPDATE | LeadNotes, OpportunityNotes, OrganisationNotes, PriceBooks, ProjectNotes |
| UPDATE, DELETE | ContactNotes, Notes |
| INSERT, DELETE | EmailLinks, EventLinks, NoteLinks, TaskLinks, TeamMembers |
| INSERT | EmailComments, NoteComments, TaskComments |
| UPDATE | OrganisationDates |
| DELETE | Emails |

### Supported Actions

Skyvia supports all the common actions for Insightly CRM.

# Intercom

[Intercom](https://www.intercom.com/) is a complete customer communications platform with bots, apps, product tours, etc., that enables targeted communication with customers on the website, inside web and mobile apps, and more.

Data integration: Skyvia supports importing data to and from Intercom, exporting Intercom data to CSV files, replicating Intercom data to relational databases, and synchronizing Intercom data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Intercom backup.

Query: Skyvia Query supports Intercom.
## Establishing Connection

To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Intercom, you need to sign in to Intercom.

### Creating Connection

To connect to Intercom, perform the following steps:

1. Click Sign In with Intercom in the Connection Editor.
2. In the opened window, enter the email and password you used when signing up to Intercom.
3. Click Authorize access.

### Additional Connection Parameters

Suppress Extended Requests. Intercom API returns only part of the fields for some objects when querying multiple records. To query the values of the additional fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. To reduce the number of API calls, select the Suppress Extended Requests checkbox. Note, however, that some fields in such objects will not be available in Skyvia (they will return empty values) even if they have values in Intercom, because its API does not return them without extended requests. The list of such additional fields is the following:

| Object | Fields |
| --- | --- |
| Articles | Statistics_Type, Statistics_Views, Statistics_Conversations, Statistics_Reactions, Statistics_HappyReactionPercentage, Statistics_NeutralReactionPercentage, Statistics_SadReactionPercentage |

## Connector Specifics

### Object Peculiarities

Companies. When you query all data from this object, you get only the companies associated with specific users. To get companies unrelated to any user, use a filter by Id in your query. The Companies object and the related CompanySegments and CompanyContacts objects support only one querying session at a time. Do not use the Companies, CompanySegments, or CompanyContacts objects in a single integration or in separate integrations that run in parallel.

Contacts. To import data to the Contacts object, map either the Email or the ExternalId field.

Conversations. The INSERT operation creates new conversations.
The From_Type field accepts the following valid values: User, Lead, Contact. Specify the Id of the corresponding user, lead, or contact in the From_Id field. Specify the message text in the Body field.

Tickets and *Tickets. The TicketAttributes field in the Tickets object has a complex structure in JSON format. The set of ticket attributes depends on the ticket type. For user convenience, we create a separate object for each ticket type. You can recognize such objects by the ticket type name and the *Tickets suffix in their name.

DataAttributes. When you query this object, the Model field returns three value options: Contact, Company, and Conversation. You can insert data attributes only for Model = Contact or Company.

### Custom Fields

The following Intercom objects support custom fields: Contacts, Companies, Conversations, Tickets. Custom fields support the following data types:

| Intercom Data Type | DbType | Comment |
| --- | --- | --- |
| Text | String | For the Conversations and Tickets objects, the field length is 1000 characters. For fields with names containing memo, note, description, comment, notes, address, or ending with url, reason, keywords, the length is 4000 characters. If a field name contains content or html, the length is 2147483647 characters. For the Contacts and Companies objects, the length is 255 characters. |
| Number | Int32 | |
| Decimal Number | Double | |
| Date | Date | May be used in the Contacts and Companies objects |
| DateTime | DateTime | May be used in the Conversations and Tickets objects. Read-only for the Tickets objects. |
| True or false | Boolean | |
| List | String | Read-only |
| File upload | JSON Array | Read-only field. Includes the following fields: Name (String), Url (String), ContentType (String), FileSize (Int64), Width (String), Height (String) |
| File upload | FileComplexType | Read-only complex structured object |

Wait a few minutes before querying the Contacts object containing updated custom field values, or query the updated contact records using a filter by Id.
If you update custom field values and query the whole Contacts object immediately, you get the old values.

### DML Operations Support

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | Articles, Collections, Companies, Contacts, Tags |
| INSERT, DELETE | ContactCompanies, ContactsSubscriptions, ContactTags |
| INSERT, UPDATE | Conversations, Tickets, TicketTypes |
| INSERT | DataAttribute, Notes |

### Incremental Replication and Synchronization

Skyvia supports synchronization for the following Intercom objects: Articles, Companies, Collections, Contacts, Conversations, Tickets, TicketTypes.

Skyvia supports replication with incremental updates for the following Intercom objects: Articles, ActivityLogs, Companies, Collections, CompanySegments, CompanyContacts, Contacts, ContactSegments, ContactCompanies, Conversations, Events, HelpCenters, Notes, Segments, Tickets, TicketTypes.

### Stored Procedures

Skyvia represents part of the supported Intercom features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.
#### ArchiveContact

To archive a specific contact, use the command

call ArchiveContact(:id)

#### UnarchiveContact

To unarchive a specific contact, use the command

call UnarchiveContact(:id)

#### ContactReplyToConversation

To reply to a conversation with a message on behalf of a contact, use the command

call ContactReplyToConversation(:conversation_id, :intercom_user_id, :body)

#### AdminReplyToConversation

To reply to a conversation with a message from an admin, use the command

call AdminReplyToConversation(:conversation_id, :message_type, :admin_id, :body)

| Parameter Name | Description |
| --- | --- |
| Conversation_id | Conversation identifier |
| Message_type | Valid values are comment, note |
| Admin_id | Admin identifier |
| Body | Message text |

#### AddTagToConversation

To add a tag to a specific conversation, use the command

call AddTagToConversation(:conversation_id, :admin_id, :tag_id)

#### RemoveTagFromConversation

To remove a tag from a specific conversation, use the command

call RemoveTagFromConversation(:conversation_id, :tag_id, :admin_id)

#### CloseConversation

To close a conversation, use the command

call CloseConversation(:conversation_id, :admin_id)

#### SnoozeConversation

To snooze a conversation to reopen on a future date, use the command

call SnoozeConversation(:conversation_id, :admin_id)

#### OpenConversation

To open a conversation which is snoozed or closed, use the command

call OpenConversation(:conversation_id, :admin_id)

#### AssignConversation

To assign a conversation to an admin and/or team, use the command

call AssignConversation(:conversation_id, :type, :admin_id, :assignee_id)

| Parameter Name | Description |
| --- | --- |
| Conversation_id | Conversation identifier |
| Type | Valid values are admin, team |
| Admin_id | Admin identifier |
| Assignee_id | The identifier of the admin or the team |

#### ConvertConversationToTicket

To convert a conversation to a ticket, use the command

call ConvertConversationToTicket(:conversation_id, :ticket_type_id)

#### SubmitEvent

To submit an event, use the command

call SubmitEvent(:event_name, :created_at, :id, :metadata)

| Parameter Name | Description |
| --- | --- |
| Event_name | Name of the event |
| Created_at | Event date and time |
| Id | Identifier of the contact for which the event is being created |
| Metadata | Optional metadata about the event in JSON format |

#### CreateEmailMessage

To create an email message that an admin has initiated, use the command

call CreateEmailMessage(:subject, :body, :template, :from_id, :to_type, :to_id)

| Parameter Name | Description |
| --- | --- |
| Subject | The message subject |
| Body | The message text |
| Template | Style of the outgoing message. Valid values are plain or personal |
| From_id | Sender identifier |
| To_type | Role associated with the contact: user or lead |
| To_id | Recipient identifier |

#### CreateInAppMessage

To create an in-app message that an admin has initiated, use the command

call CreateInAppMessage(:subject, :body, :template, :from_id, :to_type, :to_id)

| Parameter Name | Description |
| --- | --- |
| Subject | The message subject |
| Body | The message text |
| Template | Style of the outgoing message. Valid values are plain or personal |
| From_id | Sender identifier |
| To_type | Role associated with the contact: user or lead |
| To_id | Recipient identifier |

#### CreateTicketTypeAttribute

To create a new attribute for a ticket type, use the command

call CreateTicketTypeAttribute(:ticket_type_id, :name, :description, :data_type, :required_to_create, :required_to_create_for_contacts, :visible_on_create, :visible_to_contacts, :multiline, :allow_multiple_values)

| Parameter Name | Description |
| --- | --- |
| Ticket_type_id | Identifier of the ticket type attribute |
| Name | Ticket type attribute name |
| Description | Description of the ticket type attribute |
| Data_type | The type of the data attribute. Valid values: string, list, integer, decimal, boolean, datetime, files |
| Required_to_create | Whether the attribute is required or not for teammates. Boolean. Default value is false |
| Required_to_create_for_contacts | Whether the attribute is required or not for contacts. Boolean. Default value is false |
| Visible_on_create | Whether the attribute is visible or not to teammates. Default value is true |
| Visible_to_contacts | Whether the attribute is visible or not to contacts. Default value is true |
| Multiline | Whether the attribute allows multiple lines of text (applicable to string attributes). Boolean |
| Allow_multiple_values | Whether the attribute allows multiple files to be attached to it (applicable to file attributes). Boolean |

#### UpdateTicketTypeAttribute

To update an existing attribute for a ticket type, use the command

call UpdateTicketTypeAttribute(:ticket_type_id, :name, :description, :required_to_create, :required_to_create_for_contacts, :visible_on_create, :visible_to_contacts, :multiline, :allow_multiple_values, :archived)

| Parameter Name | Description |
| --- | --- |
| Ticket_type_id | Identifier of the ticket type attribute |
| Name | Ticket type attribute name |
| Description | Description of the ticket type attribute |
| Data_type | The type of the data attribute. Valid values: string, list, integer, decimal, boolean, datetime, files |
| Required_to_create | Whether the attribute is required or not for teammates. Boolean. Default value is false |
| Required_to_create_for_contacts | Whether the attribute is required or not for contacts. Boolean. Default value is false |
| Visible_on_create | Whether the attribute is visible or not to teammates. Default value is true |
| Visible_to_contacts | Whether the attribute is visible or not to contacts. Default value is true |
| Multiline | Whether the attribute allows multiple lines of text (applicable to string attributes). Boolean |
| Allow_multiple_values | Whether the attribute allows multiple files to be attached to it (applicable to file attributes). Boolean |
| Archived | Whether the ticket type attribute is archived or not. Boolean |

### Supported Actions

Skyvia supports all the common actions for Intercom.
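The stored procedure calls above are plain command text with positional parameters. When generating such commands programmatically, string values need quoting. The helper below is a hypothetical sketch, not a Skyvia API; the single-quote convention is an assumption modeled on the examples in this documentation.

```python
def build_call_command(procedure: str, *args) -> str:
    """Render a stored-procedure call as command text.
    String arguments are single-quoted with embedded quotes doubled
    (an assumed convention); numbers are emitted as-is."""
    def render(value):
        if isinstance(value, (int, float)):
            return str(value)
        return "'" + str(value).replace("'", "''") + "'"
    return f"call {procedure}({', '.join(render(a) for a in args)})"

cmd = build_call_command("CloseConversation", 123, 456)
# cmd == "call CloseConversation(123, 456)"
```

The resulting text can then be supplied, for example, to an ExecuteCommand action in a Data Flow.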
# Iterable

[Iterable](https://iterable.com/) is a cross-channel marketing platform that powers unified customer experiences and empowers you to create, optimize, and measure every interaction across the entire customer journey.

Data integration: Skyvia supports importing data to and from Iterable, exporting Iterable data to CSV files, replicating Iterable data to relational databases, and synchronizing Iterable data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Iterable.

Query: Skyvia Query supports Iterable.

## Establishing Connection

To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Iterable, you need to specify your API Key.

### Getting Credentials

To get a new Iterable API Key, perform the following steps:

1. Sign in to Iterable.
2. Click Integrations in the top menu.
3. Select API keys.
4. Click New API Key.

The API Key is available for copying only once, when it is created, so copy it and store it in a safe place.

### Creating Connection

To connect to Iterable, select the Data Center and paste the obtained API key into the API Key box in Skyvia.

### Additional Connection Parameters

Suppress Extended Requests. Iterable API returns only part of the fields for some objects when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests for each record of such an object.
However, this can decrease performance and significantly increase the number of API calls. The list of additional fields is the following:

| Object | Fields |
| --- | --- |
| Templates | CreatorUserId, MessageTypeId, Metadata, FromName, FromEmail, ReplyToEmail, Subject, PreheaderText, CcEmails, BccEmails, HTML, PlainText, DataFeedIds, CacheDataFeed, MergeDataFeedContext |

To reduce the number of API calls, select the Suppress Extended Requests checkbox. Note, however, that the fields listed above will not be available in Skyvia (they will return empty values) even if they have values in Iterable, because its API does not return them without extended requests.

## Connector Specifics

### Object Peculiarities

Campaigns. The SendAt, SendMode, StartTimeZone, DefaultTimeZone, and DataFields fields are added only for use in import operations. They always return empty values.

Events. When importing data to the Events object, in addition to the required EventName field, you also need to specify one of these fields: Email or UserId.

### Required Filters

Retrieving data from the Events, InAppMessages, and Users objects is only possible when filtering by Email.

### DML Operations Support

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | CatalogItems |
| INSERT, UPDATE | Templates |
| INSERT, DELETE | Catalogs, Lists |
| INSERT | Events, Campaigns, InAppMessages |

### Incremental Replication and Synchronization

Synchronization is supported for the following objects: CatalogItems, Templates.

Replication with incremental updates is supported for the following objects: Campaigns, CatalogItems, Events, InAppMessages, Lists, MessagesTypes, Templates.

Note that you need to add filters by the Email field for the Events and InAppMessages objects in a replication if you want to replicate these objects.
### Stored Procedures

Skyvia represents part of the supported Iterable features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

#### AbortCampaign

The following command aborts a campaign with the specified id.

call AbortCampaign(:campaignId)

#### TriggerCampaign

The following command triggers a campaign with the specified id against the specified lists.

call TriggerCampaign(:campaignId, :listIds, :suppressionListIds, :dataFields, :allowRepeatMarketingSends)

| Parameter Name | Description |
| --- | --- |
| CampaignId | The campaign identifier |
| ListIds | The identifiers of the lists in the array format, for example [1, 2, 3] |
| SuppressionListIds | The identifiers of the lists to suppress in the array format, for example [1, 2, 3] |
| DataFields | Fields to merge into the handlebars context in the JSON Object format |
| AllowRepeatMarketingSends | Boolean parameter, which defines whether to allow repeating marketing sends. It is true by default |

#### CancelScheduledOrRecurringCampaign

The following command cancels a campaign with the specified id.

call CancelScheduledOrRecurringCampaign(:campaignId)

### Supported Actions

Skyvia supports all the common actions for Iterable.

# Jibble

[Jibble](https://www.jibble.io/) is a time tracking and attendance app that helps companies stay on top of their teams' activities.

Data integration: Skyvia supports importing data to and from Jibble, exporting Jibble data to CSV files, replicating Jibble data to relational databases, and synchronizing Jibble data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Jibble backup.

Query: Skyvia Query supports Jibble.
## Establishing Connection

### Getting Credentials

To connect to Jibble, you need to fill out Client Id and Client Secret. To get them, do the following:

1. Log in to your [Jibble](https://www.jibble.io/) account.
2. Go to Settings -> Organization -> Create New Secret.
3. Create a name for your API Key and click Save. A pop-up window will appear.
4. Use the Copy buttons next to API Key ID and API Key Secret to copy their values.

### Creating Connection

To [create a Jibble connection](https://docs.skyvia.com/connections/#creating-connection), enter your Client Id and Client Secret in the corresponding fields and click Create Connection.

## Connector Specifics

### Object Peculiarities

Timesheets. You must set a filter by the Date field to get the records from the Timesheets table. Additionally, you can set a filter by the Period field that accepts the following values: Day, Month, Week.

## Stored Procedures

### ClockIn

Use the following command to start the timer.

call ClockIn(:personId, :type, :activityId, :projectId, :clientType, :platform_clientVersion, :platform_os, :platform_deviceModel, :platform_deviceName, :coordinates_latitude, :coordinates_longitude)

| Parameter Name | Description |
| --- | --- |
| PersonId | Person identifier |
| Type | Defines whether to start or end the timer. Use the In value to start the timer |
| ActivityId | The identifier of the activity for the current time entry |
| ProjectId | The identifier of the project which the specified activity belongs to |
| ClientType | The type of the client you use to track time, for example Web |
| Platform_clientVersion | Platform client version, for example web 3.0 |
| Platform_os | Operating system, for example Windows 10 |
| Platform_deviceModel | The model of the device, for example MacbookPro |
| Platform_deviceName | Name of the device, for example TestLaptop |
| Coordinates_latitude | Latitude, for example 42 |
| Coordinates_longitude | Longitude, for example -13.5 |

Example:

call ClockIn('a2786385-b7a2-4e28-a725-cba33f79229e', 'In', '0eef5b96-dca7-46ae-88a8-a889d5e1973f', 'd15607ac-d2cc-4e14-acbe-3681d069de75', 'Web', 'web 3.0', 'Windows 10', 'MacbookPro', 'TestLaptop', 42, -13.5)

### ClockOut

To stop the timer, use the following command

call ClockOut(:personId, :type, :clientType, :platform_clientVersion, :platform_os, :platform_deviceModel, :platform_deviceName)

| Parameter Name | Description |
| --- | --- |
| PersonId | Person identifier |
| Type | Defines whether to start or end the timer. Use the Out value to end the timer |
| ClientType | The type of the client you use to track time, for example Web |
| Platform_clientVersion | Platform client version, for example web 3.0 |
| Platform_os | Operating system, for example Windows 10 |
| Platform_deviceModel | The model of the device, for example MacbookPro |
| Platform_deviceName | Name of the device, for example TestLaptop |

Example:

call ClockOut('a2786385-b7a2-4e28-a725-cba33f79229e', 'Out', 'Web', 'web 3.0', 'Windows 10', 'MacbookPro', 'TestLaptop')

## Incremental Replication and Synchronization

Incremental replication is supported for the following objects: Activities, Clients, Groups, LatestTimeEntries, Locations, Organizations, People, Projects.

Synchronization is supported for the following objects: Locations, People.

## DML Operations

Skyvia supports the following DML operations for Jibble objects:

| Operations | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | Locations, People |
| UPDATE | LatestTimeEntries, Organizations |
| INSERT | Clients, Groups, Projects |

## Supported Actions

Skyvia supports all the common actions available for most of the connectors for Jibble.

# Jira Service Management

[Jira Service Management](https://www.atlassian.com/software/jira/service-management/features) is Atlassian's service management solution for IT and service teams. Built on Jira, it encompasses deeper service management practices across service request, incident, problem, change, knowledge, asset, and configuration management.

Data integration: Skyvia supports importing data to and from Jira Service Management, exporting data from Jira Service Management to CSV files, and replicating data from Jira Service Management to relational databases.
Backup: Skyvia Backup does not support Jira Service Management.

Query: Skyvia Query supports Jira Service Management.

## Establishing Connection

To create a connection to Jira Service Management, you need to enter your site and log in to Jira Service Management via OAuth 2.0.

### Creating Connection

To create a Jira Service Management connection, perform the following steps:

1. Paste your Jira domain, which you can find in your Jira account profile, to the Site field.
2. Click Sign In with Jira Service Management.
3. In the opened window, enter the Atlassian credentials you used when registering in Jira and click Log in.
4. Select the Site that you need to access from Skyvia.
5. Click Accept.

## Connector Specifics

### Object Peculiarities

RequestParticipants. The RequestParticipants object has the AccountIds field with the JsonArray type. This field is used for import only and does not return any data. It is a required field. When importing data to the AccountIds field, provide a value in the following format:

[
  "qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3581db05e2a66fa80b",
  "qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3a01db05e2a66fa80bd",
  "qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d69abfa3980ce712caae"
]

AssetSchemaObjecttypeAttributes. If you update a record where the DefaultType_Id field is null, you must map the DefaultType_Id field to 0. If this field is not null, no additional mapping is needed.

### Incremental Replication and Synchronization

The following objects support replication with incremental updates: AssetObjects, AssetSchemaObjecttypes, AssetSchemaObjecttypesFlat, AssetSchemas, RequestApprovals, RequestAttachments, RequestCommentAttachments, Requests, RequestComments, RequestInQueues.

Skyvia detects only the new records for the following objects: RequestApprovals, RequestAttachments, RequestCommentAttachments, Requests, RequestComments, RequestInQueues.

Synchronization is not supported for Jira Service Management.
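A JsonArray value like the AccountIds field expects above is simply a JSON-serialized list of account ID strings. The sketch below is a minimal illustration of producing such a value, assuming the format shown in this documentation; the helper name is hypothetical.

```python
import json

def account_ids_value(account_ids: list) -> str:
    """Serialize account IDs into the JSON array string expected
    by a JsonArray field such as AccountIds."""
    return json.dumps(account_ids)

value = account_ids_value([
    "qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3581db05e2a66fa80b",
    "qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3a01db05e2a66fa80bd",
])
```

Using a JSON serializer rather than string concatenation guarantees correct quoting and escaping of the individual IDs.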
## DML Operations Support

| Operation | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | AssetSchemas, AssetSchemaObjecttypes, AssetSchemaObjecttypeAttributes, AssetObjects |
| INSERT, DELETE | Customers, Organizations, RequestFeedbacks |
| UPDATE, DELETE | OrganizationProperties, RequestTypeProperties |
| INSERT | RequestAttachments, RequestComments, RequestParticipants, Requests, RequestTransitions |
| UPDATE | RequestApprovals |

## Stored Procedures

Skyvia represents part of the supported Jira Service Management features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

### AddCustomersToServiceDesk

This command adds one or more customers to a service desk. If any of the specified customers are already associated with the service desk, no changes are made to them.

call AddCustomersToServiceDesk(:serviceDeskId, :accountIds)

| Parameter Name | Description |
| --- | --- |
| ServiceDeskId | The service desk identifier |
| AccountIds | One or more identifiers of the customers to be added to the specified service desk, in JSON format. For example ["qm:9d14b428-3b7a-463d-8807-d8a413df01c5:33a26c5b-f924-4c40-ac5d-68c2b4163432"] |

### RemoveCustomers

Use the following command to remove customers from a service desk. The service desk must have closed access. Specified customers who do not have access to the specified service desk are ignored by this procedure.

call RemoveCustomers(:serviceDeskId, :accountIds)

| Parameter Name | Description |
| --- | --- |
| ServiceDeskId | Id of the service desk to remove customers from |
| AccountIds | A JSON array of customer IDs to remove from the specified service desk |

### AddOrganizationToServiceDesk

This command adds an organization to a service desk. If the specified organization is already associated with the service desk, no change is made to this organization.
```
call AddOrganizationToServiceDesk(:serviceDeskId, :organizationId)
```

**RemoveOrganizationFromServiceDesk**

To remove an organization from a service desk, use the following command:

```
call RemoveOrganizationFromServiceDesk(:serviceDeskId, :organizationId)
```

**AddUsersToOrganization**

Use the following command to add records to the OrganizationUsers object.

```
call AddUsersToOrganization(:organizationId, :accountIds)
```

| Parameter Name | Description |
| --- | --- |
| OrganizationId | Id of the organization to add users to. |
| AccountIds | A JSON array of account IDs to add to the organization. |

An example of calling the stored procedure:

```
call AddUsersToOrganization(1, '["qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3581db05e2a66fa80b","qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3a01db05e2a66fa80bd","qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d69abfa3980ce712caae"]')
```

**RemoveUsersFromOrganization**

```
call RemoveUsersFromOrganization(:organizationId, :accountIds)
```

| Parameter Name | Description |
| --- | --- |
| OrganizationId | Id of the organization to remove users from. |
| AccountIds | A JSON array of account IDs to remove from the organization. |

An example of calling the stored procedure:

```
call RemoveUsersFromOrganization(1, '["qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3581db05e2a66fa80b","qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d3a01db05e2a66fa80bd","qm:a713c8ea-1075-4e30-9d96-891a7d181739:5ad6d69abfa3980ce712caae"]')
```

**SubscribeToRequestNotifications**

Use the following command to subscribe to request notifications.

```
call SubscribeToRequestNotifications(:issueId)
```

**UnsubscribeToRequestNotifications**

Use the following command to unsubscribe from request notifications.

```
call UnsubscribeToRequestNotifications(:issueId)
```

**RemoveRequestParticipants**

Use the following command to remove records from the RequestParticipants object.

```
call RemoveRequestParticipants(:issueId, :accountIds)
```

| Parameter Name | Description |
| --- | --- |
| IssueId | Id of the issue to remove participants from. |
| AccountIds | A JSON array of account IDs to remove from the participant list. |
You can see an example value for this parameter above, in the description of the AddUsersToOrganization procedure.

**PerformCustomerRequestTransition**

Use the following command to perform the specified customer transition for the specified request (add a record to the RequestTransitions object).

```
call PerformCustomerRequestTransition(:issueId, :id)
```

| Parameter Name | Description |
| --- | --- |
| IssueId | Id of the issue to perform a transition of. |
| Id | Id of the transition to perform. |

### Supported Actions

Skyvia supports all the common actions for Jira Service Management.

## [Jira](https://docs.skyvia.com/connectors/cloud-sources/jira_connections.html)

[Jira](https://www.atlassian.com/software/jira) is an issue tracking solution with bug tracking and project management functions.

Data integration: Skyvia supports importing data to and from Jira, exporting Jira data to CSV files, replicating Jira data to relational databases, and synchronizing Jira data with other cloud apps and relational databases.
Backup: Skyvia Backup supports Jira backup.
Query: Skyvia Query supports Jira.

### Terminology

Jira API versions. Version 2 and version 3 offer the same collection of operations. However, version 3 provides support for the [Atlassian Document Format](https://developer.atlassian.com/cloud/jira/platform/apis/document/structure/) (ADF) in:

- body in comments, including where comments are used in issue, issue link, and transition resources.
- comment in worklogs.
- description and environment fields in issues.
- textarea type custom fields (multi-line text fields) in issues. Single-line custom fields (textfield) accept a string and don't handle Atlassian Document Format content.

### Establishing Connection

Skyvia supports Jira Cloud and Jira Server connections. You may connect to Jira Cloud through API versions 2.0 and 3.0 with OAuth and Basic authentication.
A Jira Server connection provides Basic authentication only.

#### Getting Credentials

To connect to a Jira instance, you need to fill in the **Site** field, your Jira user name, and an **API token**.

Site is the public IP address of your Jira instance or its DNS analog. Copy its value from the address bar. It should look like https://yourcompanyname.atlassian.net.

You can manage your Atlassian API tokens on the [Atlassian Token Management](https://id.atlassian.com/manage-profile/security/api-tokens) page. Refer to the Atlassian API Tokens documentation in case of any questions.

#### Creating Connection

[Creating a connection](https://docs.skyvia.com/connections/#creating-connections) to Jira may differ slightly depending on whether you connect to Jira Server or Jira Cloud.

**Jira Server Connection**

1. Enter your **Site** address.
2. Select **Server** in the **Environment** dropdown.
3. Enter your user name and API token. You can manage your Atlassian API tokens [here](https://id.atlassian.com/manage/api-tokens).
4. Click **Create Connection**.

**Jira Cloud Connection**

1. Enter your **Site** address.
2. Select **Cloud** in the **Environment** dropdown.
3. Choose the preferred **API version**.
4. Choose the **Authentication** type. If you choose **Basic**, enter your Jira login and password. If you choose **OAuth**, log in with your Jira account; your access token will be applied automatically.
5. Click **Create Connection**.

### Connector Specifics

#### Object Peculiarities

Some Jira objects can be accessed only via their parent objects. For example, to query the ProjectComponents, ProjectStatuses, ProjectProperties, and ProjectVersions objects, the Jira API requires the id of the corresponding project. Skyvia does not require the id of the parent object from the user. If you don't specify the ids of the parent objects, Skyvia queries all parent objects and gets their ids. Then it queries child objects for the parent object records. This allows querying child objects without knowing their parents. However, this method takes a lot of time and consumes many API calls.
It uses at least one API call for every parent object, even if this object does not have any child records. It is strongly recommended to use filters on the parent object fields when querying data from child objects. This limits the number of parent object records for which child object data must be queried.

#### Nested Objects

Some Jira objects contain fields that store complex, nested data in JSON format. These include:

| Object | Field |
| --- | --- |
| Issues | IssueLinks |
| IssueChangeLogs | Items |

The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To replicate nested data into separate tables with the new replication runtime, select the **Separate Tables** option for **Unwind Nested Objects**.

#### Incremental Replication and Synchronization

Incremental Replication requires the object to have either a CreateDate or UpdatedDate field. Synchronization requires both fields.

#### DML Operations

Skyvia supports the following DML operations for Jira objects:

| Operations | Objects |
| --- | --- |
| INSERT, UPDATE, DELETE | Filters, IssueComments, IssueLinkTypes, Issues, IssueTypes, IssueWorklogs, PermissionSchemes, ProjectCategories, ProjectComponents, ProjectRoles, Projects, ScreenTabs |
| INSERT, UPDATE | ProjectVersions |
| INSERT, DELETE | SharePermissions |
| INSERT | ApplicationProperties, Groups, IssueFields, IssueVotes, IssueWatchers, ScreenTabFields |

### Supported Actions

Skyvia supports all the common actions for Jira.

## [Jira Software Cloud](https://docs.skyvia.com/connectors/cloud-sources/jira_software_cloud_connections.html)

[Jira Software Cloud](https://www.atlassian.com/software/jira) is a project management tool to plan and track work across every team.

Data integration: Skyvia supports importing data to and from Jira, exporting Jira data to CSV files, and replicating Jira data to relational databases.
Backup: Skyvia Backup does not support Jira Software Cloud.
Query: Skyvia Query supports Jira Software Cloud.

### Establishing Connection

To create a connection to Jira Software Cloud, you need to have the **Site URL**.

#### Creating Connection

To connect to Jira Software Cloud, perform the following steps:

1. Enter your **Site** address.
2. Click **Sign In with Jira Software Cloud**.
3. Enter your Atlassian credentials.
4. Click **Log in**.

#### Additional Connection Parameters

**Use Custom Objects**

This option allows you to work with BoardIssues custom objects. If enabled, you will see two new kinds of objects:

- **Issues**: a custom object that contains objects with all custom fields of all types for a specific project.
- **BoardIssues**: custom objects, each containing the custom fields for a specific board, project, and issue type.

These objects support native filtering. You can see the supported fields in the Filtering Specifics.

If the name of the **Board**, **Project**, or **Issue Type** contains spaces or underscores, Skyvia replaces these symbols. For example, if BoardName = Skyvia Board, ProjectName = Skyvia_Project, and IssueTypeName = Bug, the object will be named Skyvia_Board Skyvia__Project Bug Issues.

**Metadata Cache**

You can specify the period after which the Metadata cache is considered expired.
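The naming rule above can be sketched in code. The escaping order is an assumption inferred from the single documented example (existing underscores are doubled, then spaces become single underscores), so treat this as illustrative only:

```python
def board_issues_object_name(board, project, issue_type):
    """Compose the custom object name from board, project, and issue type
    names. Escaping inferred from the documented example: underscores are
    doubled first, then spaces are replaced with single underscores."""
    def esc(name):
        return name.replace("_", "__").replace(" ", "_")
    return f"{esc(board)} {esc(project)} {esc(issue_type)} Issues"

name = board_issues_object_name("Skyvia Board", "Skyvia_Project", "Bug")
# matches the documented example: "Skyvia_Board Skyvia__Project Bug Issues"
```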
### Connector Specifics

#### Custom Fields

Jira Software Cloud can have the following custom fields:

| Jira Software Cloud Type | DBType |
| --- | --- |
| Checkbox | Boolean |
| Checkboxes | String |
| Connection | String |
| Custom formula | Double |
| Date of First Response | DateTime |
| Date Picker | Date |
| Date Time Picker | DateTime |
| Days since last comment | Int64 |
| Domain of Assignee | String |
| Domain of Reporter | String |
| External asset platform | String |
| Global Rank | String |
| Group Picker (single group) | String |
| Group Picker (multiple groups) | String |
| Labels | String |
| Last commented by a User Flag | Boolean |
| Last public comment date | String |
| Message Custom Field (for View) | String |
| Message Custom Field (for Edit) | String |
| Number Field | Decimal |
| Number of attachments | Decimal |
| Number of comments | Decimal |
| Paragraph (Supports Rich Text) | String |
| Participants of an issue | String |
| Project Picker (single project) | String |
| Radio Buttons | String |
| Rating | Double |
| Reactions | String |
| Select List (cascading) | String |
| Select List (multiple choices) | String |
| Select List (single choice) | String |
| Short text (plain text only) | String |
| Slider | Double |
| Text Field (single line) | String |
| Text Field (read only) | String |
| Time in Status | String |
| Time interval | String |
| URL Field | String |
| User Picker (single user) | String |
| User Picker (multiple users) | String |
| User Property Field (< 255 characters) | String |
| Username of last updater or commenter | String |
| Version Picker (single version) | String |
| Version Picker (multiple versions) | String |

#### Filtering Specifics

Jira Software Cloud supports the following native filters:

| Object | Field and Operator |
| --- | --- |
| Board | Type (=) |
| BoardEpics | Done (=) |
| BoardVersions | Released (=) |
| BoardIssues | Id (IN), Key (=), BoardId (=), LastViewed (=, <, <=, >, >=), Priority_Id (=), Priority_Name (=), Status_Id (=), Status_Name (=), Status_StatusCategory_Id (=), Status_StatusCategory_Key (=), Status_StatusCategory_Name (=), Creator_AccountId (=), Creator_EmailAddress (=), Creator_DisplayName (=), Reporter_AccountId (=), Reporter_EmailAddress (=), Reporter_DisplayName (=), Votes_Votes (=, <, <=, >, >=), Issuetype_Id (=), Issuetype_Name (=), Project_Id (=), Project_Key (=), Project_Name (=), CreatedDate (=, <, <=, >, >=), UpdatedDate (=, <, <=, >, >=) + Custom Fields |
| Issues | Id (IN), Key (=), LastViewed (=, <, <=, >, >=), Priority_Id (=), Priority_Name (=), Status_Id (=), Status_Name (=), Status_StatusCategory_Id (=), Status_StatusCategory_Key (=), Status_StatusCategory_Name (=), Creator_AccountId (=), Creator_EmailAddress (=), Creator_DisplayName (=), Reporter_AccountId (=), Reporter_EmailAddress (=), Reporter_DisplayName (=), Votes_Votes (=, <, <=, >, >=), Issuetype_Id (=), Issuetype_Name (=), CreatedDate (=, <, <=, >, >=), UpdatedDate (=, <, <=, >, >=) + Custom Fields |
| Issues | Id (IN), Key (=), LastViewed (=, <, <=, >, >=), Priority_Id (=), Priority_Name (=), Status_Id (=), Status_Name (=), Status_StatusCategory_Id (=), Status_StatusCategory_Key (=), Status_StatusCategory_Name (=), Creator_AccountId (=), Creator_EmailAddress (=), Creator_DisplayName (=), Reporter_AccountId (=), Reporter_EmailAddress (=), Reporter_DisplayName (=), Votes_Votes (=, <, <=, >, >=), CreatedDate (=, <, <=, >, >=), UpdatedDate (=, <, <=, >, >=) + Custom Fields |

**Custom Field Filters**

You can filter custom fields using these operators:

| Field | Operator |
| --- | --- |
| Checkbox | = |
| Connection | = |
| Custom formula | =, <, <=, >, >= |
| Date of First Response | =, <, <=, >, >= |
| Date Picker | =, <, <=, >, >= |
| Date Time Picker | =, <, <=, >, >= |
| Domain of Assignee | = |
| Domain of Reporter | = |
| Global Rank | = |
| Group Picker (single group) | = |
| Last commented by a User Flag | = |
| Message Custom Field (for View) | = |
| Message Custom Field (for Edit) | = |
| Number Field | =, <, <=, >, >= |
| Number of attachments | =, <, <=, >, >= |
| Number of comments | =, <, <=, >, >= |
| Paragraph (Supports Rich Text) | = |
| Project Picker (single project) | = |
| Rating | =, <, <=, >, >= |
| Reactions | = |
| Short text (plain text only) | = |
| Slider | =, <, <=, >, >= |
| Text Field (single line) | = |
| Text Field (read only) | = |
| Time in Status | = |
| Time interval | = |
| URL Field | = |
| User Picker (single user) | = |
| User Property Field (< 255 characters) | = |
| Username of last updater or commenter | = |

**Filtering Dates**

Jira Software Cloud does not support date filtering with second-level precision. For example, if you set a filter to get records viewed at 01/01/2025 11:34:48, the results will include all records where the LastViewed value falls within the range of 01/01/2025 11:34:00 to 01/01/2025 11:34:59. This can lead to duplicate rows when replicating data within the same minute.

#### Incremental Replication and Synchronization

Skyvia does not support Replication with Incremental Updates for Jira Software Cloud.
Skyvia does not support Synchronization for Jira Software Cloud objects.

#### DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | BoardSprints |
| INSERT, DELETE | Boards |
| UPDATE | BoardEpics |

#### Stored Procedures

Skyvia represents some of the supported Jira Software Cloud features as stored procedures. You can call a stored procedure, for example, as the command text in the ExecuteCommand action of a Target component in a Data Flow, or in Query.

**MoveIssuesToEpic**

To move issues to an epic, use the command:

```
call MoveIssuesToEpic(:epic_id, :issues)
```

| Parameter Name | Description |
| --- | --- |
| epic_id | Epic identifier. |
| issues | Array of issues. Example: [1000, 2222] |

To use this procedure, you need to have the edit issue permission for all issues you want to move. You can move up to 50 issues at once. This procedure does not support team-managed (next-gen) projects.

### Supported Actions

Skyvia supports all the common actions for Jira Software Cloud.

## [Jotform](https://docs.skyvia.com/connectors/cloud-sources/jotform_connections.html)

[Jotform](https://www.jotform.com/) is a powerful, user-friendly tool for managing online forms.
Data integration: Skyvia supports importing data to and from Jotform, exporting Jotform data to CSV files, and replicating Jotform data to relational databases.
Backup: Skyvia Backup does not support Jotform backup.
Query: Skyvia Query supports Jotform.

### Establishing Connection

To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Jotform in Skyvia, you have to specify the API key, the data center location, and compliance with [HIPAA](https://www.jotform.com/hipaa/).

#### Getting Credentials

To obtain the API Key, log in to Jotform and perform the following actions:

1. Click your account avatar in the top right corner of the page.
2. Click **Settings** and select **API**.
3. Click **Create New Key** and copy the generated API Key.

#### Creating Connection

1. Paste the obtained key into the **API Key** box in Skyvia.
2. Select the data center. The **US** value is selected by default. You can switch it to **Europe**.
3. Specify whether your account is HIPAA compliant.

### Connector Specifics

#### Object Peculiarities

**FormSubmissionAnswers and UserSubmissionAnswers**

These tables contain the submitted answers to the forms. When selecting from them, the submitted answers may return in different result columns depending on the question type. The majority of the submitted answers return in the Answers column.

**Folders and Subfolders**

The root folder properties are returned in the RootFolder table. This table is read-only.

The Folders table contains the folders. The subfolders list is stored in the Subfolders field as an array. For user convenience, the array values of the Subfolders field are represented as a separate Subfolders table. This table stores the subfolders of the first nesting level. The folders of further nesting levels are stored in the Subfolders field of the Subfolders table.

You can move forms from one folder to another when performing the UPDATE operation against the Folders and Subfolders tables.
To do this, you have to map the Forms field, providing the IDs of the forms to be moved in the following format: `["223172341365349", "223172341365350"]`.

**Forms**

When you perform the DELETE operation against the Forms object, the records change their status to Deleted.

**FormSubmissions**

When importing data to the FormSubmissions table, you must map the Answers field and provide its values in JSON object format as "question number": "answer" pairs, for example: `{"1": "answer to question 1", "2": "Samantha James", "3": "42"}`.

**_FormSubmissions**

For user convenience, Skyvia creates a separate object for each form, with the form name as a prefix in its name. Each form submission is a separate record in such an object. Form questions are the object fields. Form answers are the field values.

For example, you have a form with the name Quiz and questions Question1, Question2, and Question3. Skyvia creates an additional object named Quiz__FormSubmissions. Form questions are represented as additional fields Question1, Question2, and Question3. A new record is created in the Quiz__FormSubmissions object whenever someone submits the Quiz form.

The _FormSubmissions object can contain fields of the following types:

- **Full Name** (String): depending on settings, it may be represented by two fields, FirstName and LastName, and additional fields: MiddleName, Prefix, and Suffix.
- **Email, Short Text, Long Text** (String): the standard length of such a field is 1000 characters. If the field name is memo or note, or contains description, comment, notes, or address, or ends with url, reason, or keywords, its length increases to 4000 characters. If the field name contains content or html, its length increases to 2147483647 characters.
- **Address** (String): may be represented by one or more fields: Street Address 1, Street Address 2, City, Postal / Zip Code, State / Province, and Country.
- **Phone** (String): may be represented by one field if the Input Mask setting is disabled. Otherwise, it contains the fields Phone, Area, and Country.
- **Date Picker** (Date, or Datetime if the field includes time).
- **Appointment**: represented by two fields, Name_Date of the DateTime type and Name_Duration of the Int32 type. Skyvia operates with this field according to the UTC standard. The Jotform UI displays the values in the time zone selected in the field settings.
- **Signature** (Binary): Skyvia performs an additional request to obtain a binary value. If you enable the Suppress Extended Requests option, no additional requests are performed, and the value is null.
- **Dropdown, Single Choice, Multiple Choice** (Enum String field): you can set predefined values for such fields (for example, Days, Months). In this case, such a field becomes a text field.
- **Number** (Decimal).
- **File Upload** (String): this field contains an array of links.
- **Time** (Time): may be represented by one or two fields, Name Start and Name End.
- **Spinner** (Decimal).
- **Table** (JSONArray): includes the following fields: RowId (Int32), RowTitle (String), dynamic fields of the String or Boolean types, Star Rating (Int32), and Scale Rating (Int32). To insert data into table cells, specify the RowId and the column values. For example, you have a field CustomField_Table with three columns: Col1, Col2, and Col3. To insert data into the CustomField_Table field, map it to a JSON value in the following format: `[{"RowId": 1, "Col1": "val1", "Col3": "val3"}, {"RowId": 2, "Col2": false}]`. You can omit some columns if necessary.
- **Fill in the Blank**: may be represented by multiple fields of different types. More details are available below.
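The Table mapping format described above can be sketched as follows. This is a minimal, hypothetical helper for building the JSON value, not part of Skyvia:

```python
import json

def table_rows_json(rows):
    """Serialize Table-question rows into the JSON array described above.
    Each row is a dict of column values; RowId is required, and other
    columns may be omitted."""
    for row in rows:
        if "RowId" not in row:
            raise ValueError("every row needs a RowId")
    return json.dumps(rows)

value = table_rows_json([
    {"RowId": 1, "Col1": "val1", "Col3": "val3"},
    {"RowId": 2, "Col2": False},
])
```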
The Fill in the Blank fields may be represented by the following fields:

| Jotform Type | DbType |
| --- | --- |
| Date | Date |
| Email | String |
| Text | String |
| Number | Decimal |
| FullName | String |
| Time | Time |
| Phone | String |
| Dropdown | Enum String |
| Single Choice | Enum String |
| Signature | Binary |

#### Incremental Replication and Synchronization

Replication with Incremental Updates is supported for the following objects: FormReports, Forms, FormSubmissions, User, UserReports, UserSettings, UserSubmissions.

Synchronization is not supported for Jotform.

#### DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Folders, FormQuestions, Subfolders |
| INSERT, DELETE | FormReports, Forms, FormSubmissions |

#### Stored Procedures

Use `call CloneForm(:formId)` to clone an existing form. You can use a call to the stored procedure, for example, as the command text in the ExecuteCommand action of a Target component in a Data Flow.

### Supported Actions

Skyvia supports all the common actions for Jotform.

## [Keap](https://docs.skyvia.com/connectors/cloud-sources/keap_connections.html)

[Keap](https://keap.com/) is CRM software that automates your entire business.

Data integration: Skyvia supports importing data to and from Keap, exporting Keap data to CSV files, replicating Keap data to relational databases, and synchronizing Keap data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Keap.
Query: Skyvia Query supports Keap.

### Establishing Connection

To create a connection, authenticate with one of the options: **API key** or **OAuth 2.0**.

#### Getting Credentials

To obtain an API key, perform the following steps:

1. Log in to your Keap account.
2. Click your profile avatar and choose **Settings**.
3. Under **Application**, select **API settings** → **Personal Access Tokens**.
4. Click **New Token**.
5. Enter the token name and click **Create**.
6. Copy the API key.
#### Creating API Key Connection

To connect to Keap, specify the **API Key**.

#### Creating OAuth Connection

To connect to Keap using OAuth 2.0, perform the following steps:

1. Click **Sign In with Keap**.
2. Enter your credentials.
3. Select the CRM application you want to connect to.
4. Click **Allow**.

#### Additional Connection Parameters

**Suppress Extended Requests**

For some objects, the Keap API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following:

| Object | Field |
| --- | --- |
| Campaigns | Sequences, Goals |
| Contacts | ScoreValue, OptInReason, OriginDate, OriginIpAddress, TagIds |
| Files | FileContent, CreatedDate, UpdatedDate |

To reduce the number of API calls, you can select the **Suppress Extended Requests** checkbox.

### Connector Specifics

#### Filtering Specifics

Keap supports the following native filters:

| Object | Operator | Field |
| --- | --- | --- |
| Affiliates | = | Id, ContactId, ParentId, Code, Name, Status |
| Commissions | = | SalesAffiliateId |
| CommissionPrograms, AffiliateRedirects, AffiliateSummaries, AffiliateClawbacks, AffiliatePayments | = | AffiliateId |
| Appointments | = | Id, ContactId |
| Appointments | =, <, <=, >, >= | StartDate |
| Campaigns | = | Id |
| Companies | = | Id, CompanyName |
| Contacts | = | Id, GivenName, FamilyName |
| Contacts | =, <, <=, >, >= | UpdatedDate |
| ContactCreditCards, ContactTags | = | ContactId |
| ContactEmails | = | ContactId, SentToAddress |
| Orders | = | Id, ContactId |
| Orders | =, <, <=, >, >= | CreatedDate |
| OrderItems, OrderPayments | = | OrderId |
| Subscriptions | = | ContactId |
| Transactions | = | Id, ContactId |
| Transactions | =, <, <=, >, >= | TransactionDate |
| Emails | = | Id, ContactId, SentToAddress |
| Emails | =, <, <=, >, >= | SentDate |
| Files | = | Id, ContactId, FileName, Public, FileBoxType |
| Notes | = | Id, ContactId, UserId |
| Opportunities | = | Id, StageId, UserId |
| Products | = | Id |
| ProductSubscriptions | = | ProductId |
| Tags | = | Id, Name, CategoryId |
| TaggedCompanies, TaggedContacts | = | TagId |
| Tasks | = | Id, Completed, ContactId |
| Tasks | =, <, <=, >, >= | DueDate |

#### Custom Fields

Skyvia supports custom fields for the Companies, Contacts, and Opportunities objects. You can use the following custom fields:

| Keap Type | DBType |
| --- | --- |
| Text | String |
| Email | String |
| PhoneNumber | String |
| Website | String |
| SocialSecurityNumber | String |
| Name | String |
| TextArea | String |
| ListBox | String |
| DayOfWeek | String |
| Dropdown | String |
| Month | String |
| Radio | String |
| State | String |
| User | String |
| UserListBox | String |
| Drilldown | String |
| Currency | Double |
| DecimalNumber | Double |
| Percent | Double |
| WholeNumber | Integer |
| Year | Integer |
| Date | Date |
| DateTime | DateTime |
| YesNo | Boolean |

#### Nested Objects

The connector includes objects with fields that store complex, structured data in JSON format. The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To enable it, select the **Nested Objects** checkbox in the Import Options.

The Keap connector has the following nested objects:

| Object | Field | Nested object |
| --- | --- | --- |
| Campaigns | Sequences | SequenceType |
| Campaigns | Goals | GoalType |
| Contacts | EmailAddresses | EmailAddressType |
| Contacts | Addresses | ContactAddressType |
| Contacts | PhoneNumbers | PhoneNumberType |
| Contacts | FaxNumbers | FaxNumberType |
| Contacts | SocialAccounts | SocialAccountType |
| Contacts | Relationships | RelationshipType |
| Orders | OrderItems | EcommerceReportingOrderItemType |
| Transactions | Orders | EcommerceReportingOrderType |
| Opportunities | StageDetailsCheckListItems | CheckListItemDetailsType |
| Products | ProductOptions | ProductOptionType |
| Products | SubscriptionPlans | SubscriptionPlansType |
| Users | PhoneNumbers | PhoneNumberType |
| Users | FaxNumbers | FaxNumberType |
| SequenceType | Paths | SequencePathType |
| SequencePathType | Items | SequencePathItemType |
| EcommerceReportingOrderType | OrderItems | EcommerceReportingOrderItemType |
| ProductOptionType | Values | ProductOptionValueType |

#### Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following objects: Appointments, Contacts, Orders, OrderPayments, Files, Notes, Opportunities, Tasks, Users.
Skyvia supports Replication for the Campaigns object but does not track the updated records.

Skyvia supports Synchronization for the following objects: Appointments, Contacts, Notes, Opportunities, Tasks.

#### DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Appointments, Contacts, Notes, Products, Tasks |
| INSERT, UPDATE | Companies, Opportunities |
| INSERT, DELETE | Orders, OrderItems, Emails, Files, ProductSubscriptions |
| INSERT | Affiliates, ContactEmails, Subscriptions, Tags, Users |

#### Stored Procedures

Skyvia represents some of the supported Keap features as stored procedures. You can call a stored procedure, for example, as the command text in the ExecuteCommand action of a Target component in a Data Flow, or in Query.

**CreateTagCategory**

To create a new tag category, use the command:

```
call CreateTagCategory(:name, :description)
```

| Parameter Name | Description |
| --- | --- |
| Name | Name of the tag category. |
| Description | Description of the tag category. This parameter is optional. |

**ApplyTagsToContact**

To apply tags to a contact, use the command:

```
call ApplyTagsToContact(:contactId, :tagIds)
```

| Parameter Name | Description |
| --- | --- |
| ContactId | ID of the contact. |
| TagIds | ID of the tag. This parameter accepts a single value or an array of values. |

**RemoveTagsFromContact**

To remove tags from a contact, use the command:

```
call RemoveTagsFromContact(:contactId, :tagIds)
```

| Parameter Name | Description |
| --- | --- |
| ContactId | ID of the contact. |
| TagIds | ID of the tag. This parameter accepts a single value or an array of values. |

### Supported Actions

Skyvia supports all the common actions for Keap.

## [Klaviyo](https://docs.skyvia.com/connectors/cloud-sources/klaviyo_connections.html)

[Klaviyo](https://www.klaviyo.com/) is an SMS and email marketing automation platform for e-commerce that helps grow businesses.
Data integration: Skyvia supports importing data to and from Klaviyo, exporting Klaviyo data to CSV files, replicating Klaviyo data to relational databases, and synchronizing Klaviyo data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Klaviyo.
Query: Skyvia Query supports Klaviyo.

### Establishing Connection

To create a connection with Klaviyo, select the authentication type and specify the credentials accordingly.

#### Getting Credentials

**Private API Key**: the REST API authentication token for connecting to Klaviyo. A private API key is also used for reading data from Klaviyo and manipulating sensitive objects, such as lists. You can manage API tokens in the Klaviyo interface. If needed, you can also generate multiple private API keys in your Klaviyo account. Read more about private API keys [here](https://developers.klaviyo.com/reference/api-overview).

To get the API Key, perform the following steps:

1. Sign in to Klaviyo.
2. Click the **User** icon in the bottom left corner.
3. Select **Settings** in the menu that appears.
4. Switch to the **API Keys** tab and copy your **Private API Key**.

#### Creating Private API Key Connection

Enter the obtained API Key in the Connection Editor.

#### Creating OAuth 2.0 Connection

To connect to Klaviyo via OAuth 2.0, do the following:

1. In the Connection Editor, click **Sign In with Klaviyo**.
2. Enter your Klaviyo credentials and click **Log In**.
3. Click **Allow** to grant Skyvia permission to access your Klaviyo data.

### Connector Specifics

#### Object Peculiarities

**Read-only Objects**

The following Klaviyo objects are read-only: Accounts, EmailCampaignTags, ImportBulkProfiles, SMSCampaignTags, SegmentProfiles, SegmentTags, ListTags, EventMetrics, EventProfiles, FlowActions, FlowTags, GroupTags, Forms, Reviews, SegmentFlowTriggers.

**EmailCampaigns and SMSCampaigns**

The CampaignMessages field returns empty results when querying. This field is needed for importing data to these objects.
**BulkProfiles**

The ProfilesData and Relationships_ListData fields store complex structured data in JSON format. To import data into these fields, use the Nested Objects mapping feature in Import. Select the **Nested Objects** checkbox in import to enable this feature. You can use this object to insert multiple profiles at once. If you insert an already existing profile, the existing record is updated.

**DailyMetrics, WeeklyMetrics, MonthlyMetrics**

DailyMetrics, WeeklyMetrics, and MonthlyMetrics are child objects of the Metrics object. We recommend filtering by MetricId when querying data from DailyMetrics, WeeklyMetrics, and MonthlyMetrics to increase query performance.

Skyvia does not require filtering by the parent object record Id when querying data from a child object. Without filtering by parent record Id, Skyvia first queries all the parent object records, reading each record Id. Then Skyvia looks up the child object records for each parent object record Id. This approach allows querying child objects without knowing their parents, but it consumes time and API calls: it uses at least one API call for every parent object record. Thus, working with these tables may affect performance. We strongly recommend using filters on the parent object fields when querying data from child objects.

DailyMetrics, WeeklyMetrics, and MonthlyMetrics store metrics for calendar days, weeks, and months respectively. When querying data from these tables, you can set the date range for data selection. To do this, set filters on the Date, WeekStartDate, and MonthStartDate fields with the =, >, >=, <, <= operators.

If you don't use filters by date when querying data from these objects, by default, Klaviyo returns data for one calendar year before today. If you use the filter by start date only, Klaviyo returns the data for a calendar year from the start date, or until today if a year has not yet passed from the start date.
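A rough sketch of the API-call cost described above. This is a toy model for illustration, not Skyvia's actual accounting: one call lists the parent Metrics records, then at least one call is spent per remaining parent when querying a child object such as DailyMetrics.

```python
def child_query_api_calls(parent_ids, metric_filter=None):
    """Toy cost model: 1 call to read parent ids, then one child query
    per remaining parent record. Filtering by MetricId shrinks the
    per-parent part to a single record."""
    parents = [p for p in parent_ids if metric_filter is None or p == metric_filter]
    return 1 + len(parents)

all_metrics = list(range(200))
without_filter = child_query_api_calls(all_metrics)   # 201 calls
with_filter = child_query_api_calls(all_metrics, 42)  # 2 calls
```

The gap grows linearly with the number of metrics, which is why filtering on the parent fields is strongly recommended.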
These objects return data only for full, already passed calendar days, weeks (Monday through Sunday), and months. They do not return data for the current day, week, or month. You cannot select a part of a period; only full periods are returned in the query result. If you filter by a date range wider than a full calendar week or month, Klaviyo returns the data only for the full calendar weeks or months included in it. If you filter by a date range narrower than a calendar week or month, Klaviyo returns an empty result. For example, you perform the following query:

SELECT * FROM MonthlyMetrics WHERE MonthStartDate > '2022-12-25' AND MonthStartDate < '2023-02-05'

This query returns only the data for the full month included in the selected period (January 1 to January 31).

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following Klaviyo objects: Images, EmailCampaigns, SMSCampaigns, EmailCampaignMessages, Segments, SMSCampaignMessages, Lists, Metrics, DailyMetrics, WeeklyMetrics, and MonthlyMetrics. Incremental Replication tracks only the inserted records for the DailyMetrics, WeeklyMetrics, and MonthlyMetrics objects. When replicating DailyMetrics, WeeklyMetrics, and MonthlyMetrics, Skyvia pulls data only for the already passed periods. Incremental Replication does not pull data for the current date for DailyMetrics, the current week for WeeklyMetrics, and the current month for MonthlyMetrics. Skyvia supports Synchronization for EmailCampaigns, Segments, SMSCampaigns, and Lists.
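The full-period rule above can be reproduced with a short sketch. Assuming the strict > and < filters from the example query, the helper below (our illustration, not a Skyvia or Klaviyo API) lists the calendar months that lie entirely inside a date range:

```python
from datetime import date, timedelta

def full_months_in_range(start: date, end: date) -> list[str]:
    """Return 'YYYY-MM' for every calendar month that lies entirely
    inside (start, end), mirroring how Klaviyo's MonthlyMetrics returns
    data only for fully included calendar months with strict >/< filters."""
    months = []
    y, m = start.year, start.month
    while True:
        # Step to the next calendar month after `start`'s month.
        m += 1
        if m > 12:
            y, m = y + 1, 1
        first = date(y, m, 1)
        # Last day of this month = day before the 1st of the next month.
        last = (date(y + 1, 1, 1) if m == 12 else date(y, m + 1, 1)) - timedelta(days=1)
        if last >= end:
            break  # this and all later months are not fully inside the range
        if first > start:
            months.append(f"{y:04d}-{m:02d}")
    return months

# The documented example range: only January 2023 is fully included.
print(full_months_in_range(date(2022, 12, 25), date(2023, 2, 5)))  # ['2023-01']
```

A range narrower than a calendar month (for example, 2023-01-15 to 2023-01-20) yields an empty list, matching the empty result Klaviyo returns in that case.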
DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | CouponCodes, Lists, Templates, CatalogCategories, CatalogVariants, CatalogItems, Tags, TagGroups, EmailCampaigns, SMSCampaigns, Segments, UniversalContents |
| INSERT, UPDATE | Coupons, Images, Profiles |
| INSERT | BulkProfiles |
| UPDATE | EmailCampaignMessages, SMSCampaignMessages, Flows, TrackingSettings |
| DELETE | ListProfiles |

Stored Procedures

Skyvia represents part of the supported Klaviyo features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

AddListProfiles

The following command inserts records into the ListProfiles object.

call AddListProfiles(:ListId, :PersonId)

CreateEventWithMetricAndProfile

The following command creates new events and associates them with a new or existing profile.

call CreateEventWithMetricAndProfile(:metric_name, :properties, :person_id, :email, :phone_number, :first_name, :last_name, :organization, :title, :image, :location_address1, :location_address2, :location_city, :location_country, :location_latitude, :location_longitude, :location_region, :location_zip, :location_timezone, :location_ip, :properties_person, :append, :unappend, :unset, :metric_service, :time, :value)

| Parameter Name | Description |
| --- | --- |
| Metric_name | The associated metric for the event |
| Properties | Properties of the new event in JSON format, for example {"Color": "Grey","Tovar": "TV-1506"} |
| Person_id | Profile unique identifier, if you associate this event with an existing profile |
| Email | Email of the profile for which the event is being created |
| Phone_number | Phone number of the profile for which the event is being created. You can omit this parameter if the required profile has no phone number |
| First_name | Individual's first name |
| Last_name | Individual's last name |
| Organization | Name of the company or organization within the company for whom the individual works |
| Title | Individual's job title |
| Image | URL pointing to the location of a profile image |
| Location_address1 | First line of street address |
| Location_address2 | Second line of street address |
| Location_city | City name |
| Location_country | Country name |
| Location_latitude | Latitude coordinate. Precision of four decimal places is recommended |
| Location_longitude | Longitude coordinate. Precision of four decimal places is recommended |
| Location_region | Region within a country, such as state or province |
| Location_zip | Zip code |
| Location_timezone | Time zone name. We recommend using time zones from the [IANA Time Zone Database](https://www.iana.org/time-zones) |
| Location_ip | IP address |
| Properties_person | Custom profile properties |
| Append | Append a simple value or values to this property array |
| Unappend | Remove a simple value or values from this property array |
| Unset | Remove a key or keys (and their values) completely from properties |
| Metric_service | This is for advanced usage. For API requests, this should use the default, which is set to API |
| Time | Event date and time |
| Value | A numeric value to associate with this event. For example, the dollar amount of a purchase |

CreateEvent

The following command adds a new event for a specific existing profile.

call CreateEvent(:email,:phone_number,:metric,:properties,:time,:value)

| Parameter Name | Description |
| --- | --- |
| Email | Email of the profile for which the event is being created. Optional if Phone_number is specified |
| Phone_number | Phone number of the profile for which the event is being created. Optional if Email is specified |
| Metric | The associated metric for the event |
| Properties | Properties of the new event in JSON format, for example {"Color": "Grey","Tovar": "TV-1506"} |
| Value | A numeric value to associate with this event. For example, the dollar amount of a purchase |

SuppressProfiles

To change the profile status to suppressed, use the command

call SuppressProfiles(:SuppressionEmail)

| Parameter Name | Description |
| --- | --- |
| SuppressionEmail | The list of emails for suppression in the following format: [{"type":"profile","attributes":{"email":"address1@mail.com"}}] |

UnsuppressProfiles

To change the profile status to unsuppressed, use the command

call UnsuppressProfiles(:SuppressionEmail)

| Parameter Name | Description |
| --- | --- |
| SuppressionEmail | The list of emails for unsuppression in the following format: [{"type":"profile","attributes":{"email":"trial@free.com"}}] |

Supported Actions

Skyvia supports all the common actions for Klaviyo.

LinkedIn Ads

[LinkedIn Ads](https://business.linkedin.com/marketing-solutions/ads) is a powerful advertising tool for reaching your audience. Using LinkedIn Ads, you can promote your organization's updates to targeted audiences on desktop, mobile, and tablet.

Data integration: Skyvia supports importing data from LinkedIn Ads to other applications, exporting it to CSV files, and replicating LinkedIn Ads data to relational databases. Backup: Skyvia Backup does not support LinkedIn Ads. Query: Skyvia Query supports LinkedIn Ads.

Establishing Connection

To create a connection with LinkedIn Ads, sign in with LinkedIn.

Creating Connection

To create a LinkedIn Ads connection, perform the following steps:

1. Click Sign In with LinkedIn.
2. Enter your LinkedIn email and password and click Sign in.
3. Click the Allow button to grant Skyvia access to your LinkedIn account.

Connector Specifics

Object Peculiarities

Ads

All ad data is consolidated in the Ads object. The Content field stores data specific to each ad type in JSON format.
For user convenience, we grouped ads of different types into separate objects with the *Ads suffix in their names: TextAds, JobsAds, SpotlightAds, FollowerAds, DocumentAds. Each object contains fields specific to a particular ad type. Fields of such objects represent data from the Content field of the Ads object as separate fields.

*Statistics

Content statistics data is represented in the objects with the *Statistics suffix. By default, such objects return data for a year before today. However, the LinkedIn Ads API allows querying data for the last 13 months if you set a corresponding filter. The FollowerDailyStatistics and PageDailyStatistics objects return data up to the day before yesterday when querying; they don't return data for today and yesterday. The ShareDailyStatistics object returns data including yesterday when querying.

Organizations

You can query only those organizations where you have administrator access rights.

Filtering Specifics

The LinkedIn Ads API supports the following native filters:

| Objects | Operators and Fields |
| --- | --- |
| Accounts | Id (=) |
| Campaigns | Id, AccountId, Test (=). CampaignGroupId, Name, Type, Status (=, IN) |
| CampaignGroups | Id, AccountId, Test (=). Name, Status (=, IN) |
| Ads, DocumentAds, FollowerAds, JobsAds, TextAds, SpotlightAds | Id, AccountId (=). CampaignId, IntendedStatus (=, IN) |
| Conversions | Id, AccountId (=) |
| LeadForms | AccountId (=) |
| AccountsDailyReport, AdsDailyReport, CampaignGroupsDailyReport, CampaignsDailyReport, ConversionsDailyReport | Date (=, Between, >, >=, <, <=) |
| FollowerDailyStatistics, PageDailyStatistics, ShareDailyStatistics | Date (=, Between, >, >=, <, <=). OrganizationId (=) |

Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.

DML Operations Support

LinkedIn Ads objects support the following operations.
| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Accounts, Campaigns, CampaignGroups, Conversions, TextAds |
| UPDATE, DELETE | Ads |

The following objects are read-only: AccountsDailyReport, AdsDailyReport, CampaignsDailyReport, CampaignGroupsDailyReport, ConversionsDailyReport, DocumentAds, FollowerAds, FollowerDailyStatistics, JobsAds, LeadForms, Organizations, PageDailyStatistics, ShareDailyStatistics.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following LinkedIn Ads objects: Accounts, AccountsDailyReport, Ads, AdsDailyReport, CampaignGroups, CampaignGroupsDailyReport, Campaigns, CampaignsDailyReport, Conversions, ConversionsDailyReport, DocumentAds, FollowerAds, FollowerDailyStatistics, JobsAds, Organizations, PageDailyStatistics, ShareDailyStatistics, SpotlightAds, TextAds.

Incremental Replication detects only new records for the following objects: AccountsDailyReport, CampaignGroupsDailyReport, CampaignsDailyReport, AdsDailyReport, ConversionsDailyReport, FollowerDailyStatistics, PageDailyStatistics, ShareDailyStatistics.

The *Report and *Statistics objects contain the Date field. This field stores dates without time, so you can replicate only records created before the current date. The first run of the Incremental Replication returns the report with the data for the previous day. Further runs with the New Replication Runtime enabled consider retrospectively updated reporting data according to the attribution window, including data for the previous day. To get data for the current date using the old Replication runtime, run the Replication on the next day using a filter by the Date field.

The FollowerDailyStatistics and PageDailyStatistics objects support replicating data up to the day before yesterday. Future runs with the New Replication Runtime enabled using retrospective updates will include data for the previous and current dates. The old Replication runtime will lose data for the previous and current dates if the AttributionWindow parameter is set to 0. We recommend setting the AttributionWindow parameter to 1 or more and using the new replication runtime for these objects.

The CreatedAt field in the Organizations and *Statistics objects stores the date the organization was created in LinkedIn. This date may differ from the date the current user became the organization administrator.

Skyvia supports Synchronization for the following objects: Accounts, Campaigns, CampaignGroups, Conversions, and TextAds.

Supported Actions

Skyvia supports all the common actions for LinkedIn Ads.

LionOBytes

[LionOBytes](https://www.lionobytes.com/) is a cloud platform providing AI-based customer relationship management (CRM), field service management (FSM), and enterprise resource planning (ERP) solutions for small businesses.

Data integration: Skyvia supports importing data to and from LionOBytes, exporting LionOBytes data to CSV files, replicating LionOBytes data to relational databases, and synchronizing LionOBytes data with other cloud apps and relational databases. Backup: Skyvia Backup does not support LionOBytes. Query: Skyvia Query supports LionOBytes.

Establishing Connection

To create a connection to LionOBytes, you need a token.

Getting Credentials

To get a token, execute a POST request to the following endpoint: https://vendor.liono360.com/LionO360.Platform/api/Authentication/token, providing your LionOBytes username and password. The request returns the token.

Creating Connection

To connect to LionOBytes, enter your token in the box in the Connection Editor.
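The token request described above can be sketched as follows. The endpoint URL comes from this documentation; the JSON body field names (username, password) are an assumption, so check the LionOBytes API reference for the exact request shape.

```python
import json

# Endpoint taken from the documentation above.
TOKEN_URL = "https://vendor.liono360.com/LionO360.Platform/api/Authentication/token"

def build_token_request(username: str, password: str) -> tuple[str, dict, str]:
    """Return (url, headers, body) for the POST that exchanges
    LionOBytes credentials for a token. The body field names are an
    assumption, not confirmed by this documentation."""
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"username": username, "password": password})
    return TOKEN_URL, headers, body

url, headers, body = build_token_request("user@example.com", "secret")

# To actually send the request (requires network access):
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(), headers=headers)
# token = urllib.request.urlopen(req).read()
```

The response token is then pasted into the Connection Editor box as described above.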
Additional Connection Parameters

Suppress Extended Requests

For the Accounts object, the LionOBytes API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Skyvia can perform such API requests for each record, which can decrease performance and significantly increase the number of API calls used. You can select the Suppress Extended Requests checkbox to reduce the number of API calls.

Connector Specifics

Object Peculiarities

Leads

The Addresses field in the Leads object stores complex structured data in JSON format. You can use our Nested Objects feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can add several addresses for a lead in the LionOBytes UI, for example, billing, shipping, or mailing addresses. When you query such lead records, only one address is displayed in the query results.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates and Synchronization for the Accounts and Cases objects.

DML Operations Support

Skyvia supports the following DML operations for LionOBytes.

| Operation | Object |
| --- | --- |
| INSERT, UPDATE | Accounts, Cases, Leads |
| INSERT | AnnualRevenues, EmployeeRanges, Industries, JobFunctions, LeadTypes, Sources |

Supported Actions

Skyvia supports all the common actions for LionOBytes.

Magento

[Magento](https://business.adobe.com/products/magento/magento-commerce.html) is an open-source digital commerce platform written in PHP.
Data integration: Skyvia supports importing data to and from Magento, exporting Magento data to CSV files, replicating Magento data to relational databases, and synchronizing Magento data with other cloud apps and relational databases. Backup: Skyvia Backup supports Magento backup. Query: Skyvia Query supports Magento.

Establishing Connection

Skyvia supports both Magento 1 and Magento 2. They provide different connection options.

Getting Credentials

For both Magento 1 and Magento 2, you need to enter the Domain. This is the address of the Magento service. It can be just the address of the Magento store, like http://192.168.10.197/ or http://magento235.yourdomain.com/. Depending on the Magento configuration, you may need to add magento or magento/index.php to it: http://magento.yourdomain.com/magento/ or http://magento.yourdomain.com/magento/index.php. Or you may configure Magento to use a different path.

Magento 1

For connecting to Magento 1, you need the API Key. The API Key is specified when creating or editing a SOAP/XML-RPC user in Magento. See more information [here](https://www.simicart.com/blog/magento-api-key/).

Magento 2

For connecting to Magento 2, you can use either your username and password or an OAuth token. The latter can be obtained by creating an integration in Magento in the following way:

1. Log in to the Magento Admin Panel.
2. On the sidebar on the left, click System, and then, under Extensions, click Integrations.
3. Click Add New Integration.
4. Enter the integration Name and Email, and also specify Your Password.
5. Under Basic Settings, on the left, click API.
6. Select check boxes for the Resources to allow access to, or just select All in the Resource Access list.
7. Click Save. This creates a new Magento integration. It is not active yet, and you now need to activate it.
8. On the Integrations page, click the Activate link for the corresponding integration. The list of Magento resources the integration is granted access to is displayed.
9. Click Allow.
Consumer Key and Authentication Token with the corresponding secrets are displayed. Copy the Authentication Token value. You can find more details about Magento integrations in the [Magento documentation](https://developer.adobe.com/commerce/webapi/get-started/create-integration/).

Alternatively, you can use the admin token, which you can obtain by performing a web API call, as described in the [Magento documentation](https://developer.adobe.com/commerce/webapi/rest/tutorials/orders/order-admin-token/). This is not a recommended way, because performing a web API request is not a trivial task, and such a token has a limited lifetime (4 hours by default). You can change the token lifetime in the following way:

1. Log in to the Magento Admin Panel.
2. On the sidebar on the left, click Stores, and then, under Settings, click Configuration.
3. Click Access Token Expiration.
4. Specify Admin Token Lifetime (hours).

Creating Connection

When creating a connection to Magento, the required connection parameters differ for Magento 1 and Magento 2. Besides, for Magento 2, Skyvia supports two ways of authentication: User Name & Password and Access Token.

Magento 1 Connection

1. Enter your Magento Domain (for example, http://192.168.10.197/magento/ or http://magento235.yourdomain.com/).
2. In the Version list, select Ver1.
3. Enter the User and the corresponding API Key.

Magento 2 Connection Using User Name & Password Authentication

Note that this authentication may not suit you if you use [Two-Factor Authentication](https://docs.magento.com/user-guide/stores/security-two-factor-authentication.html).

1. Enter your Magento Domain.
2. In the Version list, select Ver2.
3. Enter the User and Password.

Magento 2 Connection Using Access Token Authentication

If you don't want to store your Magento user name and password, or you use two-factor authentication, use this kind of authentication. Enter your Magento Domain. In the Version list, select Ver2. In the Authentication list, select Access Token.
Enter the Access Token.

Additional Connection Parameters

Store View

Specifies the store view against which the API requests are executed. This parameter is available for Magento Ver2.

Metadata Cache

Specifies the period of time after which the [Metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired.

Connector Specifics

Complex Structured Data

The following Magento objects store complex structured data in JSON format:

| Object | Field |
| --- | --- |
| Customer | Addresses |
| SalesOrders | Items |
| Products | CustomizableOptions, Values |

You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can also replicate them into separate tables with our new replication runtime and use the Unwind component to map them in data flows.

Supported Actions

Skyvia supports all the common actions for Magento.

Mailchimp

[Mailchimp](https://Mailchimp.com) is a cloud-based email marketing solution that allows designing and sending marketing emails.

Data integration: Skyvia supports importing data to and from Mailchimp, exporting Mailchimp data to CSV files, replicating Mailchimp data to relational databases, and synchronizing Mailchimp data with other cloud apps and relational databases. Backup: Skyvia Backup supports Mailchimp backup. Query: Skyvia Query supports Mailchimp.

Establishing Connection

To create a connection with Mailchimp, log in with your Mailchimp credentials and give your permission for Skyvia to access your account.

Creating Connection

To create a connection with Mailchimp, perform the following steps:

1. Click Sign In with Mailchimp.
2. In the opened window, enter your Mailchimp credentials and click the Log In button.
Grant Skyvia the requested permissions.

Additional Connection Parameters

Merge Fields Behavior

Merge fields are custom fields of the ListMembers object. The set of merge fields may vary in different lists. The Merge Fields Behavior parameter in the Mailchimp connection determines how to treat merge fields to avoid a mismatch between multiple lists and the static set of ListMembers fields. You can select one of the available behaviors:

- Without Merge Fields: only static ListMembers fields are available; merge fields are unavailable.
- Join Common Merge Fields: only merge fields common for all the lists join the ListMembers object.
- Join All Merge Fields: all the merge fields from all the lists join the ListMembers object.

If different lists have merge fields with the same name but a different type, default value, required setting, tag, etc., such fields are ignored regardless of the Merge Fields Behavior parameter. Merge fields with the same name in different lists must be identical to be available in Skyvia.

Merge Tag As Field Name

If tag values (the values in the "Put this tag in your content:" column on the Audience fields and *|MERGE|* tags tab in the Mailchimp list settings) differ between lists while the types, default values, and required settings are the same, you can select the Merge Tag As Field Name checkbox in your connection settings to access merge fields that have the same labels but different tags. Optionally, if you want to use merge tag values instead of merge tag labels as names for Mailchimp merge tags, click Advanced Settings and select the Merge Tag As Field Name checkbox. Skyvia requires Mailchimp merge fields for mapping only if the merge field is common and required for all Mailchimp lists.

Metadata Cache

You can specify the period after which the Metadata Cache expires.

Use Batch Operations

Using batch operations significantly improves the performance of loading data to the ListMembers object, which stores subscribers.
The Use Batch Operations checkbox in the Advanced Settings block of the Connection Editor is selected by default.

Connector Specifics

Object Peculiarities

ListMembers

Skyvia cannot read merge fields in the ListMembers object with the same name but a different type, default value, required setting, etc. Skyvia also does not support Mailchimp merge fields having double quotation marks in their names. To perform the UPDATE operation against Mailchimp subscribers (ListMembers) faster, you can map the Email field. The Email field itself won't be updated; mapping it helps to find the subscribers to update faster.

Incremental Replication and Synchronization

Skyvia supports Incremental Replication for Mailchimp objects containing the CreatedDate or UpdatedDate field. Skyvia supports Synchronization for Mailchimp objects that support the INSERT and UPDATE operations and store the CreatedDate or UpdatedDate field.

Backup Specifics

Skyvia cannot restore data in objects with composite primary keys, such as InterestGroups.

Supported Actions

Skyvia supports all the common actions for Mailchimp.

MailerLite

[MailerLite](https://www.mailerlite.com/) is a straightforward and user-friendly email marketing and website building tool.

Data integration: Skyvia supports importing data to and from MailerLite, exporting MailerLite data to CSV files, replicating MailerLite data to relational databases, and synchronizing MailerLite data with other cloud apps and relational databases. Backup: Skyvia Backup does not support MailerLite backup. Query: Skyvia Query supports MailerLite.

Establishing Connection

To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to MailerLite in Skyvia, you must specify the API token.
Getting Credentials

To obtain the API token, log in to MailerLite and perform the following actions:

1. In the left menu, select Integration.
2. Click Use to the right of the API label.
3. Click Generate new token and enter the token name.
4. Download the generated token or copy it to the clipboard.

Creating Connection

Paste the obtained token into the API token box in Skyvia.

Additional Connection Parameters

Suppress Extended Requests

For the Subscribers object, the MailerLite API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Skyvia can perform such API requests for each record of the object; however, this can decrease performance and significantly increase the number of API calls used. One such field that requires extended requests is Groups. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.

Connector Specifics

Object Peculiarities

Subscribers

The Email field is required for mapping when performing the INSERT operation. If the mapped Email value is already present in the table, the corresponding record is updated. If the Email value is absent from the table, the record is inserted.

MailerLite allows adding custom fields for subscribers. It also includes some preset custom fields: name, last_name, company, country, city, phone, state, z_i_p. In the Subscribers table, preset and personalized custom fields are displayed in the Fields field. For user convenience, the preset custom fields in the Subscribers table are available as separate fields with the Field- prefix: FieldZIP, FieldState, FieldPhone, FieldCity, FieldCountry, FieldCompany, FieldLastName, and FieldName. These fields are available for mapping when performing the INSERT operation. All the custom fields (both preset and not preset) are available in the Fields field as a JSON object.
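As an illustration of the Fields representation above, here is a minimal sketch that builds the JSON value for the Fields field, combining a preset custom field with a personalized one. The helper name is ours, and "loyalty_level" is a hypothetical personalized field, not a MailerLite preset.

```python
import json

def build_fields(preset: dict, personalized: dict) -> str:
    """Merge preset and personalized custom fields into one JSON object,
    as required when loading data into both kinds of fields at once.
    This is an illustrative sketch, not a Skyvia helper."""
    merged = {**preset, **personalized}
    return json.dumps(merged)

fields_value = build_fields(
    {"name": "Jane", "company": "Acme"},   # preset custom fields listed above
    {"loyalty_level": "gold"},             # hypothetical personalized field
)
```

When only preset fields are loaded, mapping the individual Field- prefixed columns is simpler than composing this JSON.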
When you load data into the preset custom fields only, you can map the fields with the Field- prefix. When you load data into personalized custom fields only, you need to map the Fields field. If you need to insert data into both preset and personalized custom fields, you have to use the Fields field too.

Webhooks

The Events field is required for mapping when performing the INSERT operation. The values must be provided in array format, for example ["subscriber.updated","subscriber.automation_triggered"]. The available events are the following:

- subscriber.created - Fires when a new subscriber is added to an account.
- subscriber.updated - Fires when any of the subscriber's custom fields are updated.
- subscriber.unsubscribed - Fires when a subscriber becomes unsubscribed.
- subscriber.added_to_group - Fires when a subscriber is added to a group.
- subscriber.removed_from_group - Fires when a subscriber is removed from a group.
- subscriber.added_through_form - Fires when a subscriber is added through a form.
- subscriber.bounced - Fires when an email address bounces.
- subscriber.automation_triggered - Fires when a subscriber starts an automation.
- subscriber.automation_completed - Fires when a subscriber finishes an automation.
- subscriber.spam_reported - Fires when a subscriber marks a campaign as spam.
- campaign.sent - Fires when a campaign is sent.

For more details about the available events, refer to the MailerLite [documentation](https://developers.mailerlite.com/docs/webhooks#available-events).

Incremental Replication and Synchronization

Replication with Incremental Updates is supported for the following objects: Automations, DraftCampaigns, EmbeddedForms, Groups, PopUpForms, PromotionForms, ReadyCampaigns, Segments, SentCampaigns, Subscribers, Webhooks. Incremental Replication considers only the new records for the Automations, EmbeddedForms, Groups, PopUpForms, PromotionForms, and Segments tables.
These tables contain only the CreatedDate field; there is no UpdatedDate field that would allow tracking the updated records. Synchronization is supported for Webhooks.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Fields, Groups, Webhooks |
| INSERT, DELETE | DraftCampaigns, Subscribers |
| UPDATE, DELETE | EmbeddedForms, PopUpForms, PromotionForms, Segments |

Supported Actions

Skyvia supports all the common actions for MailerLite.

Mailgun

[Mailgun](https://www.mailgun.com/) is an email delivery service designed for sending, receiving, and tracking emails.

Data integration: Skyvia supports importing data to and from Mailgun, exporting Mailgun data to CSV files, and replicating Mailgun data to relational databases. Backup: Skyvia Backup does not support Mailgun. Query: Skyvia Query supports Mailgun.

Establishing Connection

To create a connection to Mailgun, specify your Private API Key and Domain.

Getting Credentials

API Key

To obtain the API Key, perform the following steps:

1. Go to [Mailgun](https://www.mailgun.com/).
2. Click the user icon and select API Security.
3. Click Create API Key at the bottom of the page.
4. Enter the description and click Create Key.
5. Copy the API Key value and save it. The API Key is available only once, when it is created. We recommend saving it in a safe place.

Domain

To get the domain, do the following:

1. Go to [Mailgun](https://www.mailgun.com/).
2. Click Sending on the left.

Creating Connection

To create a connection, follow the steps below:

1. Select the API Region from the drop-down list.
2. Paste your private API Key.
3. Specify your domain.

Connector Specifics

Object Peculiarities

Events* Objects

There are nine types of events in Mailgun.
There is a separate object for each event type in our Mailgun connector: Events_Accepted, Events_Delivered, Events_Failed, Events_Opened, Events_Unsubscribed, Events_stored, Events_Rejected, Events_Clicked, Events_Complained . Skyvia supports filters by the Timestamp field using < , and >= operators and the Headers_To, Headers_From, Headers_Subject , Recipient fields using the = operator. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Mailgun objects: Complaints, Domains, MailingLists, Routes, Templates, Unsubscribes, Whitelists . Skyvia supports Synchronization for the MailingLists, Routes, and Templates objects. DML Operations Support Skyvia supports DML operations for such Mailgun objects: Operation Object INSERT, UPDATE, DELETE IPPools, ListMembers, MailingLists, Routes, Templates INSERT, DELETE Bounces, Compliants, Domains, Unsubscribes, Whitelists UPDATE, DELETE Tags INSERT SeedLists DELETE SeedResults Stored Procedures Skyvia represents part of the supported Mailgun features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . SendMessage Use the following command to send a message. call SendMessage(:from, :to, :text, :html, :cc, :bcc, :subject, :amp-html, :template) Parameter Description From Email address for From header To Email address of the recipient(s). Example: \u201cBob bob@host.com \u201d. Use commas to separate multiple recipients Cc Email address of the recipient(s) who receive the message copy. Use commas to separate multiple recipients Bcc Email address of the recipient(s) who receive the hidden message copy. Use commas to separate multiple recipients Subject Message subject Text Body of the message (text version) Html Body of the message (HTML version) Amp-html AMP part of the message. 
Please follow Google guidelines to compose and send AMP emails Template Message template name, if needed Supported Actions Skyvia supports all the common actions for Mailgun." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/mailjet_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Mailjet [Mailjet](https://www.mailjet.com/) is a secure and reliable cloud-based email delivery and tracking platform, which allows users to send marketing and transactional emails. Data integration : Skyvia supports importing data to and from Mailjet, exporting Mailjet data to CSV files, replicating Mailjet data to relational databases and synchronizing Mailjet data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Mailjet. Query : Skyvia Query supports Mailjet. Establishing Connection To create a connection to Mailjet, specify an API Key and API Secret. Getting Credentials To get your API and Secret Keys for Mailjet REST API, perform the following steps: Sign in to Mailjet. Click the User icon in the top right corner and select Account settings . Go to the Rest API block and select Master API Key & Sub API key management . Copy API Key and Secret Key on the API Key Management page . Creating Connection On this page, to connect to Mailjet, you need to specify API Key and Secret Key . Connector Specifics Object Peculiarities The Campaign object does not support INSERT; instead, you add a new campaign to the CampaignDraft object. The Campaign object supports the UPDATE operation, but you can update only the IsDeleted and IsStarred columns. The CampaignDraft object supports INSERT and UPDATE operations; however, you can update only records with the value Used = false , that is, only campaigns that have not been sent yet. You should also note that the campaign content itself is not displayed in the columns of the Campaign or CampaignDraft objects.
When you import campaigns into the CampaignDraft object, a record with the draft status is created without content. For content, there is a separate object \u2014 CampaignDraftDetailContent . This object displays records that have some content. You can also import content into this object as text or HTML code by specifying the Id of the corresponding campaign. DML Operations Support Skyvia supports the following operations for the Mailjet objects: Operation Object INSERT, UPDATE, DELETE ContactMetadata, EventCallbackUrl, List, ListRecipient, ParseRoute, Segmentation, Sender, Template INSERT, UPDATE APIKey, CampaignDraft, Contact, MetaSender, TemplateDetailContent UPDATE Campaign, MyProfile, User INSERT CampaignDraftDetailContent, CampaignDraftSchedule, ContactList UPDATE, DELETE ContactData Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Mailjet objects: APIKey, APIKeyAccess, Campaign, CampaignDraft, Contact, ContactHistoryData, List, MessageInformation, MetaSender, Sender, User, Template . Skyvia supports Synchronization for the CampaignDraft and Template objects. Supported Actions Skyvia supports all the common actions for Mailjet." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/marketo_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Marketo [Marketo](https://www.marketo.com) is a cloud lead management and marketing solution. Data integration : Skyvia supports importing data to and from Marketo, exporting Marketo data to CSV files, replicating Marketo data to relational databases, and synchronizing Marketo data with other cloud apps and relational databases. Backup : Skyvia Backup supports Marketo backup. Query : Skyvia Query supports Marketo. Establishing Connection To create a connection with Marketo, you need to specify the domain to connect to, Client ID, and Client Secret.
Getting Credentials Client ID A GUID value, representing Marketo API client id. Client Secret Marketo API Client Secret. To configure access to Marketo and get the client ID and Client Secret for the connection, check [this instruction](https://developers.marketo.com/rest-api/) in Marketo documentation. Domain The address of your Marketo subdomain. Creating Connection To connect to Marketo, specify the Client ID, Client Secret, and Domain. Additional Connection Parameter Use Bulk Extract This parameter determines whether to use Marketo Bulk Extract to read Marketo Leads, Activities, and custom objects. If you select this checkbox, the mentioned objects\u2019 querying will use fewer API calls but may take more time. Select this checkbox only if you need to read many Marketo records and spend fewer API calls. \nRead Bulk Extract Specifics to know more. Metadata Cache Specify the period after which Metadata Cache is expired. Connector Specifics Bulk Extract Specifics Marketo provides interfaces for retrieval of large sets of data, called Bulk Extract. Skyvia supports Bulk Extract for the Leads , Activities objects, and for custom objects. Due to Marketo Bulk Extract API [limits](https://experienceleague.adobe.com/en/docs/marketo-developer/marketo/rest/bulk-extract/bulk-extract#limits) , Skyvia can read data by 31 days intervals. Thus, when you query data for more than 31 days, Skyvia reads data in chunks. Each chunk includes data for 31 days. When you enable Use Bulk Extract and query Leads, Activities or custom objects without filters, Skyvia reads all the records created since the account creation date until UTC Time Now and divides data into 31-day chunks. It takes 60 seconds minimum to read a single chunk. For example, you created your Marketo account three years ago, and you used the query SELECT * From Leads .\nSkyvia will read 36 chunks of data, spending 60 seconds minimum on each chunk, which results in 36 minutes or more to perform a single query. 
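To avoid reading the full account history, a date filter can restrict the query to a single 31-day window, so Bulk Extract reads one chunk instead of dozens. A minimal sketch in Skyvia Query syntax, assuming the Leads date field is addressable as UpdatedAt (field name and dates here are illustrative):

```sql
-- Illustrative sketch: restrict Leads to one 31-day window so Bulk Extract
-- reads a single chunk instead of the whole history since account creation
SELECT * FROM Leads
WHERE UpdatedAt >= '2024-01-01' AND UpdatedAt < '2024-02-01'
```

With a filter like this, a query that would otherwise take 36 minutes or more completes after a single 60-second chunk read.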
To increase query performance and save API calls, narrow the selected time interval and minimize the number of chunks by using filters. We strongly recommend using the following filters when querying Leads , Activities , and custom objects when Bulk Extract is enabled: Object Field Operator Leads Updated At > , >= , < , <= Activities ActivityDate, Created At > , >= , < , <= Custom objects Updated At > , >= , < , <= When to use Bulk Extract It is better to enable the Use Bulk Extract parameter in the following cases: When the Leads or Activities object contains many records and the API limits don\u2019t allow you to query all records at once. When querying custom objects. When using queries with date filters and the selected time interval is not very long. When using Incremental Replication. It is better to disable the Use Bulk Extract parameter in the following cases: When the Leads or Activities object contains a huge number of records, and the API limits allow you to query all records at once without filters. When the Leads or Activities object contains few records. Object Peculiarities Leads, Activities and Custom Objects The Marketo API doesn\u2019t allow filtering queried leads, activities, and custom objects data based on the Updated At or Created At fields. Thus, whenever a Synchronization or Incremental Replication runs, Skyvia queries all the records from these objects into the cache and applies a filter to this cache. It takes a lot of API calls and time if you have many records in these objects. However, the Bulk Extract interface supports native filtering by date fields for these objects. Thus, we recommend enabling the Use Bulk Extract connection parameter in the Connection Editor to save API calls for Leads, Activities , and custom objects in Synchronization or Incremental Replication integrations. Querying Specifics The Marketo API supports querying data from certain objects only by their identifying fields.
To get data from the Companies, Opportunities, OpportunityRoles , and SalesPersons objects, use a filter by the identifying fields. To get data from custom objects, use a filter by the identifying fields or enable Bulk Extract to get all the data without filters. Incremental Replication and Synchronization Objects having Created At or Updated At fields support Synchronization and Replication with Incremental Updates. Synchronization requires both fields to be present in an object. Incremental Replication requires at least one of the fields to be present in an object. Supported Actions Skyvia supports all the common actions for Marketo." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/microsoftads_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Microsoft Ads [Microsoft Ads](https://ads.microsoft.com) is a tool for advertising on the Bing search network and its partner networks (Yahoo and AOL). Data integration : Skyvia supports importing data to and from Microsoft Ads, exporting Microsoft Ads data to CSV files, replicating Microsoft Ads data to relational databases, and synchronizing Microsoft Ads data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Microsoft Ads. Query : Skyvia Query supports Microsoft Ads. Establishing Connection To create a connection to Microsoft Ads, sign in with your Microsoft Ads account using your credentials and specify the customer and account. Creating Connection To connect to Microsoft Ads, perform the following steps: Click Sign In with Microsoft in the Connection Editor. Enter your Microsoft Ads credentials. Click Accept to grant Skyvia the requested permissions. Select the Customer from the dropdown list. Select the Account from the dropdown list.
Additional Connection Parameters Attribution Window Set the Attribution Window parameter to define the time frame within which a click or impression resulted in a conversion from a marketing campaign. With this setting, each incremental replication will include any updates to historical reporting data within the specified attribution window. Connector Specifics Object Peculiarities Ads and Audiences The Ads and Audiences objects store records of different types. \nWhen you import data to these objects, you must map the Type field. The mapped Type value determines the set of the required fields for mapping. The required fields differ for different record types. \nIf you map a field unnecessary for the specified type, it is ignored. \nIf you leave a required field unmapped, you get an error. Reports Microsoft Ads allows you to use reports to retrieve detailed statistics on advertising accounts, campaigns, and ad groups. The Microsoft Ads connector supports a variety of reporting objects \u2013 pre-set, read-only datasets used to build report queries. In the objects with the *DailyReport suffix in the names, the metrics are grouped by day, and the objects contain the Date field. In the objects with the *MonthlyReport suffix in the names, the metrics are grouped by month, and the Date field corresponds to the month\u2019s first day. Report objects without suffixes in the names contain metrics for the entire selected period (Summary) between the specified StartDate and EndDate (or default values). Attributes and Metrics In Microsoft Ads, report fields can be either attributes (Attribute Columns) or metrics (Performance Statistics Columns). When selecting data from the reports, the results are aggregated by the selected fields (attributes and metrics other than the date). Start Dates and End Dates The default StartDate for all *Reports objects is 2006-01-01 . The default EndDate value equals the current date. You can set filters by these fields to modify the report period.
If the StartDate value in the filter (except with the >= operator) differs from the first day of the month for the monthly report, it is automatically changed to the first day of the next month. Report fields in Microsoft Ads can act as attributes (Attribute Columns) or metrics (Performance Statistics Columns). The set of the fields selected from the *Report objects determines the data aggregation in the result. \nIf you add an attribute field to the selection when querying, the data will be aggregated by this field. Related objects Some Microsoft Ads objects are related to each other by the aggregation relation. To get data from one object, Skyvia has to read all the records from its parent object first. This process consumes a lot of time and API calls. You can use filters to speed up the reading time and optimize the API call usage. Such objects and filters are the following: The AdGroups object is related to the Campaigns object. Set filters by the AdGroups Id and CampaignId fields when performing the Update and Delete operations. The Ads object is related to the AdGroups object. Use filters by AdGroupId and Type when performing Update. To optimize the Delete operation, use filters by the Id and AdGroupId . The Keywords object is related to the AdGroups object. Use filters by the Keywords Id and AdGroupId fields when performing Update and Delete operations. Nested Objects The connector includes objects with fields that store complex, structured data in JSON format. The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To replicate nested data into separate tables with the new replication runtime, select the Separate Tables option for Unwind Nested Objects .
The list of objects with specified fields that store complex data structures is the following: Object Field Nested Object AdGroups FrequencyCapSettings FrequencyCapSettingsType UrlCustomParameters CustomParametersType Settings SettingsType Ads UrlCustomParameters CustomParametersType Descriptions AssetLinksType Headlines AssetLinksType Images AssetLinksType LongHeadlines AssetLinksType Videos AssetLinksType Audiences CustomerShareCustomerAccountShares CustomerAccountSharesType RuleAnotherRuleItemGroups RuleItemGroupsType RuleItemGroups RuleItemGroupsType RuleExcludeRuleItemGroups RuleItemGroupsType RuleIncludeRuleItemGroups RuleItemGroupsType CombinationRules CombinationRulesType Campaigns UrlCustomParameters CustomParametersType Settings SettingsType CustomerAccountSharesType Associations CustomerAccountShareAssociationsType DSASearchQueryPerformanceDailyReport CategoryList CategoryListType DSASearchQueryPerformanceMonthlyReport CategoryList CategoryListType DSASearchQueryPerformanceReport CategoryList CategoryListType Keywords UrlCustomParameters CustomParametersType RuleItemGroupsType Items RuleItemsType SettingsType Details TargetSettingDetailType Incremental Replication and Synchronization The *DailyReport and *MonthlyReport objects store dates without the time part. When performing replication with Incremental Updates, Skyvia only tracks updates up to the last day/month. Skyvia doesn\u2019t track the updates made on the current date/current month. Skyvia supports Synchronization for the Accounts, AdGroups, Ads, Audiences, Budgets, Campaigns, Keywords objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE AdGroups, Ads, Audiences, Budgets, Campaigns, Keywords Supported Actions Skyvia supports all the common actions for Microsoft Ads." 
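As an illustration of the related-object filters recommended earlier, an Update on the Ads object can be narrowed by AdGroupId and Type so Skyvia does not scan all parent AdGroups records first. A sketch in Skyvia Query syntax (the Status column, its 'Paused' value, and the Type value are hypothetical examples, not confirmed field values):

```sql
-- Illustrative sketch: filter the Ads UPDATE by AdGroupId and Type to avoid
-- reading every record of the parent AdGroups object
UPDATE Ads SET Status = 'Paused'
WHERE AdGroupId = 123456789 AND Type = 'ResponsiveSearchAd'
```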
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/mondaycom_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Monday.com [Monday.com](https://monday.com/) is a collaboration platform that helps businesses manage tasks, projects, and workflows to streamline communication, enhance productivity, and track progress. Data integration : Skyvia supports importing data to and from Monday.com, exporting Monday.com data to CSV files, replicating Monday.com data to relational databases, and synchronizing Monday.com data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Monday.com. Query : Skyvia Query supports Monday.com. Establishing Connection To create a connection to Monday.com, specify the Access Token. Getting Credentials To get the Access Token, do the following: Sign in to Monday.com. Click the user icon on the left and select Developers . Click Developer above and select My Access Tokens . Click Show and copy the displayed access token value. Creating Connection To connect to Monday.com, paste the obtained Access Token value to the corresponding box in the Connection Editor. Additional Connection Parameters Board Item Tables Naming Rule To specify how Board Item tables are named, choose between Name , Id , or Name and Id . Depending on your selection, objects will be named in one of these formats: <Name>_Items, <Id>_Items , or <Name>_<Id>_Items . Board Item Columns Naming Rule To control how Board Item columns are named, choose between the Title or the Id . Connector Specifics Object Peculiarities There are three types of Monday.com objects: Static Objects Users, Teams, Workspaces, Boards, BoardItemGroups, BoardViews, ActivityLogs, Docs, DocBlocks, Tags, Updates, UpdateAssets These objects have static metadata.
<BoardName>_Items and <BoardName>_Subitems Objects Every Board created by a user in the Monday.com UI has a corresponding object with the <BoardName>_Items name, where <BoardName> is the name of the Board created by the user. Most of the fields in these objects are custom. Users manage these fields and define their types. The <BoardName>_Subitems object is created when the corresponding Board has subitems. To increase performance when selecting data from such objects, you can use the following filters: Field Type Operator Primary key (Id) = , In Checkbox = Numbers = , != , > , >= , < , <= , In, Is Null, Is Not Null Text, long_text Is Null, Is Not Null, Like (for constant expressions meaning \u201ccontains\u201d (LIKE \u2018%sometext%\u2019) or complete match (LIKE \u2018sometext\u2019)) text, long_text fields with the *phone or *email suffix = , != , In, Is Null, Is Not Null, Like (for constant expressions meaning \u201ccontains\u201d (LIKE \u2018%sometext%\u2019) or complete match (LIKE \u2018sometext\u2019)) Status, rating, date, hour Is Null, Is Not Null Phone, country = , != , In, Is Null, Is Not Null Assets Type Objects Every <BoardName>_Items and <BoardName>_Subitems field of the Assets type has a corresponding object with the <BoardName>_Items_<FieldName>_Assets and <BoardName>_Subitems_<FieldName>_Assets names, where <BoardName> is the name of the Board created by the user and <FieldName> is the name of the field of the Assets type. Read-only Objects The following objects are read-only: Users, Teams, BoardViews, ActivityLogs, Docs, DocBlocks . Content Field The Content field value in the UpdateAssets, <BoardName>_Items_<FieldName>_Assets , and <BoardName>_Subitems_<FieldName>_Assets objects contains the assets\u2019 binary data. Skyvia reads the Content values via an additional API call for each record. Thus, querying this field significantly increases reading time and API call usage. Skyvia supports only asset type attachments. It doesn\u2019t support files attached directly to the item records in the <BoardName>_Items or <BoardName>_Subitems objects. Tags You can insert tags with new Name values.
\nIf you try to insert a Tag with an already existing Name value, the API will return the Id of the existing record instead of inserting it. Nested Types The following Monday.com objects store nested JSON objects: Object Field Nested object <BoardName>_Items, <BoardName>_Subitems People PersonsType <BoardName>_Items, <BoardName>_Subitems File FilesType <BoardName>_Items, <BoardName>_Subitems TimeTracking TimeTrackingSpansType Updates Replies RepliesType Updates Assets UpdateAssetsType You can use the nested objects supporting INSERT, UPDATE and DELETE operations in Import and Data Flow integrations. DML Operations Support Operation Object INSERT, UPDATE, DELETE Boards, BoardItemGroups, <BoardName>_Items, <BoardName>_Subitems INSERT, DELETE Workspaces, Updates INSERT Tags, UpdateAssets, <BoardName>_Items_<FieldName>_Assets, <BoardName>_Subitems_<FieldName>_Assets Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Users, Workspaces, Boards, ActivityLogs, Updates, UpdateAssets, <BoardName>_Items, <BoardName>_Subitems, <BoardName>_Items_<FieldName>_Assets, <BoardName>_Subitems_<FieldName>_Assets objects. Replication detects only the new records for the Users, Workspaces, ActivityLogs, UpdateAssets, <BoardName>_Items_<FieldName>_Assets, <BoardName>_Subitems_<FieldName>_Assets objects. Skyvia supports Synchronization for the objects which contain the CreatedDate or UpdatedDate field and support the INSERT and UPDATE operations. Supported Actions Skyvia supports all the common actions for Monday.com." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/motion_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Motion [Motion](https://www.usemotion.com/) is a cloud productivity platform using AI for planning and time management. Data integration : Skyvia supports importing data to and from Motion, exporting Motion data to CSV files, and replicating Motion data to relational databases. Backup : Skyvia Backup does not support Motion. Query : Skyvia Query supports Motion. Establishing Connection To create a connection to Motion, specify the API Key.
Getting Credentials To locate the API Key, do the following: Go to Motion and click the gear icon. Click API on the left, switch to the API Keys tab and click Create API Key . Name your API Key and click Create . Copy the generated API Key and save it in a safe place. Adjust the API Key access if needed and click Update . The API Key is available only once after you create it. You can\u2019t access it later. Creating Connection Paste the obtained API Key to the corresponding box in the Connection Editor to connect to Motion. Connector Specifics Object Peculiarities RecurringTasks The Motion API requires providing the Frequency field values in a [specific format](https://docs.usemotion.com/docs/motion-rest-api/41f447b4b598d-create-a-recurring-task#defining-specific-days-for-a-frequency) when importing data to this object. Specify the frequency type and define the specific weekdays if needed. For example, INSERT INTO RecurringTasks (Name, Workspace_Id, Frequency, AssigneeId) \nvalues ('Task - Recurring Insert','YYYD23Jx-R9wGvMKM5wpY','monthly_first_MO','LwXFskHGTXT6xtsHmM4qlA7cBuh2') Valid weekday values are MO, TU, WE, TH, FR, SA, SU . Specify them in the array format, for example, [MO, FR, SU] , for Monday, Friday, and Sunday. Always use an array of days together with the frequency type. To specify Daily, Weekly , and Bi-weekly frequency, use the valid values below. Frequency Valid Values Example Daily daily_every_day, daily_every_week_day, daily_specific_days_DAYS_ARRAY daily_specific_days_[MO, TU, FR] Weekly weekly_any_day, weekly_any_week_day, weekly_specific_days_DAYS_ARRAY weekly_specific_days_[MO, TU, FR] Bi-weekly biweekly_first_week_specific_days_DAYS_ARRAY, biweekly_first_week_any_day, biweekly_first_week_any_week_day, biweekly_second_week_any_day, biweekly_second_week_any_week_day biweekly_first_week_specific_days_[MO, TU, FR] To specify Monthly Frequency, use the following values.
Frequency Valid Values Example Specific Week Day Substitute the DAY with the day code MO, TU, WE, TH, FR, SA, or SU . monthly_first_DAY, monthly_second_DAY, monthly_third_DAY, monthly_fourth_DAY, monthly_last_DAY monthly_first_MO Specific Day Options Use numbers to provide the specific day. If you specify a number that exceeds the actual number of days in the month, it will be replaced by the actual last-day number. monthly_1, monthly_15, monthly_31 Specific Week Options monthly_any_day_first_week, monthly_any_day_second_week, monthly_any_day_third_week, monthly_any_day_fourth_week, monthly_any_day_last_week Any Week Day monthly_any_week_day_first_week, monthly_any_week_day_second_week, monthly_any_week_day_third_week, monthly_any_week_day_fourth_week, monthly_any_week_day_last_week Other Options monthly_last_day_of_month, monthly_any_week_day_of_month, monthly_any_day_of_month For specifying a Quarterly frequency, use the following values. Frequency Valid Values Example First Days Substitute the DAY with the day code MO, TU, WE, TH, FR, SA, or SU . quarterly_first_day, quarterly_first_week_day, quarterly_first_DAY quarterly_first_MO Last Days quarterly_last_day, quarterly_last_week_day, quarterly_last_DAY quarterly_last_MO Other Options quarterly_any_day_first_week, quarterly_any_day_second_week, quarterly_any_day_last_week, quarterly_any_day_first_month, quarterly_any_day_second_month, quarterly_any_day_third_month Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Tasks and Comments objects. Replication detects only new records and doesn\u2019t detect the updated records. Skyvia doesn\u2019t support Synchronization for Motion. DML Operations Support Skyvia supports the following DML operations for Motion objects.
Operation Object INSERT, UPDATE, DELETE Tasks INSERT, DELETE RecurringTasks INSERT Comments, Projects Stored Procedures Skyvia represents part of the supported Motion features as stored procedures.\nYou can call a stored procedure , for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query . MoveTaskToWorkspace Use the following command to move a specific task to a specific workspace. call MoveTaskToWorkspace(:taskId,:workspaceId,:assigneeId) UnassignTask To unassign the specific task, use the following command. call UnassignTask(:taskId) Supported Actions Skyvia supports all the common actions for Motion." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/myhours_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources My Hours [My Hours](https://myhours.com/) is a project time-tracking solution designed to manage performance and efficiency. Data integration : Skyvia supports importing data to and from My Hours, exporting My Hours data to CSV files, and replicating My Hours data to relational databases. Backup : Skyvia Backup does not support My Hours. Query : Skyvia Query supports My Hours. Establishing Connection Enter your username and password to create a connection to My Hours. Additional Connection Parameters Suppress Extended Requests When querying multiple records, the My Hours API returns only part of the fields for some objects. Skyvia performs additional extended requests to query the values of the missing fields. Skyvia performs such API requests for each record of an object, which can decrease performance and significantly increase the number of API calls used.
The additional fields are the following: OBJECT FIELD TeamMembers CanOnlyManageTeam, CanEditLogsForOthers, CanApproveLogsForOthers, CanManageProjects, CanManageLaborCostsForOthers, CanManageBillableRatesAndBudgets, Approved, AssignToFutureProjects, FirstLogDate, Notes You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Connector Specifics Object Peculiarities TimeLogs To query data from this object, use filters by the DateFrom and/or DateTo fields. Otherwise, a query will return an empty result.\nIf you filter only by the DateTo field, the query will return all time logs for dates before the specified one. When importing data to this object, map either the StartTime and EndTime fields or the LogDuration field. Dashboard* Objects Skyvia represents the Dashboard as four separate objects depending on the reporting group: DashboardProjectTasks, DashboardTasks, DashboardClientProjects, DashboardTeamMembers .\nUse filters by the DateFrom and/or DateTo fields to query data from these objects. Otherwise, a query will return an empty result. IncompletedTasks When you import data to this object, consider the following.\nMap the BudgetValue field if the project budget is Task-based. \nMap the Rate field if your project\u2019s invoice method is Task-based. Clients The ContactPhone and Address fields are designed for mapping when performing INSERT and UPDATE operations. The TaxName, TaxValue , and TaxNumber fields are intended for mapping when performing the UPDATE operation. TeamMembers The UpdateUserRateOnProjects, UpdateUserBillableRateOnProjects, ChangedFromDateLaborRate , and ChangedFromDateBillableRate fields are designed for mapping when performing the UPDATE operation. They return empty values when querying. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Projects object only. Skyvia detects only new records for this object. Skyvia doesn\u2019t support Synchronization for My Hours.
DML Operations Support My Hours objects support the following DML operations. Operation Object INSERT, UPDATE Clients, IncompletedTasks, Projects, TeamMembers INSERT, DELETE TimeLogs INSERT Tags Stored Procedures Skyvia represents part of the supported My Hours features as stored procedures.\nYou can call a stored procedure , for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query . StartNewTimer To start a new timer for the current user, use the following command. call StartNewTimer(:projectId,:taskId,:note,:date,:start,:billable,:expense) PARAMETER NAME DESCRIPTION ProjectId The ID of the project TaskId The ID of the task Note Optional description Date Timer Date in UTC with timezone info Start Timer start in ISO8601 with date and time in UTC with timezone info Billable Optional. Boolean. The default value is true when the assigned project is set as billable . You can set the billable parameter as false when needed. Expense Optional. Integer. StopTimer To stop the running timer for a current user, use the following command. call StopTimer(:logId, :end) PARAMETER NAME DESCRIPTION LogId The ID of the time log End The moment of timer stop in ISO8601 with date and time in UTC with timezone info EditTimelog Use the following command to modify the existing time log. call EditTimelog(:logId, :userId, :projectId, :taskId, :note, :date, :start, :end, :billable, :expense) Leaving any of the parameters empty will remove its value. PARAMETER NAME DESCRIPTION LogId The ID of the log UserId The logged-in user ID ProjectId The ID of the project TaskId The ID of the task Note Description Date Timer Date in UTC with timezone info Start Timer start in ISO8601 with date and time in UTC with timezone info End The moment of timer stop in ISO8601 with date and time in UTC with timezone info Billable Boolean. The default value is true when the assigned project is set as billable .
You can set the billable parameter as false when needed. Expense Integer ArchiveProject To archive the project, use the following command. call ArchiveProject(:projectIds) PARAMETER NAME DESCRIPTION ProjectIds The list of project IDs in the array format, for example [123,456,789] CopyProject To create a project based on an existing project, use the following command. call CopyProject(:projectId) ArchiveTask To archive the task, use the following command. call ArchiveTask(:projectId,:projectTaskId) AssignMemberToProject To add an existing user to a specific project, use the command call AssignMemberToProject(:projectId,:userId) ArchiveTeamMember To archive the user, use the command call ArchiveTeamMember(:userId) Supported Actions Skyvia supports all the common actions for My Hours." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/netsuite_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources NetSuite (SOAP) [NetSuite](https://www.netsuite.com/portal/home.shtml) is a unified cloud business management solution, including ERP/financials, CRM, and ecommerce. This connector uses the SOAP API to work with NetSuite. It does not support NetSuite fields storing array data, but it supports backup of some NetSuite data. This connector is considered deprecated and is left for compatibility reasons. We recommend using the NetSuite v2 (REST) connector for new integrations and other use cases. Data integration : Skyvia supports importing data to and from NetSuite, exporting NetSuite data to CSV files, replicating NetSuite data to relational databases, and synchronizing NetSuite data with other cloud apps and relational databases. Backup : Skyvia Backup supports NetSuite backup. Query : Skyvia Query supports NetSuite. Establishing Connection Skyvia supports two kinds of authentication in this connector: basic and token-based authentication.
Basic authentication assumes storing your NetSuite credentials on Skyvia, while token-based authentication uses an automatically generated token, which can easily be revoked. 2-factor authentication is required for NetSuite users with highly privileged roles, which means that such a NetSuite user account cannot be used for basic authentication.

### Getting Credentials

For basic authentication, you only need to create an integration in NetSuite and obtain the Application Id.

To obtain credentials for connecting Skyvia to NetSuite using token-based authentication, sign in to NetSuite and perform the following actions.

#### Obtain Account ID

You can get the account ID value in NetSuite SOAP Web Services Preferences. Point to Setup, then to Integrations, and then click SOAP Web Services Preferences. On the opened page, copy your Account ID under Primary Information.

#### Enable Token-based Authentication in NetSuite

Perform the following actions (if you plan to use token-based authentication):

- Point to Setup, then to Company, then click Enable Features. Then click SuiteCloud.
- In the Manage Authentication group, select the Token-based Authentication checkbox.

#### Create Role

Create a role with the permissions for the objects you would like to use. You can find the available permissions in the Transactions and Lists sections. To allow API calls, grant the following permissions in the Setup section:

- SOAP Web Services
- User Access Tokens
- Custom Record Types
- Custom Lists
- Custom Fields

#### Create Integration Record

To create an integration record, perform the following:

- Point to Setup, then to Integrations, then Manage Integrations, and then click New.
- On the opened page, enter a Name for the application, for example, Skyvia.
- If you plan to use token-based authentication, make sure that, under Token-based Authentication, the Token Based Authentication checkbox is selected and the TBA: Authorization Flow checkbox is not selected.
- If you plan to use basic authentication, select the User Credentials checkbox at the bottom of the page.
- Click Save.
- From the top of the page, copy the Application Id.
- For token-based authentication, copy the generated Consumer Key / Client ID and Consumer Secret / Client Secret from the bottom of the page. Note that they are displayed only once, on this page. If you need them again, you will have to re-generate them, which makes the old ones stop working. You may reuse these values if you need multiple Skyvia connections to this NetSuite account.

#### Create Access Token

Perform the following actions to create an access token and obtain the Token ID and Token Secret:

- Point to Setup, then to Users/Roles, then Access Tokens, and then click New.
- Select the Integration record, User, and Role created in the previous steps.
- Click Save.
- From the confirmation screen, copy the generated Token ID and Token Secret. Note that the Token ID and Token Secret are displayed only once, on this page. If you need them again, you will have to re-generate them, which makes the old ones stop working.

### Creating Connection

When creating a NetSuite connection, you need to specify different connection parameters depending on the authentication kind you select.

#### Basic Authentication

To create a NetSuite connection using basic authentication, perform the following steps:

- In the Authentication list, select Basic.
- Enter the obtained NetSuite Account ID.
- Enter your NetSuite account email into the User box.
- Enter your Password.

#### Token-based Authentication

To create a NetSuite connection using token-based authentication, perform the following steps:

- In the Authentication list, select Token-Based.
- Enter the obtained NetSuite Account ID.
- Paste the copied Consumer Key / Client ID into the Consumer Key box.
- Paste the copied Consumer Secret / Client Secret into the Consumer Secret box.
- Paste the Token ID and Token Secret obtained when creating the access token into the Token and Token Secret boxes, respectively.

### Additional Connection Parameters

#### Metadata Cache

You can specify the period of time after which the [Metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired.

#### Custom Objects and Fields

Skyvia supports custom NetSuite objects and fields. Their support is disabled by default. You can enable it using the following connection options:

- Use Custom Tables: select this checkbox if you need to work with custom NetSuite tables in Skyvia. Note that processing custom tables may take a substantial amount of time, so it is better to leave this checkbox cleared if you do not need custom NetSuite tables in Skyvia.
- Use Custom Lists: select this checkbox if you need to work with NetSuite custom lists (as with other objects) in Skyvia. Processing custom lists may also take a substantial amount of time, so leave this checkbox cleared if you do not need custom lists in Skyvia.
- Use Custom Fields: select this checkbox if you need to work with custom fields of predefined NetSuite tables in Skyvia. Processing custom fields may also take a substantial amount of time, so leave this checkbox cleared if you do not need custom fields in Skyvia.

### Connector Specifics

#### Nested Array Fields

This connector does not support NetSuite fields storing array data. Skyvia cannot extract data from such fields, and they are not displayed in the Skyvia interface. If such a field is required in a table (that is, it must be filled in order to create a new record), Skyvia cannot load data into that table, so such tables are read-only in Skyvia. The list of such objects includes, for example, all the transaction objects, because Skyvia cannot extract their items.
Here is the list of such objects: BinWorksheet, CashRefund, CashSale, Check, CreditMemo, Estimate, Invoice, Opportunity, PurchaseOrder, ReturnAuthorization, SalesOrder, TransferOrder, VendorBill, VendorCredit, VendorReturnAuthorization, WorkOrder, Deposit, InventoryAdjustment, InventoryTransfer, ItemSupplyPlan, JournalEntry, VendorPayment, SerializedAssemblyItem, ExpenseReport, ManufacturingRouting, PaycheckJournal, ManufacturingOperationTask, CustomerRefund, InventoryCostRevaluation, ManufacturingCostTemplate. This list is not complete.

Skyvia also does not support updating the Address object and some other objects. Skyvia cannot load data to these tables and cannot load certain data from them. If you need to work with such fields, use our NetSuite v2 (REST) connector.

#### Other Connector Limitations

Skyvia does not support custom NetSuite fields that have double quotation marks in their names. As for NetSuite files, Skyvia cannot read NetSuite file content: it can import files to NetSuite, but it can only read file metadata, not content, from NetSuite.

## Supported Actions

Skyvia supports all the common actions for NetSuite.

# [NetSuite V2 (REST)](https://docs.skyvia.com/connectors/cloud-sources/netsuite_v2_connections.html)

[NetSuite](https://www.netsuite.com/portal/home.shtml) is a unified cloud business management solution, including ERP/financials, CRM, and ecommerce. This connector uses the REST API to work with NetSuite. It supports NetSuite fields storing array data, but it does not support backup. See NetSuite (SOAP) for an alternative connector.
- Data integration: Skyvia supports importing data to and from NetSuite, exporting NetSuite data to CSV files, replicating NetSuite data to relational databases, and synchronizing NetSuite data with other cloud apps and relational databases.
- Backup: Skyvia Backup does not support NetSuite backup. You can use our NetSuite (SOAP) connector.
- Query: Skyvia Query supports NetSuite.

## Establishing Connection

Skyvia supports two kinds of authentication in this connector: OAuth 2.0 and token-based authentication.

### Getting Credentials

To obtain credentials for connecting Skyvia to NetSuite, sign in to NetSuite and perform the following actions.

#### Obtaining Account ID

You can get the account ID value in NetSuite SOAP Web Services Preferences. Point to Setup, then to Integrations, and then click SOAP Web Services Preferences. On the opened page, copy your Account ID under Primary Information.

#### Obtaining Time Zone and Date Format

You can get the account time zone and date format in your NetSuite preferences. Point to the home icon in the top left corner of the page and click Set Preferences. Under Localization, find your Time Zone; under Formatting, find your Date Format.

#### Obtaining Company Language

This parameter is necessary when the Multiple Languages feature is enabled in NetSuite and the language of the connection user differs from the language of the company. You can find the company language in NetSuite Company Preferences. Point to Setup, then to Company, and then, under Preferences, click General Preferences. On the opened page, scroll down and click the Languages tab. There you can see your company language.

#### Enable Authentication in NetSuite

Perform the following actions:

- Point to Setup, then to Company, then click Enable Features. Then click SuiteCloud.
- If you want to use token-based authentication, in the Manage Authentication group, select the Token-based Authentication checkbox.
- If you want to use OAuth 2.0 authentication, in the Manage Authentication group, select the OAuth checkbox.

#### Create Role

Create a role with the permissions for the objects you would like to use. You can find the available permissions in the Transactions and Lists sections. To allow API calls, grant the following permissions in the Setup section:

- REST Web Services
- User Access Tokens
- Custom Record Types
- Custom Lists
- Custom Fields

Grant the SuiteAnalytics Workbook permission in the Reports section.

#### Create Integration Record

To create an integration record, perform the following:

- Point to Setup, then to Integrations, then Manage Integrations, and then click New.
- On the opened page, enter a Name for the application, for example, Skyvia.
- If you plan to use token-based authentication, make sure that, under Token-based Authentication, the Token Based Authentication checkbox is selected and the TBA: Authorization Flow checkbox is not selected.
- If you plan to use OAuth 2.0 authentication: under OAuth 2.0, select the Authorization Code Grant checkbox; in the Redirect URI box, enter https://app.skyvia.com/oauthcallback/netsuite; select the REST Web Services checkbox; and make sure that the Public Client checkbox is not selected.
- Click Save.
- From the confirmation screen, copy the generated Consumer Key / Client ID and Consumer Secret / Client Secret. Note that they are displayed only once, on this page. If you need them again, you will have to re-generate them, which makes the old ones stop working. You may reuse these values if you need multiple Skyvia connections to this NetSuite account.

#### Create Access Token

This is needed only for token-based authentication. Perform the following actions to create an access token and obtain the Token ID and Token Secret:

- Point to Setup, then to Users/Roles, then Access Tokens, and then click New.
- Select the Integration record, User, and Role created in the previous steps.
- Click Save.
- From the confirmation screen, copy the generated Token ID and Token Secret. Note that the Token ID and Token Secret are displayed only once, on this page. If you need them again, you will have to re-generate them, which makes the old ones stop working.

### Creating Connection

When creating a NetSuite connection, you need to specify different connection parameters depending on the authentication kind you select.

#### Token-based Authentication

To create a NetSuite connection using token-based authentication, perform the following steps:

- In the Authentication list, select Token-Based.
- Paste the copied Consumer Key / Client ID into the Consumer Key box.
- Paste the copied Consumer Secret / Client Secret into the Consumer Secret box.
- Paste the Token ID and Token Secret obtained when creating the access token into the Token and Token Secret boxes, respectively.
- Enter the obtained NetSuite Account ID.
- Specify the Account Date Format.
- Specify your Account Time Zone.

#### OAuth 2.0 Authentication

To create a NetSuite connection using OAuth 2.0 authentication, perform the following steps:

- In the Authentication list, select OAuth 2.0.
- Paste the copied Consumer Key / Client ID into the Client ID box.
- Paste the copied Consumer Secret / Client Secret into the Client Secret box.
- Click Sign In with NetSuite.
- Enter your NetSuite credentials and click Log In.
- Click Continue to allow Skyvia access to NetSuite. The NetSuite Account ID appears automatically after the connection token is formed.
- Specify the Account Date Format.
- Specify your Account Time Zone.

### Additional Connection Parameters

#### Metadata Cache

You can specify the period of time after which the [Metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired. Note that reading metadata for NetSuite takes an especially long time (about 5 minutes), so connecting for the first time may take a while.
If you disable metadata caching or set the Metadata cache parameter to a low value, Skyvia can experience slowdowns when querying NetSuite metadata.

#### Company Language

If the Multiple Languages feature is enabled in NetSuite, and the language of the connection user differs from the language of the company, you must specify the Company Language. NetSuite requires passing the company language with each API request. By default, the connection user language is used.

#### Analytics Custom Field Definitions

Use this parameter to define the custom fields configuration for the Transactions, TransactionLines, RevenueElements, SystemNotes, and SystemNotesV2 objects. For more details, see the Custom Fields Definitions Config section. After adding or changing this parameter, reset the Metadata Cache.

### Connector Specifics

#### Custom Fields Definitions Config

To define custom fields, you need the following values:

- TableName: The name of the object where the custom fields are located. For example, TransactionLines.
- Name: A user-friendly name for the custom field.
- DbType: The data type of the custom field. Supported data types: Boolean, Double, Decimal, Date, DateTime, Int32, Int64, String.
- APIName: The ID of your custom field, as displayed in NetSuite.

##### Custom Fields Definition Example

We created the Rev Rec Start and Rev Rec End custom fields in NetSuite. To let Skyvia parse them, we use this config:

```json
[
  {
    "TableName": "TransactionLines",
    "CustomFields": [
      {
        "Name": "Rev Start Date",
        "APIName": "custcol_rev_start_date",
        "DbType": "Date"
      },
      {
        "Name": "Rev End Date",
        "APIName": "custcol_rev_end_date",
        "DbType": "Date"
      }
    ]
  }
]
```

You can use this example to write your own config. Replace the example values with your own.

#### Nested Array Fields

Many NetSuite objects have fields storing array data. Such fields fully support Nested object mapping in Import.
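Since selecting data from these array fields returns their contents as JSON arrays, downstream code can decode them with any JSON parser. A minimal sketch of flattening one such value after it has been fetched; the row shape, the Items field name, and the values are hypothetical, not part of the NetSuite schema:

```python
import json

# Hypothetical row as it might be fetched from a NetSuite object:
# the nested array field arrives as a JSON string.
row = {
    "Id": "1001",
    "Items": '[{"item": "Widget", "quantity": 2}, {"item": "Gadget", "quantity": 5}]',
}

# Decode the JSON array and flatten it into one dict per nested element,
# keeping a reference back to the parent record.
lines = [{"ParentId": row["Id"], **item} for item in json.loads(row["Items"])]
```

Each element of `lines` is then an ordinary flat record, which is convenient when loading the nested data into a relational table.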
You can see our tutorials on QuickBooks and Salesforce, demonstrating how to use nested object mapping:

- How to Import Data from Salesforce Opportunities to QuickBooks Invoices
- How to Import QuickBooks Invoices to Salesforce Opportunities
- Easy Importing Invoices and Customers Between QuickBooks Online Accounts
- How to Import Products with Variants from a Relational Database into Shopify

When selecting data from these fields, they return their contents as JSON arrays.

#### Not Supported Objects

Skyvia does not support the following NetSuite objects: billingaccount, priceplan, subscriptionchangeorder, subscription, subscriptionline, usage.

## Supported Actions

Skyvia supports all the common actions for NetSuite.

# [Nimble](https://docs.skyvia.com/connectors/cloud-sources/nimble_connections.html)

[Nimble](https://www.nimble.com/) is an online customer relationship management (CRM) system with a focus on relationships.

- Data integration: Skyvia supports importing data to and from Nimble, exporting Nimble data to CSV files, replicating Nimble data to relational databases, and synchronizing Nimble data with other cloud apps and relational databases.
- Backup: Skyvia Backup does not support Nimble.
- Query: Skyvia Query supports Nimble.

## Establishing Connection

To create a connection to Nimble, sign in with Nimble and select an account.

### Creating Connection

To connect to Nimble, perform the following steps:

- Click Sign In with Nimble.
- In the opened window, enter your Nimble credentials and click Sign In.
- Click Authorize.

## Connector Specifics

### Object Peculiarities

#### Contacts

The Contacts object has the following read-only fields: FirstName, LastName, CompanyName, Rating, LeadStatus, and LeadSource. To assign data, use the fields with the "Add" suffix: FirstNameAdd, LastNameAdd, CompanyNameAdd, RatingAdd, LeadStatusAdd, and LeadSourceAdd.
The values for these fields must be in JSON format. For example, for the FirstNameAdd field, the value could be: [{"value": "John", "modifier": ""}].

The Tags field in the Contacts object is read-only, so the TagsAdd field must be used to assign or modify tags for a contact. Unlike the other fields, the value for TagsAdd should be a simple list of tags separated by commas, not JSON. The TagsAdd field cannot be used in an UPDATE operation.

#### ContactNotes

When loading data into the ContactNotes object, provide the ContactIds field as a JSON array of contact IDs. If you include multiple IDs for a source record, a separate record is created in the ContactNotes object for each contact. However, only one record appears in the execution log.

### Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the Contacts, ContactNotes, and Tasks objects. Skyvia supports Synchronization for the Contacts object.

### DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Contacts |
| INSERT, DELETE | ContactNotes |
| INSERT | Tasks |

### Stored Procedures

Skyvia represents some of the supported Nimble features as stored procedures. You can call a stored procedure, for example, as the command text of the ExecuteCommand action in a Target component of a Data Flow, or in Query.

#### UpdateContactTags

To update contact tags, use the following command. It overrides the existing tags with the new ones provided.

```
call UpdateContactTags(:ContactId,:Tags)
```

| Parameter name | Description |
| --- | --- |
| ContactId | Contact identifier |
| Tags | Contact tags. The value for the Tags parameter must be specified as a JSON array of tag strings: ["tag1", "tag2"] |

## Supported Actions

Skyvia supports all the common actions for Nimble.
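The JSON shape of the "Add" fields and the comma-separated TagsAdd value described above can be generated rather than hand-written. A minimal sketch; the helper name and the sample tag values are illustrative, not part of the Nimble API:

```python
import json

def add_field_value(value, modifier=""):
    # Build the JSON value that Nimble "Add" fields (FirstNameAdd, etc.)
    # expect, e.g. [{"value": "John", "modifier": ""}].
    # The helper name is illustrative.
    return json.dumps([{"value": value, "modifier": modifier}])

first_name_add = add_field_value("John")

# TagsAdd is the exception: a plain comma-separated list, not JSON.
tags_add = ",".join(["lead", "newsletter"])
```

Generating the values this way avoids quoting mistakes that are easy to make when typing the JSON by hand.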
# [OData Endpoint](https://docs.skyvia.com/connectors/cloud-sources/odata_connections.html)

Skyvia allows you to connect to any data source that provides an OData interface for data access over the web.

- Data integration: Skyvia supports importing data to and from OData endpoints, exporting their data to CSV files, and replicating their data to relational databases.
- Backup: Skyvia Backup via OData is not supported.
- Query: Skyvia Query supports OData endpoints.

## Establishing Connection

To create a connection to an OData endpoint, you need to have the server URL and know its authentication method.

### None Authentication Connection

To connect to an OData endpoint without authentication, specify the Server URL and set Authentication to None.

### Basic Authentication Connection

To connect to an OData endpoint with the Basic authentication method, specify these parameters:

- Specify the Server URL.
- Select Basic Authentication.
- Enter your User and Password for the OData endpoint.

### Additional Connection Parameters

#### URL Parameters

Custom query string parameters for HTTP requests. The value should be encoded as a part of a URL and have the usual query string format: param1=value1&param2=value2&param3=value3

## Connector Specifics

### Query

Provide constant values for the columns when executing UPDATE statements in Query:

```sql
UPDATE table_name
SET column_name = 'constant_value'
WHERE condition;
```

You cannot use expressions that involve the current values of columns, such as adding to the existing value or referencing another column.

### Incremental Replication and Synchronization

Skyvia doesn't know which OData endpoint fields store record creation and modification timestamps. Therefore:

- Skyvia does not support Replication with Incremental Updates for OData endpoints.
- Skyvia does not support Synchronization for OData endpoints.

## Supported Actions

Skyvia supports all the common actions for OData endpoints.
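Returning to the URL Parameters setting above: the value must be a URL-encoded query string of the param1=value1&param2=value2 form. A quick way to build one correctly (the parameter names and values here are hypothetical, not parameters any particular OData endpoint requires):

```python
from urllib.parse import urlencode

# Hypothetical custom parameters for an OData endpoint. urlencode produces
# the param1=value1&param2=value2 format and URL-encodes each key and value.
params = {"api-version": "4.0", "company": "My Company"}
url_parameters = urlencode(params)
# The space in "My Company" is encoded, so the result is safe
# to paste into the URL Parameters box.
```

Encoding the values programmatically avoids broken connections caused by raw spaces or special characters in a hand-typed query string.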
# [Okta](https://docs.skyvia.com/connectors/cloud-sources/okta_connections.html)

[Okta](https://www.okta.com/) is an Identity-as-a-Service (IDaaS) platform that provides you and your colleagues with access to all other company software with one login. Okta is available on your computer, laptop, mobile phone, or tablet, allowing you to access your applications anytime and anywhere. Okta helps IT become more secure, makes people more productive, and maintains compliance.

- Data integration: Skyvia supports importing data to and from Okta, exporting Okta data to CSV files, replicating Okta data to relational databases, and synchronizing Okta data with other cloud apps and relational databases.
- Backup: Skyvia Backup does not support Okta.
- Query: Skyvia Query supports Okta.

## Establishing Connection

To create a connection to Okta, you need to specify your subdomain and API key.

### Getting Credentials

#### API Key

To create an API key, perform the following steps:

- Go to your Okta account and click Security on the left.
- Click API and switch to the Tokens tab.
- Click Create token.
- Name your token and click Create token.
- Copy the token value and save it in a safe place. The token value is available only once, during creation.

#### Subdomain

To find your subdomain, go to Okta and click your account name. The subdomain is a part of your organization URL. For example, if your organization URL is trial-1366324.okta.com, the Subdomain is trial-1366324.

### Creating Connection

To connect to Okta, enter your Subdomain and API Key.

## Connector Specifics

### Object Peculiarities

#### Policies

We have divided Policies into four separate objects depending on their type: OktaSignOnPolicies, PasswordPolicies, MFAEnrollPolicies, AuthorizationServerPolicies.
#### EnrolledFactors

When importing data into this object, one of the required fields is FactorType, which has a fixed set of values. Each of the factors must first be activated for the account. To do this, log in to your Okta account as an administrator, select Security -> Multifactor, and activate the necessary factors from the list on the left. Only after that can these factor types be used for enrollment.

Depending on the selected FactorType, you need to map the columns of the Profile group. They are different for each type; check them out [here](https://developer.okta.com/docs/reference/api/factors/). For example, for the "question" type, you need to set Profile_Question and Profile_Answer.

### Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following Okta objects: ApplicationCSRs, ApplicationKeyCredentials, Applications, AuthorizationServers, AuthorizationServerPolicies, AuthorizationServerPolicyRules, EnrolledFactors, EventHooks, Groups, GroupRules, InlineHooks, IdentityProviders, IDPDiscoveryPolicies, MFAEnrolledPolicies, OktaSignOnPolicies, SMS_Templates, PasswordPolicies, Schemas, TrustedOrigins, Users, UserRoles.

Skyvia supports Synchronization for the following Okta objects: Applications, AuthorizationServers, AuthorizationServerPolicies, AuthorizationServerPolicyRules, EventHooks, Groups, InlineHooks, MFAEnrolledPolicies, OktaSignOnPolicies, SMS_Templates, TrustedOrigins, Users.
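As a small illustration of the EnrolledFactors mapping described above, here is how an import row for the "question" factor type might be assembled. Only FactorType, Profile_Question, and Profile_Answer come from the description above; the user-reference column name, the helper, and the sample values are assumptions for the sketch:

```python
# Sketch of an import row for the EnrolledFactors object with the "question"
# FactorType. Profile_Question and Profile_Answer follow the profile-group
# mapping described above; the UserId column name and all values are
# hypothetical.
def question_factor_row(user_id, question, answer):
    return {
        "UserId": user_id,
        "FactorType": "question",
        "Profile_Question": question,
        "Profile_Answer": answer,
    }

row = question_factor_row("00u1abcd", "disliked_food", "okra")
```

The key point is that the Profile_* columns to fill change with the FactorType; other factor types require a different set of profile columns.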
### DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Applications, ApplicationUsers, AuthorizationServers, AuthorizationServerPolicies, AuthorizationServerPolicyRules, AuthorizationServerClaims, AuthorizationServerScopes, EventHooks, Groups, InlineHooks, MFAEnrollPolicies, OktaSignOnPolicies, SMS_Templates, TrustedOrigins |
| INSERT, DELETE | ApplicationGroups, ClientApplications, EnrolledFactors, GroupMembers, PasswordPolicies, UserRoles |
| INSERT, UPDATE | Users |
| UPDATE, DELETE | IDPDiscoveryPolicies |
| DELETE | GroupRules, IdentityProviders |
| INSERT | LinkedObjects |

## Supported Actions

Skyvia supports all the common actions for Okta.

# [Onfleet](https://docs.skyvia.com/connectors/cloud-sources/onfleet_connections.html)

[Onfleet](https://onfleet.com/) is a last-mile delivery solution for companies specializing in food and beverage, retail, e-commerce, furniture, pharmacy, etc.

- Data integration: Skyvia supports importing data to and from Onfleet, exporting Onfleet data to CSV files, replicating Onfleet data to relational databases, and synchronizing Onfleet data with other cloud apps and relational databases.
- Backup: Skyvia Backup does not support Onfleet backup.
- Query: Skyvia Query supports Onfleet.

## Establishing Connection

To create a connection to Onfleet, you simply need to specify the API Key. You can create the API Key if you are an organization admin in the Onfleet app. To create it, perform the following steps:

- Sign in to the [Onfleet dashboard](https://onfleet.com/dashboard#/manage).
- Click the Open settings button near the top right corner of the page.
- In the settings, click API & Webhooks.
- Under API Keys, click the + button to create a key.

## Connector Specifics

### Container* Objects

A container is an ordered list of tasks. Tasks always belong to exactly one container.
The objects with the Container* prefix in their names correspond to specific containers: ContainerOrganization, ContainerTeam, and ContainerWorker. To add tasks to a container, update its Tasks field and map it to a JSON array that includes an index and task IDs. The first element of this array is the index, which corresponds to the place in the list. To add tasks to the beginning of the list, set the index value to 0. To add tasks to the end of the list, specify index -1.

For example, if you want to insert two tasks at position 3 for a given worker, such that all currently assigned tasks at index >= 3 are shifted forward, map the Tasks field the following way:

```
[3,"l33lg5WLrja3Tft*MO383Gub","tsc4jFSETlXBIvi8XotH28Wt"]
```

If you omit specifying the index value, the existing tasks are deleted and replaced by the specified tasks.

### Destinations

The Destinations object does not allow querying multiple records. It allows querying only one record at a time by its Id, so you must specify the required Id value in filters or in the SQL WHERE clause. You must specify the address either in the Address_Unparsed field or in the Address_Country, Address_City, Address_Street, and Address_Number fields.

### Recipients

The Recipients object does not allow querying multiple records. It supports querying a single record at a time, using a filter by its Id or Phone.

### Tasks

When loading data into the Tasks object, you must specify the corresponding recipients and destination.

### WorkerAnalytics

Use filters by the From and To fields to set the period for your query. If you query this object without filters, the result contains data for the previous 7 days.

### TeamTasks and WorkerTasks

Use filters by the From and To fields to set the period for your query. If you query these objects without filters, the result contains data for the previous 7 days. If you set a filter by the From field only, the query returns data starting from the specified date up to the current date.
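The index-plus-IDs array above is ordinary JSON, so it can be generated rather than typed by hand. A minimal sketch using the example task IDs from above; the helper name is illustrative:

```python
import json

def container_tasks(index, task_ids):
    # First element is the insertion index (0 = start of the list,
    # -1 = end), followed by the task IDs to insert at that position.
    return json.dumps([index, *task_ids])

# Insert two tasks at position 3, using the example task IDs above.
tasks_value = container_tasks(3, ["l33lg5WLrja3Tft*MO383Gub", "tsc4jFSETlXBIvi8XotH28Wt"])
```

Building the value this way keeps the index as a JSON number and the IDs as quoted strings, which is easy to get wrong when editing the array manually.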
### Incremental Replication and Synchronization

Skyvia does not support Replication with Incremental Updates for the following Onfleet objects: Hubs, Webhooks, WorkerSchedule, WorkerAnalytics. For other objects, Replication with Incremental Updates is supported. It is also not recommended to replicate the Destinations and Recipients objects, because they do not support querying multiple records.

Skyvia supports Synchronization for the following objects: Administrators, Tasks, Teams, Workers.

### DML Operations Support

Skyvia supports the following DML operations:

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Administrators, Tasks, Teams, Workers |
| INSERT, UPDATE | Hubs, Recipients |
| INSERT, DELETE | Webhooks |
| INSERT | Destinations, WorkerSchedule |
| UPDATE | ContainerOrganization, ContainerTeam, ContainerWorker |

## Supported Actions

Skyvia supports all the common actions for Onfleet.

# [Outbrain](https://docs.skyvia.com/connectors/cloud-sources/outbrain_connections.html)

[Outbrain](https://www.outbrain.com/) is one of the leading native advertising platforms, helping emerging brands connect with consumers on the open web through engaging ad formats that inspire action.

- Data integration: Skyvia supports importing data to and from Outbrain, exporting Outbrain data to CSV files, replicating Outbrain data to relational databases, and synchronizing Outbrain data with other cloud apps and relational databases.
- Backup: Skyvia Backup does not support Outbrain.
- Query: Skyvia Query supports Outbrain.

## Establishing Connection

To create a connection to Outbrain, you need to obtain an API token.

### Getting Credentials

To obtain the API token, log in to your Outbrain account and perform the following actions:

- Click the user profile menu in the upper-right corner, then select Amplify API Token.
- Enter your password and click Generate.
- Copy the API token.
If you can't generate an API token, request access to the Amplify API. See [Outbrain's documentation](https://developer.outbrain.com/home-page/amplify-api/documentation/#/reference/authentications) for more information.

### Creating Connection

To connect to Outbrain, specify the API token.

## Connector Specifics

### Object Peculiarities

#### MarketerCampaigns

When importing data to the MarketerCampaigns object, map the Targeting_Platform and Targeting_Browsers fields.

#### PerformanceReporting Group

All PerformanceReporting group objects are divided into PerformanceReporting_[metric]Summary and PerformanceReporting_[metric]Results. PerformanceReporting_[metric]Summary contains general metrics for the data over the specified period. PerformanceReporting_[metric]Results has two groups of fields: Metadata, which contains metadata for each record during the specified period, and Metrics, which contains metrics for each specific record within that time frame. To receive records, you need to set a period by filtering data by the From and To fields. Reports are supported for the Campaigns, Publishers, and Platforms objects.

#### MarketerSegments

The list of fields required for mapping depends on the SegmentType value. For example, if SegmentType equals LOOK_A_LIKES, you need to map the fields of the LookalikeSettings group. We cannot define a fixed list of required fields for this object, because the list changes depending on the SegmentType.

### Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following Outbrain objects: MarketerBudgets, MarketerCampaigns, MarketerConversionEvents, Marketers, MarketerSegments, PromotedLinks.

Skyvia supports Synchronization for the following Outbrain objects: MarketerBudgets, MarketerCampaigns, MarketerConversionEvents, MarketerSegments.
DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | MarketerBudgets |
| INSERT, UPDATE | MarketerCampaigns, MarketerConversionEvents, MarketerSegments |
| INSERT | PromotedLinks |
| UPDATE | Marketers |

Supported Actions

Skyvia supports all the common actions for Outbrain.

Outreach

[Outreach](https://www.outreach.io/) is a cloud sales engagement solution that helps sales teams enhance sales performance and optimize sales processes.

Data integration: Skyvia supports importing data to and from Outreach, exporting Outreach data to CSV files, replicating Outreach data to relational databases, and synchronizing Outreach data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Outreach.

Query: Skyvia Query supports Outreach.

Establishing Connection

To create a connection to Outreach, you need to log in to Outreach with your credentials using OAuth 2.0 authentication.

Creating Connection

To connect to Outreach, perform the following steps:

1. In the connection editor, click Sign in with Outreach.
2. Enter your Email.
3. Enter your Password.

Connector Specifics

Object Peculiarities

Filtering Specifics

The following fields support the = operator.
Object Fields Prospects EngagedAt, EngagedScore, ExternalSource, FirstName, GithubUsername, LastName, LinkedInId, LinkedInSlug, SharingTeamId, StackOverflowId, TimeZone, Title, TouchedAt, TwitterUsername, CreatedDate, UpdatedDate Accounts Name, BuyerIntentScore, CustomId, Domain, Named, SharingTeamId, TouchedAt, CreatedDate, UpdatedDate CallDispositions Name, Outcome, Order, CreatedDate, UpdatedDate CallPurposes Name, Order, CreatedDate, UpdatedDate Calls From, Outcome, RecordingUrl, State, To, UserCallType, CreatedDate, UpdatedDate ContentCategories Name, CreatedDate, UpdatedDate EmailAddresses Email, EmailType, Order, Status, StatusChangedAt, UnsubscribedAt Events Name, EventAt, CreatedDate Favorites TargetType, CreatedDate, UpdatedDate Mailboxes Email, UserId, CreatedDate, UpdatedDate Mailings BouncedAt, DeliveredAt, ClickedAt, MailingType, MessageId, NotifyThreadScheduledAt, NotifyThreadStatus, OpenedAt, RepliedAt, ScheduledAt, State, StateChangedAt, UnsubscribedAt, CreatedDate, UpdatedDate Opportunities MapLink, MapStatus, Name, Probability, SharingTeamId, TouchedAt, CreatedDate, UpdatedDate OpportunityStages Name, Color, Order, IsClosed, CreatedDate, UpdatedDate Personas Name, Description, CreatedDate, UpdatedDate PhoneNumbers Number, Order, PhoneType, Status, StatusChangedAt, UpdatedDate Profiles Name, SpecialId, CreatedDate, UpdatedDate Roles Name, CreatedDate, UpdatedDate Rulesets Name, AutoResumeOotoProspects, IncludeUnsubscribeLinks, PermitDuplicateProspects, SequenceExclusivity, CreatedDate, UpdatedDate SequenceStates CallCompletedAt, ClickCount, DeliverCount,OpenCount, PauseReason, RepliedAt, ReplyCount, State, StateChangedAt, CreatedDate, UpdatedDate Sequences Name, ClickCount, DeliverCount, EnabledAt, LastUsedAt, LockedAt, OpenCount, ReplyCount, ShareType, ThrottleCapacity, ThrottleMaxAddsPerDay, CreatedDate, UpdatedDate Snippets Name, ShareType ,CreatedDate, UpdatedDate Stages Name, Order ,CreatedDate, UpdatedDate Tasks AutoskipAt, DueAt, 
ScheduledAt, State, StateChangedAt, TaskType, CreatedDate, UpdatedDate Teams Name, CreatedDate, UpdatedDate Templates Name, ArchivedAt, ClickCount, DeliverCount, LastUsedAt, OpenCount, ReplyCount, ShareType, CreatedDate, UpdatedDate Users FirstName, LastName, CurrentSignInAt, Locked, Username, CreatedDate, UpdatedDate AuditLogs Action, EventName, RequestId, Result, Timestamp

The CreatedDate and UpdatedDate fields in the Prospects, Accounts, CallDispositions, CallPurposes, Calls, ContentCategories, ContentCategoryMemberships, ContentCategoryOwnerships, Events, Favorites, Mailboxes, Mailings, Opportunities, OpportunityStages, Personas, PhoneNumbers, Profiles, Roles, Rulesets, SequenceStates, Sequences, Snippets, Stages, Tasks, Teams, Templates, Users objects also natively support the following operators: <, <=, >, >=. Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.

Records with Empty Name Values

Outreach objects may contain records with empty Name values. These records are not returned in the query results by default. To get them, apply a filter by Id to your query. To get such records from the Accounts object, you can use either a filter by Id or by Named = 'false'. Replication cannot build the foreign key relations for the records with empty Name values. If you enable the Create Foreign Keys option in your Replication, it returns errors for these records.

Object Relationships

There are three types of relationships between Outreach objects:

One-to-many. One-to-many relationships are represented by read-only array fields.

Link. Link relationships are stored in read-only string fields containing the link to the related record.

One-to-one. Skyvia represents one-to-one relationships by two fields with -Id and -Type suffixes in their names.
For example, the relationship of the Accounts object with the Users object is stored in the CreatorId and CreatorType fields. Such relationships support import. You must map both the -Id and -Type fields to import the relation successfully.

Custom Fields

The Opportunities, Accounts, and Prospects objects contain 150 predefined custom fields with standard naming, like Custom1, Custom2, Custom3, etc. The Users object contains five predefined custom fields. Custom fields support insert and update operations.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Accounts, CallDispositions, CallPurposes, ContentCategories, EmailAddresses, Mailboxes, Opportunities, OpportunityStages, Personas, PhoneNumbers, Profiles, Prospects, Recipients, Roles, Rulesets, Sequences, SequenceTemplates, Snippets, Stages, Tasks, Teams, Templates |
| INSERT, DELETE | Calls, ContentCategoryMemberships, ContentCategoryOwnerships, Favorites, SequenceStates |
| INSERT, UPDATE | Users |
| INSERT | Events, Mailings |

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for all Outreach objects. Skyvia supports Synchronization for all the Outreach objects except AuditLogs, Calls, ContentCategoryMemberships, ContentCategoryOwnerships, Events, Favorites, Mailings, SequenceSteps, SequenceStates, and TaskPriorities.

Supported Actions

Skyvia supports all the common actions for Outreach.

Paddle

[Paddle](https://www.paddle.com/) is a payment infrastructure provider designed for SaaS and software businesses.

Data integration: Skyvia supports importing data to and from Paddle, exporting Paddle data to CSV files, replicating Paddle data to relational databases, and synchronizing Paddle data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Paddle.
Query: Skyvia Query supports Paddle.

Establishing Connection

To create a connection to Paddle, specify the environment and provide an authentication token.

Getting Credentials

To obtain the authentication token, log in to Paddle. In the Paddle interface, go to the left-hand menu panel, click Developer Tools, and select Authentication. On the opened page, find the API Keys section and click Generate API key.

Creating Connection

To connect to Paddle, follow these steps:

1. Select the Environment type.
2. Enter the authentication token generated in the previous step.

Connector Specifics

Object Peculiarities

Notifications

Notifications older than 90 days are not retained. To limit the search period, use filtering by the From and To fields. If a query is executed without specifying conditions, these fields return empty values. You can set filtering either for the From or To field individually, or use both together. Example of filter setting:

```sql
SELECT t.*
FROM Notifications AS t
WHERE (t."To" = '2025-01-15 12:00:00' AND t."From" = '2025-01-10 12:00:00')
```

Notifications Log

Notifications are retained for 90 days. If you try to list logs for a notification that is no longer available, Paddle returns an error.

Subscriptions

For UPDATE operations, mapping of the ProrationBillingMode field is required. When performing a SELECT request, this field always returns an empty value. Additionally, ensure proper mapping of the Items field, as the array of included objects may vary depending on the selected item type. For more details, see the [official documentation](https://developer.paddle.com/api-reference/subscriptions/update-subscription).

Transactions

Proper mapping of the Items field is required, as the array of included objects may vary depending on the selected item type. For more details, see the [official documentation](https://developer.paddle.com/api-reference/transactions/create-transaction).
Nested Objects The Items field of the Subscriptions and Transactions objects is a nested object. To insert or update nested values in the Items field during import, enable the Nested Objects option in the import configuration settings. Filtering Specifics The Paddle API supports the following native filters: Object Fields and Operators Adjustments Action ( = ), CustomerId ( = , IN), Status ( = , IN), SubscriptionId ( = , IN), TransactionId ( = , IN) Customers Email ( = , IN) CustomerAddresses CustomerId ( = , IN) CustomerBusinesses CustomerId ( = , IN) CustomerPaymentMethods AddressId ( = , IN), CustomerId ( = , IN) Discounts Code ( = , IN) Notifications NotificationSettingId ( = , IN), Status ( = , IN) NotificationSettings Active ( = ), TrafficSource ( = ) Prices ProductId ( = , IN) Products TaxCategory ( = , IN) Reports Status ( = , IN) Simulations NotificationSettingId ( = , IN), Status ( = , IN) Subscriptions CollectionMode ( = ), AddressId ( = , IN), CustomerId ( = , IN), Status ( = , IN) Transactions CollectionMode ( = ), CustomerId ( = , IN), InvoiceNumber ( = , IN), Origin ( = , IN), Status ( = , IN), SubscriptionId ( = , IN), CreatedDate ( >= , > , <= , < ), UpdatedDate ( >= , > , <= , < ), BilledAt ( >= , > , <= , < ) Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. 
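As a sketch of how these native filters combine in a query (the status value and the date are illustrative assumptions, not taken from the filter list above):

```sql
-- Uses only natively supported Transactions filters: Status (=) and
-- CreatedDate (>=), so the conditions can be pushed down to the Paddle API
-- instead of fetching all records and filtering on the Skyvia side.
SELECT t.*
FROM Transactions AS t
WHERE t.Status = 'completed' AND t.CreatedDate >= '2025-01-01 00:00:00'
```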
Native Sorting

The Paddle API natively supports sorting for the following fields in the specified objects:

| Object | Field | Sorting Order |
| --- | --- | --- |
| Adjustments | Id | asc/desc |
| Customers | Id | asc/desc |
| Discounts | CreatedDate, Id | asc/desc |
| Notifications | Id | asc/desc |
| NotificationSettings | Id | asc/desc |
| Prices | BillingCycle_Frequency, BillingCycleInterval, Id, ProductId, Quantity_Maximum, Quantity_Minimum, Status, TaxMode, UnitPrice_Amount, UnitPrice_CurrencyCode | asc/desc |
| Products | CustomData, Description, Id, ImageUrl, Name, Status, TaxCategory, CreatedDate, UpdatedDate | asc/desc |
| Reports | Id | asc/desc |
| Simulations | Id | asc/desc |
| Subscriptions | Id | asc/desc |
| Transactions | Id, BilledAt, CreatedDate, UpdatedDate | asc/desc |

Note that the Paddle API does not support native sorting by multiple fields simultaneously. You can only sort records natively by a single field.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following objects: Adjustments, CustomerAddresses, CustomerBusinesses, CustomerPaymentMethods, Customers, Discounts, Prices, Products, Reports, SimulationRuns, SimulationRunEvents, Simulations, Subscriptions, Transactions.

Synchronization is available for the following objects: CustomerAddresses, CustomerBusinesses, Customers, Discounts, Prices, Products, Simulations, Transactions. Since these objects support only the INSERT and UPDATE operations, an attempt to delete a record from one of these objects during synchronization results in an error such as: 'Table Customers does not support DELETE statement.'

DML Operations

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | NotificationSettings |
| INSERT, UPDATE | CustomerAddresses, CustomerBusinesses, Customers, Discounts, Prices, Products, Simulations, Transactions |
| INSERT | Adjustments, Reports, SimulationRuns |
| UPDATE | Subscriptions |
| DELETE | CustomerPaymentMethods |

Stored Procedures

Skyvia represents part of the supported Paddle features as stored procedures.
For example, you can [call a stored procedure](https://docs.skyvia.com/supported-sql-for-cloud-sources/call-statements-and-stored-procedures.html) by specifying it as the command text in the ExecuteCommand action of a Target component in a [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) or in a [Query](https://docs.skyvia.com/query/). For the Paddle connector, all parameters specified in procedures are required.

ReplayNotification

To replay a notification, i.e. to resend a previously sent webhook notification, use the command: call ReplayNotification(:notification_id)

ReplayEventSimulationRun

To re-execute a simulated event run, use the command: call ReplayEventSimulationRun(:simulation_event_id,:simulation_run_id,:simulation_id)

ActivateSubscription

To activate a trialing subscription, use the command: call ActivateSubscription(:subscription_id)

PauseSubscription

To pause a subscription, use the command: call PauseSubscription(:subscription_id)

ResumePausedSubscription

To resume a paused subscription, use the command: call ResumePausedSubscription(:subscription_id)

CancelSubscription

To cancel a subscription, use the command: call CancelSubscription(:subscription_id)

Supported Actions

Skyvia supports all the common actions for Paddle.

PagerDuty

[PagerDuty](https://www.pagerduty.com/) is a SaaS incident management platform designed to help businesses handle alerts, automate workflows, and notify teams about issues.

Data integration: Skyvia supports importing data to and from PagerDuty, exporting PagerDuty data to CSV files, replicating PagerDuty data to relational databases, and synchronizing PagerDuty data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support PagerDuty.

Query: Skyvia Query supports PagerDuty.
Establishing Connection

To create a connection to PagerDuty, specify an API Key.

Getting Credentials

To obtain the API Key, do the following:

1. Go to PagerDuty.
2. Click the user icon in the top right corner of the page.
3. Select API Access keys.
4. Click Create New API Key.
5. Enter the description and click Create Key.

Creating Connection

To connect to PagerDuty, specify the API Key and select the user from the drop-down list.

Additional Connection Parameters

Suppress Extended Requests

The PagerDuty API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests to query the values of the missing fields, one request per record of such an object. This can decrease performance and significantly increase the number of API calls used. The additional fields are the following:

| Object | Field |
| --- | --- |
| IncidentWorkflows | Steps, Team_Id, Team_Type, Team_Summary, Team_Self, Team_HtmlUrl |
| Templates | TemplatedFields_EmailBody, TemplatedFields_EmailSubject, TemplatedFields_Message |
| Schedules | ScheduleLayers |

You can select the Suppress Extended Requests checkbox to reduce the number of API calls.

Connector Specifics

Object Peculiarities

Incidents

By default, this object displays records created for the last month when querying. To get data for a longer period, use filters by the CreatedDate field. If you use the > or >= operators, the records created from the CreatedDate till today are selected. If you combine the < and > operators or the <= and >= operators, the specified period must not exceed 6 months. Otherwise, you get an error.

AuditRecords, ScheduleAuditRecords, ServiceAuditRecords, TeamAuditRecords, UserAuditRecords

By default, these objects display records created for the last 24 hours when querying. To get records for another period, use filters by the ExecutionTime field with >, >=, <, <=. If you use the > or >= operators, the records created from the ExecutionTime till today are selected.
The specified period must not exceed 31 days.

Notifications

By default, this object displays records created since 01.01.2009 when querying. To get records for another period, use filters by the StartedAt field. If you use the > or >= operators, the records created from the StartedAt date till today are selected. If you use the < or <= operators, the records created from 01.01.2009 till the StartedAt date are selected.

ScheduleOverrides

By default, this object displays records created from 01.01.2009 to 01.01.2099 when querying. To get records for another period, use filters by the Since or Until fields.

Users

The License_Id and License_Type fields return empty results when querying. These fields are used for import only. If you map the License_Id field in import, the License_Type field must also be mapped.

Filtering Specifics

The PagerDuty API supports the following native filters:

| Object | Operator | Field |
| --- | --- | --- |
| Incidents | = | Urgency, Status, IncidentKey |
| Incidents | >, >=, <, <= | CreatedDate |
| LogEntries | >, >=, <, <= | CreatedDate |
| Notifications | >, >=, <, <= | StartedAt |
| ScheduleOverrides | = | Since, Until |
| ScheduleUsersOnCall | = | Since, Until |
| AuditRecords, ScheduleAuditRecords, ServiceAuditRecords, TeamAuditRecords, UserAuditRecords | >, >=, <, <= | ExecutionTime |

Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.

Custom Objects / Custom Fields

PagerDuty supports custom fields of the following types.
| PagerDuty Type | DbType |
| --- | --- |
| Text | String |
| Single Select | Enum, String |
| Multiple Select | MultiEnum, String |
| Tag | Array |
| Url | String |
| Checkbox | Boolean |
| Date Time | DateTime |
| Decimal | Decimal |
| Integer | Int64 |

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the AlertGroupingSettings, EventOrchestrations, IncidentAlerts, IncidentLogEntries, IncidentNotes, Incidents, IncidentWorkflowActions, IncidentWorkflows, IncidentCustomFields, LogEntries, Priorities, Services, Templates, and UserActiveSessions objects. Replication tracks only new records for the following objects: UserActiveSessions, IncidentAlerts, IncidentNotes, IncidentLogEntries, IncidentWorkflows, IncidentWorkflowActions, LogEntries. Skyvia supports Synchronization for the AlertGroupingSettings, EventOrchestrations, Services, Templates, Incidents, IncidentCustomFields objects.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Addons, AlertGroupingSettings, BusinessServices, EscalationPolicies, EventOrchestrations, Extensions, IncidentWorkflows, IncidentWorkflowTriggers, MaintenanceWindows, Services, Templates, UserContactMethods, UserNotificationRules, Users |
| INSERT, UPDATE | IncidentCustomFields, Incidents, Schedules |
| INSERT, DELETE | ScheduleOverrides, Tags, Teams |
| INSERT | BusinessServiceSubscribers, IncidentNotes, IncidentNotificationSubscribers |
| UPDATE | ChangeEvents, IncidentAlerts, Standards |
| DELETE | TeamMembers |

Stored Procedures

Skyvia represents part of the supported PagerDuty features as stored procedures. You can call a stored procedure, for example, by specifying it as the command text in the ExecuteCommand action of a Target component in a Data Flow or in Query.

AddUserToTeam

To add a user with a specified role to a team, use the command: call AddUserToTeam(:team_id,:user_id,:role)

| Parameter Name | Description |
| --- | --- |
| Team_id | Identifier of the team |
| User_id | Identifier of the user |
| Role | Name of the role. |
Valid values are observer, responder, and manager.

RemoveBusinessServiceSubscriber

Use the following command to remove a record from the BusinessServiceSubscribers object: call RemoveBusinessServiceSubscriber(:service_id,:subscriber_id,:subscriber_type)

RemoveNotificationSubscriber

The following command unsubscribes the matching subscribers from Incident Status Update Notifications: call RemoveNotificationSubscriber(:service_id,:subscriber_id,:subscriber_type)

Supported Actions

Skyvia supports all the common actions for PagerDuty.

Paymo

[Paymo](https://www.paymoapp.com/) is a comprehensive platform for project management, time tracking, and billing.

Data integration: Skyvia supports importing data to and from Paymo, exporting Paymo data to CSV files, and replicating Paymo data to relational databases.

Backup: Skyvia Backup does not support Paymo backup.

Query: Skyvia Query supports Paymo.

Establishing Connection

To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Paymo, you must specify the API Key.

Getting Credentials

To obtain the API Key, log in to Paymo and perform the following actions:

1. Click the user icon in the bottom left corner of the page and select My settings.
2. Select API Keys and click + Generate New Key.
3. Name the new API Key and click Generate.
4. Copy the newly generated API Key. The API Key is visible only once, when you generate it. You won't be able to view it again.

Creating Connection

Paste the obtained value into the API Key box in Skyvia.
Connector Specifics

Filtering Specifics

The Paymo API supports the following native filters:

| Object | = Operator | =, >, >=, <, <= Operators |
| --- | --- | --- |
| ClientContacts | ClientId, IsMain, Access | CreatedDate, UpdatedDate |
| Clients | Active | CreatedDate, UpdatedDate |
| Discussions | ProjectId, UserId | CreatedDate, UpdatedDate |
| Estimates | ClientId, Status, TemplateId, Active, TaxOnTax | Total, Date, CreatedDate, UpdatedDate, Tax, Tax2, TaxAmount, Tax2Amount, Discount, DiscountAmount |
| Expenses | ClientId, ProjectId, UserId, Invoiced, Active, Billable | Amount, Date, CreatedDate, UpdatedDate |
| InvoiceTemplates | IsDefault | CreatedDate, UpdatedDate |
| InvoicePayments | InvoiceId | Amount, Date, CreatedDate, UpdatedDate |
| Invoices | ClientId, TemplateId, Status, Active, TaxOnTax, PayOnline | Date, DueDate, DeliveryDate, Subtotal, Total, Tax, Tax2, TaxAmount, Tax2Amount, Discount, DiscountAmount, CreatedDate, UpdatedDate |
| Milestones | ProjectId, UserId, ReminderSent, Complete | DueDate, SendReminder, CreatedDate, UpdatedDate |
| ProjectTemplates |  | CreatedDate, UpdatedDate |
| ProjectStatuses | Active, Readonly | Seq, CreatedDate, UpdatedDate |
| Projects | ClientId, WorkflowId, StatusId, Active, Invoiced, Billable, FlatBilling, BillingType | TaskCodeIncrement, BudgetHours, PricePerHour, EstimatedPrice, Price, CreatedDate, UpdatedDate |
| Reports | UserId, Type, Active, DateInterval | CreatedDate, UpdatedDate |
| SubTasks | TaskId, Complete, ProjectId, UserId, CompletedBy, Seq | CompletedOn, CreatedDate, UpdatedDate |
| TaskLists | ProjectId, MilestoneId | Seq, CreatedDate, UpdatedDate |
| Tasks | ProjectId, TasklistId, UserId, Complete, CoverFileId, StatusId, Billable, FlatBilling, Invoiced, InvoiceItemId, RecurringProfileId, BillingType | CompletedBy, CompletedOn, Seq, PricePerHour, BudgetHours, EstimatedPrice, Price, DueDate, StartDate, CreatedDate, UpdatedDate |
| TimeEntries | ProjectId, TaskId, UserId, AddedManually, Billed, IsBulk | CreatedDate, UpdatedDate |
| Users | Type, Active | WorkdayHours, PricePerHour, CreatedDate, UpdatedDate |
| Workflows | IsDefault | CreatedDate, UpdatedDate |
| WorkflowStatuses | WorkflowId | Seq, CreatedDate, UpdatedDate |

Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.

Object Peculiarities

Invoices and Estimates

The Discount field only accepts values greater than '0' when inserting or updating. If you try to insert the '0' value into the Discount field, you get the error "Field 'discount' cannot be empty".

TimeEntries

When inserting records to this object, along with the required TaskId field, you must also map one of the following field pairs: Date and Duration, or StartTime and EndTime.

Tasks

To successfully insert records to this object, you must map either ProjectId or TasklistId.

Reports

When selecting data from the Reports object, the Clients and Users fields may return string values (for example, all_archived, all_active, or all) or an array of existing customer or user Ids (for example, ["286959","287126"]). To successfully insert or update records in this object, you have to map the Clients and Users fields to the existing Ids of the customers or users in the following format: 286959,287126 (no quotes, no square brackets).

Incremental Replication and Synchronization

All Paymo objects support Incremental Replication. The Company, EstimateLineItems, InvoiceLineItems, ProjectStatuses, DiscussionComments, and TaskComments objects do not support Synchronization.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | ClientContacts, Clients, Discussions, Estimates, EstimateTemplates, Expenses, InvoicePayments, Invoices, InvoiceTemplates, Milestones, Projects, ProjectTemplates, Reports, SubTasks, TaskLists, Tasks, TimeEntries, Users, Workflows, WorkflowStatuses |
| UPDATE, DELETE | ProjectStatuses |
| UPDATE | Company |

Supported Actions

Skyvia supports all the common actions for Paymo.
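For instance, an insert that satisfies the TimeEntries mapping requirement described above might look like this in Skyvia Query. This is a sketch; the TaskId value and timestamps are hypothetical:

```sql
-- TaskId is required; StartTime and EndTime are mapped as one of the
-- allowed field pairs (the other option is Date and Duration).
INSERT INTO TimeEntries (TaskId, StartTime, EndTime)
VALUES (12345678, '2025-03-03 09:00:00', '2025-03-03 11:30:00')
```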
PersistIQ

[PersistIQ](https://www.persistiq.com/) is an easy-to-use and affordable cloud software for sales email automation designed for small teams.

Data integration: Skyvia supports importing data to and from PersistIQ, exporting PersistIQ data to CSV files, and replicating PersistIQ data to relational databases.

Backup: Skyvia Backup supports PersistIQ backup.

Query: Skyvia Query supports PersistIQ.

Establishing Connection

To create a connection to PersistIQ, you need to specify the API Key.

Getting Credentials

To obtain the PersistIQ API key, sign in to PersistIQ, click your profile icon in the bottom left corner of the page, and then click Integrations. On the Integrations tab of your PersistIQ settings, copy your API Key.

Creating Connection

To connect to PersistIQ, enter the obtained API key in the corresponding box in the Connection Editor.

Connector Specifics

Object Peculiarities

Events

The Campaign_Id, Lead_Id, and User_Id fields in the Events object are not marked as foreign keys.

Leads

If you insert a record to the Leads object with an Email already present there, the new record is not added to Leads, but it is marked as successfully inserted in the log. The StatusId field of the Leads object is used for the UPDATE operation only. It does not contain any data when querying data from Leads. When updating the lead status, you can specify the new value either via the StatusId field or the Status field.

Incremental Replication and Synchronization

Synchronization and Replication with Incremental Updates enabled are not supported for PersistIQ.

DML Operations Support

Skyvia supports import operations for the following PersistIQ objects.
| Operation | Object |
| --- | --- |
| INSERT, DELETE | CampaignLeads |
| INSERT, UPDATE | Campaigns |
| INSERT | Leads |

Supported Actions

Skyvia supports all the common actions for PersistIQ.

Pinterest

[Pinterest](https://pinterest.com/) is an image and video sharing social media service for finding fun ideas.

Data integration: Skyvia supports importing data to and from Pinterest, exporting Pinterest data to CSV files, and replicating Pinterest data to relational databases.

Backup: Skyvia Backup does not support Pinterest.

Query: Skyvia Query supports Pinterest.

Establishing Connection

To create a connection to Pinterest, sign in with Pinterest and authorize the app.

Creating Connection

To connect to Pinterest, perform the following steps:

1. Click Sign In with Pinterest.
2. Enter your Pinterest credentials and click Log in.
3. Click Give access.

Additional Connection Parameters

Suppress Extended Requests

For some objects, the Pinterest API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests, one for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following:

| Object | Field |
| --- | --- |
| Pins | ImageBody, VideoCoverBody |
| PinImages | Body |

To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.

Connector Specifics

Currently, Skyvia supports creating only image pins. It does not support creating video pins.

Object Peculiarities

Some Pinterest objects can only be accessed through their parent objects. The Pinterest API requires: BoardId for Pins and BoardSections; PinId for PinImages and PinDailyAnalytics; AdAccountId for Campaigns, AdGroups, Ads, AdAccountDailyAnalytics.
Skyvia doesn\u2019t require you to provide the ID of the parent object when querying child objects. If you don\u2019t include parent IDs in your filter, Skyvia will first retrieve all parent object records, get their IDs, and then query the child objects for each parent. For each record like this, one API call is made, which can significantly slow down the querying speed.\nTo avoid this, apply filters to the parent objects. Ads The Ads object might reference the Pin that has been deleted. In this case, the PinId field will point to a record that no longer exists. These records may cause errors in replication integrations due to foreign key violations. Pins The ImageBody field contains the binary data of the image in its original size. The VideoCoverBody field contains the binary data of the video cover image. For non-video pins, this field is always NULL.\nBoth fields require an additional API call for each pin, which can significantly increase query time and API usage. PinImages The PinImages object contains all available pin images in all sizes. Its ImageBody field contains the binary data for the image. The Body field requires an additional API call for each pin, which significantly increases both query time and API usage. 
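To illustrate the recommendation above, a query that filters a child object by its parent Id lets Skyvia fetch the pins of one board directly instead of enumerating all boards first. A sketch for Skyvia Query; the BoardId value is hypothetical:

```sql
-- Filtering Pins by BoardId avoids one extra API call per board:
-- only the pins of this specific board are queried.
SELECT p.*
FROM Pins AS p
WHERE p.BoardId = '549755885175'
```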
Filtering Specifics

Pinterest supports the following native filters:

| Object | Operator | Field |
| --- | --- | --- |
| BoardSections | = | BoardId |
| Pins | = | Id, BoardId |
| PinImages | = | PinId |
| Media | = | MediaId |
| Campaigns | = | Id, AdAccountId, Status |
| Campaigns | IN | Id, Status |
| AdGroups | = | Id, AdAccountId, Status, CampaignId |
| AdGroups | IN | Id, Status, CampaignId |
| Ads | = | Id, AdAccountId, CampaignId, AdGroupId, Status |
| Ads | IN | Id, Status, CampaignId, AdGroupId |
| PinDailyAnalytics | = | Date, AdAccountId, PinId, AppTypes |
| PinDailyAnalytics | >=, <=, >, <, BETWEEN | Date |
| UserDailyAnalytics | = | Date, AdAccountId, FromClaimedContent, PinFormat, AppTypes |
| UserDailyAnalytics | >=, <=, >, <, BETWEEN | Date |
| AdAccountDailyAnalytics | = | CampaignId, AdGroupId, AdId, ProductGroupId, Date, AdAccountId |
| AdAccountDailyAnalytics | IN | CampaignId, AdGroupId, AdId, ProductGroupId |
| AdAccountDailyAnalytics | >=, <=, >, <, BETWEEN | Date |

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for Campaigns, AdGroups, Ads, Pins. Replication with Incremental Updates for PinDailyAnalytics, UserAccountDailyAnalytics, AdAccountDailyAnalytics is supported only on a daily basis. Skyvia tracks changes in these objects with the Date field, which does not store the time. To prevent duplicates, only records up to the previous day are replicated, with records from the current day being replicated the next day. Skyvia does not support Synchronization for Pinterest.

DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | Boards, BoardSections |
| INSERT, DELETE | Pins |

Supported Actions

Skyvia supports all the common actions for Pinterest.

Pipedrive

[Pipedrive](https://www.pipedrive.com/en-gb) is a CRM solution for salespeople in scaling companies built around activity-based selling.
Data integration: Skyvia supports importing data to and from Pipedrive, exporting Pipedrive data to CSV files, replicating Pipedrive data to relational databases, and synchronizing Pipedrive data with other cloud apps and relational databases. Backup: Skyvia Backup supports Pipedrive backup. Query: Skyvia Query supports Pipedrive.

Establishing Connection

You can create a connection with Pipedrive using API Token or OAuth 2.0 authentication.

Getting Credentials

To locate the API Token, perform the following actions: Log in to your Pipedrive account. Click the user icon and select Personal Preferences. Go to the API tab and copy the generated value.

Creating API Token Connection

In the connection editor, select API Token in the authentication drop-down list. Paste the obtained value to the API Token box.

Creating OAuth 2.0 Connection

In the connection editor, select OAuth 2.0 in the authentication drop-down list. Click Sign In with Pipedrive. Enter your Pipedrive credentials. Click Continue to the App to grant the requested permissions to Skyvia.

Additional Connection Parameters

Use Custom Fields. An optional parameter that allows working with Pipedrive custom fields when enabled.

Suppress Extended Requests. Pipedrive API returns only part of the fields when querying multiple records. In order to query the values of additional fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields obtained via extended requests are the following:

| Object | Field |
| --- | --- |
| Filters | Conditions |
| PermissionSets | Contents |

To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.
However, please note that some of the fields in such objects are not available in Skyvia (return empty values) even if they have values in Pipedrive because its API does not return them without extended requests. Connector Specifics Object Peculiarities Read-only Objects Objects, storing Pipedrive metadata, such as ActivityFields, DealFields, NoteFields, OrganizationFields, PersonFields , and ProductFields , are read-only in Skyvia. Products Due to Pipedrive API specifics Skyvia Backup doesn\u2019t undo adding a product price when restoring a modified Product record. For example, you backed up the Products object. Then you added new records to this object and backed it up again. If you try to restore the object to the state before adding new records, these records won\u2019t be removed. Subscriptions Some of the Pipedrive objects can be accessed only via their parent objects. For example, to query Subscriptions , Pipedrive API requires the Id value of the corresponding Deals record. Skyvia does not require filtering by the parent object record Id when querying data from a child object or importing data to these objects. Without filtering by parent record Id, Skyvia first queries all the parent object records, reading each record Id. Then Skyvia looks up child object records to each parent object record Id. This approach allows querying child objects without knowing their parents, but this method consumes time and API calls. It uses at least one API call for every parent object record. Thus, working with these tables may affect performance. We strongly recommend using filters on the parent object fields when querying data from child objects. When updating the Subscriptions object, you must map the EffectiveDate field, additionally to the fields required by default. Complex Structured Objects The following fields in the Pipedrive objects store complex structured data in JSON format. 
For the API v1:

| Object | Field |
| --- | --- |
| Persons | Email, Phone |
| Products | Prices |

For the API v2:

| Object | Field |
| --- | --- |
| ProductVariations | Prices |
| Products | Prices |

You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can also replicate them into separate tables with our new replication runtime. Use the Unwind component to map them in data flows.

Sorting Specifics

Pipedrive API supports sorting for the following objects and fields:

Sorting for API v1

| Object | Fields |
| --- | --- |
| Deals | All except JsonObject and JsonArray fields |
| Organizations | All except JsonObject and JsonArray fields |
| Persons | All except JsonObject and JsonArray fields |
| Files | ID, UserId, DealId, PersonId, OrganizationId, ProductId, CreatedDate, UpdatedDate, FileName, FileType, File size |
| Leads | ID, Title, OwnerId, CreatorId, WasSeen, NextActivityId, CreatedDate, UpdatedDate, ExpectedCloseDate |
| Notes | ID, UserId, DealId, PersonId, OrganizationId, Content, CreatedDate, UpdatedDate |

Sorting for API v2

| Object | Fields |
| --- | --- |
| Deals | ID, CreatedDate, UpdatedDate |
| Products | ID, Name, CreatedDate, UpdatedDate |
| Stages | ID, OrderNumber, CreatedDate, UpdatedDate |
| Pipelines | ID, CreatedDate, UpdatedDate |

Custom Fields

The following Pipedrive objects may have custom fields: Activities, Deals, Notes, Organizations, Persons, and Products. Information about custom fields is stored in the corresponding objects with the -Fields suffix. For example, information about Activities custom fields is stored in the ActivityFields object.

Custom Field Types

Pipedrive custom fields may have the following types: Varchar, Varchar_Auto, Text, Double, Monetary, Set, Enum, User, Org, People, Phone, Time, TimeRange, Date, DateRange, Address, Price_List, Lead, Int, Visible_To, Picture. Fields of the User, Organization, and Person types contain a reference to the corresponding related object. Each field can refer to only one related object.
Custom fields of the User type can store a reference to the Id field from the Users object.\nFields of the Organization type refer to the Id from the Organizations object.\nFields with the Person type refer to the Id from the Persons object. Querying Data from Custom Fields When querying data from the custom fields of the User, Organization, and Person types, the result returns the JSON object storing the information about the related object. \nFor example, the result of querying the custom field of the Person type looks like this: {\"active_flag\":true,\"name\":\"Ray Kazm\", \n\"email\":[\\{\"label\":\"work\",\"value\":\"ray@raycorp-pipedrive.com\",\"primary\":true}], \n\"phone\":[\\{\"label\":\"work\",\"value\":\"12345678\",\"primary\":true},\n\\{\"label\":\"mobile\",\"value\":\"9876541\",\"primary\":false}], \n\"owner_id\":5219346,\"value\":149} For user convenience, we added extra read-only fields which store only the Id value of the related object. You can recognize such fields by the -Id suffix in their names. For example, a custom field RelatedPerson of type Person contains a JSON object. The extra field that only includes the person Id is called RelatedPersonId . It has the value 149 from the example above. Importing Data to Custom Fields Though the Pipedrive custom fields store the JSON objects, it is enough to map only the related object field Id value for custom fields with User, Organization, and Person types when importing data into such fields. 
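The -Id convenience fields described above simply expose the value key of the returned JSON object. As an illustrative sketch in Python (not Skyvia code), using the sample Person-type value from above:

```python
import json

# Sample JSON returned for a Person-type custom field (from the example above).
raw = (
    '{"active_flag":true,"name":"Ray Kazm",'
    '"email":[{"label":"work","value":"ray@raycorp-pipedrive.com","primary":true}],'
    '"phone":[{"label":"work","value":"12345678","primary":true},'
    '{"label":"mobile","value":"9876541","primary":false}],'
    '"owner_id":5219346,"value":149}'
)

person = json.loads(raw)
# The "value" key holds the related record Id -- the same number that the
# read-only RelatedPersonId convenience field exposes.
related_person_id = person["value"]
print(related_person_id)  # 149
```

If you only need the related record Id, querying the -Id field directly saves you from parsing the JSON at all.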
Incremental Replication and Synchronization API v1 Skyvia supports Replication with Incremental Updates for the following Pipedrive objects for the v1 Pipedrive API version: Activities, ActivityFields, ActivityTypes, DealFields, DealFollowers, DealMailMessages, DealParticipants, DealPersons, DealProducts, Deals, Files, Filters, \nLeadLabels, Leads, MailMessagesDrafts, MailMessagesArchive, MailMessagesInbox, MailMessagesSent, MailThreadsArchive, MailThreadsDrafts, MailThreadsInbox, MailThreadsSent, NoteComments(only new records are detected), NoteFields, Notes, OrganizationFields, OrganizationFollowers, OrganizationMailMessages, OrganizationPersons, OrganizationRelationships, Organizations, PersonFields, PersonFollowers, PersonMailMessages, Persons, Pipelines, ProductFields, ProductFollowers, Products, Stages, SubscriptionPayments, Subscriptions, Users, UserSettings, Webhooks . Due to Pipedrive API specifics, replication of the DealFields, OrganizationFields, ProductFields objects may return error \u00abCannot insert the value NULL into column \u2018ID\u2019, table \u2018dbo.DealFields\u2019; column does not allow nulls. INSERT fails. The statement has been terminated.\u00bb The mentioned objects do not have primary keys, thus may cause errors during replication. Skyvia supports Synchronization for the following Pipedrive objects for the v1 Pipedrive API version: Activities, ActivityTypes, DealFields, DealProducts, Deals, Filters, LeadLabels, Leads, NoteComments(only new records are detected), Notes, OrganizationFields, OrganizationRelationships, Organizations, PersonFields, Persons, Pipelines, ProductFields, Products, Stages, Subscriptions, Users. Incremental Replication and Synchronization API v2 Skyvia supports Replication with Incremental Updates and Synchronization for the following Pipedrive objects for the v2 Pipedrive API version: DealProducts, Deals, Pipelines, Products, Stages . 
DML Operations Support API v1 Operation Object INSERT, UPDATE, DELETE Activities, ActivityTypes, DealFields, DealProducts, Deals, Filters, Goals, LeadLabels, Leads, NoteComments, Notes, OrganizationFields, OrganizationRelationships, Organizations, PersonFields, Persons, Pipelines, ProductFields, Products, Roles, Stages, Subscriptions INSERT, DELETE CallLogs, DealFollowers, DealParticipants, OrganizationFollowers, PersonFollowers, ProductFollowers, Webhooks INSERT, UPDATE Users UPDATE, DELETE MailThreadsSent, MailThreadsArchive, MailThreadsInbox, MailThreadsDrafts DML Operations Support API v2 Operation Object INSERT, UPDATE, DELETE DealProducts, Deals, Pipelines, Products, ProductVariations, Stages Stored Procedures Skyvia represents part of the supported Pipedrive features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddRoleAssignments To assign a user to a role, use the following command. call AddRoleAssignments (role_id, user_id) DeleteRoleAssignments The following command removes the assigned user from a role and adds to the default role. call DeleteRoleAssignments (role_id, user_id) AddInstallmentSubscription The following command adds a new installment subscription. call AddInstallmentSubscription (:deal_id, :currency, :payments, :update_deal_value) PARAMETER NAME DESCRIPTION Deal_id The ID of the deal this installment subscription is associated with. Currency The currency of the installment subscription. Accepts a 3-character currency code. Payments Array of payments. It requires a minimum structure as follows: [{ amount:SUM, description:DESCRIPTION, due_at:PAYMENT_DATE }]. Replace SUM with a payment amount, DESCRIPTION with an explanation string, PAYMENT_DATE with a date (format YYYY-MM-DD). Update_deal_value Boolean. Indicates that the deal value must be set to the installment subscription\u2019s total value. 
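The payments parameter of AddInstallmentSubscription is an array of amount/description/due_at entries with YYYY-MM-DD dates. A minimal sketch of building such a value in Python (the helper name and the strict JSON quoting are assumptions for illustration, not Skyvia specifics):

```python
import json
from datetime import date

# Hypothetical helper: build the minimal payments structure
# [{ amount, description, due_at }] for AddInstallmentSubscription.
# Dates must use the YYYY-MM-DD format required by the procedure.
def build_payments(installments):
    return json.dumps([
        {
            "amount": amount,
            "description": description,
            "due_at": due.strftime("%Y-%m-%d"),  # YYYY-MM-DD
        }
        for amount, description, due in installments
    ])

payments = build_payments([
    (500.0, "First installment", date(2025, 1, 15)),
    (500.0, "Final installment", date(2025, 2, 15)),
])
print(payments)
```

The resulting string can then be passed as the payments argument when calling the stored procedure.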
UpdateInstallmentSubscription

To update an installment subscription, use the following command. call UpdateInstallmentSubscription (:subscription_id, :payments, :update_deal_value)

| Parameter Name | Description |
| --- | --- |
| Subscription_id | The ID of the subscription. |
| Payments | Array of payments. It requires a minimum structure as follows: [{ amount:SUM, description:DESCRIPTION, due_at:PAYMENT_DATE }]. Replace SUM with a payment amount, DESCRIPTION with an explanation string, and PAYMENT_DATE with a date (format YYYY-MM-DD). |
| Update_deal_value | Boolean. Indicates that the deal value must be set to the installment subscription's total value. |

CancelRecurringSubscription

To cancel the recurring subscription, use the following command. call CancelRecurringSubscription (:subscription_id, :end_date)

Supported Actions

Skyvia supports all the common actions for Pipedrive." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/pipeliner_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Pipeliner CRM [Pipeliner](https://www.pipelinersales.com/) is an interactive CRM platform for sales and marketing management. Data integration: Skyvia supports importing data to and from Pipeliner CRM, exporting Pipeliner CRM data to CSV files, replicating Pipeliner CRM data to relational databases, and synchronizing Pipeliner CRM data with other cloud apps and relational databases. Backup: Skyvia Backup does not support Pipeliner CRM backup. Query: Skyvia Query supports Pipeliner CRM.

Establishing Connection

To establish a connection to Pipeliner CRM, you must specify the User Name, Password, Space ID, and Service URL.

Getting Credentials

To obtain the Pipeliner CRM credentials, log in to Pipeliner CRM and perform the following actions: Click the menu icon in the top left corner of the page and select Administration. In the left menu, click Units, Users & Roles. Go to the Applications tab and click Create New.
Name your API application and specify your Email if needed. Click on the newly created API application and then click Show API access . Copy the appeared credentials. The credentials appear only once when you create the API application. Copy the credentials and store them somewhere safe. Creating Connection Paste the obtained values into the corresponding boxes in the Skyvia connection editor. Connector Specifics Object Peculiarities By default, the deleted records are unavailable in the results when selecting data from the Pipeliner CRM objects.To include the deleted records from the Pipeliner CRM objects into the selection, use the IncludeDeleted = true filter. Custom Fields The following objects may store custom fields: Accounts, Contacts, Leads, Opportunities, Tasks, Appointments, Products, Projects . Available custom field types are: \nSingle line text, Radio, Auto number, Dropdown, Email, Long text, Lookup, Multi select checkbox, Phone, URL, Date, Date and time, Base currency, Multiple currencies, Float number, Checkbox, Integer number. Multiple Currencies Type Skyvia represents custom fields of the Multiple currencies type by three types of objects: BaseValue \u2014 value converted to the Base Currency based on the prevailing exchange rate. ForeignValue \u2014 value entered in a \u201cforeign\u201d (i.e. not Base) currency. CurrencyId \u2014 displays the currency symbol used by the specified custom field. This object refers to the Id from the Currencies object. Incremental Replication and Synchronization All Pipeliner CRM objects support Incremental Replication. The objects Activities, CompanyEmails, FieldsDataSets, LeadOpportunities, Steps, Timeframes do not support Synchronization. 
DML Operations Support

| Operation | Object |
| --- | --- |
| INSERT, UPDATE, DELETE | AccountRoles, Accounts, AccountTypes, ActivityComments, AppointmentReminders, Appointments, Calls, Clients, CloudObjects, Contacts, ContactTypes, Currencies, CurrencyExchangeRates, Fields, LeadProcesses, Leads, LeadTypes, Notes, Opportunities, OpportunityRecurrences, OpportunityTypes, Pipelines, ProductCategories, ProductPriceListPrices, ProductPriceLists, Products, ProductTypes, ProjectObjectives, Projects, ProjectTypes, SalesRoles, SalesUnits, Tags, TaskRecurrences, TaskReminders, Tasks, TaskTypes |
| UPDATE, DELETE | Steps |

Supported Actions

Skyvia supports all the common actions for Pipeliner CRM." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/pipeline_crm_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Pipeline CRM [Pipeline CRM](https://pipelinecrm.com/) is a cloud-based, easy-to-use CRM platform for small and mid-size businesses. Data integration: Skyvia supports importing data to and from Pipeline CRM, exporting Pipeline CRM data to CSV files, replicating Pipeline CRM data to relational databases, and synchronizing Pipeline CRM data with other cloud apps and relational databases. Backup: Skyvia Backup does not support Pipeline CRM. Query: Skyvia Query supports Pipeline CRM.

Establishing Connection

To create a connection to Pipeline CRM, you need to enter your API Key and APP Key.

Getting Credentials

To get your Pipeline CRM APP and API Keys, perform the following steps: Log in to Pipeline CRM. Click the user icon in the top right corner and select Account Settings. Select API Integrations on the left. On the API Integrations tab, click Add integration to create a new app or copy the APP Key from an existing one. On the API Keys tab, enable API Access with the corresponding button and copy the API Key.

Creating Connection

Enter the APP and API Key values in the Connection Editor.
Additional Connection Parameters

Use Custom Fields. Select this checkbox to make custom fields available in Skyvia.

Connector Specifics

Object Peculiarities

Activities: To successfully insert records into the Activities object, you must map the PersonId, CompanyId, or DealId field.

AccountNotifications: When updating the ReadAt field values in records where Seen = false, the Seen field value changes to true during the update operation.

Files: To successfully insert records into the Files object, you must map the PersonId, CompanyId, or DealId field.

Filtering

In Pipeline CRM, filtering with the = operator is interpreted as starts with for the following fields:

| Object | Field |
| --- | --- |
| Companies | Name, City, PostalCode, State, Country, Address1, Address2, Description, Phone1, Fax, Email |
| Deals | Name, SourceId, ExpectedCloseDate, ClosedTime, Value, DealStageId, Owner_Id |
| People | FirstName, LastName, Position, Phone, Type, CompanyName, FullName |
| Users | Email |
| Activities | DealId, PersonId, CompanyId |

For example, suppose the Companies object has several records with Name values Company 001, Company 002, Company 003, and SubCompany 001. The filter Name = Company returns the three records with values Company 001, Company 002, and Company 003.

Custom Fields

The Companies, Deals, and People objects can have custom fields of the following types: Boolean, Calculated, Currency, Date, Dropdown, Multi_association, Numeric, Picklist, Single_association, Text. Information about custom fields (their Ids, names, types, etc.) is stored in the separate objects CompanyCustomFields, DealCustomFields, and PersonCustomFields. You can INSERT and UPDATE data in the custom fields. Multi_association type fields store complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature.
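The starts-with semantics of the = filter described in the Filtering section above can be mimicked as follows (illustrative Python, not Skyvia code):

```python
# Pipeline CRM interprets '=' on the listed text fields as "starts with".
# Emulating that behavior for the Companies.Name example above:
records = ["Company 001", "Company 002", "Company 003", "SubCompany 001"]

def filter_equals_starts_with(values, needle):
    # '=' matches any value beginning with the filter string
    return [v for v in values if v.startswith(needle)]

print(filter_equals_starts_with(records, "Company"))
# ['Company 001', 'Company 002', 'Company 003']
```

Note that SubCompany 001 is excluded: the prefix match is anchored at the start of the value, not a substring search.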
DML Operations Support Operation Object INSERT, UPDATE, DELETE AccountNotifications, Activities, ActivityCategories, CalendarEntries, Comments, Companies, CompanyCustomFields, DealCustomFields, DealLossReasons, Deals, DealStages, DealStatuses, EventCategories, FileTags, LeadSources, LeadStatuses, People, PerfomanceLanes, PersonCustomFields, PredefinedContactTags, RevenueTypes, Teams, Files UPDATE Users Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all Pipeline CRM objects except Users and PerfomanceLanes . Skyvia supports Synchronization for all the Pipeline CRM objects except Users, PerfomanceLanes, AccountNotifications and Files . Supported Actions Skyvia supports all the common actions for Pipeline CRM." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/podio_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Podio [Podio](https://podio.com) is a cloud collaboration service for organizing team communication, automating workflows, managing tasks, etc. Data integration : Skyvia supports importing data to and from Podio, exporting Podio data to CSV files, replicating Podio data to relational databases, and synchronizing Podio data with other cloud apps and relational databases. Backup : Skyvia Backup supports Podio backup. Query : Skyvia Query supports Podio. Establishing Connection To create a connection to Podio, log in with Podio. Creating Connection To create a Podio connection, perform the following steps: Click Sign In with Podio , in the Connection Editor page. Enter your Podio credentials and click Sign in . Podio will display the list of privileges required for Skyvia. Scroll the page to the bottom and click Grant access . Additional Connection Parameters Metadata Cache You can specify the period after which Podio Metadata Cache expires. 
Extended Dynamic Table Names This parameter adds workspace name to the names of the dynamic Items objects if enabled. Connector Specifics Returning Function Podio API doesn\u2019t support the RETURNING function when you update or delete records in the Applications, Files, Forms, Organizations , and Spaces objects. It means that you receive an error if you execute an UPDATE or DELETE SQL command with a RETURNING statement. Backup Specifics Due to API specifics, the Fields and Participants fields of the Items and Conversations objects don\u2019t allow you to restore data from their backups. Such fields require mapping other nested fields than those available in the backup. \nTo restore data in the Fields field of the Items object, use backup data from the Items object. Object Peculiarities Items The Fields field stores the array of application items in JSON format. The Fields array structure varies depending on the application type. When importing data to the Items object, you must provide JSON values for this field. Required JSON values vary depending on application types. The Items object is related to the Applications object. When you query the Items object, Skyvia performs additional API calls to read the Applications object. If there are more than 100 applications, query performance may decrease significantly and exceed Podio API limits. ItemFields This read-only object stores the records from the Fields field of the Items object in tabular form for user convenience. Items For user convenience, such objects represent the Items object content grouped by a specific application. Such objects support the INSERT, UPDATE and DELETE operations. To additionally distinguish such objects by workspace, enable the Extended Dynamic Table Names parameter in the connection editor. In this case Skyvia will add a workspace name to such object name in the format Items . 
These objects support filtering by the AppItemId, Title, CreatedVia_AuthClientId and ExternalId with the = operator. We split the complex structured Fields field into separate fields that support INSERT and UPDATE operations. Possible types of these fields are the following. Podio Type DbType Comment Text String Standard length is 1000 characters. If the field name equals memo, note_contains _description, comment, notes, address , or ends with url, reason or keywords , the length extends to 4000 characters. If a field name contains content or html the length is 2147483647 characters Category String Single choice or Multiple choice enum field Member Array Complex structured field including the following nested fields: Name (String, read-only), About (String, read-only), LastSeenOn (DateTime, read-only), ProfileId (Int64), UserId (Int64, read-only) Phone Array Complex structured field including the following nested fields: Type (String), Value (String) Email Array Complex structured field including the following nested fields: Type (String), Value (String) Link Array Complex structured field including the following nested fields: Description (String, read-only), Url (String, read-only), Title (String, read-only) Image Array Complex structured field including the following nested fields: Id (Int64, read-only), Name (String, read-only), Description (String, read-only), Size (Int64, read-only), Link (String, read-only), MimeType (String, read-only) Calculation Field type is inherited from the variable selected when creating the field Read only Location String It has two variants: Single-line address and Multi-line address. In the case of Multi-line address, the field contains nested fields: _City, _Country, _Postal_Code, _State, _Street_Address, _Latitude, _Longitude Relationship. 
Multiple references String Field has a complex structure and contains two fields: ItemId (Int64), AppName (String, read-only) Relationship.Only one reference Int64 Id which refers to an Items record Date DateTime or Date In case of \u201cShow end date\u201d setting is enabled, the field splits into two fields _Start and _End Number Int64 or double Type is Int64 if the \u201cDisplay whole number\u201d setting is selected. The data type is Double if the \u201cDisplay whole number\u201d setting is NOT selected Progress Int16 Money Decimal In case of \u201cShow end date\u201d setting is enabled, the field splits into two fields _Currency and _Value Duration Int64 OrganizationMembers The OrganizationMembers records are only available for Plus and Premium Podio subscriptions. \nUse filter by OrgId to get data from this object. Tasks Podio API allows querying tasks only with filters by task Id , the related object identifier, like SpaceId , RefData_OrgId , or RefData_AppId , or by a user that created, completed, or is responsible for the task. To query the tasks not associated with any organization, workspace, or item, use the filter by task Id . Otherwise, such tasks are not displayed in query results. Views The Groupings_Groups field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. You can use this field as source in data flows. Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following Podio objects: Batch, ConversationEvents, Conversations, Files, Integrations, Items, _Items, NotificationsLists, Organizations, Spaces, Stream, Tasks, User, Views, TaskComments . Skyvia supports Synchronization for the following objects: _Items, Files, Items, Organizations, Spaces, TaskComments, Tasks . 
All the mentioned objects have the CreatedDate field and don\u2019t have the UpdatedDate field. Thus, Skyvia can track only new records and can\u2019t track modified records. DML Operations Support Operation Object INSERT, UPDATE, DELETE Applications, AppStore_Shares, Files, Items, Items, Spaces, Tasks INSERT, UPDATE Organizations, Forms INSERT, DELETE Views INSERT Conversations, Integrations Supported Actions Skyvia supports all the common actions for Podio." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/prodpad_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ProdPad [ProdPad](https://www.prodpad.com/) is a product management tool that helps customers cope with product roadmaps, priority charts, customer feedback, and workflows. Data integration : Skyvia supports importing data to and from ProdPad, exporting ProdPad data to CSV files, replicating ProdPad data to relational databases, and synchronizing ProdPad data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ProdPad. Query : Skyvia Query supports ProdPad. Establishing Connection To create a connection to ProdPad, you need to obtain an API key. Getting Credentials To obtain an API key, you have to perform the following steps: Sign in to the ProdPad. Click the user icon in the top right corner of the page and select Profile settings . Select the API Keys tab and copy the API Key. Creating Connection To connect to ProdPad, specify the API key. Additional Connection Parameters Suppress Extended Requests For some objects, ProdPad API returns only part of the fields when querying multiple records. To query values of lacking fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such object. However, this can decrease performance and significantly increase the number of API calls used. 
The additional fields are the following: OBJECT FIELD Statuses AccountId , State Feedbacks ContactName , ContactEmail , CompanyId , Company_Name , Company_City , Company_Country , Company_Size , Company_Value , Company_Image , Company_CreatedDate , Company_UpdatedDate , About , JobRole_Id , JobRole_Name , Feedbacks Products Vision , KPIs , Value , Documentation Personas Behaviors , Goals , Constraints To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Ideas To insert data into the Ideas object, map either Title or Description field. VotesMaybe, VotesNay, VotesYea When inserting data to the VotesMaybe , VotesNay , and VotesYea objects, map at least one field from each pair: Voter_id or Voter_email , and IdeaId or Feedback_Id . Feedbacks To insert data into the Feedbacks objects, map the Feedback field and either the ContactName or ContactEmail field. SearchFeedback, SearchPersonas, SearchProducts, SearchIdeas The objects SearchFeedback , SearchPersonas , SearchProducts , SearchIdeas are read only. When querying data from these objects, set the filtering condition by the TextQuery field with the string value to look for in the object.\nYou can use Feedbacks , Personas , Products , or Ideas as values. For example, to query all products where the Description , Vision , KPIs , or Documentation fields contain the value Strategy - Section Details , the query can look like this: 1 SELECT t . * FROM SearchProducts AS t WHERE ( t . TextQuery = 'Strategy - Section Details' ) Incremental Replication and Synchronization Skyvia does not support Replication with Incremental Updates for these objects: Users , FeedbackTags , FeedbackProducts , FeedbackPersonas , FeedbackAttachments , FeedbackExternalLinks , ContactTags , ContactPersonas , ContactExternalLinks , SearchFeedback , SearchPersonas , SearchProducts , SearchIdeas . 
Objects VotesYea , VotesNay , VotesMaybe , ProductLines , ProductRoadmapLists , ProductLineRoadmapLists contain only the CreatedDate field. Incremental Replication will include only new records for these objects. Skyvia supports Synchronization for these objects: Companies , Contacts , Feedbacks , Ideas , Objectives .\nProdPad objects support only INSERT and UPDATE operations. DELETE operation is not supported. If records are deleted from these objects, the Synchronization integration marks them as failed and returns a \u2018Table tablename does not support the DELETE statement\u2019 error in the integration run results. DML Operations Support Operation Object INSERT, UPDATE Companies , Contacts , Feedbacks , Ideas , Objectives INSERT Users , VotesYea , VotesNay , VotesMaybe Stored Procedures Skyvia represents part of the supported ProdPad features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . UpdateStatusOfIdea To update the workflow status of the idea, use the following command: call UpdateStatusOfIdea(:Ideaid,:status_id) Supported Actions Skyvia supports all the common actions for ProdPad." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/productive_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Productive.io [Productive.io](https://productive.io/) is a tool designed to help agencies with sales, budgeting, project management, resource management, reporting and billing. Data integration : Skyvia supports importing data to and from Productive.io, exporting Productive.io data to CSV files, replicating Productive.io data to relational databases, and synchronizing Productive.io data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Productive.io. Query : Skyvia Query supports Productive.io. 
Establishing Connection To create a connection to Productive.io, specify the Access Token and Organization Id. Getting Credentials Access Token To locate the access token, perform the following actions: Log in to Productive.io, hover over the user icon on the top right, and select Settings . Click API Access . On the page that appears, click Generate new token . Specify the new token name and select the access level. Hover over the new token name, and the token value will appear. Click COPY TOKEN . Organization Id You can get the Organization Id value on the API Access page. Creating Connection To connect to Productive.io, paste the obtained Access Token and Organization Id to the corresponding boxes in the Connection Editor. Additional Connection Parameters Use Custom Fields Enables working with custom fields of the Bookings, Budgets, Companies, Deals, Expenses, Invoices, People, Projects, Tasks objects. Connector Specifics Import Specifics When you import data to an object containing the fields *Type and *Id together, you must map either both these fields or none of them for successful import. For example, the Bookings object contains the EventType and EventId fields. You must map either both the EventType and EventId fields or none of them. Object Peculiarities Comments When you insert a record to the Comments object, it is present in the Comments object, but it doesn't contain a link to the parent object for which the comment is created. Thus, this record is not available when you select all the records. You can get the Comment record only by specifying its Id .
Available date periods for grouping are year , quarter , month , week , and day . You can filter these reports by using the Date field. For example, you can get data for the period from 02.09.2024 to 30.10.2024, grouped by each month separately, by using filters like this: SELECT t.* FROM Reports_FinancialItem AS t WHERE ((t."Date" < '2024-10-31' AND t."Date" > '2024-09-01') AND t."Group" = 'service_type,date:month') Reports_Time By default, data is grouped by the person metric. You can change this behaviour by changing the Group field to one of the following values: service , service_type , budget , project , company , event , future , organization , subsidiary , manager , person . To group data by multiple metrics, specify the metrics separated by commas. Available date periods for grouping are year , quarter , month , week , and day . You can filter these reports by using the Date field. Custom Fields The following objects can have custom fields: Bookings, Budgets, Companies, Deals, Expenses, Invoices, People, Projects, Tasks . Productive.io supports custom fields of the following types: Productive.io type DbType Text String Multiselect String Dropdown String Person String Date Date Number Decimal Person Field which stores the ID referring to the People object Filtering Specifics Productive.io supports the following native filters: Object Operator Field Activities = TaskId , DealId , InvoiceId , CompanyId , PersonId Comments = DiscussionId Companies = Status Bookings = EventId , PersonId Deals = SalesStatusId , ProjectId , CompanyId , ResponsibleId , SubsidiaryId Payments = ExternalId , InvoiceId Projects = ProjectTypeId TaskLists = ProjectId , BoardId Workflows = Name WorkflowStatuses = Name , CategoryId , WorkflowId Boards = ProjectId TimeEntries = ServiceId , PersonId >= , <= Date Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.
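As an illustration, the same grouping and date-filter pattern can be applied to the Reports_Time object. The sketch below uses a hypothetical Python helper; the table, metric, and period names come from the lists above, and the SQL shape mirrors the Reports_FinancialItem example:

```python
# Hypothetical helper that composes a query string for a Reports_* object.
# Grouping metrics go into the Group field as a comma-separated list;
# a date grouping uses the date:<period> form.
def build_reports_query(table, metrics, date_period, date_from, date_to):
    group = ",".join(metrics + [f"date:{date_period}"])
    return (
        f"SELECT t.* FROM {table} AS t "
        f"WHERE ((t.\"Date\" >= '{date_from}' AND t.\"Date\" <= '{date_to}') "
        f"AND t.\"Group\" = '{group}')"
    )

# Reports_Time grouped by person and project, month by month.
query = build_reports_query("Reports_Time", ["person", "project"],
                            "month", "2024-09-01", "2024-10-31")
print(query)
```

Passing several metrics plus a `date:<period>` entry produces the comma-separated Group value the filter expects.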
Custom fields support the following filters: Field Operator Number , Date = , > , < , >= , <= Other fields = Multiselect Not supported Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Activities, Bookings, Budgets, Comments, Companies, Deals, Expenses, Invoices, People, Projects, Tasks objects. Skyvia tracks only the new records for the Activities, Companies, Deals, Expenses, People, Projects objects. Skyvia supports Synchronization for the Invoices, Bookings, and Tasks objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Bookings, Budgets, Comments, Deals, DealStatuses, Expenses, Invoices, LineItems, Projects, Services, ServiceTypes, Tasks, Workflows, Payments INSERT, UPDATE Boards, Companies, CustomFieldOptions, CustomFields, Events, LostReasons, People, Subsidiaries, TaskLists, WorkflowStatuses UPDATE, DELETE Organizations UPDATE Users Stored Procedures Skyvia represents part of the supported Productive.io features as stored procedures. You can call a stored procedure , for example, as the text of a command in the ExecuteCommand action in a Target component of a Data Flow or in Query . DeleteWorkflowStatus Use the following command to delete a record from the WorkflowStatuses object: call DeleteWorkflowStatus(:id,:type,:targetid) PARAMETER NAME DESCRIPTION Id The identifier of the deleted record Type Required constant workflow_statuses Targetid The Id of the workflow status where the tasks from the deleted workflow status will be moved ArchiveCompany Use the following command to archive a company. call ArchiveCompany(:companyId) RestoreCompany Use the following command to restore an archived company.
call RestoreCompany(:companyId) ArchiveLostReason To archive the lost reason, use the command call ArchiveLostReason(:lostId) ArchiveWorkflow To archive the workflow, use the following command call ArchiveWorkflow(:workflowId) RestoreWorkflow Use the following command to restore the workflow call RestoreWorkflow(:workflowId) ArchiveDealStatus To archive the deal status, use the command call ArchiveDealStatus(:dealStatusId) ApproveBooking To approve a booking, use the command call ApproveBooking(:bookingId) UnapproveBooking Use the following command to unapprove a booking call UnapproveBooking(:bookingId) ArchiveEvent Use the following command to archive the event call ArchiveEvent(:eventId) ApproveExpense To approve the expense, use the command call ApproveExpense(:expenseId) UnapproveExpense To unapprove the expense, use the command call UnapproveExpense(:expenseId) ArchiveProject Use the following command to archive the project call ArchiveProject(:projectId) RestoreProject To restore the project, use the command call RestoreProject(:projectId) ArchiveServiceType To archive the service type, use the command call ArchiveServiceType(:serviceTypeId) ArchiveSubsidiary Use the following command to archive the subsidiary call ArchiveSubsidiary(:subsidiaryId) ArchiveTaskList To archive the task list, use the command call ArchiveTaskList(:taskListId) RestoreTaskList Use the following command to restore the task list.
call RestoreTaskList(:taskListId) ArchiveBoards To archive the board, use the command call ArchiveBoards(:boardId) RestoreBoards To restore the board, use the command call RestoreBoards(:boardId) Supported Actions Skyvia supports all the [common actions](https://docs.skyvia.com/connectors/actions/#common-actions) for Productive.io." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/quickbooksdesktop_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources QuickBooks Desktop [QuickBooks Desktop](https://quickbooks.intuit.com/online/) is accounting software for managing and controlling finance-related processes for inventory-based businesses that need to scale as they grow. Skyvia supports QuickBooks Desktop 2006 and higher. QuickBooks Desktop connections can work only via Agents . QuickBooks Desktop must be installed on the same computer on which the Agent application is installed. Data integration : Skyvia supports importing data to and from QuickBooks Desktop, exporting QuickBooks Desktop data to CSV files, replicating QuickBooks Desktop data to relational databases, and synchronizing QuickBooks Desktop data with other cloud apps and relational databases. Backup : Skyvia Backup does not support QuickBooks Desktop backup. Query : Skyvia Query supports QuickBooks Desktop. For QuickBooks Online connector, see QuickBooks Online . Establishing Connection When creating a QuickBooks Desktop connection , you need to select an Agent , installed on the same computer with QuickBooks Desktop instance, and specify the Application Name and Company File . Requirements Skyvia Agent bitness must correspond to the bitness of the QuickBooks Desktop instance. Note that Skyvia Agent installs both 64-bit and 32-bit versions on 64-bit systems, so you can run the corresponding Agent app version.
Creating Connection To connect to QuickBooks, perform the following steps: Select an Agent installed on the same computer as the QuickBooks Desktop instance. Specify the Application Name , for example, Skyvia . This name will be displayed in the QuickBooks Desktop prompt to grant or refuse access to the data. Specify the path to Company File , a QuickBooks company file that saves all of your financial info, for example, C:\Companies\sample\sample_product-based business.qbw . By default, QuickBooks Desktop stores company files in the C:\Users\Public\Public Documents\Intuit\QuickBooks\Company Files folder. This is not required if you plan to connect to a file currently opened in QuickBooks Desktop (QuickBooks Desktop must be running, the file must be opened in it, and you must select the Yes, whenever this QuickBooks company file is open mode in QuickBooks Desktop at step 5). Click Test Connection . Skyvia connects to QuickBooks Desktop, and the corresponding prompt displays in it: Select how Skyvia is allowed to access QuickBooks Data. We recommend selecting Yes, whenever this QuickBooks company file is open or Yes, always; allow access even if QuickBooks is not running . The Yes, whenever this QuickBooks company file is open mode does not require the company file specified, but it requires QuickBooks running. In the Yes, always; allow access even if QuickBooks is not running mode, QuickBooks will run automatically as a background process when you access the QuickBooks data. In the latter mode, you may not be able to open the file in QuickBooks Desktop after Skyvia loads data from or to it. In this case, after Skyvia finishes loading data, restart the Agent application to close the session correctly. Optionally select the Allow this application to access personal data such as Social Security Numbers and customer credit card information checkbox. This allows Skyvia to work with certain fields like the CreditCardInfo* fields in the Customer object.
Click Continue , then click Yes , and then click Done . After this, the connection is ready. You can switch back to Skyvia, save the connection, and use it. Additional Connection Parameters Ignore Warning Sometimes, when performing an INSERT or UPDATE operation against QuickBooks Desktop, it may not assign values to some of the mapped fields. In such cases, QuickBooks Desktop sends a warning. By default, Skyvia ignores QuickBooks warnings and doesn't display them in logs. You can clear the Ignore Warning checkbox in the Advanced Settings in the connection editor to change this behavior. If this checkbox is not selected, QuickBooks warnings are treated as errors. This means that Skyvia treats records for which QuickBooks Desktop returned a warning as failed and adds them to error logs, even though the records were added/partially updated in QuickBooks Desktop. Use Custom Fields Select this checkbox to make custom fields available in Skyvia. Revoking Access If you want to revoke granted access and disconnect Skyvia from QuickBooks, you can revoke access in QuickBooks Desktop Preferences. From the Edit menu, select Preferences . In the opened Preferences dialog box, switch to Integrated Applications . There, on the Company Preferences tab, you can see and manage the list of applications for which access to this company file is allowed. You can revoke access by removing the application from the list or view its properties and manage access rights. Connector Specifics Complex Structured Data Some of the QuickBooks tables store complex structured data. These are the following tables: Charge, Check, CreditCardCharge, CreditCardCredit, InventoryAdjustment, Invoice, Estimate, Bill, CreditMemo, JournalEntry, Payment, PurchaseOrder, SalesOrder, SalesReceipt . For example, an invoice or bill can have several lines. Skyvia represents this information in such tables as a JSON field LineItems .
Here is an example of the LineItems field value from the Invoice table: 1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45 [\n {\n \"Id\": \"3B1F-1071532020\",\n \"IsItemGroup\": false,\n \"ItemId\": \"30000-933272655\",\n \"ItemName\": \"Removal\",\n \"Desc\": \"Removal labor - strip damaged roof on garage\",\n \"Quantity\": 46.0,\n \"Rate\": 35.00,\n \"ClassId\": \"40000-933272658\",\n \"ClassName\": \"Remodel\",\n \"Amount\": 1610.00,\n \"SalesTaxCodeId\": \"20000-999022286\",\n \"SalesTaxCodeName\": \"Non\"\n },\n {\n \"Id\": \"3B20-1071532020\",\n \"IsItemGroup\": false,\n \"ItemId\": \"10000-933272655\",\n \"ItemName\": \"Framing\",\n \"Desc\": \"Framing labor - reconstruct low pitch roof\",\n \"Quantity\": 20.0,\n \"Rate\": 35.00,\n \"ClassId\": \"40000-933272658\",\n \"ClassName\": \"Remodel\",\n \"Amount\": 700.00,\n \"SalesTaxCodeId\": \"20000-999022286\",\n \"SalesTaxCodeName\": \"Non\"\n },\n {\n \"Id\": \"3B25-1071532020\",\n \"IsItemGroup\": false,\n \"ItemId\": \"D0000-933272656\",\n \"ItemName\": \"Subs:Roofing\",\n \"Desc\": \"Roofing - weather proofing and shingle materials applied\",\n \"Rate\": 855.00,\n \"ClassId\": \"40000-933272658\",\n \"ClassName\": \"Remodel\",\n \"Amount\": 855.00,\n \"ServiceDate\": \"2007-03-28T00:00:00Z\",\n \"SalesTaxCodeId\": \"20000-999022286\",\n \"SalesTaxCodeName\": \"Non\"\n }\n] For user convenience, lines of these objects are also available as separate records via read-only tables with names containing LineItem suffixes, like InvoiceLineItem , BillLineItem , etc. They allow you to view these lines in a tabular form with Query , export them to CSV with Export , import them from QuickBooks to a cloud application or database, where these lines should be stored in a separate table, etc. 
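As a quick, hedged sketch of working with this representation, the snippet below parses a trimmed-down version of the LineItems sample above (field names taken from that sample) and checks each line's Quantity × Rate against its Amount:

```python
import json

# A trimmed-down version of the LineItems sample above (same field names).
line_items_json = """[
  {"Id": "3B1F-1071532020", "ItemName": "Removal",
   "Quantity": 46.0, "Rate": 35.00, "Amount": 1610.00},
  {"Id": "3B20-1071532020", "ItemName": "Framing",
   "Quantity": 20.0, "Rate": 35.00, "Amount": 700.00},
  {"Id": "3B25-1071532020", "ItemName": "Subs:Roofing",
   "Rate": 855.00, "Amount": 855.00}
]"""

lines = json.loads(line_items_json)
for line in lines:
    # Lines without a Quantity (like the roofing item) carry the Amount directly.
    qty = line.get("Quantity", 1.0)
    assert abs(qty * line["Rate"] - line["Amount"]) < 0.01

invoice_total = sum(line["Amount"] for line in lines)  # 1610 + 700 + 855 = 3165.0
```

A string like `line_items_json` is exactly what you would place in a CSV column mapped to the LineItems field when importing.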
Since these tables (that have the LineItem suffix in their name) are read-only, they are not available in Import as a target or in Synchronization. To modify lines of a bill, journal entry, etc., you need to provide values in JSON format for the Lines field of the corresponding main table: Bill, JournalEntry, etc. For example, to import bills from a CSV file to QuickBooks, you need to specify bill lines in the JSON format in one of the CSV file columns and import this file to the Bill table. You need to map the Lines field of the Bill table to this column of the CSV file. You can use our Nested Objects mapping feature in Import integrations. For this, you need to select the Nested Objects checkbox in the integration. Then, in the mapping settings, you can map the fields of the corresponding line items. See the QuickBooks Online tutorials to learn how to use this feature. Additionally, you can use the Nested types feature of Data Flow . Object Peculiarities Charge The INSERT operation requires mapping either the CustomerName or the CustomerId field and either the ItemName or the ItemId field. Checks The INSERT operation requires mapping either the AccountName or the AccountId field and either the LineItems or the ExpenseLines fields. Consider the account type when importing records to this object. If the account doesn't support checks, you receive an error. CreditCardCharge The INSERT operation requires mapping either the AccountName or the AccountId field, either PayeeEntityId or PayeeEntityName , and either the LineItems or the ExpenseLines fields. This object supports inserting records for the credit card account type only. CreditCardCredit The INSERT operation requires mapping either the AccountName or the AccountId field, either PayeeEntityId or PayeeEntityName , and either the LineItems or the ExpenseLines fields. InventoryAdjustment This object has a complex structure and contains nested objects.
Consider the inventory adjustment type when importing data to this object. Each inventory adjustment type requires mapping specific fields: Adjustment type Required Fields for Mapping Quantity Adjustment NewQuantity or QuantityDifference ; SerialNumber or LotNumber ; ExpirationDateForSerialLotNumber ; InventorySiteLocationId or InventorySiteLocationName Value Adjustment NewQuantity or QuantityDifference , NewValue or ValueDifference Serial Number Adjustment SerialNumber, SerialNumberAddedOrRemoved, ExpirationDateForSerialLotNumber; InventorySiteLocationId or InventorySiteLocationName Lot Number Adjustment LotNumber; CountAdjustment; ExpirationDateForSerialLotNumber; InventorySiteLocationId or InventorySiteLocationName When you update the InventoryAdjustment lines, map the following fields: ItemId or ItemName , SerialNumber or LotNumber , CountAdjustment , ExpirationDateForSerialLotNumber , InventorySiteLocationId or InventorySiteLocationName , QuantityDifference , ValueDifference . The following fields are needed for data import only; they return null values when querying: NewQuantity, NewValue, SerialNumberAddedOrRemoved, CountAdjustment ReceivePayment If an invoice awaits a partial payment, map the TotalAmount field together with the required fields. The TotalAmount value must not exceed the sum of all invoice items. The TotalAmount field is not necessary to create a discount for the invoice. Custom Fields Using custom fields may affect performance for INSERT, UPDATE, and DELETE operations, because Skyvia performs additional API calls. The following QuickBooks objects support custom fields: Customer, Employee, Vendor, Item . Customer The following transaction objects inherit the custom fields from the Customer object: CreditMemo, Estimate, Invoice, SalesOrder, SalesReceipt . For example, there is a custom field CustomField1 in the Customer object.
When you create an invoice for a specific customer, this invoice will inherit the CustomField1 values from the Customer object by default. To set another value for a custom field in the invoice instead of the inherited one, use the INSERT or UPDATE operations and set the needed value directly. It won't affect the original custom field value in the Customer object and other Invoice records. Vendor The object PurchaseOrder inherits the custom fields from the Vendor object. For example, you add a custom field CustomField1 in the Vendor object. When you create a purchase order for a specific vendor, this order will inherit the CustomField1 values from the Vendor object by default. To set another value for a custom field in the purchase order instead of the inherited one, use the INSERT or UPDATE operations and set the needed value directly. It won't affect the original custom field value in the Vendor object and other PurchaseOrder records. Item You can add custom fields to the following Item related objects: ItemDiscount, ItemFixedAsset, ItemGroup, ItemInventory, ItemInventoryAssembly, ItemNonInventory, ItemOtherCharge, ItemPayment, ItemService . When you create a custom field, it is shared by all Item* objects at once. For example, you can't add CustomField1 only to the ItemDiscount object or CustomField2 only to the ItemFixedAsset object. The following Item related objects don't support custom fields: ItemSalesTax, ItemSalesTaxGroup, ItemSites, ItemSubtotal, PayrollItemWage, PayrollItemNonWage . The following LineItem* objects inherit Item custom fields: CreditMemoLineItem, EstimateLineItem, InvoiceLineItem, PurchaseOrderLineItem, SalesOrderLineItem, SalesReceiptLineItem . Incremental Replication and Synchronization Synchronization and Replication with Incremental Updates enabled are not supported for objects without TimeCreated or TimeModified fields. Supported Actions Skyvia supports all the common actions for QuickBooks Desktop.
Troubleshooting Retrieving the COM class factory for component with CLSID {45F5708E-3B43-4FA8-BE7E-A5F1849214CB} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). This error means that QuickBooks Desktop is either not installed on the computer with Skyvia Agent running or is incorrectly installed on this computer. Make sure that you have selected the correct agent in the connection. Make sure that the agent app is installed and running on the computer with QuickBooks Desktop, and that it uses the correct key (the key of the agent selected in the connection). Make sure you are using the agent app of the same bitness (32-bit or 64-bit) as your QuickBooks Desktop instance. If everything is correct, this probably means that QuickBooks Desktop is installed incorrectly. Please reinstall QuickBooks Desktop and try again. Could not start QuickBooks This error can be caused by reasons similar to those of the previous error, so please check the solutions above. Additionally, it can be caused by QuickBooks running on a different user level than the agent app. They both need to be running as a normal user or both running as administrator, but not mixed. You can also see the [QuickBooks documentation](https://help.developer.intuit.com/s/article/Troubleshooting-Could-not-start-QuickBooks) on this issue." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/quickbookstime_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources QuickBooks Time [QuickBooks Time](https://quickbooks.intuit.com/time-tracking/) is a web-based and mobile time tracking and employee scheduling app.
Data integration : Skyvia supports importing data to and from QuickBooks Time, exporting QuickBooks Time data to CSV files, replicating QuickBooks Time data to relational databases, and synchronizing QuickBooks Time data with other cloud apps and relational databases. Backup : Skyvia Backup does not support QuickBooks Time. Query : Skyvia Query supports QuickBooks Time. Establishing Connection When creating a connection to QuickBooks Time, you need to sign in to QuickBooks Time. Creating Connection To create a QuickBooks Time connection, perform the following steps: Click Sign In with QuickBooks Time . In the opened window, enter your email address. Enter your password. Skyvia will request permission to access your QuickBooks Time account. Click Allow . Connector Specifics Object Peculiarities Timesheets The Timesheets object contains a huge data volume. To reduce querying time and increase performance, you can use the following filters. Field Operator Date >= and <= OnTheClock = UpdatedDate < and > The Timesheets object requires mapping the UserId , JobcodeId , and Type fields for successful data import. It also requires mapping other fields, depending on the timesheet type. There are two types of timesheets in the Timesheets object: manual and regular. Regular timesheets require mapping the Start and End fields for importing data. Manual timesheets require mapping the Duration and Date fields for importing data.
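The Timesheets mapping rules above can be sketched as follows. This is a hypothetical Python helper; the field names come from this section, while the ID values reuse sample IDs from this page and are illustrative only:

```python
# Hypothetical helper building Timesheets records for import. UserId,
# JobcodeId and Type are always required; regular timesheets add Start
# and End, manual ones add Duration and Date.
def make_timesheet(user_id, jobcode_id, type_, **fields):
    record = {"UserId": user_id, "JobcodeId": jobcode_id, "Type": type_}
    if type_ == "regular":
        record.update(Start=fields["start"], End=fields["end"])
    elif type_ == "manual":
        record.update(Duration=fields["duration"], Date=fields["date"])
    return record

regular = make_timesheet(2886910, 4057357, "regular",
                         start="2024-09-02T09:00:00Z",
                         end="2024-09-02T17:00:00Z")
manual = make_timesheet(2886910, 4057357, "manual",
                        duration=28800, date="2024-09-02")
```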
TimeOffRequests When importing data to the TimeOffRequests object, provide values for the TimeOffRequestEntries field as a JSON array, for example: "time_off_request_entries": [{ "date": "2020-09-20", "entry_method": "manual", "duration": 28800, "jobcode_id": 4057357 }] as well as values for the TimeOffRequestNotes field as a JSON array like this: [{"note": "Taking a two day weekend to go on vacation."}] Reminders When you select data from the Reminders object, you get the data related to the authorized user. To get reminders for other users, you can use a filter by the UserIds field. You can specify several user IDs separated by commas in the filter (for example, 2886910, 2886474, 2886548). The UserId field returns an empty result by default when querying. DML Operations Support Skyvia supports DML operations for the following QuickBooks Time objects: Operation Object INSERT, UPDATE, DELETE Timesheets INSERT, UPDATE CustomFieldItems, CustomFields, Groups, Jobcodes, Locations, Reminders, ScheduleEvents, TimeOffRequestEntries, TimeOffRequests, Users INSERT, DELETE JobcodeAssignments, Notifications INSERT Geolocations Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for most QuickBooks Time objects except EffectiveSettings, LastModifiedTimestamps . Skyvia supports Synchronization for such QuickBooks Time objects as CustomFieldItems, CustomFields, Groups, Jobcodes, Locations, Reminders, ScheduleEvents, Timesheets, TimeOffRequests, TimeOffRequestEntries, Users . Supported Actions Skyvia supports all the common actions for QuickBooks Time."
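The JSON array values described for the TimeOffRequests object earlier in this section can be produced programmatically. The sketch below (reusing the sample values shown above) serializes the two fields with Python's json module:

```python
import json

# Serialize the TimeOffRequestEntries and TimeOffRequestNotes values as
# JSON arrays, matching the shapes shown for the TimeOffRequests object.
entries = [{
    "date": "2020-09-20",
    "entry_method": "manual",
    "duration": 28800,
    "jobcode_id": 4057357,
}]
notes = [{"note": "Taking a two day weekend to go on vacation."}]

record = {
    "TimeOffRequestEntries": json.dumps(entries),
    "TimeOffRequestNotes": json.dumps(notes),
}
```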
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/quickbooks_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources QuickBooks Online [QuickBooks Online](https://quickbooks.intuit.com/online/) is a cloud-based accounting solution for managing and controlling finance-related processes. Data integration : Skyvia supports importing data to and from QuickBooks Online, exporting QuickBooks Online data to CSV files, replicating QuickBooks Online data to relational databases, and synchronizing QuickBooks Online data with other cloud apps and relational databases. Backup : Skyvia Backup supports QuickBooks Online backup. Query : Skyvia Query supports QuickBooks Online. For QuickBooks Desktop connector, see QuickBooks Desktop . Establishing Connection When [creating a QuickBooks connection](https://docs.skyvia.com/connections/#creating-connections) , log in with QuickBooks. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Your QuickBooks account login and password are not stored on the Skyvia side. Creating Connection To connect to QuickBooks, perform the following steps: Select the QuickBooks environment type, which can be Production or Sandbox , and click Connect to QuickBooks . Enter your QuickBooks credentials and click the Sign In button. Select a company to query data from (click the link with the corresponding company name) and proceed further. Wait until the token is generated, and then click Create Connection . Connector Specifics Custom Fields Skyvia does not support custom QuickBooks fields having double quotation marks in their names. Complex Structured Data Some of the QuickBooks tables store complex structured data. These are the following tables: Invoice, Estimate, Bill, BillPayment, CreditMemo, JournalEntry, Payment, Purchase, PurchaseOrder, RefundReceipt, SalesReceipt, VendorCredit . 
For example, an invoice or bill can have several lines. Skyvia represents this information in such tables as a JSON field Line . Here is an example of the Lines field value from the Invoice table: 1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30 [\n {\n \"Id\": \"1\",\n \"LineNum\": 1.0,\n \"Description\": \"Weekly Gardening Service\",\n \"Amount\": 140.0,\n \"DetailType\": \"SalesItemLineDetail\",\n \"SalesItemLineDetail_ItemRefId\": \"6\",\n \"SalesItemLineDetail_ItemRefName\": \"Gardening\",\n \"SalesItemLineDetail_UnitPrice\": 35.0,\n \"SalesItemLineDetail_Qty\": 4.0,\n \"SalesItemLineDetail_TaxCodeRefId\": \"NON\"\n },\n {\n \"Id\": \"2\",\n \"LineNum\": 2.0,\n \"Description\": \"Pest Control Services\",\n \"Amount\": 35.0,\n \"DetailType\": \"SalesItemLineDetail\",\n \"SalesItemLineDetail_ItemRefId\": \"10\",\n \"SalesItemLineDetail_ItemRefName\": \"Pest Control\",\n \"SalesItemLineDetail_UnitPrice\": 35.0,\n \"SalesItemLineDetail_Qty\": 1.0,\n \"SalesItemLineDetail_TaxCodeRefId\": \"NON\"\n },\n {\n \"Amount\": 175.0,\n \"DetailType\": \"SubTotalLineDetail\"\n }\n] For user convenience, lines of these objects are also available as separate records via read-only tables with names containing LineItem suffixes, like InvoiceLineItem , BillLineItem , etc. They allow you to view these lines in a tabular form with Query , export them to CSV with Export , import them from QuickBooks to a cloud application or database, where these lines should be stored in a separate table, etc. Since these tables (that have the LineItem suffix in their name) are read-only, they are not available in import or synchronization integrations as a target. To modify lines of a bill, journal entry, etc., you need to provide values in JSON format for the Lines field of the corresponding main table - Bill, JournalEntry, etc. 
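As a sanity check on the Line JSON shown above (field names taken from the sample), the SubTotalLineDetail amount should equal the sum of the sales item line amounts:

```python
import json

# A trimmed-down version of the Line sample above (same field names).
line_json = """[
  {"Id": "1", "Amount": 140.0, "DetailType": "SalesItemLineDetail",
   "SalesItemLineDetail_UnitPrice": 35.0, "SalesItemLineDetail_Qty": 4.0},
  {"Id": "2", "Amount": 35.0, "DetailType": "SalesItemLineDetail",
   "SalesItemLineDetail_UnitPrice": 35.0, "SalesItemLineDetail_Qty": 1.0},
  {"Amount": 175.0, "DetailType": "SubTotalLineDetail"}
]"""

lines = json.loads(line_json)
item_total = sum(l["Amount"] for l in lines
                 if l["DetailType"] == "SalesItemLineDetail")
subtotal = next(l["Amount"] for l in lines
                if l["DetailType"] == "SubTotalLineDetail")
assert item_total == subtotal  # 140 + 35 == 175
```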
For example, to import bills from a CSV file to QuickBooks, you need to specify bill lines in the JSON format in one of the CSV file columns and import this file to the Bill table. You need to map the Lines field of the Bill table to this column of the CSV file. However, for the Invoices table, you can use our Nested Objects mapping feature in Import. For this, you need to select the Use new runtime checkbox in your integration, and then select the Nested Objects checkbox in the integration. Then, in the mapping settings, you can map the fields of invoice line items. The tables with the LineItem suffix in their name are available in backups, but you cannot restore data to these tables because they are read-only. Since they store the same information as the Lines field of the corresponding main tables, you don't actually need to create a backup for them. All the information in *LineItem tables is present in the corresponding main tables, and you can back up and restore data in the main table only. Incremental Replication and Synchronization Synchronization and Replication with Incremental Updates enabled are not supported for objects without MetaData_CreateTime or MetaData_LastUpdatedTime fields. Both fields must be present for this functionality. Supported Actions Skyvia supports all the common actions for QuickBooks. Data Integration Tutorials There are several common use cases of QuickBooks integration.
How to Import Data from Salesforce Opportunities to QuickBooks Invoices How to Import QuickBooks Invoices to Salesforce Opportunities Easy Importing Invoices and Customers Between QuickBooks Online Accounts" }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/recharge_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Recharge [Recharge](https://www.recharge.com/) is the leading subscription payment platform for merchants to set up and manage dynamic recurring billing across web and mobile. Data integration : Skyvia supports importing data to and from Recharge, exporting Recharge data to CSV files, replicating Recharge data to relational databases, and synchronizing Recharge data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Recharge. Query : Skyvia Query supports Recharge. Establishing Connection To connect to Recharge, specify the Subdomain and sign in with Recharge using the OAuth 2.0 authentication method. Creating Connection To create a connection: Click +New and select Connection. Select Recharge from the list of connectors. Enter your subdomain. If you use a BigCommerce custom domain name to connect and the connection fails, try entering your primary domain value (for example, yourdomain.com ) as a Subdomain in the Connection Editor. Click Sign In with Recharge . Enter your Recharge login credentials. Authorize Skyvia to access your data. Connector Specifics Object Peculiarities Metafields Each object that supports metafields ( Store, Customers, Subscriptions , etc.) has a corresponding object ( StoreMetafields, CustomerMetafields, etc.) which allows working with metafields directly.
DML Operations Support Operation Object INSERT, UPDATE, DELETE Addresses, ChargeMetafields, CustomerMetafields, Customers, Collections, Discounts, Onetimes, OrderMetafields, Plans, Products, StoreMetafields, SubscriptionMetafields, Subscriptions, Webhooks UPDATE, DELETE Orders Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all Recharge objects except CustomerDeliverySchedules, Products, Webhooks. Skyvia supports Synchronization for all Recharge objects except CustomerDeliverySchedules, Charges, Orders, Products, Store, Webhooks. Stored Procedures Skyvia represents part of the supported Recharge features as stored procedures.\nYou can call a stored procedure, for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query. SendNotifications Use the following command to send notifications to a specific customer. call SendNotifications(:customer_id, :type, :template_type, :template_vars) PARAMETER NAME DESCRIPTION Customer_id The specific customer identifier Type The only possible value is email Template_type Defines the type of notification template. Valid values are upcoming_charge, get_account_access Template_vars An object containing the necessary template variables for the specific email template type: the address_id associated with the indicated customer and the notification object, and the charge_id of the Charge for which the notification should be sent. For example, {\"address_id\":1234567890, \"charge_id\":9876543210}. CloneOrder To clone a specific order, use the following command. call CloneOrder(:order_id, :scheduled_at) PARAMETER NAME DESCRIPTION Order_id The order identifier Scheduled_at The date in the future when this order will be sent ApplyDiscount To apply a discount to an existing charge, use the following command.
call ApplyDiscount(:charge_id, :discount_code, :discount_id) RemoveDiscount To remove a discount from a specific charge, use the command call RemoveDiscount(:charge_id) SkipCharge To skip a charge for a specific item or subset of items, use the command call SkipCharge(:charge_id, :subscription_ids) PARAMETER NAME DESCRIPTION Charge_id The charge identifier Subscription_ids The identifier or identifiers of the charge items in the array format. For example, [\"12345678\"] UnskipCharge To cancel skipping a charge for a specific item or subset of items, use the following command. call UnskipCharge(:charge_id, :subscription_ids) RefundCharge To apply a refund to a Charge, use the command call RefundCharge(:charge_id, :amount, :full_refund) PARAMETER NAME DESCRIPTION Charge_id The charge identifier Amount The amount of money that will be refunded Full_refund Accepts the true and false values After the refund, the Charge will have the status parameter REFUNDED or PARTIALLY_REFUNDED. ProcessCharge To process Charges that are in a queued or error status, use the command call ProcessCharge(:charge_id) CaptureCharge To capture the funds of a previously authorized Charge, use the following command. call CaptureCharge(:charge_id) ChangeSubscriptionNextChargeDate The following command updates an existing subscription\u2019s next charge date. call ChangeSubscriptionNextChargeDate(:subscription_id, :date) ChangeSubscriptionAddress Use the following command to update an existing subscription\u2019s address.
call ChangeSubscriptionAddress(:subscription_id, :address_id, :next_charge_scheduled_at) CancelSubscription To cancel an active subscription, use the command call CancelSubscription(:subscription_id, :cancellation_reason, :cancellation_reason_comments, :send_email) PARAMETER NAME DESCRIPTION Subscription_id An active subscription identifier Cancellation_reason The reason for subscription cancellation Cancellation_reason_comments The internal comment on the cancellation reason Send_email Accepts true or false. Defines whether to send the cancellation email to the customer and store owner. ActivateSubscription To activate a canceled subscription, use the command call ActivateSubscription(:subscription_id) It sets the following canceled subscription attributes to null: \u201ccancelled_at\u201d, \u201ccancellation_reason\u201d, and \u201ccancellation_reason_comments\u201d. Supported Actions Skyvia supports all the common actions for Recharge." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/recurly_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Recurly [Recurly](https://recurly.com/) is a subscription management and recurring billing platform. Data integration: Skyvia supports importing data to and from Recurly, exporting Recurly data to CSV files, replicating Recurly data to relational databases, and synchronizing Recurly data with other cloud apps and relational databases. Backup: Skyvia Backup does not support Recurly. Query: Skyvia Query supports Recurly. Establishing Connection To create a connection to Recurly, you need to get an API key. Getting Credentials To get your Recurly API Key, perform the following steps: Sign in to Recurly. Click Integrations in the menu on the left. Click API Credentials. Copy your private API Key. Remember your data center region. You can see it in the message located below your API Key details.
Creating Connection To connect to Recurly, perform the following steps: Enter your API key. Select your Region. Connector Specifics Object Peculiarities Invoices Line items are the charges and credits on your customer\u2019s invoices. When data is imported into the Invoices object, an invoice is generated for all pending line items associated with the relevant account. A new record will be created in the Invoices object, and all pending line items will be linked to this newly created invoice with their InvoiceId fields updated. LineItems When importing data to the LineItems object, records are linked to the account, not to the invoice.\nYou can only delete the LineItems that are not linked to an invoice. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all Recurly objects, except AccountBalances. Skyvia supports Synchronization for these objects: Accounts, AddOns, BillingInfos, Coupons, Invoices, Items, Plans, MeasuredUnits, ShippingAddresses, ShippingMethods, Subscriptions. DML Operations Support Operation Object INSERT, UPDATE, DELETE Accounts, AddOns, BillingInfos, Coupons, Items, MeasuredUnits, Plans, ShippingAddresses, ShippingMethods, Subscriptions INSERT, DELETE LineItems, UniqueCouponCodes UPDATE, DELETE Acquisitions, BillingInfo INSERT, UPDATE Invoices INSERT CouponRedemptions Stored Procedures Skyvia represents part of the supported Recurly features as stored procedures.\nYou can call a stored procedure, for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query. CancelSubscription To cancel a subscription, use the command: call CancelSubscription(:subscription_id, :timeframe) PARAMETER NAME DESCRIPTION subscription_id Subscription ID or UUID. timeframe Defines when the subscription will be canceled: immediately or at the end of the billing cycle.
ReactivateCanceledSubscription To reactivate a canceled subscription, use the command: call ReactivateCanceledSubscription(:subscription_id) PauseSubscription To pause a subscription, use the command: call PauseSubscription(:subscription_id, :remaining_pause_cycles) PARAMETER NAME DESCRIPTION subscription_id Subscription ID or UUID. remaining_pause_cycles Number of billing cycles to pause the subscription for. ResumeSubscription To resume a subscription, use the command: call ResumeSubscription(:subscription_id) ConvertTrial To convert a trial subscription to an active paid subscription, use the command: call ConvertTrial(:subscription_id) RestoreUniqueCouponCode To restore a unique coupon code, use the command: call RestoreUniqueCouponCode(:unique_coupon_code_id) CollectPendingInvoice To force collection of a pending or past due invoice, use the command: call CollectPendingInvoice(:invoice_id) MarkOpenInvoiceAsFailed To mark an open invoice as failed, use the command: call MarkOpenInvoiceAsFailed(:invoice_id) MarkOpenInvoiceAsSuccessful To mark an open invoice as successful, use the command: call MarkOpenInvoiceAsSuccessful(:invoice_id) ReopenClosedInvoice To reopen a closed invoice, use the command: call ReopenClosedInvoice(:invoice_id) VoidCreditInvoice To void a credit invoice, use the command: call VoidCreditInvoice(:invoice_id) RecordExternalPaymentForManualInvoice To record an external payment for a manual invoice, use the command: call RecordExternalPaymentForManualInvoice(:invoice_id, :payment_method, :description, :amount, :collected_at) PARAMETER NAME DESCRIPTION invoice_id Invoice ID. payment_method Payment method used for the external transaction. description Transaction description. amount Transaction total amount. collected_at Datetime of the transaction. Supported Actions Skyvia supports all the common actions for Recurly." 
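If you generate these call statements programmatically before passing them as command text, the formatting can be sketched as follows. The helper below is not part of Skyvia, and the invoice ID and payment details are hypothetical; it only shows one way to turn plain values into the literal `call ...` text.

```python
# Build the text of a stored-procedure call command (illustrative helper only;
# Skyvia itself parses the "call ..." command text you pass to ExecuteCommand
# or Query).
def build_call(procedure: str, *params) -> str:
    def literal(value) -> str:
        # Booleans before numbers, since bool is a subclass of int in Python.
        if isinstance(value, bool):
            return "true" if value else "false"
        if isinstance(value, (int, float)):
            return str(value)
        # Quote strings and escape embedded single quotes.
        return "'" + str(value).replace("'", "''") + "'"

    args = ", ".join(literal(p) for p in params)
    return f"call {procedure}({args})"

# Hypothetical invoice ID and payment details.
cmd = build_call(
    "RecordExternalPaymentForManualInvoice",
    "inv-1001", "check", "Paid by check", 49.99, "2024-01-15T00:00:00Z",
)
print(cmd)
# → call RecordExternalPaymentForManualInvoice('inv-1001', 'check', 'Paid by check', 49.99, '2024-01-15T00:00:00Z')
```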
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/repairshopr_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources RepairShopr [RepairShopr](https://www.repairshopr.com/) is an all-in-one computer repair shop software, CRM, and invoice system. Data integration: Skyvia supports importing data to and from RepairShopr, exporting RepairShopr data to CSV files, replicating RepairShopr data to relational databases, and synchronizing RepairShopr data with other cloud apps and relational databases. Backup: Skyvia Backup does not support RepairShopr. Query: Skyvia Query supports RepairShopr. Establishing Connection To create a connection to RepairShopr, you need to get an API token. Getting Credentials To generate an API token for RepairShopr, perform the following steps: Click the profile menu located in the upper-right corner. Click Settings. Click API Tokens in the Administration section. Click +New Token. Click Custom Permissions. Enter the name of your API token and check the Select All Permissions checkbox. Click Create API token. Copy the generated token. Creating Connection To connect to RepairShopr, specify the API token and your subdomain. Additional Connection Parameters Suppress Extended Requests For some objects, the RepairShopr API returns only part of the fields when querying multiple records. To query values of the missing fields, Skyvia performs additional extended requests. Such API requests may be performed for each record of the object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Estimates LineItems Invoices LineItems Estimates_LineItems and Invoices_LineItems are based on the Estimates and Invoices objects, and also require extended requests. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox.
Connector Specifics Complex Structured Data Some of the RepairShopr objects store complex structured data. These objects are the following: Invoices, Estimates, PurchaseOrders, and Ticket. For example, an invoice or purchase order can have several lines. This information is represented by a JSON field, such as Line or LineItems. For user convenience, the content of such fields is also available in Skyvia as separate read-only objects: Invoices_LineItems, Estimates_LineItems, PurchaseOrders_LineItems, Schedules_Lines, Tickets_Comments, Tickets_LineItems, and Tickets_Attachments. They enable you to view these lines in a tabular form with Query, export them to CSV with Export, or import them from RepairShopr to a cloud application or database. Object Peculiarities RMMAlerts To resolve an alert, update the Resolved field to equal True. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for Appointments, AppointmentTypes, Assets, Contacts, Contracts, Customers, Estimates, Estimates_LineItems, Invoices, Invoices_LineItems, Items, Leads, PaymentMethods, PaymentProfiles, Payments, Phones, PortalUsers, ProductSerials, PurchaseOrders, PurchaseOrders_LineItems, RMMAlerts, Tickets, Tickets_Attachments, Tickets_Comments, Tickets_LineItems, TicketTimers, TimeLogs, Vendors. Skyvia supports Synchronization for Appointments, AppointmentTypes, Assets, Contacts, Contracts, Customers, Estimates, Estimates_LineItems, Invoices, Invoices_LineItems, Leads, PaymentProfiles, Phones, PortalUsers, ProductSerials, Tickets, Tickets_LineItems, Tickets_Timers, Vendors.
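The read-only *_LineItems objects described above are essentially a tabular view over the nested JSON line data. A minimal sketch of the same flattening, using a hypothetical invoice payload (the Id, Number, Item, Quantity, and Price field names are illustrative, not the exact RepairShopr schema):

```python
import json

# A hypothetical RepairShopr-style invoice whose LineItems field holds
# nested JSON; field names here are illustrative only.
invoice_json = """
{
  "Id": 501,
  "Number": "INV-0042",
  "LineItems": [
    {"Item": "SSD upgrade", "Quantity": 1, "Price": 120.0},
    {"Item": "Labor", "Quantity": 2, "Price": 45.0}
  ]
}
"""

invoice = json.loads(invoice_json)

# Flatten the nested line items into tabular rows, carrying the parent
# invoice ID, similar to what an Invoices_LineItems object exposes.
rows = [
    {"InvoiceId": invoice["Id"], **item}
    for item in invoice["LineItems"]
]

for row in rows:
    print(row)
```

Each nested line becomes one row keyed by the parent InvoiceId, which is what makes the lines queryable and exportable in tabular form.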
DML Operations Support Operation Object INSERT, UPDATE, DELETE Appointments, AppointmentTypes, Contacts, Contracts, Customers, Estimates, Estimates_LineItems, Invoices, Invoices_LineItems, PaymentProfiles, Phones, PortalUsers, Schedules, Tickets, Tickets_LineItems, Tickets_Timers, WikiPages, WorksheetResult INSERT, UPDATE Assets, Leads, Products, ProductSerials, Schedules_Lines, Vendors INSERT Payments, PurchaseOrders, PurchaseOrders_LineItems, RMMAlerts, Tickets_Comments DELETE Tickets_Attachments UPDATE TimeLogs Supported Actions Skyvia supports all the common actions for RepairShopr." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/reply_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Reply [Reply](https://reply.io/) is a sales engagement platform that helps you automate and scale multichannel outreach to generate more leads, acquire new customers, and grow revenue faster. Data integration: Skyvia supports importing data to and from Reply, exporting Reply data to CSV files, and replicating Reply data to relational databases. Backup: Skyvia Backup does not support Reply. Query: Skyvia Query supports Reply. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Reply, you need to enter your API Key. Getting Credentials To get your Reply API Key, perform the following steps: Sign in to [Reply](https://reply.io/). Click Settings in the toolbar. Click API Key in the left menu. Copy the API Key. Creating Connection Enter the API Key in the Connection Editor. Connector Specifics Object Peculiarities Campaigns When importing data to the Campaigns object, in addition to the required fields, you must map at least one field with the Settings_ prefix.\nWhen selecting data from the Campaigns object, the fields with the Settings_ prefix return an empty result when querying.
Custom Fields Skyvia supports custom fields for the Contacts and CampaignContacts objects as the CustomFields field storing the list of custom fields in an array format. DML Operations Support Operation Object INSERT, UPDATE, DELETE CampaignSteps, Contacts, WebhookSubscriptions INSERT, UPDATE Campaigns INSERT CampaignContacts Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following objects: Campaigns, Contacts, WebhookSubscriptions. Skyvia does not support Synchronization for Reply. Stored Procedures Skyvia represents part of the supported Reply features as stored procedures.\nYou can call a stored procedure, for example, as the command text in the ExecuteCommand action in a Target component of a Data Flow or in Query. StartCampaign To start a campaign, use the command call StartCampaign(:campaign_id) PauseCampaign To pause a campaign, use the command call PauseCampaign(:campaign_id) ArchiveCampaign To archive a campaign, use the command call ArchiveCampaign(:campaign_id) RemoveContactFromCampaign To remove a contact from the campaign recipients list, use the command call RemoveContactFromCampaign(:campaign_id, :email) CreateAndPushContactToCampaign To create a new contact and add it to an existing campaign, use the command call CreateAndPushContactToCampaign(:CampaignId, :Email, :FirstName, :LastName, :Company, :City, :Phone, :CustomFields) You must provide the CustomFields field values in the array format, for example, [{\"key\": \"CustomField1\", \"value\": \"CustomFieldValue1\"},{\"key\": \"CustomField2\", \"value\": \"CustomFieldValue2\"}].
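The CustomFields array format shown above can be produced from an ordinary dictionary of field names and values. A short sketch (the custom-field names and values are the placeholder ones from the example above):

```python
import json

# Hypothetical custom-field values for a Reply contact.
custom = {
    "CustomField1": "CustomFieldValue1",
    "CustomField2": "CustomFieldValue2",
}

# Reply expects the CustomFields parameter as an array of key/value objects.
custom_fields = [{"key": k, "value": v} for k, v in custom.items()]

print(json.dumps(custom_fields))
# → [{"key": "CustomField1", "value": "CustomFieldValue1"}, {"key": "CustomField2", "value": "CustomFieldValue2"}]
```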
MarkContactAsFinishedByEmail To mark a contact as finished in all campaigns by its email address, use the command call MarkContactAsFinishedByEmail(:Email) MarkContactAsFinishedByDomain To mark all contacts as finished in all campaigns by email domain, use the command call MarkContactAsFinishedByDomain(:Domain) Supported Actions Skyvia supports all the common actions for Reply." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sageaccounting_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Sage Accounting [Sage Accounting](https://www.sage.com/en-gb/) is a cloud-based accounting solution designed for small and medium-sized businesses. It offers features such as invoicing, expense tracking, cash flow management, and financial reporting. Data integration: Skyvia supports importing data to and from Sage Accounting, exporting Sage Accounting data to CSV files, replicating Sage Accounting data to relational databases, and synchronizing Sage Accounting data with other cloud apps and relational databases. Backup: Skyvia Backup does not support Sage Accounting. Query: Skyvia Query supports Sage Accounting. Establishing Connection To create a connection to Sage Accounting, sign in with your Sage Accounting credentials. If you are authorized to access multiple businesses, also select the specific business to connect to. Creating Connection To connect to Sage Accounting, perform the following steps: Click Sign In with Sage Accounting in the Connection Editor. Enter your Sage Accounting credentials and click Log In. Select the specific business to connect to from the drop-down list. Connector Specifics Object Peculiarities Attachments The Attachments object contains a binary field named File. If an SQL query includes this binary field, Skyvia will make an extra API request to retrieve the binary contents for each record, which may slow down the data retrieval process.
Restrictions by Regions Certain objects are region-specific and are accessible only in the following countries: Spain: PurchaseCorrectiveInvoices , SalesCorrectiveInvoices , CorrectiveReasonCodes France: JournalCodes , BusinessActivityTypes , LegalFormTypes Germany: TaxOffices Nested Objects The connector includes objects with fields that store complex, nested data in JSON format. The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To replicate nested data into separate tables with the new replication runtime, select the Separate Tables option for Unwind Nested Objects . The table below lists objects with specific fields designed to store complex data structures: Object Field Nested Object AllocatedArtefactType ArtefactLinks LinkType AllocatedPaymentArtefactType ArtefactLinks LinkType AnalysisTypes ActiveAreas ValueType AnalysisTypeLevel AnalysisTypeLevelType AnalysisTypeCategories AnalysisTypeCategoryType ComponentTaxRateType Percentages TaxRatePercentageType ContactAllocations AllocatedArtefacts AllocatedArtefactType Links LinkType ContactOpeningBalances TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType ContactPayments AllocatedArtefacts AllocatedPaymentArtefactType Links LinkType ContactPersons ContactPersonTypes ContactPersonTypeType Contacts ContactTypes BaseType MainContactPersonContactPersonTypes ContactPersonTypeType Links LinkType JournalLineType AnalysisTypeCategories AnalysisTypeLineItemType Journals JournalLines JournalLineType LedgerAccounts VisibleScopes ValueType OpeningBalanceJournals JournalLines BaseJournalLineType OtherPaymentLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType OtherPayments PaymentLines OtherPaymentLineItemType PaymentAllocationType Links LinkType ArtefactLinks LinkType Products SalesPrices SalesPriceType PurchaseCorrectiveInvoiceType Links LinkType InvoiceLines PurchaseInvoiceLineItemType TaxAnalysis 
ArtefactTaxAnalysisType PaymentsAllocations PaymentAllocationType OriginalInvoiceLinks LinkType PurchaseCreditNoteLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType PurchaseInvoiceLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType PurchaseCreditNotes CreditNoteLines PurchaseCreditNoteLineItemType TaxAnalysis ArtefactTaxAnalysisType Links LinkType PaymentsAllocations PaymentAllocationType PurchaseInvoices InvoiceLines PurchaseInvoiceLineItemType TaxAnalysis ArtefactTaxAnalysisType Links LinkType PaymentsAllocations PaymentAllocationType Corrections PurchaseCorrectiveInvoiceType PurchaseQuickEntries AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType PaymentsAllocations PaymentAllocationType PurchasesTaxDeterminations Percentages TaxRatePercentageType ComponentTaxRates ComponentTaxRateType SalesCorrectiveInvoiceType Links LinkType ShippingTaxBreakdown TaxBreakdownType BaseCurrencyShippingTaxBreakdown TaxBreakdownType InvoiceLines SalesInvoiceLineItemType TaxAnalysis ArtefactTaxAnalysisType PaymentsAllocations PaymentAllocationType OriginalInvoiceLinks LinkType SalesCreditNoteLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType SalesCreditNotes CreditNoteLines SalesCreditNoteLineItemType TaxAnalysis ArtefactTaxAnalysisType Links LinkType ShippingTaxBreakdown TaxBreakdownType BaseCurrencyShippingTaxBreakdown TaxBreakdownType PaymentsAllocations PaymentAllocationType SalesEstimates EstimateLines SalesQuoteLineItemType TaxAnalysis ArtefactTaxAnalysisType InvoiceLinks LinkType Links LinkType ShippingTaxBreakdown TaxBreakdownType BaseCurrencyShippingTaxBreakdown TaxBreakdownType ProfitAnalysisLineBreakdown ProfitBreakdownType BaseCurrencyShippingTaxBreakdown 
TaxBreakdownType ProfitAnalysisLineBreakdown ProfitBreakdownType SalesInvoiceLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType SalesInvoices InvoiceLines SalesInvoiceLineItemType TaxAnalysis ArtefactTaxAnalysisType RecurringInvoiceLinks LinkType OriginalQuoteEstimateLinks LinkType Links LinkType ShippingTaxBreakdown TaxBreakdownType BaseCurrencyShippingTaxBreakdown TaxBreakdownType PaymentsAllocations PaymentAllocationType Corrections SalesCorrectiveInvoiceType SalesQuickEntries AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType PaymentsAllocations PaymentAllocationType SalesQuoteLineItemType AnalysisTypeCategories AnalysisTypeLineItemType TaxBreakdown TaxBreakdownType BaseCurrencyTaxBreakdown TaxBreakdownType SalesQuotes QuoteLines SalesQuoteLineItemType TaxAnalysis ArtefactTaxAnalysisType InvoiceLinks LinkType Links LinkType ShippingTaxBreakdown TaxBreakdownType BaseCurrencyShippingTaxBreakdown TaxBreakdownType ProfitAnalysisLineBreakdown ProfitBreakdownType SalesTaxDeterminations Percentages TaxRatePercentageType ComponentTaxRates ComponentTaxRateType Services SalesRates RateType StockItems SalesPrices SalesPriceType StockMovements Links LinkType TaxRates Percentages TaxRatePercentageType ComponentTaxRates ComponentTaxRateType TaxTypes AddressRegions BaseType TaxRates BaseType Transactions OriginLinks LinkType UnallocatedArtefacts Links LinkType Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates only for objects that include either an UpdatedAt or CreatedAt field. 
The supported objects include: Addresses , AnalysisTypeCategories , AnalysisTypes , Attachments , BankAccounts , BankDeposits , BankOpeningBalances , BankReconciliations , BankTransfers , BusinessExchangeRates , CoaAccounts , CoaTemplates , ContactAllocations , ContactOpeningBalances , ContactPayments , ContactPersons , Contacts , ExchangeRates , Journals , LedgerAccountOpeningBalances , LedgerAccounts , LedgerEntries , LiveExchangeRates , OpeningBalanceJournals , OtherPayments , Products , ProductSalesPriceTypes , PurchaseCreditNotes , PurchaseInvoices , PurchaseQuickEntries , SalesCreditNotes , SalesEstimates , SalesInvoices , SalesQuickEntries , SalesQuotes , ServiceRateTypes , Services , StockItems , StockMovements , TaxProfiles , TaxRates , Transactions . Synchronization is typically available for objects that allow Incremental Updates as well as INSERT and UPDATE operations. Filtering Specifics The Sage Accounting API supports the following native filters: Object Fields and Operators Addresses Id ( = ), BankAccountId ( = ), ContactId ( = ), UpdatedAt ( >= , > ) Attachments Id ( = ), AttachmentContextTypeId ( = ), AttachmentContextId ( = ), UpdatedAt ( >= , > ) BankDeposits Id ( = ), Date ( = , <= , < , >= , > ) UpdatedAt ( >= , > ) BankOpeningBalances Id ( = ), BankAccountId ( = ) UpdatedAt ( >= , > ) BankReconciliations Id ( = ), BankAccountId ( = ) UpdatedAt ( >= , > ) BankTransfers Id ( = ), Date ( = , <= , < , >= , > ) UpdatedAt ( >= , > ) BusinessExchangeRates CurrencyId ( = ) ContactAllocations Id ( = ), TransactionTypeId ( = ), UpdatedAt ( >= , > ) ContactOpeningBalances Id , ( = ), ContactId ( = ), UpdatedAt ( >= , > ) ContactPayments Id ( = ), Date ( = , <= , < , >= , > ), TransactionTypeId ( = ), ContactId ( = ), BankAccountId ( = ), UpdatedAt ( >= , > ) ContactPersons Id ( = ), AddressId ( = ), UpdatedAt ( >= , > ) Contacts Id ( = ), Email ( = ), UpdatedAt ( >= , > ) ExchangeRates CurrencyId ( = ) HostedArtefactPaymentSettings Id ( = ), ObjectGuid 
( = ) LedgerAccounts Id ( = ), LedgerAccountTypeId ( = ), UpdatedAt ( >= , > ) LedgerEntries Id ( = ), Date ( = , <= , < , >= , > ), LedgerAccountId ( = ), TransactionId ( = ), UpdatedAt ( >= , > ) LiveExchangeRates CurrencyId ( = ) OtherPayments Id ( = ), Date ( = , <= , < , >= , > ), TransactionTypeId ( = ), ContactId ( = ), BankAccountId ( = ), UpdatedAt ( >= , > ) Products Id ( = ), Active ( = ), UpdatedAt ( >= , > ) ProductSalesPriceTypes Id ( = ), Active ( = ) SalesCreditNotes Id ( = ), Date ( = , <= , < , >= , > ), ContactId ( = ), StatusId ( = ), UpdatedAt ( >= , > ), DeletedAt ( > , >= ) SalesInvoices Id ( = ), Date ( = , <= , < , >= , > ), ContactId ( = ), StatusId ( = ), UpdatedAt ( >= , > ), DeletedAt ( > , >= ) ServiceRateTypes Id ( = ), Active ( = ), UpdatedAt ( >= , > ) Services Id ( = ), Active ( = ), UpdatedAt ( >= , > ) StockItems Id ( = ), Active ( = ), UpdatedAt ( >= , > ) StockMovements Id ( = ), Date ( = , <= , < , >= , > ), StockItemId ( = ) Transactions Id ( = ), Date ( = , <= , < , >= , > ), TransactionTypeId ( = ), UpdatedAt ( >= , > ) AddressRegions, AddressTypes, ArtefactStatuses, AttachmentContextTypes, BankAccountTypes, BusinessTypes, CatalogItemTypes, CoaTemplates, ContactPersonTypes, ContactTypes, Countries, CountriesOfRegistration, CountryGroups, Currencies, EuGoodsServicesTypes, EuSalesDescriptions, JournalCodeTypes, LedgerAccountClassifications, LedgerAccountTypes, MigrationTaxReturns, OpeningBalanceJournals, PaymentMethods, QuickEntryTypes, TaxReturnFrequencies, TaxSchemes, TaxTypes, TransactionTypes, UnallocatedArtefacts Id ( = ) AnalysisTypeCategories, AnalysisType, BankAccounts, CoaAccounts, Journals, LedgerAccountOpeningBalances, TaxProfiles, TaxRates Id ( = ), UpdatedAt ( >= , > ) PurchaseCreditNotes, PurchaseQuickEntries, SalesEstimates, SalesQuickEntrie, SalesQuotes Id ( = ), Date ( = , <= , < , >= , > ), ContactId ( = ), StatusId ( = ), UpdatedAt ( >= , > ) DML Operations Operation Object INSERT, UPDATE, DELETE Addresses, 
AnalysisTypeCategories, Attachments, BankAccounts, BankOpeningBalances, BankTransfers, BusinessExchangeRates, ContactAllocations, ContactOpeningBalances, ContactPayments, ContactPersons, Contacts, LedgerAccountOpeningBalances, OtherPayments, Products, ProductSalesPriceTypes, PurchaseCreditNotes, PurchaseInvoices, PurchaseQuickEntries, SalesEstimates, SalesQuickEntries, SalesQuotes, ServiceRateTypes, Services, StockItems, StockMovements, TaxRates INSERT, DELETE BankDeposits, HostedArtefactPaymentSettings, Journals, OpeningBalanceJournals INSERT, UPDATE BankReconciliations, LedgerAccounts, SalesCreditNotes, SalesInvoices UPDATE AnalysisTypes, TaxProfiles INSERT MigrationTaxReturns The Returning feature in INSERT and UPDATE operations is supported for all tables, including cases where Bulk Update is used. Stored Procedures Skyvia represents part of the supported Sage Accounting features as stored procedures. For example, you can [call a stored procedure](https://docs.skyvia.com/supported-sql-for-cloud-sources/call-statements-and-stored-procedures.html) by specifying it as the command text in the ExecuteCommand action of a Target component in a [Data Flow](https://docs.skyvia.com/data-integration/data-flow/) or in a [Query](https://docs.skyvia.com/query/) . VoidSalesInvoice To nullify or cancel an existing sales invoice within the Sage Accounting system, run the command: call VoidSalesInvoice('dd55314c-e3ba-453e-907b-947d0ec6f37a', 'some void reason') Accepted parameters: PARAMETER NAME DESCRIPTION ID ID of the SalesInvoice object. Required parameter Void reason Required if the status is not DRAFT If an object with the specified ID doesn\u2019t exist, the procedure returns an error. 
VoidSalesCreditNote To nullify or cancel an existing sales credit note within the Sage Accounting system, run the command: call VoidSalesCreditNote('dd55314c-e3ba-453e-907b-947d0ec6f37a', 'some void reason') Accepted parameters: PARAMETER NAME DESCRIPTION ID ID of the SalesCreditNote object. Required parameter Void reason Required if the status is not DRAFT If an object with the specified ID doesn\u2019t exist, the procedure returns an error. ReleaseSalesInvoice The procedure is used to finalize or post a sales invoice that has been created but is currently in a draft or unposted state. Executing this procedure moves the invoice from draft status to an active state, updating the accounting records to reflect the transaction. To call the procedure, run the command: call ReleaseSalesInvoice('dd55314c-e3ba-453e-907b-947d0ec6f37a') The procedure requires the ID of the SalesInvoice object as a parameter. If an object with the specified ID doesn\u2019t exist, the procedure returns an error. ReleaseSalesCreditNote The procedure is used to finalize or post a sales credit note that is currently in a draft or unposted state. Executing this procedure transitions the credit note to an active status, updating the accounting records to reflect the credit transaction. To call the procedure, run the command: call ReleaseSalesCreditNote('dd55314c-e3ba-453e-907b-947d0ec6f37a') The procedure requires the ID of the SalesCreditNote object as a parameter. If an object with the specified ID doesn\u2019t exist, the procedure returns an error. ReleasePurchaseInvoice The procedure is used to finalize or post a purchase invoice that is currently in a draft or unposted state. Executing this procedure transitions the purchase invoice to an active status, updating the accounting records to reflect the liability. To call the procedure, run the command: call ReleasePurchaseInvoice('dd55314c-e3ba-453e-907b-947d0ec6f37a') The procedure requires the ID of the PurchaseInvoice object as a parameter. 
If an object with the specified ID doesn\u2019t exist, the procedure returns an error. ReleasePurchaseCreditNote The procedure is used to finalize or post a purchase credit note that is currently in a draft or unposted state. Executing this procedure transitions the purchase credit note to an active status, updating the accounting records to reflect the reduction in liability to a supplier. To call the procedure, run the command: call ReleasePurchaseCreditNote('dd55314c-e3ba-453e-907b-947d0ec6f37a') The procedure requires the ID of the PurchaseCreditNote object as a parameter. If an object with the specified ID doesn\u2019t exist, the procedure returns an error. Supported Actions Skyvia supports all the [common actions](https://docs.skyvia.com/connectors/actions/#common-actions) for Sage Accounting." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Salesforce [Salesforce](https://www.salesforce.com/) is the most popular cloud CRM with a focus on support, sales, and marketing. It gives access to customer data and interaction history and provides a complete customer picture including insights and engage-enhancement strategies. Data integration : Skyvia supports importing data to and from Salesforce, exporting Salesforce data to CSV files, replicating Salesforce data to relational databases, and synchronizing Salesforce data with other cloud apps and relational databases. Backup : Skyvia Backup supports Salesforce backup. If you want to know how Skyvia uses Salesforce API calls, check Salesforce API and API Calls Usage . Establishing Connection While [creating a connection](https://docs.skyvia.com/connections/#creating-connections) , you have to choose the authentication method (User Name & Password or OAuth) and one of three environment types: Production \u2014 to connect to your Salesforce organization. 
- Sandbox — to connect to your test environment.
- Custom — to connect to your custom Salesforce domain.

#### Getting Credentials

For the User Name & Password authentication, you need to choose a Salesforce environment and enter your user name, password, and security token. To generate the security token, do the following:

1. Log in to your [Salesforce account](https://login.salesforce.com/).
2. Click the profile avatar and choose **Settings**.
3. Go to **My Personal Information** → **Reset My Security Token**.
4. Check your email for the security token.

#### Creating Connection

Depending on the authentication type, the connection creation flow may differ slightly. You can find step-by-step instructions for the Username & Password and OAuth authentication below.

**Username & Password authentication**

1. Select the Environment type. If you select **Custom**, enter your custom domain URL into the **Login URL** field.
2. Select **Username & Password** as your authentication method.
3. Enter a user name, password, and Salesforce security token.
4. Click **Create Connection**.

**OAuth authentication**

1. Select the Environment type. If you select **Custom**, enter your custom domain URL into the **Login URL** field.
2. Select **OAuth 2.0** as your authentication method.
3. Click **Sign in with Salesforce**.
4. Enter your login and password. The OAuth token is generated automatically.
5. Click **Create Connection**.

### Additional Connection Parameters

#### Metadata Cache

You can specify the period of time after which the [Metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired.

#### Use Bulk API

Use Bulk API specifies whether to use Salesforce Bulk API to load data to Salesforce. If you clear this checkbox, Skyvia uses SOAP API instead of Bulk API to load data to Salesforce. SOAP API is slower on large volumes of data. By default, Bulk API is disabled for Salesforce Professional or Trial accounts, and a user cannot enable it manually.
You must either clear the **Use Bulk API** checkbox when you connect to a Salesforce Professional or Trial account, or ask Salesforce support to enable Bulk API for your account.

Skyvia supports Salesforce Bulk API v1 and Salesforce Bulk API v2. It's generally suggested to use Bulk API v2 unless you have specific reasons to stick with Bulk API v1. Here is the list of benefits of using the newer API version:

- **Improved Performance**: Bulk API v2 is designed to provide better performance and scalability. It supports larger batch sizes, allowing you to process more records in a single request, which can significantly reduce the overall processing time for large data sets.
- **Data Serialization**: Bulk API v2 introduces data serialization, which reduces the size of data payloads sent over the network. This optimization improves network bandwidth utilization and can result in faster data transfer and processing times.
- **Enhanced Error Handling**: Bulk API v2 provides a more detailed error response structure, making it easier to identify and resolve issues during data processing. The improved error handling capabilities help in troubleshooting and improving data quality.

Skyvia supports Salesforce Professional or Trial edition or higher. Note that if you use the Salesforce Professional or Trial edition, you need to clear the **Use Bulk API** checkbox in the connection settings.

#### Command Timeout

Specifies the wait time before terminating an attempt to execute a command and generating an error. Command Timeout doesn't affect the wait time for data fetching. Increase this value if command execution takes too long and causes timeout errors.

#### Include Deleted

Include Deleted determines whether to return deleted records from the Recycle Bin.

#### Use SystemModstamp for Data Changes Tracking

This parameter defines whether to use the SystemModstamp field to track changes in Salesforce.
By default, the replication, synchronization, and import functionality in Skyvia uses the LastModifiedDate field to track changed records. If a record is changed by a trigger or any other automated process, the LastModifiedDate field is not updated. However, when you select the **Use SystemModstamp** checkbox, updates made by triggers and automation processes are taken into account as well.

#### Split Batches for Update

When you run the Update operation, Skyvia gets data from the source in batches and tries to update Salesforce records with this data. If there are more than 12 records with duplicate Ids in one batch, the operation fails with an error due to Salesforce limitations. The **Split batches for update** option forces Skyvia to check batches for duplicates and split the data into more batches, allowing no more than 12 records with duplicate Ids per batch.

#### Ignore Blank Values for Update

This parameter determines whether the Update operation ignores blank (null) values and keeps the data in the corresponding Salesforce fields, or writes null values to Salesforce.

### Connector Specifics

#### External ID Mapping

Skyvia supports External ID mapping for Salesforce foreign key fields. An External ID in Salesforce is a field that has the External ID attribute, meaning that it contains unique record identifiers from a system outside of Salesforce. Skyvia allows you to map object references using the referenced object's External ID field values. For more details, see [Mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/index.html).

#### Not Equals Operator Behaviour

When the not equals operator is used in filters for Salesforce, records with an empty (null) value in the compared field are also returned. This behavior differs from the behavior of this operator for databases, but it is normal Salesforce behavior.

#### Lookup Mapping by Salesforce Id

If you use lookup mapping by a Salesforce Id field, you need to provide 18-character ID values for lookup, not 15-character ones.
In other Skyvia features, for example, Query, or when inserting data, Skyvia supports both 18-character and 15-character ID values.

### Supported Actions

Skyvia supports Upsert and Run Report for Salesforce in addition to all the common actions.

### Troubleshooting

Below you can find the most common issues Skyvia users may face with the Salesforce connection.

#### No Such Column Error

When you are trying to restore a table in Salesforce, the "Error no such column: …" message indicates that there are discrepancies between the metadata in the backup and the current Salesforce metadata. [Clear the metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) in your Salesforce connection to fix the problem.

#### TotalRequests Limit Exceeded

The TotalRequests Limit Exceeded error notifies that you have reached the Total API Request Limit on the Salesforce side. To fix the error, contact [Salesforce Support](https://help.salesforce.com/s/).

#### Query Timeout

If you receive the "QUERY_TIMEOUT: Your query request was running for too long" error, increase the Command Timeout value in Advanced Settings on your connection page.

---

## Salesforce: Run Report

Source: https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections/run-report-action.html

This action runs an existing Salesforce report.

### Action Settings

| Setting | Description |
| --- | --- |
| Id | Salesforce report Id. |

### Action Parameters

Run Report action parameters correspond to the parameters of the report.

### Result

The action returns the records returned by the Salesforce report for the specified parameter values.

### Example

Here is an example of the Run Report action in an export.
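As background for the 15- vs 18-character ID note above: the 18-character form appends a 3-character checksum derived from the capitalization of the 15-character ID, which makes it safe for case-insensitive systems. Below is a standalone sketch of the standard conversion algorithm (this is general Salesforce knowledge, not a Skyvia API):

```python
# Standard Salesforce 15 -> 18 character ID conversion (sketch).
SUFFIX_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def to_18_char_id(sfid: str) -> str:
    """Extend a 15-character case-sensitive Salesforce ID to the
    18-character case-insensitive form by appending a 3-char checksum."""
    if len(sfid) != 15:
        raise ValueError("expected a 15-character Salesforce ID")
    suffix = ""
    for block in (sfid[0:5], sfid[5:10], sfid[10:15]):
        # Build a 5-bit number: bit j is set if the j-th char is uppercase.
        bits = sum(1 << j for j, ch in enumerate(block) if ch.isupper())
        suffix += SUFFIX_CHARS[bits]
    return sfid + suffix

example = to_18_char_id("001a0000006vm9r")
print(example)  # 001a0000006vm9rAAA (no uppercase letters -> checksum 'AAA')
```

If your source system stores 15-character IDs, a helper like this lets you produce the 18-character values that lookup mapping expects.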
---

## Salesforce API and API Calls Usage

Source: https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections/salesforce_api_and_api_calls.html

### API Usage

By default, Skyvia uses the Salesforce SOAP API to query data from Salesforce and the Bulk API to load data to Salesforce. It doesn't use Bulk API to query data from Salesforce because there is no performance gain for selecting data. If Bulk API is not available for your account or you don't want to use it, you can completely disable Bulk API use in the Advanced Settings by clearing the **Use Bulk API** checkbox.

### API Calls for Loading Data to Salesforce

By default, Skyvia uses Bulk API to load data to Salesforce. It uses 1 API call per batch of sent records. The number of records in a batch varies, and it greatly depends on the overall structure and complexity of an integration, the number of mapped fields, the size of values in records, mapping, etc. The maximum size of a batch is 5,000, but usually it is significantly less than 5,000 records. In very complex cases, it may be just a few records.

If you opt to use SOAP API for loading data to Salesforce, Skyvia also loads data in batches, and all the information above applies. The only difference is the maximum batch size, which is 200 records for SOAP API.

In import integrations running on the new data integration runtime, you can use the Batch Size option to better control the size of the batches. Note that if you set this option to a larger value than the Salesforce API allows, the batch is split into multiple internal batches automatically.
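As a rough illustration of the arithmetic above (a sketch, not Skyvia's internal logic), the minimum number of API calls needed to load a given number of records is the record count divided by the batch size, rounded up. The `MAX_BATCH` values are the API maximums quoted above; real batches are often much smaller, so this is a lower bound:

```python
import math

# Maximum batch sizes from the documentation above; actual batches are
# often much smaller, so this estimates a lower bound on API calls.
MAX_BATCH = {"bulk": 5000, "soap": 200}

def min_api_calls(records: int, api: str = "bulk") -> int:
    """Lower bound on API calls to load `records` rows (1 call per batch)."""
    if records <= 0:
        return 0
    return math.ceil(records / MAX_BATCH[api])

print(min_api_calls(12_000, "bulk"))  # 3  (batches of up to 5,000)
print(min_api_calls(12_000, "soap"))  # 60 (batches of up to 200)
```

Because actual batch sizes shrink with integration complexity, treat the real API call count as somewhere between this lower bound and one call per record.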
### API Calls for Querying Data from Salesforce

When Salesforce is used as a source, Skyvia uses the following number of API calls to query Salesforce data:

- 1 API call per 2,000 records without custom long text area fields or with 1 such field;
- 1 API call per 200 records when 2 or more custom long text fields are queried;
- 1 API call per 1 Attachment record.

These numbers do not depend on the integration structure and mapping; they only depend on how Salesforce returns records. A synchronization or replication queries modified Salesforce records and deleted Salesforce records separately, so it uses at least 2 API calls per task.

### API Calls in Lookups to Salesforce

When you use lookup mapping in your integrations, where a lookup object is a Salesforce object, Skyvia uses an additional API call per batch of at most 50 records processed by a task for each lookup, even for multiple lookups to the same Salesforce object. The actual size of this batch depends on the overall structure and complexity of an integration, the number of mapped fields, the size of values in records, mapping, etc. If the above numbers, lengths, and complexity are high, the actual size of a batch may be less than 50. If you use a two-level lookup, 1 call per used lookup per batch is used.

The **Use Cache** lookup option may significantly affect API calls usage. Depending on the number of records loaded by the task and the number of records in the lookup object, it may decrease or increase the number of API calls used. When this checkbox is selected, Skyvia queries all the records from the lookup object first, spending API calls as described above in the API Calls for Querying Data section. Then lookups are performed over the cache, without using Salesforce API calls. Besides, if you use multiple lookups to the same Salesforce object in one task, records from this object are queried to the cache only once.
Please note that with this checkbox selected, 1 API call is used per batch of 2,000 or 200 records (see above), and the total number of records in the lookup object determines the number of these batches. When it is NOT selected, 1 API call is used per batch of up to 50 records, and the number of records processed by the task determines the number of these batches.

### Integration Scheduling Considerations

An import or export with Salesforce as a source uses at least 1 API call per task, even when it does not process any records. If you schedule it to run every minute, it may use 1,440 API calls per task per day.

A replication or synchronization uses at least 2 API calls per task for querying data from Salesforce: one to query new and updated records, and another to query deleted records. So, if you schedule such an integration to run every minute, it may use 2,880 API calls per task per day, even if no records were modified and synchronized during this day.

### API Calls in Backup

When backing up Salesforce data, Skyvia uses Salesforce API calls as described in the API Calls for Querying Data section above. Please note that every backup is a full backup, and Skyvia queries all the data from the backed-up objects (unless you configured data filtering in backup tasks).

When restoring Salesforce data, Skyvia uses Salesforce API calls as described in the API Calls for Loading Data to Salesforce section above. Note that if you compare backups and undo changes, batches are counted separately for each of the operations: inserting, updating, and deleting records. If you, for example, select and undo the adding of 1 record, the updating of 1 record, and the deleting of 1 record, it will be 3 API calls, not one API call per batch of 3 records.

### API Calls in Query

When you query Salesforce data, API calls are used as described in the API Calls for Querying Data section above (in batches of 2,000 or 200 records per API call).
However, there is one caveat: if a query is not too complex and can be translated to Salesforce SQL directly, only the records returned by the query are counted. If a query is too complex, Skyvia first queries all the data from the mentioned objects and then performs the query on its side. Thus, it may use a lot of API calls to query all the data from these objects.

When performing an INSERT statement, Skyvia uses one API call. When performing a DELETE or UPDATE statement, Skyvia first queries all the records matching the WHERE condition, as described above. After you apply changes, Skyvia applies them using 1 API call per batch of up to 5,000 records for Bulk API, or up to 200 records for SOAP API.

### API Calls in Connect

If you create an OData endpoint to Salesforce data and query its data, API calls are used as described in the API Calls for Querying Data section. As in Query, if an OData query is not too complex and can be translated to Salesforce SQL directly, only the records returned by the query are counted. But if a query is too complex, Skyvia first queries all the data from the mentioned objects and then performs the query on its side. Thus, it may use a lot of API calls to query all the data from these objects.

When you update Salesforce data via an OData endpoint, no batches are used: 1 API call is used for creating, updating, or deleting each record.

---

## Salesforce: Upsert

Source: https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections/upsert-action.html

This action uses the Salesforce native UPSERT mechanism, which inserts a record if no such record exists in Salesforce and updates the record if a matching record is found. Salesforce uses External ID fields to check whether a matching record already exists in Salesforce.
An External ID in Salesforce is a custom field that has the "External ID" attribute and uniquely identifies records. There can be multiple External ID fields in a Salesforce object.

### Action Settings

| Setting | Description |
| --- | --- |
| Table | An object to load records to. |
| External Id | The object External ID field to use for matching records. |

### Action Parameters

Upsert action parameters correspond to the fields of the target table. You must map at least the parameters corresponding to the required target table fields and the parameter corresponding to the selected External ID field.

### Result

The records are inserted or updated in the target table.

### Example

Here is an example of the Upsert action in the Target component of a Data Flow. This example loads Accounts to Salesforce.

---

## Scoro

Source: https://docs.skyvia.com/connectors/cloud-sources/scoro_connections.html

[Scoro](https://www.scoro.com/) is an all-in-one business management software that combines project management with time and team management, sales, billing, etc.

Data integration: Skyvia supports importing data to and from Scoro, exporting Scoro data to CSV files, replicating Scoro data to relational databases, and synchronizing Scoro data with other cloud apps and relational databases.

Backup: Skyvia Backup does not support Scoro.

Query: Skyvia Query supports Scoro.

### Establishing Connection

Skyvia supports two kinds of authentication for Scoro: API Key and Basic. To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to Scoro, you must select the Authentication to use and specify your Scoro SubDomain. For Basic authentication, you need to specify your User name and Password and select your Company Account. For API Key authentication, you need to specify the API Key and your Company Account Id. Note that when using Basic authentication, Skyvia can access less data from Scoro.
We recommend using API Key authentication.

#### Getting Credentials

To obtain credentials, perform the following steps in your Scoro account:

1. Click **Settings** and then click **Preferences**.
2. Point to **Site settings** and click **Integrations**.
3. Click **Scoro API**. Here you can find the required parameters.

The SubDomain can be obtained from the Company base URL. It is the part of the URL between the protocol prefix and `.scoro.com/api/v2`. So if your Company base URL is `https://testcompany.scoro.com/api/v2`, your subdomain is `testcompany`.

The API Key can be found below. Under the API Key, you can see the corresponding company accounts and their Company Account Ids (in the company_account_id column).

#### Creating Connection Using API Key Authentication

To create a connection to Scoro using API Key authentication, perform the following steps:

1. Enter your Scoro **SubDomain**.
2. In the **Authentication** list, select **API Key**.
3. Enter your **API Key**.
4. Enter your **Company Account Id**.

#### Creating Connection Using Basic Authentication

To create a connection to Scoro using Basic authentication, perform the following steps:

1. Enter your Scoro **SubDomain**.
2. In the **Authentication** list, select **Basic**.
3. Enter your **User login** and **Password**.
4. Select your **Company Account** from the list.

### Connector Specifics

#### Object Peculiarities

**Contacts, People, and Companies**

Skyvia presents Scoro Contacts as two objects: People and Companies. It also provides a read-only Contacts object with all the records. All three of these objects have a binary PictureContent field. When this field is queried, Skyvia performs an additional API request for each record, so for better performance, it's better not to include this field in queries.

**Files**

The Files object has a binary Content field. When this field is queried, Skyvia performs an additional API request for each record. Loading data to this field is performed in chunks of 5 MB of base64-encoded data per request. Thus, several API requests can be performed to import one File record.
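The SubDomain rule from the Getting Credentials steps above (the hostname part of the Company base URL before `.scoro.com`) can be sketched as a small helper. The function name is illustrative, not part of Skyvia or Scoro:

```python
from urllib.parse import urlparse

def scoro_subdomain(company_base_url: str) -> str:
    """Extract the Scoro SubDomain from a Company base URL.

    E.g. 'https://testcompany.scoro.com/api/v2' -> 'testcompany'.
    """
    host = urlparse(company_base_url).hostname or ""
    suffix = ".scoro.com"
    if not host.endswith(suffix):
        raise ValueError(f"not a Scoro Company base URL: {company_base_url!r}")
    return host[: -len(suffix)]

print(scoro_subdomain("https://testcompany.scoro.com/api/v2"))  # testcompany
```

Using the URL parser rather than string splitting keeps the extraction correct even if the URL carries a trailing path or query string.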
**ContactRelations**

The Insert operation for this object actually performs Upsert. The Returning option (Returning in Import and the RETURNING SQL keyword) for this object does not actually query the inserted data. Instead, it returns the values that Skyvia loads to Scoro.

**LaborCost**

The Insert operation for this object actually performs Upsert.

**TriggersAndActions**

This object is read-only in Skyvia.

#### Not Supported Objects

Skyvia does not support the following Scoro objects: LocalPriceLists, Notifications, Bookmarks.

#### Nested Objects

The connector includes objects with fields that store complex, structured data in JSON format. The Nested Objects mapping feature allows you to insert or update these nested values when configuring import. To replicate nested data into separate tables with the new replication runtime, select the Separate Tables option for Unwind Nested Objects. The table below lists objects with specific fields designed to store complex data structures:

| Object or Nested Object | Field | Nested Object |
| --- | --- | --- |
| BusinessEntity | Address | AddressType |
| BusinessEntity | ActiveCurrencies | ValueType |
| People | Address | AddressType |
| People | Addresses | AddressType |
| People | Mobile | ValueType |
| People | Phone | ValueType |
| People | Email | ValueType |
| People | EmailForInvoice | ValueType |
| People | Website | ValueType |
| People | Skype | ValueType |
| People | Fax | ValueType |
| People | Twitter | ValueType |
| People | LinkedIn | ValueType |
| People | Tags | ValueType |
| People | ContactUsers | ValueType |
| People | RelatedCompanies | RelatedCompaniesType |
| Companies | Address | AddressType |
| Companies | Addresses | AddressType |
| Companies | Mobile | ValueType |
| Companies | Phone | ValueType |
| Companies | Email | ValueType |
| Companies | EmailForInvoice | ValueType |
| Companies | Website | ValueType |
| Companies | Skype | ValueType |
| Companies | Fax | ValueType |
| Companies | Twitter | ValueType |
| Companies | LinkedIn | ValueType |
| Companies | Tags | ValueType |
| Companies | ContactUsers | ValueType |
| Contacts | Address | AddressType |
| Contacts | Addresses | AddressType |
| Contacts | Mobile | ValueType |
| Contacts | Phone | ValueType |
| Contacts | Email | ValueType |
| Contacts | EmailForInvoice | ValueType |
| Contacts | Website | ValueType |
| Contacts | Skype | ValueType |
| Contacts | Fax | ValueType |
| Contacts | Twitter | ValueType |
| Contacts | LinkedIn | ValueType |
| Contacts | Tags | ValueType |
| Contacts | ContactUsers | ValueType |
| Contacts | RelatedCompanies | RelatedCompaniesType |
| CalendarEvents | Users | UserType |
| CalendarEvents | Resources | ResourcesType |
| Tasks | RelatedUsers | ValueType |
| Tasks | RelatedUsersEmails | ValueType |
| Tasks | TimeEntries | TimeEntriesType |
| Invoices | QuoteId | ValueType |
| Invoices | OrderId | ValueType |
| Invoices | PrepaymentId | ValueType |
| Invoices | CreditedInvoices | ValueType |
| Invoices | Lines | InvoiceLinesType |
| Prepayments | Lines | PrepaymentLinesType |
| ScheduledInvoices | Lines | ScheduledInvoiceLinesType |
| ReceiptGroups | Receipts | ReceiptsType |
| Quotes | Lines | QuoteLinesType |
| Orders | Lines | OrderLinesType |
| Bills | Lines | BillLinesType |
| Expenses | Lines | ExpenseLinesType |
| PurchaseOrders | Lines | PurchaseOrderLinesType |
| Products | Pictures | ValueType |
| Products | OrderId | ValueType |
| Projects | Phases | PhasesType |
| Projects | Tags | ValueType |
| Projects | ProjectUsers | ProjectUsersType |
| Projects | ProjectAccounts | ValueType |
| Users | UserGroupsIds | ValueType |
| TriggersAndActions | Activities | ActivitiesType |
| TriggersAndActions | Actions | ActionsType |
| Roles | PriceLists | PriceListsType |
| Roles | Users | ValueType |
| TimeOffs | UsersDates | UsersDatesType |
| UsersType | UserGroupsIds | ValueType |
| InvoiceLinesType | Dates | KeyValueType |
| ScheduledInvoiceLinesType | Dates | KeyValueType |
| QuoteLinesType | Dates | KeyValueType |
| OrderLinesType | Dates | KeyValueType |
| UsersDatesType | Dates | DatesType |

#### Incremental Replication and Synchronization

Replication with Incremental Updates is supported for the following objects: Activities, BillComments, Bills, Bookings, CalendarComments, CalendarEvents, Categories, ClientProfiles, Companies, CompanyComments, Contacts, EventResources, Expenses, Files, FinanceAccounts, InvoiceComments, Invoices, OrderComments, Orders, People, PeopleComments, PrepaymentComments, Prepayments, Products, ProjectComments, Projects, PurchaseOrders, QuoteComments, Quotes, ReceiptGroups, Roles, ScheduledInvoices, Statuses, TaskComments, Tasks, TimeEntries, TimeOffs, Users.

Note that for the Files object, only newly created records are detected, because this object has only the CreatedDate field and doesn't have the UpdatedDate field.

Synchronization is typically available for objects that allow Incremental Updates as well as INSERT and UPDATE operations.
Thus, the following objects support synchronization: Bills , Bookings , CalendarEvents , ClientProfiles , Companies , Expenses , FinanceAccounts , Invoices , Orders , People , Prepayments , Products , Projects , PurchaseOrders , Quotes , ReceiptGroups , ScheduledInvoices , Statuses , TaskComments , Tasks , TimeEntries , TimeOffs . Filtering Specifics The Scoro API supports the following native filters: Object Fields and Operators AccountingObjects Id ( = ), ObjectSymbol ( = ), Name ( = ), ParentGroupName ( = ) Activities Name ( = ), ParentName ( = ), IsGroup ( = ), IsActive ( = ), UpdatedDate ( = , < , <= , > , >= ) BillComments UpdatedDate ( = , < , <= , > , >= ) Bills Id ( = ), DocumentNumber ( = ), IsChargeable ( = ), DateOfPayment ( = , < , <= , > , >= ), PaymentType ( = ), RecognitionDate ( = , < , <= , > , >= ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), VATSum ( = ), Sum ( = ), CompanyAddressCountry ( = ), CompanyAddressCounty ( = ), CompanyAddressMunicipality ( = ), CompanyAddressCity ( = ), CompanyAddressStreet ( = ), CompanyAddressZipCode ( = ), CompanyAddressFullAddress ( = ), Currency ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= ), Status ( = ), Description ( = ), AccountId ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_bill ( = ) Bookings Id ( = ) CalendarComments UpdatedDate ( = , < , <= , > , >= ) CalendarEvents Id ( = ), EventName ( = ), EventType ( = ), Address ( = ), StartDatetime ( = , < , <= , > , >= ), EndDatetime ( = , < , <= , > , >= ), FullDayEvent ( = ), Status ( = ), IsPersonal ( = ), ProjectName ( = ), CompanyName ( = ), DurationPlanned ( = ), BillableHours ( = ), DurationActual ( = ), OwnerEmail ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_date ( = ), fk_dateTime ( = , < , <= , > , >= ) ClientProfiles Id ( = ) Companies Id ( = ), Name ( = ), SearchName ( = ), BankAccount ( = ), 
Comments ( = ), VATNumber ( = ), Timezone ( = ), IsSupplier ( = ), IsClient ( = ), CatName ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_textComp ( = ) CompanyComments UpdatedDate ( = , < , <= , > , >= ) Contacts Id ( = ), Name ( = ), LastName ( = ), SearchName ( = ), BankAccount ( = ), Birthday ( = , < , <= , > , >= ), Position ( = ), Comments ( = ), VATNumber ( = ), Sex ( = ), Timezone ( = ), IsSupplier ( = ), IsClient ( = ), CatName ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ) Depot Id ( = ), DepotName ( = ), Comments ( = ) EventResources Id ( = ), ResourceName ( = ), ResourceColor ( = ), UpdatedDate ( = , < , <= , > , >= ) Expenses Id ( = ), Date ( = , < , <= , > , >= ), DocumentNumber ( = ), IsChargeable ( = ), IsReimbursable ( = ), Status ( = ), Sum ( = ), CompanyAddressCountry ( = ), CompanyAddressCounty ( = ), CompanyAddressMunicipality ( = ), CompanyAddressCity ( = ), CompanyAddressStreet ( = ), CompanyAddressZipCode ( = ), CompanyAddressFullAddress ( = ), Currency ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_Expe ( = ) Files Id ( = ), FileName ( = ), Hash ( = ), FileLocation ( = ), Repository ( = ), FileType ( = ), IsPrivate ( = ), IsPublic ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), DateTimeAccessed ( = , < , <= , > , >= ) FinanceAccounts Id ( = ), AccountSymbol ( = ), Name ( = ), ParentName ( = ), IsActive ( = ), IsSalesAccount ( = ), IsPurchasesAccount ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ) InvoiceComments UpdatedDate ( = , < , <= , > , >= ) Invoices Id ( = ), PaymentType ( = ), Fine ( = ), PrepaymentSum ( = ), Currency ( = ), PaymentFee ( = ), IsRoleBased ( = ), DocumentNumber ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), Sum ( = ), VATSum ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= 
), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_inv_only ( = ) LaborCost UserId ( = ), Date ( = ), HourCost ( = ) OrderComments UpdatedDate ( = , < , <= , > , >= ) Orders Id ( = ), ShipmentDate ( = , < , <= , > , >= ), IsRoleBased ( = ), DocumentNumber ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), VATSum ( = ), Sum ( = ), CompanyAddressCountry ( = ), CompanyAddressCounty ( = ), CompanyAddressMunicipality ( = ), CompanyAddressCity ( = ), CompanyAddressStreet ( = ), CompanyAddressZipcode ( = ), CompanyAddressFullAddress ( = ), Currency ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= ), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_order ( = ) PDFTemplates Name ( = ), Type ( = ), Status ( = ) People Id ( = ), Name ( = ), LastName ( = ), SearchName ( = ), BankAccount ( = ), Birthday ( = , < , <= , > , >= ), Position ( = ), Comments ( = ), Sex ( = ), Timezone ( = ), IsSupplier ( = ), IsClient ( = ), CatName ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), IsDeleted ( = ), fk_testCap ( = ), fk_text ( = ), fk_checkable_textbox ( = ), fk_checkbox ( = ), fk_dropdownList ( = ), fk_date ( = ), fk_duration ( = ), fk_dateTime ( = , < , <= , > , >= ), fk_number_total ( = ), fk_number_avr ( = ), fk_duration_total ( = ), fk_duration_avg ( = ), fk_duration_all ( = ), fk_text_html ( = ), fk_number_all ( = ), fk_number_range ( = ), fk_number_range_2 ( = ), fk_money ( = ), fk_numbering ( = ), fk_numbering_startVal ( = ), fk_imageId ( = ), fk_relation_comp ( = ), fk_relation_cont ( = ), fk_image1Id ( = ), fk_newNumbering ( = ) PeopleComments UpdatedDate ( = , < , <= , > , >= ) PermissionSets Id ( = ) PrepaymentComments UpdatedDate ( = , < , <= , > , >= ) Prepayments 
Id ( = ), Fine ( = ), PrepaymentPercent ( = ), IsRoleBased ( = ), DocumentNumber ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), Sum ( = ), VATSum ( = ), Currency ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= ), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_prep ( = ) PriceLists Id ( = ), Name ( = ), Currency ( = ), AccountId ( = ) ProductGroups Id ( = ), Name ( = ), Comments ( = ), PcsPerPackage ( = ), AccountName ( = ) Products Id ( = ), Code ( = ), Name ( = ), Price ( = ), BuyingPrice ( = ), Description ( = ), Description2 ( = ), Unit ( = ), Amount ( = ), Amount2 ( = ), Tag ( = ), Url ( = ), DefaultType ( = ), UseSupplier ( = ), IsActive ( = ), IsService ( = ), NoDiscounts ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_prod ( = ) ProjectComments UpdatedDate ( = , < , <= , > , >= ) ProjectPhases Type ( = ), Title ( = ), StartDate ( = , < , <= , > , >= ), EndDate ( = , < , <= , > , >= ), Ordering ( = ) Projects Id ( = ), DocumentNumber ( = ), ProjectName ( = ), IsPersonal ( = ), IsPrivate ( = ), Color ( = ), Status ( = ), StatusName ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= ), Duration ( = ), AccountId ( = ), BudgetType ( = ), IsDeleted ( = ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_proj ( = ) PurchaseOrders Id ( = ), DeliveryDate ( = , < , <= , > , >= ), ConfirmedDate ( = , < , <= , > , >= ), DocumentNumber ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), Sum ( = ), VATSum ( = ), Currency ( = ), Date ( = , < , <= , > , >= ), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_purch_ord ( = ) QuoteComments UpdatedDate ( = , < , <= , > , >= ) Quotes Id ( = ), ConfirmedDate ( = , < , <= , > , >= ), 
EstimatedClosingDate ( = , < , <= , > , >= ), VATSum ( = ), Sum ( = ), Currency ( = ), ShipmentDate ( = , < , <= , > , >= ), EstimatedDuration ( = ), DocumentNumber ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), Date ( = , < , <= , > , >= ), Deadline ( = , < , <= , > , >= ), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_quote ( = )
ReceiptAccounts: Name ( = ), Type ( = )
ReceiptGroups: Date ( = , < , <= , > , >= ), Sum ( = ), Currency ( = ), SalesDocType ( = ), IsDeleted ( = ), UpdatedDate ( = , < , <= , > , >= )
RelationTypes: RelationName ( = ), RelationSideOne ( = ), RelationSideTwo ( = ), RelationObjectType ( = )
RevenueForecastPeriods: Id ( = )
Roles: Id ( = ), IsActive ( = ), Name ( = )
RolesWithPrices: RoleName ( = ), PricelistName ( = ), SellingPrice ( = ), Currency ( = )
ScheduledInvoices: Id ( = ), Fine ( = ), DeadlineDays ( = ), GenerateType ( = ), CalculateLineDates ( = ), LastGenerationDate ( = ), MarkPaidFromPrepayment ( = ), AutoSend ( = ), Lang ( = ), MailFromName ( = ), MailFrom ( = ), Cc ( = ), Bcc ( = ), UseSignature ( = ), IsRoleBased ( = ), Discount ( = ), Discount2 ( = ), Discount3 ( = ), Sum ( = ), VATSum ( = ), Currency ( = ), Status ( = ), Description ( = ), AccountId ( = ), IsSent ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_inv_only ( = )
SentEmails: Datetime ( = , < , <= , > , >= ), SentFrom ( = ), SentTo ( = ), Type ( = ), Subject ( = ), SendType ( = ), IsSent ( = ), IsDraft ( = )
Statuses: StatusId ( = ), StatusName ( = ), Color ( = ), Module ( = ), UpdatedDate ( = , < , <= , > , >= )
TaskComments: UpdatedDate ( = , < , <= , > , >= )
Tasks: Id ( = ), EventName ( = ), IsCompleted ( = ), DateTimeCompleted ( = , < , <= , > , >= ), DurationActual ( = ), DateTimeDue ( = , < , <= , > , >= ), Status ( = ), StatusName ( = ), Sortorder ( = ), ActivityType ( = ), IsPersonal ( = ), ProjectName ( = ), DurationPlanned ( = ), BillableHours ( = ), OwnerEmail ( = ), IsDeleted ( = ), CreatedDate ( = , < , <= , > , >= ), UpdatedDate ( = , < , <= , > , >= ), fk_testCap ( = ), fk_number_total ( = ), fk_task ( = )
TimeEntries: Id ( = ), Description ( = ), EventType ( = ), StartDatetime ( = , < , <= , > , >= ), EndDatetime ( = , < , <= , > , >= ), Duration ( = ), BillableDuration ( = ), IsCompleted ( = ), IsConfirmed ( = ), IsBillable ( = ), CompletedDatetime ( = , < , <= , > , >= ), IsLocked ( = ), TimeEntryDate ( = , < , <= , > , >= ), IsSubmitted ( = ), IsDeleted ( = )
TimeOffs: Id ( = ), Type ( = )
TriggersAndActions: Id ( = ), Module ( = ), Name ( = ), Status ( = ), IsShared ( = )
UserGroups: Id ( = ), GroupName ( = ), Comment ( = ), IsHidden ( = )
Users: Id ( = ), UserName ( = ), FirstName ( = ), LastName ( = ), FullName ( = ), Initials ( = ), Email ( = ), Status ( = ), Birthday ( = , < , <= , > , >= ), Category ( = ), Position ( = ), CountryId ( = ), Gsm ( = ), TimeZone ( = ), UpdatedDate ( = , < , <= , > , >= )
VATCodes: Id ( = ), VATCode ( = ), Percent ( = ), VATName ( = ), IsSales ( = ), IsPurchases ( = ), IsActive ( = ), IsNotApplicable ( = )
DML Operations Support
INSERT, UPDATE, DELETE: Bills, Bookings, CalendarEvents, ClientProfiles, Companies, Expenses, FinanceAccounts, Invoices, Orders, People, Prepayments, Products, Projects, PurchaseOrders, Quotes, ReceiptGroups, ScheduledInvoices, Tasks, TimeEntries, TimeOffs, VATCodes
INSERT, DELETE: BillComments, CalendarComments, CompanyComments, Files, InvoiceComments, OrderComments, PeopleComments, PrepaymentComments, ProjectComments, QuoteComments, TaskComments
INSERT, UPDATE: PriceLists, ProductGroups
INSERT: ContactRelations, LaborCost
Supported Actions Skyvia supports all the common actions for Scoro."
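The operator lists above define, per object and field, which comparison operators Scoro accepts in a filter. As an illustration of how to read the table, here is a minimal client-side check in Python (illustrative only: Skyvia enforces this itself; the `SUPPORTED` sample and the helper name are assumptions, with entries copied from the table above):

```python
# A small sample of the Scoro filter-operator table above, keyed by
# (object, field) -> set of supported comparison operators.
SUPPORTED = {
    ("Tasks", "UpdatedDate"): {"=", "<", "<=", ">", ">="},
    ("Tasks", "Status"): {"="},
    ("Users", "Birthday"): {"=", "<", "<=", ">", ">="},
    ("Users", "Email"): {"="},
}

def is_filter_supported(obj: str, field: str, op: str) -> bool:
    """Return True if the object's field can be filtered with this operator."""
    return op in SUPPORTED.get((obj, field), set())

print(is_filter_supported("Tasks", "UpdatedDate", ">="))  # True
print(is_filter_supported("Tasks", "Status", ">"))        # False
```

As the table shows, date/time fields generally support range operators, while most other fields accept equality only.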
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/segment_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Segment [Segment](https://segment.com/) is a customer data platform (CDP) designed for collecting, cleaning, and controlling customer information. Data integration : Skyvia supports importing data to and from Segment, exporting Segment data to CSV files, replicating Segment data to relational databases, and synchronizing Segment data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Segment. Query : Skyvia Query supports Segment. Establishing Connection To create a connection to Segment, specify the API region and token. Getting Credentials API Token To locate the API Token, do the following: Log in to Segment. Click Settings -> Workspace settings . Go to the Access Management tab and switch to the Tokens tab. Click Create Token . Specify the description and assign the token permissions. Creating Connection To connect to Segment, perform the following steps: Select the API Region . The US Region value is selected by default. Paste the obtained API Token into the corresponding box in the Connection Editor. Additional Connection Parameters Suppress Extended Requests For the Users object, Segment API returns only part of the fields when querying multiple records. To query values of the missing Permissions field, Skyvia performs additional extended requests. Such API requests may be performed for each record of the object. However, this can decrease performance and significantly increase the number of API calls used. To reduce the number of API calls, select the Suppress Extended Requests checkbox. 
Connector Specifics Object Peculiarities InsertDestinationFunctions, DestinationFunctions, SourceFunctions The Code and Settings fields in the InsertDestinationFunctions, DestinationFunctions, and SourceFunctions objects are not displayed in the results when querying. These fields are used for import only. DailyPerSourceAPICallsUsage, DailyWorkspaceAPICallsUsage, DailyPerSourceMTUUsage, DailyWorkspaceMTUUsage When getting data from these objects, you must set the filter by the Period field. A valid value contains the month and year for which you want to get data. \nFor example, 2023-04 . Every record of these objects corresponds to one day of the month specified in the filter. If you set the current month, the records from the first day to the current date of the month are returned in the result. If you don’t set the filter by Period , you get an error: Error occurred while reading ‘DailyPerSourceAPICallsUsage’ object: “period” is required. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the TrackingPlans, InsertDestinationFunctions, DestinationFunctions, SourceFunctions , and SourceRegulations objects. Skyvia tracks only the new records for all the mentioned objects except the TrackingPlans , where both created and updated records are tracked. Skyvia supports Synchronization for the TrackingPlans, DestinationFunctions, InsertDestinationFunctions, and SourceFunctions objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE DestinationFunctions, Destinations, Groups, InsertDestinationFunctions, SourceFunctions, Sources, TrackingPlans, Transformations, Warehouses INSERT, DELETE Labels, SourceRegulations Supported Actions Skyvia supports all the common actions for Segment." 
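Since the Period filter for the usage objects must be a month-and-year string like 2023-04, a small helper can build it from a date. A minimal sketch (the function name is illustrative, not part of Skyvia or the Segment API):

```python
from datetime import date

def period_filter(d: date) -> str:
    """Format a date as the 'YYYY-MM' Period value the usage objects require."""
    return f"{d.year:04d}-{d.month:02d}"

# e.g. a filter such as Period = '2023-04' on DailyWorkspaceAPICallsUsage
print(period_filter(date(2023, 4, 15)))  # 2023-04
```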
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sendcloud_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Sendcloud [Sendcloud](https://www.sendcloud.com/) is a shipping platform that helps e-commerce businesses manage deliveries and provides a seamless shipping workflow. Data integration : Skyvia supports importing data to and from Sendcloud, exporting Sendcloud data to CSV files, replicating Sendcloud data to relational databases, and synchronizing Sendcloud data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Sendcloud. Query : Skyvia Query supports Sendcloud. Establishing Connection To create a connection to Sendcloud, specify the public and private keys. Getting Credentials To get the public and private keys, go to Sendcloud and do the following: Click the gear icon on the top and select Integrations . Click Connect under Sendcloud API . Name your integration, specify other integration settings if needed, and click Save . Copy the private and public key values and keep them in a safe place. The public and private keys are available only once, during creation. Creating Connection To connect to Sendcloud, paste the obtained keys to the corresponding boxes in the Connection Editor. Additional Connection Parameters Suppress Extended Requests For the Pickups object, Sendcloud API returns only part of the fields when querying multiple records. To query values of the missing fields, Skyvia performs additional extended requests. Such API requests may be performed for each record of the object. However, this can decrease performance and significantly increase the number of API calls used. 
The additional fields are the following: OBJECT FIELD Pickups Contract, City, Address, Address_2, PostalCode, Quantity, CompanyName, Name, Email, Reference, SpecialInstructions, Telephone, TotalWeight You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Connector Specifics Object Peculiarities Parcels To successfully import data to the Parcels object, you can use two mapping options: Specify the street name and the house number together in the Address field. Specify the street name in the Address field and the house number in the AddressDivided_HouseNumber field. The ParcelItems field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in import integrations to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. TrackingInformation To query data from this object, use the filter by the TrackingNumber field. The Statuses field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in import integrations to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. Shipments The ParcelItems field in the Shipments object stores complex structured data in JSON format. You can use our Nested Objects mapping feature in import integrations to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Parcels, Pickups, Returns, Integrations, and Shipments objects.\nIncremental Replication tracks only the newly created records for the Pickups and Returns objects. Skyvia supports Synchronization for the Parcels object. 
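For the second Parcels mapping option (street name in Address, house number in AddressDivided_HouseNumber), source data often stores both in a single string. A heuristic splitter, assuming the house number trails the street name (an illustrative sketch only; real-world addresses vary and may need locale-specific handling):

```python
import re

def split_address(address: str) -> tuple[str, str]:
    """Heuristically split a combined address into (street, house number).

    Assumes the house number (possibly with a suffix, e.g. '42b') comes last.
    """
    m = re.match(r"^(.*?)\s+(\d+\w*)$", address.strip())
    if not m:
        # No trailing number found; return the whole string as the street.
        return address.strip(), ""
    return m.group(1), m.group(2)

print(split_address("Insulindelaan 115"))  # ('Insulindelaan', '115')
print(split_address("Main Street 42b"))    # ('Main Street', '42b')
```

The two returned parts would then be mapped to the Address and AddressDivided_HouseNumber fields respectively.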
DML Operations Support Operation Object INSERT, UPDATE, DELETE Parcels INSERT Pickups UPDATE Integrations DELETE Shipments Supported Actions Skyvia supports all the common actions for Sendcloud." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sendgrid_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources SendGrid [SendGrid](https://sendgrid.com/) is a cloud-based platform intended for delivering transactional and marketing emails. Data integration : Skyvia supports importing data to and from SendGrid, exporting SendGrid data to CSV files, replicating SendGrid data to relational databases, and synchronizing SendGrid data with other cloud apps and relational databases. Backup : Skyvia Backup does not support SendGrid. Query : Skyvia Query supports SendGrid. Establishing Connection To create a connection to SendGrid, you need to specify an API Key. Getting Credentials To get your API Key, perform the following steps: Sign in to SendGrid. Click Settings in the menu on the left. In the drop-down list, click API Keys and copy your API Key. If you don\u2019t have an API Key in your SendGrid account yet, you need to create it and select API Key permissions. Creating Connection Paste the obtained API Key to the corresponding box in the Connection Editor. Additional Connection Parameters Use Custom Fields Select this checkbox to make custom fields available in Skyvia. Connector Specifics Object Peculiarities Contact Due to SendGrid API peculiarities, the INSERT and UPDATE operations work the following way. When you import a record with an existing Email value, the integration updates the corresponding target record. When you import a record with a new Email value, the integration inserts a new record to the target. It takes several minutes for changes to display in the Contact object. The ListIds field supports the INSERT operation and does not support the UPDATE operation. 
\nTo update the ListIds field, use the Insert action, map the existing value to the Email field, and map the ListId field. EmailStatistics Objects To get data from the EmailStatistics objects ( GlobalEmailStatistics, EmailStatisticsByCountry, EmailStatisticsByDeviceType , etc.), you must set the filter by StartDate with the = (equals) operator only. Optionally, you can add the filter by EndDate . The StartDate and EndDate fields are used for filtering only and do not display values when querying. Suppressions To import data to the Suppressions object, map the Email field to the values in the array format, for example: [\"test1@example.com\",\"test2@example.com\"] Custom Fields When you get the data from the CustomFields field of the Contacts object, the data is returned in the form of a JSON array with \"custom field name\":\"custom field value\" pairs. To successfully import data to custom fields, you must pass the custom field Id instead of the name, in the following format: \"custom field Id\":\"custom field value\". DML Operations Support Operation Objects INSERT, UPDATE, DELETE Alerts, API Keys, AuthenticatedDomains, BrandedLinks, Campaign, CancelSheduledSends, ContactLists, Contacts, CustomFields, Designs, Lists, Segments, Senders, SingleSends, Subusers, Templates, TransactionalTemplateVersions, UnsubscribeGroups, VerifiedSenders UPDATE, DELETE CampaignSchedules, Teammates INSERT, DELETE ContactCustomFields, Contacts, ContactSegments, PendingTeammates, ReverseDNS, Suppressions, Unsubscribes DELETE Blocks, Bounces, ContactListRecipients, InvalidEmails, SpamReports INSERT InboundParseSettings, SubuserMonitorSettings Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates selected for SendGrid objects, which have either the UpdatedDate or CreatedDate field. 
Thus, such objects as Alerts, Blocks, Bounces, Contacts, ContactListRecipients, Designs, DesignsPre-built, InvalidEmails, Senders, Segments,SingleSends, SpamReports, Suppressions, Templates, TransactionalTemplateVersions, Unsubscribes can be replicated with incremental updates. Skyvia supports Synchronization for such SendGrid objects as Alerts, Designs, Segments, Senders, SingleSends, Templates, TransactionalTemplateVersions . Stored Procedures Skyvia represents part of the supported SendGrid features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . If you want to omit the unnecessary parameters, specify the empty value '' for text or null for numbers. MailSend To send emails using SendGrid API, use the command call MailSend(:Personalizations, :FromEmail, :FromName, :TemplateId, :ReplyToEmail, :ReplyToName, :ReplyToList, :Subject, :Content, :Attachments, :Headers, :CustomArgs, :Categories, :SendAt, :BatchId, :Asm_GroupId, :Asm_GroupsToDisplay, :IPPoolName, :MailSettings_BypassListManagement_Enable, :MailSettings_BypassSpamManagement_Enable, :MailSettings_BypassBounceManagement_Enable, :MailSettings_BypassUnsubscribeManagement_Enable, :MailSettings_Footer_Enable, :MailSettings_Footer_Text, :MailSettings_Footer_HTML, :MailSettings_SandboxMode_Enable, :TrackingSettings_ClickTracking_Enable, :TrackingSettings_ClickTracking_EnableText, :TrackingSettings_OpenTracking_Enable, :TrackingSettings_OpenTracking_SubstitutionTag, :TrackingSettings_SubscriptionTracking_Enable, :TrackingSettings_SubscriptionTracking_Text, :TrackingSettings_SubscriptionTracking_HTML, :TrackingSettings_SubscriptionTracking_SubstitutionTag, :TrackingSettings_GAnalytics_Enable, :TrackingSettings_GAnalytics_UtmSource, :TrackingSettings_GAnalytics_UtmMedium, :TrackingSettings_GAnalytics_UtmTerm, :TrackingSettings_GAnalytics_UtmContent, :TrackingSettings_GAnalytics_UtmCampaign) 
PARAMETER NAME DESCRIPTION Personalizations An array of messages and their metadata. Each object within the personalizations array can be thought of as a mail envelope\u2014it defines who should receive an individual message and how that message should be handled. See the examples below . FromEmail The email address from which messages are sent. This address should be a verified sender in your Twilio SendGrid account. FromName A name or title associated with the email address such as \u201cSupport\u201d or \u201cAlex\u201d. TemplateId An email template ID. ReplyToEmail An email address to which a message is sent. ReplyToName A name or title associated with the email address such as \u201cAlex\u201d. ReplyToList An array of recipients to whom replies will be sent. Each object in this array must contain a recipient\u2019s email address. Subject The global or message level subject of your email. Content An array of objects, each containing a message body\u2019s content and MIME type. Attachments An array of objects where you can define any attachments to be included with the message. Headers A collection of JSON property name and property value pairs allowing you to specify handling instructions for your email. CustomArgs Values that are specific to the entire send that will be carried along with the email and its activity data. Categories An array of category names assigned to this message. SendAt Date and time that specifies when your email should be sent. BatchId An ID representing a batch of emails to be sent at the same time. Asm_GroupId The unsubscribe group to associate with this email. Asm_GroupsToDisplay An array containing the unsubscribe groups that you would like to be displayed to a recipient on the unsubscribe preferences page. IPPoolName The IP Pool that you would like to send this email from. 
MailSettings_BypassListManagement_Enable Allows you to bypass all unsubscribe groups and suppressions to ensure that the email is delivered to every single recipient. MailSettings_BypassSpamManagement_Enable Allows you to bypass the spam report list to ensure that the email is delivered to recipients. MailSettings_BypassBounceManagement_Enable Allows you to bypass the bounce list to ensure that the email is delivered to recipients. MailSettings_BypassUnsubscribeManagement_Enable Allows you to bypass the global unsubscribe list to ensure that the email is delivered to recipients. MailSettings_Footer_Enable Indicates if this setting is enabled. MailSettings_Footer_Text The plain text content of your footer. MailSettings_Footer_HTML The HTML content of your footer. MailSettings_SandboxMode_Enable Indicates if this setting is enabled. TrackingSettings_ClickTracking_Enable Indicates if this setting is enabled. TrackingSettings_ClickTracking_EnableText Indicates if this setting should be included in the text/plain portion of your email. TrackingSettings_OpenTracking_Enable Indicates if this setting is enabled. TrackingSettings_OpenTracking_SubstitutionTag Allows you to specify a substitution tag that you can insert in the body of your email at a location that you desire. TrackingSettings_SubscriptionTracking_Enable Indicates if this setting is enabled. TrackingSettings_SubscriptionTracking_Text Text to be appended to the email with the subscription tracking link. TrackingSettings_SubscriptionTracking_HTML HTML to be appended to the email with the subscription tracking link. TrackingSettings_SubscriptionTracking_SubstitutionTag A tag that will be replaced with the unsubscribe URL. TrackingSettings_GAnalytics_Enable Indicates if this setting is enabled. TrackingSettings_GAnalytics_UtmSource Name of the referrer source. TrackingSettings_GAnalytics_UtmMedium Name of the marketing medium. TrackingSettings_GAnalytics_UtmTerm Used to identify any paid keywords. 
TrackingSettings_GAnalytics_UtmContent Used to differentiate your campaign from advertisements. TrackingSettings_GAnalytics_UtmCampaign The name of the campaign. Personalizations value example with template: [{\n \"to\": [{\n \"email\": \"alex@example.com\",\n \"name\": \"Alex\"\n },\n {\n \"email\": \"bola@example.com\",\n \"name\": \"Bola\"\n }\n ],\n \"dynamic_template_data\": {\n \"subject\": \"Hello, Alex\",\n \"customer_name\": \"Alex\",\n \"confirmation_number\": \"123abc456def789hij0\"\n }\n}] Personalizations value example without template: [{\n \"to\": [{\n \"email\": \"alex@example.com\",\n \"name\": \"Alex\"\n },\n {\n \"email\": \"bola@example.com\",\n \"name\": \"Bola\"\n }\n ],\n \"cc\": [{\n \"email\": \"charlie@example.com\",\n \"name\": \"Charlie\"\n }],\n \"bcc\": [{\n \"email\": \"dana@example.com\",\n \"name\": \"Dana\"\n }]\n },\n {\n \"from\": {\n \"email\": \"sales@example.com\",\n \"name\": \"Example Sales Team\"\n },\n \"to\": [{\n \"email\": \"ira@example.com\",\n \"name\": \"Ira\"\n }],\n \"bcc\": [{\n \"email\": \"lee@example.com\",\n \"name\": \"Lee\"\n }]\n }\n] Command example: call MailSend(\n '[\n {\n \"to\": [\n {\n \"email\": \"alex@example.com\",\n \"name\": \"Alex\"\n },\n {\n \"email\": \"bola@example.com\",\n \"name\": \"Bola\"\n }\n ],\n \"dynamic_template_data\": {\n \"subject\": \"Hello, Alex\",\n \"customer_name\": \"Alex\",\n \"confirmation_number\": \"123abc456def789hij0\"\n }\n }\n ]', \n 'annak@rndwork.com', 'DecsCorp', \n 'd-7c8108b5268e4ac1a4b08ba0aef0a472', \n null, null, null, null, null, null, \n null, null, null, null, null, null, \n null, null, null, null, null, null, \n null, null, null, null, null, null, \n null, null, null, 
null, null, null, \n null, null, null, null, null, null\n) Supported Actions Skyvia supports all the common actions for SendGrid." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sendinblue_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Brevo [Brevo](https://www.brevo.com/) (formerly Sendinblue) is a cloud-based email marketing solution for businesses that want to automate email marketing campaigns with a limited budget. The solution includes email marketing, transactional email, marketing automation, customer relationship management, SMS marketing, and more. Data integration : Skyvia supports importing data to and from Brevo, exporting Brevo data to CSV files, replicating Brevo data to relational databases, and synchronizing Brevo data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Brevo. Query : Skyvia Query supports Brevo. Establishing Connection You need the API Key to create a connection with Brevo. Getting Credentials To get your API Key, perform the following steps: Sign in to Brevo. Click your user icon in the top right corner. In the drop-down list, click SMTP & API and switch to the API Keys tab. Click Generate a new API key . Enter the API key name and click Generate . Copy your API Key and store it in a safe place. Note that this is the only time you can see it, and you will need to generate a new one if you lose it. Creating Connection Enter the obtained API Key to the corresponding box in the Connection Editor. Additional Connection Parameters Suppress Extended Requests The Statistics_TransacAttributes field in the Contacts object and the DynamicList, CreatedDate, StartDate, EndDate, CampaignStats fields in the ContactLists object are additionally processed with extended requests. Enable Suppress Extended Requests to reduce the number of API calls and increase the speed of processing the Contacts and ContactLists objects. 
Note that enabling this parameter will disable the incremental replication of the ContactLists object. Use Custom Fields The Use Custom Fields parameter defines whether Skyvia processes the custom fields of the Contacts object. Enable it to insert and update custom field values. Brevo has pre-created Email, Lastname, Firstname, and SMS custom fields. If you disable Use Custom Fields , you get access only to the fields listed above, available with the Attribute_ prefix. If you enable Use Custom Fields , you can work with all custom fields under their regular names. Connector Specifics Object Peculiarities Read-only objects The following Brevo objects are read-only: TransactionalEmailActivity_AggregatedPerDay, SMSActivity_AggregatedPerDay, SMSActivity_UnaggregatedEvents , and TransactionalEmailActivity_UnaggregatedEvents . BlockedDomains The BlockedDomains object has a single Domain field. It contains the list of all the blocked domains in JSON array format: [\"example1.com\", \"example2.com\"] . To extend the list of blocked domains, use the INSERT operation. To remove a domain from the list, use the DELETE operation. For example, your list of blocked domains looks like [\"example1.com\", \"example2.com\"] . Suppose you want to block one more domain with the example3.com value. Use the INSERT operation in the integration and map the Domain field to the example3.com value. As a result, your list looks like [\"example1.com\", \"example2.com\", \"example3.com\"] . BlockedContacts The BlockedContacts object does not have an Id field. The primary key is the Email field. Contacts_Attributes The Contacts_Attributes object does not have an Id field. The primary keys are the Name and Category fields. Contacts To insert data to this object, map either the Email field or the SMS nested attribute in the Attributes field. For example, {\"SMS\":\"012345678910\"} . 
The ListIds field stores the identifiers of all the lists the contact belongs to in the JSON array format: [1,2,3] . To extend the array of IDs, use the UPDATE operation in the integration and map the ListIds field to the value you want to add. \nFor example, the ListIds field contains the value [1,2] . Suppose you want to add 3 to the array. To add an item to the list, use the UPDATE operation and map the ListIds field to the [3] value. As a result, the ListIds value becomes [1,2,3] . To delete the specific IDs from the array, map the UnlinkListIds field to the value you want to remove.\nFor example, if you want to remove [5] from the [3,4,5] ListIds array, map the UnlinkListIds to [5] . As a result, the ListIds value becomes [3,4] . Activity Objects The TransactionalEmailActivity_AggregatedPerDay, SMSActivity_AggregatedPerDay, SMSActivity_UnaggregatedEvents , and TransactionalEmailActivity_UnaggregatedEvents objects are read-only. When you query the TransactionalEmailActivity_AggregatedPerDay and SMSActivity_AggregatedPerDay objects, the result contains data for the last ten days by default. To get data for another period, set the filters by the startDate and endDate fields. The period specified in the filters must be at most 30 days. When you query the SMSActivity_UnaggregatedEvents and TransactionalEmailActivity_UnaggregatedEvents objects, the result contains data for the last 30 days by default. To get data for another period, set the filters by the startDate and endDate fields. DML Operations Support Skyvia supports the following DML operations for Brevo. 
Operation Object INSERT, UPDATE, DELETE Companies, ChildrenAccounts, Contacts, ContactsAttributes, Contacts_Folders, Contacts_Lists, Deals, Email_Campaigns, Senders, SMS_Campaigns, Tasks, Templates, WebhooksMarketing, WebhooksTransactional, WhatsAppCampaigns INSERT, DELETE Domains, BlockedDomains INSERT, UPDATE CouponCollections INSERT SenderDomains_ChildrenAccounts DELETE BlockedContacts Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following Brevo objects: Companies, Contacts, Contacts_Lists, ContactSegments, Email_Campaigns, SMS_Campaigns, Template, WebhooksMarketing, WebhooksTransactional, WhatsAppCampaigns . Incremental Replication tracks only the new records for the Deals, Tasks, Categories , and CouponCollections objects. Skyvia supports Synchronization for the following Brevo objects: Companies, Email_Campaigns, SMS_Campaigns, Template, WebhooksMarketing, WebhooksTransactional, WhatsAppCampaigns . Stored Procedures Skyvia represents part of the supported Brevo features as stored procedures.\nYou can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . SendEmailCampaignImmediately The following command sends an email campaign immediately based on the campaign identifier. call SendEmailCampaignImmediately(:campaignId) SendEmailCampaignTestList The following command sends the email campaign to the testing list of emails. call SendEmailCampaignTestList(:campaignId,:emailTo) PARAMETER NAME DESCRIPTION CampaignId Campaign identifier EmailTo List of campaign testing emails in the array format. For example, [\"email1@mail.com\",\"email2@mail.com\"] UpdateEmailCampaignStatus Use the following command to update the campaign status. 
call UpdateEmailCampaignStatus(:campaignId,:status) PARAMETER NAME DESCRIPTION CampaignId Campaign identifier Status The valid status values are suspended, archive, darchive, sent, queued, replicate, replicateTemplate, draft . The replicateTemplate status is available only for template type campaigns. SendSmsCampaignImmediately The following command sends an SMS campaign immediately based on the campaign identifier. call SendSmsCampaignImmediately(:campaignId) SendSmsCampaignTestList The following command sends the SMS campaign to a test phone number. call SendSmsCampaignTestList(:campaignId,:phoneNumber) PARAMETER NAME DESCRIPTION CampaignId Campaign identifier PhoneNumber Mobile number of the recipient with the country code. This number must belong to one of your contacts in the Brevo account and must not be blacklisted. For example, 33689965433 UpdateSmsCampaignStatus Use the following command to update the campaign status. call UpdateSmsCampaignStatus(:campaignId,:status) PARAMETER NAME DESCRIPTION CampaignId Campaign identifier Status The valid status values are suspended, archive, darchive, sent, queued, replicate, replicateTemplate, draft . The replicateTemplate status is available only for template type campaigns. Supported Actions Skyvia supports all the common actions for Brevo." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sendpulse_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources SendPulse [SendPulse](https://sendpulse.com) is a cloud-based multi-channel marketing platform for sending marketing messages via email, SMS, Viber, etc. Data integration : Skyvia supports importing data to and from SendPulse, exporting SendPulse data to CSV files, and replicating SendPulse data to relational databases. Backup : Skyvia Backup supports SendPulse backup. Query : Skyvia Query supports SendPulse. 
Establishing Connection To create a connection to SendPulse, you need to specify the Client ID and Client Secret . Getting Credentials To get your ID and Secret for SendPulse REST API, perform the following steps: Sign in to SendPulse. Click your user icon in the top right corner. In the opened menu, click Account Settings . Switch to the API tab. The ID and Secret values are generated automatically; copy them and paste them into the Skyvia Connection Editor. Creating Connection To connect to SendPulse, enter the obtained Client ID and Client Secret in the corresponding boxes. Suppress Extended Requests SendPulse API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests to query values of the missing fields. Skyvia performs such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Campaigns Body To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities ListEmails The ListEmails object requires mapping the ListId and Emails fields. \nSeparate users may have different sets of custom fields, represented as Variables field values in JSON format. _ListEmails For user convenience, Skyvia adds the dynamic _ListEmails objects to our SendPulse connector. Such objects clone the ListEmails object with one difference. They represent the Variables field nested items as separate fields and support the INSERT, UPDATE, and DELETE operations. For example, you have a CustomList1 list with the mailing address and home phone fields. \nThese fields are nested into the Variables field in JSON format. You are unable to modify the Variables field and update values. 
For user convenience, our SendPulse connector automatically adds the dynamic CustomList1_ListEmails object with\nthe separate mailing address and home phone fields. The Variables field is absent. SendPulse supports the following types for such fields. SendPulse Type DbType String String, 255 characters Number Double Date Date If you map null to a field, the updated values remain unchanged. SendPulse API may process Skyvia requests to insert records into the _ListEmails objects with a delay. If you get the 'Return result is empty' error after inserting a new email record, try to insert this record again. SMS_PhoneNumberInfo Use the filter by the PhoneNumber field to query data from this object. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following SendPulse objects: MailingLists, Templates, SMS_Campaigns, ViberCampaigns . Skyvia supports Synchronization for the MailingLists and ViberCampaigns objects. Skyvia detects only new records for the mentioned objects, as they have only creation timestamp fields and no modification timestamp fields. DML Operations Support SendPulse objects support the following DML operations. Operation Object INSERT, UPDATE, DELETE MailingLists, _ListEmails INSERT, DELETE Campaigns, ListEmails, Senders, SMS_Campaigns INSERT PushCampaigns, SMTPEmails, Templates DELETE Emails Supported Actions and Actions Specifics Skyvia supports all the common actions for SendPulse." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/servicenow_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources ServiceNow [ServiceNow](https://www.servicenow.com/) is an intelligent platform for digital transformation that enables companies to digitize any process across their organization with pre-built and customizable workflow solutions. 
Data integration : Skyvia supports importing data to and from ServiceNow, exporting ServiceNow data to CSV files, replicating ServiceNow data to relational databases, and synchronizing ServiceNow data with other cloud apps and relational databases. Backup : Skyvia Backup does not support ServiceNow. Query : Skyvia Query supports ServiceNow. Establishing Connection To create a connection to ServiceNow, specify your ServiceNow subdomain and select the authentication type. The authentication type determines what credentials you need to specify. Getting Credentials Subdomain The subdomain is a part of your ServiceNow instance URL. For example, for the URL https://dev123456.service-now.com/, the subdomain is dev123456 . Client ID and Client Secret for OAuth Authentication Go to [ServiceNow](https://www.servicenow.com/) . Make sure the OAuth 2.0 plugin is active in ServiceNow. It is active by default on new and upgraded instances. If it is not active, activate it manually. See the details on how to do it [here](https://docs.servicenow.com/bundle/vancouver-platform-security/page/administer/security/task/t_ActivateOAuth.html) . Ensure the com.snc.platform.security.oauth.is.active property is set to true . See the information on how to set up this property [here](https://docs.servicenow.com/bundle/vancouver-platform-security/page/administer/security/task/t_SetTheOAuthProperty.html) . Navigate to All -> System OAuth -> Application Registry and then click New . Select Create an OAuth API endpoint for external clients . In the opened form, specify the following information: Name — a unique name that identifies the application. Client ID — auto-generated by the instance. Client Secret — auto-generated by the instance. Refresh Token Lifespan — you may leave the default value 8,640,000 seconds (100 days). Access Token Lifespan — you may leave the default value 1800 seconds (30 minutes).
Redirect URL — https://app.skyvia.com/oauthcallback/servicenow Copy the Client ID and Client Secret generated after submitting the form. Creating Connection To connect to ServiceNow, perform the following steps. Enter your ServiceNow subdomain. Select the authentication type. Enter the credentials depending on the authentication type. Basic Authentication Enter your ServiceNow username and password into the corresponding boxes in the Connection Editor. OAuth 2.0 Authentication Enter the obtained Client ID and Client Secret into the corresponding boxes. Additional Connection Parameters Metadata Cache You can specify the time after which the Metadata Cache expires. Connector Specifics Object Peculiarities There are more than 2500 objects in the ServiceNow connector. ServiceNow has two types of objects: standard objects and the Attachments object. Working with Attachments Users can add file attachments to the standard objects in ServiceNow. All the attachments are stored in the Attachments object and related to the standard objects they belong to. The related object stores the sys_id of the corresponding Attachments record in its attachment field. To add an attachment using Skyvia, perform the following actions. Insert a record into the Attachments object first. Get the sys_id of the newly inserted record. Add the obtained sys_id of the newly created Attachments record to the attachment field of the related object. Incremental Replication and Synchronization Skyvia supports Synchronization and Replication with Incremental Updates for ServiceNow objects that contain the CreatedDate or UpdatedDate fields. DML Operations Support Skyvia supports the following DML operations for ServiceNow.
Operation Object INSERT, UPDATE, DELETE Standard Objects INSERT, DELETE Attachments Supported Actions Skyvia supports all the common actions for ServiceNow." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sharepointlists_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "SharePoint Lists [SharePoint Lists](https://support.microsoft.com/en-us/office/what-is-a-list-in-microsoft-365-93262a88-20ad-4edc-8410-b6909b2f59a5) is a connector for working with lists in SharePoint. SharePoint is a cloud-based content management and collaboration platform developed by Microsoft. Data integration : Skyvia supports importing data to and from SharePoint Lists, exporting SharePoint Lists data to CSV files, replicating SharePoint Lists data to relational databases, and synchronizing SharePoint Lists data with other cloud apps and relational databases. Backup : Skyvia Backup does not support SharePoint Lists. Query : Skyvia Query supports SharePoint Lists. Establishing Connection To create a connection to SharePoint Lists, specify your domain, enter your Microsoft credentials, and select your SharePoint site. Creating Connection To connect to SharePoint Lists, perform the following steps: Enter your SharePoint domain and click Sign In with Microsoft . Enter your email. Specify your password. Give Skyvia your permission to access your data. Select your site from the dropdown list. Additional Connection Parameters Metadata cache You can specify the time when the [Metadata Cache](https://docs.skyvia.com/connections/metadata-cache.html) expires. Connector Specifics Data Structure The main objects in the connector represent the lists in the SharePoint UI. If a list supports attachments, Skyvia creates a separate Attachments object for every such list. If a list supports comments, Skyvia creates a separate Comments object for every such list.
The Lists and Users objects are read-only. Filtering Specifics SharePoint Lists API supports the following native filters.
Object Field Operator
Attachments FileName, ItemId =
Comments CommentId, ItemId =
Lists, Users Id =
List objects Id =
List objects CreatedDate, UpdatedDate = , != , < , <= , > , >= , IS NULL, IS NOT NULL
The dynamic fields support the following filters.
Field Type Operator
Attachments, YesNo = , !=
Number, Currency, Counter, Date and Time (Include Time checkbox enabled) = , != , < , <= , > , >= , IS NULL, IS NOT NULL
LookupId, PersonId, Choice, OutcomeChoice = , != , IS NULL, IS NOT NULL
Text, Date and Time (Include Time checkbox disabled) IS NULL, IS NOT NULL
Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Nested Objects The following fields store complex structured data in JSON format. Object Field Nested Object objects Lookup fields with allowed multiple selection LookupType Managed Metadata (Taxonomy) fields with allowed multiple selection TaxonomyType Person fields with allowed multiple selection UserType Comments Mentions CommentMentionsType You can use our Nested Objects mapping feature in the Import integrations to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Lists, list, and Comments objects. Incremental replication detects only new records for the Lists object. Skyvia supports Synchronization for the objects that support the INSERT and UPDATE operations and contain either CreatedDate or UpdatedDate fields.
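The operator support listed above can be captured in a small lookup that decides whether a given filter can be applied natively server-side. This is a sketch based on the dynamic-field table; the field-type names are simplified, and the helper itself is hypothetical, not part of Skyvia:

```python
# Operators the SharePoint Lists API accepts natively per dynamic field type
# (condensed from the table above; a simplified subset for illustration)
NATIVE_OPS = {
    "YesNo": {"=", "!="},
    "Number": {"=", "!=", "<", "<=", ">", ">=", "IS NULL", "IS NOT NULL"},
    "Choice": {"=", "!=", "IS NULL", "IS NOT NULL"},
    "Text": {"IS NULL", "IS NOT NULL"},
}

def is_native_filter(field_type: str, op: str) -> bool:
    """True if the filter can be pushed down to the API, saving API calls."""
    return op in NATIVE_OPS.get(field_type, set())
```

A filter that fails this check still works, but it is evaluated after fetching the records, which is exactly the case the documentation warns may increase API call usage.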
DML Operations Support Operation Object INSERT, UPDATE, DELETE List objects, Comments objects INSERT, DELETE Attachments objects Supported Actions Skyvia supports all the common actions for SharePoint Lists." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/shippo_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Shippo [Shippo](https://goshippo.com/) is a multi-carrier shipping software provider for e-commerce businesses that offers discounted shipping rates and the ability to track packages, schedule pickups, and print shipping labels. Data integration : Skyvia supports importing data to and from Shippo, exporting Shippo data to CSV files, and replicating Shippo data to relational databases. Backup : Skyvia Backup does not support Shippo. Query : Skyvia Query supports Shippo. Establishing Connection To create a connection, sign in with Shippo. Creating Connection To connect to Shippo, perform the following steps: Click Sign In with Shippo . Click the Log in text at the bottom of the form. Enter your email and password, and click Log in . Connector Specifics Skyvia supports only INSERT operations for Shippo objects. Object Peculiarities TrackingStatus To get data from this object, apply a filter on both the Carrier and TrackingNumber fields. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for Addresses, Batches, CustomDeclarations, CustomItems, Manifests, Parcels, Refunds, Shipments, and Transactions . Skyvia does not support Synchronization for Shippo. DML Operations Support Operation Object INSERT Addresses, Batches, CarrierAccounts, CustomDeclarations, CustomItems, Manifests, Orders, Parcels, Refunds, Shipments, TrackingStatus, Transactions Supported Actions Skyvia supports all the common actions for Shippo."
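Because TrackingStatus can only be read with a filter on both Carrier and TrackingNumber, a client could validate its filter set before querying. This is a hypothetical helper for illustration, not part of Skyvia or the Shippo API:

```python
# Fields the TrackingStatus object requires in every query filter
REQUIRED_TRACKING_FILTERS = {"Carrier", "TrackingNumber"}

def validate_tracking_filter(filters: dict) -> None:
    """Raise if a TrackingStatus query is missing a required filter field."""
    missing = REQUIRED_TRACKING_FILTERS - filters.keys()
    if missing:
        raise ValueError(f"TrackingStatus requires filters on: {sorted(missing)}")

# A complete filter set passes silently
validate_tracking_filter({"Carrier": "usps", "TrackingNumber": "9205590164917312751089"})
```

Failing fast like this is cheaper than issuing a query the API will reject.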
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/shipstation_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "ShipStation [ShipStation](https://www.shipstation.com) is a cloud-based shipping platform that helps e-commerce businesses import, manage, and ship their orders. Data integration : Skyvia supports importing data to and from ShipStation, exporting ShipStation data to CSV files, replicating ShipStation data to relational databases, and synchronizing ShipStation data with other cloud apps and relational databases. Backup : Skyvia Backup supports ShipStation backup. Query : Skyvia Query supports ShipStation. Establishing Connection To create a connection to ShipStation, specify an API Key and API Secret. Getting Credentials To locate the API Key and API Secret, do the following: Go to your ShipStation account. Click the gear icon in the upper left corner. Click API Settings -> Generate API Keys . Copy the generated API Key and API Secret. Creating Connection To connect to ShipStation, enter the obtained API Key and API Secret into the corresponding boxes in the Connection Editor. Connector Specifics Filters Support Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage.
ShipStation API supports filtering for the following objects and fields.
Orders
Field Operator
OrderDate, PaymentDate, CreatedDate, UpdatedDate > , >= , < , <=
OrderStatus, CustomerUsername =
OrderItems
Field Operator
OrderDate, OrderPaymentDate, OrderCreatedDate, OrderUpdatedDate > , >= , < , <=
OrderStatus =
Shipments
Field Operator
OrderId, CarrierCode, TrackingNumber =
ShipDate <= , >=
CreatedDate > , >= , < , <=
ShipmentItems
Field Operator
ShipDate <= , >=
ShipmentCreatedDate > , >= , < , <=
Packages
Field Operator
CarrierCode =
Services
Field Operator
CarrierCode =
Customers
Field Operator
State, CountryCode =
StoreRefreshStatus
Field Operator
StoreId =
Object Peculiarities Orders The Items field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in import to enable this feature. If you map the Items field to a constant or expression in your integration, adjust the mapping according to the following format.
[
  {
    "OrderItemId": 102165090,
    "LineItemKey": "48510350416394",
    "Sku": "B0006ZZGZI",
    "Name": "Texas Instruments TI-83 Plus Programmable Graphing Calculator",
    "ImageUrl": "http://ecx.images-amazon.com/images/I/21S41356K0L.jpg",
    "Weight_Value": 8,
    "Weight_Units": "ounces",
    "Quantity": 1,
    "UnitPrice": 45,
    "TaxAmount": 0,
    "ShippingAmount": 4.95,
    "Adjustment": false,
    "CreatedDate": "2013-08-01T07:24:09Z",
    "UpdatedDate": "2013-08-01T07:24:09Z"
  }
]
OrderItems The Orders object has a complex structure. Its Items field contains a nested array storing order details. For user convenience, we created a separate OrderItems object storing the Items field content in a user-friendly format.
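When mapping the Items field to a constant or expression, the value must be a JSON array in the format shown above. A sketch of producing such a value programmatically; the item values here are illustrative, and only a subset of the documented keys is shown:

```python
import json

def items_payload(items: list[dict]) -> str:
    """Serialize order items into the JSON array format the Items field expects."""
    return json.dumps(items)

payload = items_payload([{
    "Sku": "B0006ZZGZI",
    "Name": "TI-83 Plus Graphing Calculator",
    "Quantity": 1,
    "UnitPrice": 45,
    "Weight_Value": 8,
    "Weight_Units": "ounces",
}])
```

Serializing with a JSON library rather than string concatenation keeps quoting and escaping correct in the mapped expression.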
Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following ShipStation objects: Customers, Fulfillments, OrderItems, Orders, Products, Shipments, Stores, Warehouses . Skyvia can track only new records for the Fulfillments, Shipments , and Warehouses objects. Skyvia supports Synchronization for the Orders and Warehouses objects. Skyvia can track only new records for the Warehouses object. Supported Actions Skyvia supports all the common actions for ShipStation." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Shopify [Shopify](https://www.shopify.co.uk/) is an e-commerce platform that allows anyone to sell online, at their retail location, and everywhere in between. Data integration : Skyvia supports importing data to and from Shopify, exporting Shopify data to CSV files, replicating Shopify data to relational databases, and synchronizing Shopify data with other cloud apps and relational databases. Backup : Skyvia Backup supports Shopify backup. Query : Skyvia Query supports Shopify. Establishing Connection While creating a connection , you have to select the authentication method (OAuth 2.0 or Admin API Access Token). Getting Access Token for Admin API Access Token Authentication An access token is not required for OAuth authentication. It is required only if you want to use the Admin API Access Token authentication. To obtain the access token, you need to create a [custom app](https://help.shopify.com/en/manual/apps/app-types/custom-apps) . Sign in to your admin Shopify account and perform the following steps: In the menu on the left, click Apps . Click App and sales channel settings . On the top of the page, click Develop apps . On the top of the page, click Create app . Specify the custom app name, for example, SkyviaCustomApp .
Select the App developer account and click Create app . Click Configure Admin API scopes . Select checkboxes for the scopes you want to work with. You may select all the scopes. Below the list of scopes, click Save . Click the API Credentials tab. On this tab, under Access tokens , click Install app . Click Install to confirm the app installation. Click Reveal token once . Copy the revealed token and store it somewhere safe. Creating Connection OAuth 2.0 Authentication To establish a connection using OAuth 2.0 authentication, specify your store URL and sign in with Shopify using your credentials: Specify your Shopify store full domain name, not just the store part of it. For example, mystore-300.myshopify.com or https://mystore-300.myshopify.com . Click Sign In with Shopify . In the opened window, enter your email and click the Next button. Afterwards, enter your password and click the Log in button. To get started, you will need to verify your email. Click the Verify email address button. Admin API Access Token Authentication To establish a connection using Admin API Access Token authentication, perform the following steps: Specify your Shopify store full domain name, not just the store part of it. For example, mystore-300.myshopify.com or https://mystore-300.myshopify.com . In the Authentication box, select Admin API Access Token . Paste your custom app Access Token . Connector Specifics Object Relations Some of the Shopify objects can be accessed only via their parent objects. For example, to query CustomerAddresses , the Shopify API requires the ID of the corresponding Customer record. To get ProductVariants records, the Shopify API requires the ID of the corresponding Product record. Skyvia does not require the ID of the parent object from the user. If you don't specify the IDs of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record.
This allows querying child objects without knowing their parents, but it takes much time and consumes many API calls: at least one API call for every parent object record (for example, for every product). Thus, working with ProductVariants can be very slow. Because of this, it is strongly recommended to use filters on the parent object fields when querying data from such child objects. This reduces the number of parent object records for which child object data must be queried. Complex Structured Objects Some of the Shopify tables can store complex structured data. Skyvia represents such data as JSON fields, for example, the Lines field. You can import data directly to the nested fields of such tables. To do this, you can use our Nested Objects mapping feature in Import. Just select the Nested Objects checkbox in the integration to enable the feature. Then, in the mapping settings, you can map the fields of the nested items. Supported Actions Skyvia supports all the common actions for Shopify." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/slack_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Slack [Slack](https://slack.com/intl/en-ua/?eu_nc=1) is a cloud team collaboration service with a number of tools for online calls, file sharing, etc. Data integration : Skyvia supports importing data to and from Slack, exporting Slack data to CSV files, and replicating Slack data to relational databases. Backup : Skyvia Backup does not support Slack. Query : Skyvia Query supports Slack. Establishing Connection To create a connection to Slack, just sign in with your Slack account. Creating Connection To connect to Slack, perform the following steps. Click Sign In with Slack in the Connection Editor. In the opened window, enter your Slack workspace name and click the Continue button. Enter your Slack credentials and click Sign in .
Skyvia will request permission to access your Slack workspace. Click Allow to give permission. Suppress Extended Requests For some objects, the Slack API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests, one for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD DirectChannels LastRead, UnreadCount, UnreadCountDisplay, IsOpen, Latest_ClientMessageId, Latest_Type, Latest_Text, Latest_UserId, Latest_Timestamp, Latest_Team, Latest_Blocks To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Slack objects: DirectChannels, FileComments, Files, GroupChannels, MultipartyDirectChannels, Pins_DirectChannels, Pins_GroupChannels, Pins_MultipartyDirectChannels, Pins_PublicChannels, PublicChannels, RemoteFiles, Stars, Users, DirectChannelsHistory, DirectChannelsReplies, GroupChannelsHistory, GroupChannelsReplies, MultipartyDirectChannelsHistory, MultipartyDirectChannelsReplies, PublicChannelsHistory, PublicChannelsReplies . Skyvia doesn't support Synchronization for Slack objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE DirectChannelsHistory, GroupChannelsHistory, MultipartyDirectChannelsHistory, PublicChannelsHistory INSERT, DELETE Reminders INSERT GroupChannels, PublicChannels DELETE FileComments, Files UPDATE GroupChannelsPurposes, GroupChannelsTopics, PublicChannelsPurposes, PublicChannelsTopics Supported Actions Skyvia supports all the common actions and the custom SendMessage action for Slack."
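The cost of extended requests is easy to estimate: querying N records of such an object takes roughly one listing request plus one extended request per record, while suppressing extended requests keeps it at the listing request alone, at the price of empty extended fields. A back-of-the-envelope sketch under that simplifying assumption (real pagination adds more listing calls):

```python
def estimated_api_calls(n_records: int, suppress_extended: bool) -> int:
    """Rough API-call count for querying one object's records.

    Assumes a single listing call; extended requests add one call per record.
    """
    base = 1  # the listing request
    return base if suppress_extended else base + n_records
```

For a 100-record channel list, that is 101 calls without suppression versus 1 with it, which is why the checkbox matters on metered API plans.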
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/slack_connections/send-message-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "SendMessage This action sends a message to a channel. Action Settings Setting Description ConversationId An encoded ID or channel name that represents a channel, private group, or IM channel to send the message to. See [Slack documentation](https://api.slack.com/methods/chat.postMessage#channels) for more details. Text Message content ThreadId Provide another message's ts value to make this message a reply. Avoid using a reply's ts value; use its parent instead. (Optional) Action Parameters SendMessage action parameters correspond to the fields of the target object. You must map at least the required parameters. Result The message is posted to the specified channel." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/smartsheet_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Smartsheet Smartsheet is a cloud service for work management and collaboration. Data integration : Skyvia supports importing data to and from Smartsheet, exporting Smartsheet data to CSV files, replicating Smartsheet data to relational databases, and synchronizing Smartsheet data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Smartsheet. Query : Skyvia Query supports Smartsheet. Establishing Connection When [creating a connection](https://docs.skyvia.com/connections/#creating-connections) , specify the data center and log in with Smartsheet. Creating Connection To connect to Smartsheet, perform the following steps: Select the Data Center. Click Sign In with Smartsheet in the connection editor in Skyvia. Enter your Smartsheet credentials or select another available service to log in.
Give Skyvia permission to perform actions in Smartsheet. Click Create Connection when the token is generated. Additional Connection Parameters Typed Sheet Cells This parameter defines what data types to assign to the fields in the *_Rows objects. If the Typed Sheet Cells parameter is enabled, Skyvia assigns the actual data types to such fields. If it is disabled, Skyvia assigns the String data type to such fields, regardless of their actual data types. Suppress Extended Requests Smartsheet API returns only part of the fields for some objects when querying multiple records. To query the values of the additional fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The list of such fields is the following:
OBJECT FIELD
Workspaces Favorite, Folders, Permalink, Reports, Sheets, Sights, Templates
Sheets Columns, Rows, CrossSheetReferences, DependenciesEnabled, Discussions, EffectiveAttachmentOptions, Favorite, GanttEnabled, HasSummaryFields, IsMultiPicklistEnabled, Source_Id, Source_Type, Summary_Fields, TotalRowCount, Version, Workspace_AccessLevel, WorkspaceId, Workspace_Favorite, Workspace_Folders, Workspace_Name, Workspace_Permalink, Workspace_Reports, Workspace_Sheets, Workspace_Sights, Workspace_Templates
SentUpdateRequests ColumnIds, RowIds, IncludeAttachments, IncludeDiscussions
Groups Data
Report Scope_Sheets, Scope_Workspaces, SourceSheets, ProofsId, Proofs_OriginalId, Proofs_Name, Proofs_Type, Proofs_DocumentType, Proofs_ProofRequestUrl, Proofs_Version, Proofs_LastUpdatedAt, Proofs_LastUpdatedBy_Name, Proofs_LastUpdatedBy_Email, Proofs_IsCompleted, AccessLevel, Attachments, Columns, CrossSheetReferences, DependenciesEnabled, Discussions, EffectiveAttachmentOptions, Favorite, GanttEnabled, HasSummaryFields, Permalink, ProjectSettings_LengthOfDay, ProjectSettings_NonWorkingDays, ProjectSettings_WorkingDays, ReadOnly, ResourceManagementEnabled,
Rows, ShowParentRowsForFilters, Source_Id, Source_Type, Summary_Fields, TotalRowCount, UserPermissions_SummaryPermissions, UserSettings_CriticalPathEnabled, UserSettings_DisplaySummaryTasks, Version, Workspace_AccessLevel, WorkspaceId, Workspace_Favorite, Workspace_Folders, Workspace_Name, Workspace_Permalink, Workspace_Reports, Workspace_Sheets, Workspace_Sights, Workspace_Templates
UpdateRequests RowIds, ColumnIds, IncludeAttachments, IncludeDiscussions
To reduce the number of API calls, select the Suppress Extended Requests checkbox. However, please note that some of the fields in such objects will not be available in Skyvia (they will return empty values) even if they have values in Smartsheet, because the API does not return them without extended requests. Connector Specifics Object Peculiarities Sheets To insert data into the Sheets object, you must map either the FromId or the Columns field in addition to the required Name field. This object has a complex structure. Its Columns and Rows fields store data in JSON format. For user convenience, Skyvia represents data from the Sheets records as separate objects with the *_Rows suffix in their names. *_Rows Objects Each object with the *_Rows suffix in its name corresponds to a record in the Sheets object. For example, you have a sheet with the name MySheet . This sheet has a corresponding record in the Sheets object and a corresponding object MySheet_Rows . The *_Rows objects support the INSERT, UPDATE, and DELETE operations. Skyvia supports Incremental Replication and Synchronization for such objects. Fields in the *_Rows objects may have a predefined data type and contain text at the same time. This may cause an error when selecting data from such fields. For user convenience, Skyvia allows you to define the data types of such fields in the connector using the Typed Sheet Cells connection parameter. This option determines Skyvia's behavior when data doesn't match the predefined type.
Skyvia's behavior depends on the type of field.
Smartsheet Data Type | Data type when Typed Sheet Cells is enabled | Data type when Typed Sheet Cells is disabled
Text/Number, Latest Comment, Duration, Predecessors, Dropdown List, and Symbols | Actual data type | Actual data type
Date, Abstract, Datetime, and Checkbox | Actual data type | String, max length 4000 characters
Contact List (Multiple) | String | String, max length 4000 characters
Contact List (Single) | String | String
For example, suppose a Status column is a checkbox (boolean) column but contains text data. If you enable the Typed Sheet Cells option, Skyvia assigns the boolean type to the Status field. If you try to select data from the table, you get an error "Cannot cast the value 'John Smith' in the 'Status' field and the row number 6 to the Boolean data type. Please correct the value or turn off the 'Typed Sheet Cells' option". To avoid this error, disable the Typed Sheet Cells option. Then, Skyvia assigns the String data type to such fields, regardless of the actual data type, and allows you to select data. Reports The Start field is used for filtering and does not return any data when selecting from it, so it is not necessary to add this field to the selection. If you set a filter by the Start field, the result will contain data modified on the specified date or later. UpdateRequests and SentUpdateRequests The planned update requests are stored in the UpdateRequests object. After the update requests are sent, they are displayed in the SentUpdateRequests object. When inserting into the UpdateRequests object, specify the Schedule_StartAt field value in the DateTime format with the time value 00:00:00 . For example, 11/23/2022 00:00:00 . Events When selecting data from the Since and StreamPosition fields, the returned result for these fields is empty. When querying data from the Events object, set the filter by one of these fields, not by both.
Only the = (equal to) operator is supported. Columns There is a limitation for the INSERT/UPDATE operations against the Columns object: when mapping the Validation field by the Constant, use the True value only. Do not map this field if the record must have the False value. Rows To import data to this object, you must map the SheetId and Cells fields. The Cells value must be mapped in the following format: {"ColumnId":111111,"Value":"Example for Value"} , where ColumnId is the ID of the specific field from the Columns object. Obtaining SheetId and ColumnId You can execute the following command to obtain the SheetId and ColumnId for further use in integrations:
SELECT Sheets_Columns_SheetId.Name, Columns.SheetId, Columns.Id, Columns.Title
FROM Columns
LEFT OUTER JOIN Sheets AS Sheets_Columns_SheetId ON Columns.SheetId = Sheets_Columns_SheetId.Id
WHERE (Sheets_Columns_SheetId.Name = 'New Sheet UI')
Stored Procedures Skyvia represents part of the supported Smartsheet features as stored procedures. You can [call a stored procedure](https://docs.skyvia.com/supported-sql-for-cloud-sources/call-statements-and-stored-procedures.html) , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . DeactivateUser To deactivate a user, use the command call DeactivateUser(:userId) The specified user cannot access Smartsheet after the procedure execution. ReactivateUser To reactivate the specified user, use the command call ReactivateUser(:userId) The user becomes able to access Smartsheet after the procedure execution.
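Putting the Rows import format above together: once a ColumnId has been obtained, the Cells mapping value can be composed programmatically. The helper and the column ID below are made up for illustration; only the output format comes from the documentation:

```python
import json

def cells_value(column_id: int, value: str) -> str:
    """Build a Cells mapping value in the {"ColumnId":...,"Value":...} format."""
    return json.dumps({"ColumnId": column_id, "Value": value})

cell = cells_value(111111, "Example for Value")
```

Generating the value this way avoids hand-writing JSON with mismatched quotes in the mapping expression.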
MakeAlternateEmailPrimary To make an existing alternate email the primary email for the specified user, use the command call MakeAlternateEmailPrimary(:userId,:alternateEmailId) CreateRowDiscussion To create a new discussion on a row, use the command call CreateRowDiscussion(:sheetId,:rowId,:text) AddGroupMembers The following command allows adding a new member to a group by email: call AddGroupMembers(:groupId,:email) DeleteGroupMembers To delete a member from a group, use the command call DeleteGroupMembers(:groupId,:userId) DML Operations Operations Objects INSERT, UPDATE, DELETE Columns, DiscussionComments, Groups, Sheets, UpdateRequests, Users, Webhooks, Workspaces INSERT, DELETE AlternateEmails, Discussions, Favorites, Rows UPDATE, DELETE AutomationRules, Dashboards, Proofs, SheetShares, WorkspaceShares INSERT CrossSheetReferences, HomeFolders, ProofDiscussions, WorkspaceFolders, WorkspaceShares DELETE Attachments Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the following Smartsheet objects: WorkspaceShares, Sheets, SheetShares, SheetSummaryFields, Attachments, UpdateRequests, Groups, ProofAttachments, Reports . Incremental Replication detects only the new records for the Attachments, ProofAttachments, and Sheets objects. Skyvia supports Synchronization for the UpdateRequests and Groups objects. Supported Actions Skyvia supports all the common actions for Smartsheet." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/smartsuite_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "SmartSuite [SmartSuite](https://www.smartsuite.com/) is a collaborative work management platform designed to help teams plan, track, and manage workflows, including projects, ongoing processes, and daily tasks at all organizational levels.
Data integration : Skyvia supports importing data to and from SmartSuite, exporting SmartSuite data to CSV files, replicating SmartSuite data to relational databases, and synchronizing SmartSuite data with other cloud apps and relational databases. Backup : Skyvia Backup does not support SmartSuite. Query : Skyvia Query supports SmartSuite. Establishing Connection To create a connection to SmartSuite, specify the Workspace Id and API Key. Getting Credentials Workspace Id Log in to your [SmartSuite workspace](https://app.smartsuite.com/authenticate/login) . Once logged in, look at the URL displayed in your browser. Your Workspace Id is the first eight characters directly after https://app.smartsuite.com/ . For example, for the URL https://app.smartsuite.com/abc01cba/home , the Workspace Id is abc01cba . API Key To locate the API Key, do the following. Log in to your [SmartSuite workspace](https://app.smartsuite.com/authenticate/login) . Click the user icon. Select API Key from the menu. Scroll to the API Key section. Click Generate API Key . Copy the generated API Key. Creating Connection To connect to SmartSuite, enter the obtained Workspace Id and API Key into the corresponding boxes. Connector Specifics Data Structure Object Types SmartSuite contains objects of the following types: Solution objects . These objects represent the solution tables in your SmartSuite workspace. Skyvia represents them as separate objects with solution names as a prefix and table names as a suffix in their names. For example, you have a solution named MySolution in the SmartSuite UI with Products , Vendors , and Clients tables. Skyvia displays them as the MySolution_Products, MySolution_Vendors, and MySolution_Clients objects. System solution objects . There are two system solution objects, System_Members and System_Teams . These objects are read-only. Files objects . If a solution table contains fields with files, Skyvia creates a separate object for every such field.
Such objects include the Content field, which stores binary content. Comments objects . Skyvia creates a comments object for each solution table. Such objects have the *Comments suffix in their names. Static objects . There are two static objects: Solutions and Applications . Object Peculiarities Tag Fields SmartSuite has two types of tags: tags defined in the field settings and tags added to a cell. If a tag is defined in the settings, SmartSuite API returns such a tag in the readable format when querying. If a tag was added directly to a cell, SmartSuite API returns its ID when querying. For example, in [\"668e47c40feb5984c22c836a\",\"predefinedtag\"] , the first tag was added to a cell, and the second one was defined in the field settings. To get all tags in the readable format, define them all in the field settings. Nested Objects The advanced types of SmartSuite fields store complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in Import to enable this feature. These types are listed below. SmartSuite Field Type Comment IP Address In case Multiple entries allowed Checklist Checklist_Items field in the solution objects Color Picker In case Multiple entries allowed Reactions The Reactions field in the *Comments objects Dependency Dependency_Predecessor , Dependency_Successor Files and Images File fields Phone In case Multiple entries allowed Time Tracking Log TimeTrackLogs field Votes Vote_Votes field Assign To, Linked Record In case Multiple entries allowed. Also for the Followers field in the *Comments object, the Members, Teams fields in the Solutions and Applications objects. Email, Link In case Multiple entries allowed Filtering Specifics SmartSuite API supports the following native filters: Files and *Comments objects support filtering by the RecordId field with the = operator. 
The Solutions and Applications objects support filtering by the Id field with the = operator. The solution objects support the following filters: SmartSuite Type Operator Date IS NULL , IS NOT NULL Link, Email, Color Picker = , IS NULL , IS NOT NULL (when a Single entry is allowed) Number Slider, Rating, Percent Complete, Count, Auto Number, Duration, Currency = , > , >= , < , <= , IS NULL , IS NOT NULL Record Id = Status, Single Select = , IS NULL , IS NOT NULL Text, Title, Text Area IS NULL , IS NOT NULL Assign To, Linked Record = , IN , IS NULL , IS NOT NULL Yes/No = Link, Email, ColorPicker = Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but this may increase API call usage. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all SmartSuite objects. However, it detects only new records for the Applications object. Skyvia supports Synchronization for objects that support the INSERT and UPDATE operations and have fields with creation or modification timestamps. DML Operations Support SmartSuite objects support the following DML operations. Operation Object INSERT, UPDATE, DELETE Solution objects INSERT System_Teams , Files objects, Comments objects Supported Actions Skyvia supports all the common actions for SmartSuite." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/square_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Square [Square](https://squareup.com/us/en) is a cloud-based POS system that allows processing payment transactions, updating and securely storing sales history in the cloud. Square helps you to make sales anywhere from any device, managing your business remotely. The system also integrates with a wide range of partner cloud apps and is the right online solution for small, medium and large businesses. 
Data integration : Skyvia supports importing data to and from Square, exporting Square data to CSV files, replicating Square data to relational databases and synchronizing Square data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Square. Query : Skyvia Query supports Square. Establishing Connection To create a connection to Square, you need to select the environment type to use ( Production or Sandbox ) and log in with Square. Creating Connection To connect to Square, perform the following steps: In the Connection Editor, select Environment \u2014 Production or Sandbox. Click Sign In with Square . In the opened window, enter your Square credentials and click Sign in . Connector Specifics Orders To increase performance when selecting all the orders, use a filter by the LocationId field. Orders may have a complex structure. The JSON fields LineItems, Fulfillments, Discounts, Taxes, ServiceCharges store the nested objects. To modify the records of the nested objects inside the Orders object, you can use our Nested Objects mapping feature in Import. For this, you need to select the Nested Objects checkbox in the integration. Then, in the mapping settings, you can map the fields of order line items, fulfillments, discounts, etc. 
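To illustrate what such nested data looks like, here is a minimal sketch of flattening a LineItems value into one row per line item, which is conceptually what Nested Objects mapping lets you work with. The field and value names inside the nested JSON are hypothetical examples, not the exact Square schema.

```python
# Hypothetical illustration: an Orders record whose LineItems field stores
# nested objects as JSON text, flattened into one row per line item.
import json

order = {
    "Id": "ORDER123",          # hypothetical order id
    "LocationId": "LOC01",     # hypothetical location id
    "LineItems": json.dumps([  # nested objects stored as a JSON array
        {"name": "Espresso", "quantity": "2",
         "base_price_money": {"amount": 300, "currency": "USD"}},
        {"name": "Bagel", "quantity": "1",
         "base_price_money": {"amount": 250, "currency": "USD"}},
    ]),
}

def flatten_line_items(record):
    """Expand the JSON LineItems field into flat rows keyed by the order id."""
    rows = []
    for item in json.loads(record["LineItems"]):
        rows.append({
            "OrderId": record["Id"],
            "Name": item["name"],
            "Quantity": int(item["quantity"]),
            "Amount": item["base_price_money"]["amount"],
        })
    return rows

rows = flatten_line_items(order)
print(rows[0]["Name"], rows[0]["Amount"])  # Espresso 300
```

In a mapping UI the same idea appears as nested field paths (e.g. a line item's name or price) mapped to flat target columns.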
DML Operations Support Operation Objects INSERT, UPDATE, DELETE BreakTypes, CatalogCategory, CatalogCustomAttributeDefinition, CatalogDiscount, CatalogItem, CatalogItemOption, CatalogItemOptionValue, CatalogItemVariation, CatalogMeasurementUnit, CatalogModifierList, CatalogPricingRule, CatalogProductSet, CatalogQuickAmountsSettings, CatalogSubscriptionPlan, CatalogTax, CatalogTimePeriod, CustomerGroups, Customers, Invoices INSERT, UPDATE Locations, Orders INSERT, DELETE DisputeEvidence INSERT DeviceCodes, GiftCards, GiftCardActivities, Payments, Refunds DELETE CatalogImage UPDATE WorkweekConfigs Incremental Replication and Synchronization Skyvia supports Incremental Replication for all Square objects, except BankAccounts, DisputeEvidence, InventoryChanges,InventoryCount, Merchants, Settlements, TeamMemberBookingProfiles . Skyvia supports Synchronization for all Square objects, except Locations, BankAccounts, DisputeEvidence, GiftCards, GiftCardActivities, InventoryChanges, InventoryCount, Merchants, Settlements, TeamMemberBookingProfiles . Stored Procedures LinkCustomerToGiftCard The following command links a gift card to a customer. call LinkCustomerToGiftCard(:gift_card_id, :customer_id) For example, call LinkCustomerToGiftCard('gftc:516e7cfb350a457e92b6221938eabe35', '042ZBE9R64X3VAPRGC33D02RA8') UnlinkCustomerToGiftCard The following command unlinks a gift card from a customer. call UnlinkCustomerToGiftCard(:gift_card_id, :customer_id) For example, call UnlinkCustomerToGiftCard('gftc:516e7cfb350a457e92b6221938eabe35', '042ZBE9R64X3VAPRGC33D02RA8') Supported Actions Skyvia supports all the common actions for Square." 
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/starshipit_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Starshipit [Starshipit](https://starshipit.com/) is a provider of integrated and automated shipping and tracking solutions for various online businesses. Data integration : Skyvia supports importing data to and from Starshipit, exporting Starshipit data to CSV files and replicating Starshipit data to relational databases. Backup : Skyvia Backup does not support Starshipit. Query : Skyvia Query supports Starshipit. Establishing Connection To create a connection to Starshipit, you need to specify API Key and Subscription Key. Getting Credentials To obtain an API Key and Subscription Key, do the following: Go to Starshipit. Click Settings in the left menu and select API . Copy the generated API Key and Subscription Key values. Creating Connection To connect to Starshipit, enter the obtained API Key and Subscription Key into the corresponding boxes in the Connection Editor. Connector Specifics Object Peculiarities The UnshippedOrders object returns only unshipped orders created or updated within the last 24 hours. To get records for a different period, you need to filter data by the SinceOrderDate and SinceLastUpdated fields. These fields are used only for filtering; they are empty when querying data from this object. The conditions with the equals , greater than and greater than or equals operators work correctly. The ShippedOrders object returns only shipped orders updated within the last 24 hours. To get records for a different period, you need to filter data by the SinceLastUpdated field. This field is used only for filtering; it is empty when querying data from this object. The conditions with the equals , greater than and greater than or equals operators work correctly on this field. 
The PrintedOrders and UnmanifestedOrders objects return only orders created within the last 24 hours. To get records for a different period, you need to filter data by the SinceCreatedDate field. This field is used only for filtering; it is empty when querying data from this object. The conditions with the equals , greater than and greater than or equals operators work correctly on this field. The Orders object returns orders with the New status. Incremental Replication and Synchronization Skyvia doesn\u2019t support Synchronization and Replication with Incremental Updates for Starshipit objects. DML Operations Support Skyvia supports the following import operations for Starshipit objects. Operation Object INSERT, UPDATE, DELETE Orders INSERT, UPDATE Addresses INSERT Manifests Supported Actions Skyvia supports all the common actions for Starshipit." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/streak_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Streak [Streak](https://www.streak.com/) is a CRM platform for Gmail that adds features to send personalized emails and manage your conversations. Data integration : Skyvia supports importing data to and from Streak, exporting Streak data to CSV files, replicating Streak data to relational databases, and synchronizing Streak data with other cloud apps and relational databases. Backup : Skyvia Backup supports Streak. Query : Skyvia Query supports Streak. Establishing Connection To create a connection to Streak, you need to obtain an API key. Getting Credentials Click the Streak button. Click Integrations in the Streak menu. Click Custom integrations . Click Create new key . Copy the generated key. Creating Connection To connect to Streak, specify the API key. Connector Specifics Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for objects that have UpdatedDate or CreatedDate fields. 
Skyvia supports Synchronization for Boxes and Pipelines objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Boxes, Comments, Pipelines, PipelineFields, Stages, Meetings INSERT, UPDATE PipelineWebhooks, Tasks, TeamWebhooks INSERT, DELETE BoxEmailFilters INSERT Threads UPDATE BoxFields Supported Actions Skyvia supports all the common actions for Streak." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/stripe_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Stripe [Stripe](https://stripe.com) is a cloud platform that allows private individuals and companies to accept payments over the Internet. Data integration : Skyvia supports importing data to and from Stripe, exporting Stripe data to CSV files, replicating Stripe data to relational databases, and synchronizing Stripe data with other cloud apps and relational databases. Backup : Skyvia Backup supports Stripe backup. Query : Skyvia Query supports Stripe. Establishing Connection To create a connection with Stripe, specify the API Key or Secret Key. Getting Credentials API Key You can use different types of API keys for Stripe connection. Publishable Key \u2014 an automatically generated REST API key for connecting to Stripe. Secret Key - API Key used for full access to Stripe objects. Restricted Key - API Key used for custom access configuration. More details on Stripe API keys are available [here](https://stripe.com/docs/keys) . To get the API key, perform the following steps: Go to Stripe. Click Developers and move to API Keys tab. Generate an API key of the needed type or copy the existing one. Creating Connection Enter the obtained API Key to the corresponding box to connect to Stripe. Additional Connection Parameters Connected Account Id Enter this parameter if you want to retrieve data from the connected account. The connected account ID value should look like acct_XXXXXXXXXXXXXXXX . 
When the connected account ID is specified, all the queries and data-related operations in Skyvia are executed against the connected account instead of the main account. Connector Specifics Object Peculiarities Accounts There are three types of Stripe accounts: standard, custom, and express.\nStripe API supports INSERT operation for standard and custom accounts and the UPDATE operation for custom accounts. Plans Stripe does not autogenerate the Id field. You must map it to a constant or a unique field to insert data into this object. Charges To import data to this object, you must map the CustomerId field. It has to be a customer that has at least one card. Transfers To import data to this object, you must specify an ID from the Account object as the DestinationId value.\nMake sure your Stripe account balance covers the transfer amount. Incremental Replication and Synchronization Skyvia fully supports the Incremental Replication for the Products, Orders , and ReportTypes objects. \nIncremental Replication detects only new records for the Accounts, AllSubscriptionItems, AllSubscriptions, ApplicationFeeRefunds, ApplicationFees, BalanceTransactions, Charges, Coupons, CreditNotes, CustomerBalanceTransactions, Customers, CustomerTaxIds, Disputes, EarlyFraudWarnings, Events, FileLinks, Files, InvoiceItems, Invoices, PaymentIntents, PaymentMethods, Payouts, Persons, Plans, Prices, Quotes, Refunds, Reports, Returns, Reviews, SetupIntents, SKUs, SubscriptionItems, Subscriptions, SubscriptionSchedules, TaxRates, Top-ups, TransferReversals, Transfers, ValueListItems, ValueLists , and WebhookEndpoints objects. Skyvia fully supports Synchronization for the Products object. 
Synchronization tracks only new records for the Accounts, AllSubscriptionItems, AllSubscriptions, Charges, CreditNotes, CustomerBalanceTransactions, Customers, FileLinks, Invoices, InvoiceItems, PaymentIntents, Persons, Plans, Prices, Quotes, SetupIntents, SKUs, SubscriptionItems, Subscriptions, SubscriptionSchedules, TaxRates, Top-ups, Transfers , and ValueLists objects. DML Operations Support Skyvia supports the following DML operations for Stripe objects. Operation Object INSERT, UPDATE, DELETE Accounts, AllSubscriptionItems, BankAccounts, Cards, Customers, ExternalAccounts_BankAccounts, ExternalAccounts_Cards, InvoiceItems, Locations, Persons, Plans, Products, Readers, Skus, SubscriptionItems, ValuesLists INSERT, UPDATE AllSubscriptions, Charges, CreditNotes, CustomerBalanceTransactions, FileLinks, Invoices, PaymentIntents, Prices, Quotes, Subscriptions, TaxRates, Top-ups, Transfers, SubscriptionSchedules INSERT, DELETE Coupons, CustomerTaxIds, ValueListItems INSERT ApplicationFeeRefunds, PaymentMethods, Payouts, Refunds, Reports, SetupIntents, TransferReversals, UsageRecords UPDATE Disputes Supported Actions and Actions Specifics Skyvia supports all the common actions for Stripe." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/sugar_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources SugarCRM [SugarCRM](https://www.sugarcrm.com/) is a cloud CRM that includes sales-force automation, marketing campaigns, customer support, collaboration, reporting and other features. Skyvia supports SugarCRM 7.0 or higher. Skyvia does not support SugarCRM Community edition. Data integration : Skyvia supports importing data to and from SugarCRM, exporting SugarCRM data to CSV files, replicating SugarCRM data to relational databases, and synchronizing SugarCRM data with other cloud apps and relational databases. Backup : Skyvia Backup supports SugarCRM. 
Query : Skyvia Query supports SugarCRM. Establishing Connection To create a connection to SugarCRM, you need to specify the URL, username and password. Creating Connection To connect to SugarCRM, specify these parameters: URL \u2014 The URL address of your CRM. User \u2014 The username to log in with. Password \u2014 The password to log in with. Additional Connection Parameters Platform A SugarCRM platform. This parameter is optional. Read more about [SugarCRM platforms](https://support.sugarcrm.com/documentation/unsupported_versions/sugar_13.3/sugar_developer_guide_13.3/architecture/extensions/platforms/) . Metadata Cache The period for which metadata will be stored. Learn more about [Metadata Cache](https://docs.skyvia.com/connections/metadata-cache.html) . Use Email Relationship By default, Skyvia will manage emails through the following SugarCRM fields: email1 , email_addresses_non_primary , invalid_email , email_opt_out , and email_and_name1 .\nTo change this behavior, select the Use Email Relationship checkbox. Skyvia will then put all email-related data into a single field, storing emails and their settings as a JSON array. We recommend using this approach instead of the default one. An example of a value for this field: [\n {\n \"email_address\": \"jordan_sanders@gmail.com\",\n \"primary_address\": true\n },\n {\n \"email_address\": \"jordan_sanders@devart.com\",\n \"primary_address\": false,\n \"reply_to_address\": true,\n \"invalid_email\": false,\n \"opt_out\": true\n }\n] Allow Untrusted Certificate Skyvia uses the secure HTTPS protocol to connect to the SugarCRM service. If you need to connect to an insecure SugarCRM server using an untrusted certificate, select the Allow Untrusted Certificate checkbox. Allow Certificate Name Mismatch If you need to connect to a SugarCRM server even if there is a mismatch between the SSL certificate and the server name, select the Allow Certificate Name Mismatch checkbox. 
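As an illustration of consuming the email-relationship format, here is a minimal sketch that parses such a JSON array and picks the primary address. The helper function is hypothetical, not part of Skyvia or SugarCRM; the field names follow the example shown in the documentation.

```python
# Hypothetical helper: given the JSON array stored in the email field when
# Use Email Relationship is enabled, return the primary email address.
import json

email_field = json.dumps([
    {"email_address": "jordan_sanders@gmail.com", "primary_address": True},
    {"email_address": "jordan_sanders@devart.com", "primary_address": False,
     "reply_to_address": True, "invalid_email": False, "opt_out": True},
])

def primary_email(value):
    """Return the address flagged as primary, or None if there is none."""
    for entry in json.loads(value):
        if entry.get("primary_address"):
            return entry["email_address"]
    return None

print(primary_email(email_field))  # jordan_sanders@gmail.com
```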
Connector Specifics Account Privileges Some entities are only available when you connect via a SugarCRM account with administrative privileges. Many-to-many relations Many-to-many relations are not supported. Filtering Specifics Fields with these types do not support native filtering: Decimal, MultiEnum, Relate, FullName, Encrypt. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for objects that have UpdatedDate or CreatedDate fields. Skyvia supports Synchronization for objects that support the INSERT and UPDATE operations and have the UpdatedDate or CreatedDate fields. Supported Actions Skyvia supports all the common actions for SugarCRM." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/surveymonkey_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources SurveyMonkey [SurveyMonkey](https://www.surveymonkey.com) is the world\u2019s leading cloud-based platform for online surveys. It helps create surveys, collect responses and analyze survey results. Data integration : Skyvia supports data import to SurveyMonkey, data export from SurveyMonkey to a file storage service or FTP, to CSV file(s), data replication from SurveyMonkey to relational databases and data synchronization from/to SurveyMonkey. Backup : Skyvia Backup supports SurveyMonkey backup. Query : Skyvia Query supports SurveyMonkey. Establishing Connection To create a connection to SurveyMonkey, you need to sign in with your SurveyMonkey credentials. Creating Connection To connect to SurveyMonkey, do the following. Click Sign In with SurveyMonkey in the Connection Editor. In the opened window, enter your SurveyMonkey credentials and click Log In . Authorize Skyvia to access your SurveyMonkey data. Additional Connection Parameters Suppress Extended Requests SurveyMonkey API returns only part of the fields for some objects when querying multiple records. 
Skyvia performs additional extended requests to query the values of the missing fields. Skyvia performs such an API request for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD SurveyPageQuestions Family, Subtype SurveyCollectors Type CollectorMessages RecipientStatus To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Read-only Objects The following SurveyMonkey objects are read-only: BenchmarkBundles, Benchmarks, CollectorMessageStats, ContactFields, QuestionBank, Roles, SurveyCategories, SurveyDetails, SurveyLanguages, SurveyResponsesAnswers, SurveyRollups, SurveyTemplates, SurveyTrends, TeamMembers, Teams, User. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Surveys, SurveyResponses, SurveyCollectors, Organizations, OrganizationMembers, SurveyResponsesAnswers objects. Skyvia supports Synchronization for the Surveys, SurveyResponses, SurveyCollectors, Organizations, OrganizationMembers objects. DML Operations Support SurveyMonkey objects support the following DML operations. Operation Object INSERT, UPDATE, DELETE CollectorMessages, ContactLists, Contacts, OrganizationMembers, Organizations, SurveyCollectors, SurveyPageQuestions, SurveyPages, SurveyResponses, Surveys, Webhooks INSERT, DELETE CollectorRecipients, OrganizationShares, SurveyTranslations INSERT ListContacts, SurveyFolders Supported Actions and Actions Specifics Skyvia supports all the common actions for SurveyMonkey." 
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/survicate_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Survicate [Survicate](https://www.survicate.com/) is a customer feedback automation solution that supports email, in-product, web and mobile surveys. Data integration : Skyvia supports importing data to Survicate, exporting Survicate data to CSV files, and replicating Survicate data to relational databases. Backup : Skyvia Backup does not support Survicate. Query : Skyvia Query supports Survicate. Establishing Connection To create a connection to Survicate, you need to obtain an API Key. Getting Credentials To obtain a Survicate API Key, perform the following steps: Sign in to your Survicate account. Click the gear button in the top right corner and select Survey Settings . Click the Access Keys tab. Copy your API key. Creating Connection To connect to Survicate, specify the API key. Additional Connection Parameters Suppress Extended Requests For some objects, Survicate API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such an API request can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Surveys AuthorName , AuthorEmail , Folder , FirstResponseAt , LastResponseAt The SurveyPoints , SurveyQuestions , Responses and ResponseAnswers objects get the data from the Surveys table, which uses Extended Requests. To query the SurveyPoints , Responses and ResponseAnswers objects, Survicate API requires the ID of the corresponding Surveys record. Skyvia queries all the Surveys records first, takes their IDs, and then queries child object records for each Surveys record. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. 
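A back-of-the-envelope sketch of the trade-off described above. The formula is an illustration of the per-record extended-request pattern, not an exact accounting of Skyvia's internals; the page size is an assumed value.

```python
# Rough estimate: with extended requests enabled, each record of an affected
# object may cost one extra API call on top of the paged list requests.
def estimated_api_calls(records, page_size=100, extended_requests=True):
    """One list call per page, plus one call per record when extended
    requests are performed. page_size=100 is an assumed default."""
    pages = -(-records // page_size)  # ceiling division
    return pages + (records if extended_requests else 0)

# Querying 500 records of an affected object:
print(estimated_api_calls(500))                           # 505
print(estimated_api_calls(500, extended_requests=False))  # 5
```

The gap grows linearly with record count, which is why suppressing extended requests matters most on large objects.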
Connector Specifics Skyvia cannot write data to Survicate; its data is read-only. Object Peculiarities API v1 objects The ResponsesV1 , ResponseAnswersV1 , and SurveyPoints objects work via API v1. They are left for compatibility purposes. Note that Survicate API v1 is deprecated and is not recommended for new projects. Switch to the API v2 Responses and ResponseAnswers objects instead. ResponsesV1 The VisitorId field can be NULL, depending on how the survey was configured. ResponsesV1 has the following standard fields: first_name, last_name, email, organization, department, job_title, phone, website, country, address_one, address_two, city, state, zip, fax, annual_revenue, employees . Other fields are available as a JSON object, stored in the CustomAttributes field. For example, CustomAttributes can have the following value: {\"industry\":\"IT\",\"tags\":\"tag_1\"} . Some of the CustomAttributes fields are available in the Responses object: Industry, Identity, Comment, PageUrl, Language, Tags, VisitorHash . Incremental Replication and Synchronization Skyvia does not support Replication with Incremental Updates for Survicate. Skyvia does not support Synchronization for Survicate. Supported Actions Skyvia supports all the common actions for Survicate." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/teamwork_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Teamwork [Teamwork](https://www.teamwork.com/) is a project management tool for resources and workload planning, progress and milestones monitoring, and collaboration. Data integration : Skyvia supports importing data to and from Teamwork, exporting Teamwork data to CSV files, replicating Teamwork data to relational databases, and synchronizing Teamwork data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Teamwork. Query : Skyvia Query supports Teamwork. 
Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Teamwork, you need to sign in with Teamwork using your credentials. Creating Connection To connect to Teamwork, perform the following steps: Click Sign In with Teamwork in the Connection Editor. Enter your credentials and click Log In . Connector Specifics Object Peculiarities CalendarEvents The DELETE operation performs a \u201csoft delete\u201d. It sets the deleted status for a record. You must apply filters by the StartDate and EndDate fields to select data from the CalendarEvents object. If you don\u2019t filter by the EndDate field, its value defaults to tomorrow\u2019s date. Companies When selecting data, the PrivateNotes and TagIds fields return empty values by default. However, they are available for mapping in the integrations. To insert or update data in the TagIds field, map the values in the format [177291] . Risks When importing data to the Risks object, you must map the required ProjectId, Source, and Status fields and at least one of the additional ImpactSchedule, ImpactPerformance and ImpactCost fields. Custom Fields The Tasks and Projects objects support custom fields. \nYou can add custom fields of the following types: Checkbox, Date, Dropdown, Number, Status, Text, and Url. The list of the custom fields is available in the CustomFields object. Custom field values are available in the TaskCustomFields and ProjectCustomFields with links to the Tasks and Projects objects, respectively. When you select data from the TaskCustomFields and ProjectCustomFields , custom field values display in the Value field. However, if you insert or update a custom field value, you must map the ValueNumber field (to insert data into an integer field) or the ValueString field (to insert data into a string field). 
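The ValueNumber/ValueString rule above can be sketched as a small dispatch helper. The function is a hypothetical illustration, not Skyvia API code, and the grouping of Date and the other non-Number types under ValueString is an assumption for the sketch.

```python
# Hypothetical helper: pick the target column for a Teamwork custom field
# value based on the custom field type. Number is treated as the only
# numeric type (assumption); everything else maps to ValueString.
NUMERIC_TYPES = {"Number"}
STRING_TYPES = {"Checkbox", "Date", "Dropdown", "Status", "Text", "Url"}

def custom_field_mapping(field_type, value):
    """Return the (column, value) pair to map for an insert or update."""
    if field_type in NUMERIC_TYPES:
        return ("ValueNumber", float(value))
    if field_type in STRING_TYPES:
        return ("ValueString", str(value))
    raise ValueError(f"Unsupported custom field type: {field_type}")

print(custom_field_mapping("Number", "42"))     # ('ValueNumber', 42.0)
print(custom_field_mapping("Text", "on hold"))  # ('ValueString', 'on hold')
```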
DML Operations Support Operation Object INSERT, UPDATE, DELETE Companies, CustomFields, Expenses, Links, Notebooks, People, ProjectCustomFields, Projects, Risks, Tags, TaskCustomFields, Tasks, Timers DELETE Activities, TaskLists Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Projects, Tasks, Companies, CustomFields, Messages, Notebooks, TaskLists, Risks, TimeEntries, Timers, Links, ProjectCustomFields and TaskCustomFields objects. \nReplication only tracks new records for the Links, ProjectCustomFields and TaskCustomFields objects. Skyvia supports Synchronization for the Projects, Tasks, Companies, Notebooks, Risks, Timers and CustomFields objects. Stored Procedures MarkTaskComplete This command moves the task record to the CompletedTasks object: call MarkTaskComplete(:taskId) MarkTaskUncomplete This command switches the task status back to incomplete and moves the record from the CompletedTasks object back to the Tasks object: call MarkTaskUncomplete(:taskId) LockNotebook This command locks a single notebook for editing: call LockNotebook(:notebookId) UnlockNotebook This command unlocks a notebook so you can edit it again: call UnlockNotebook(:notebookId) Supported Actions Skyvia supports all the common actions for Teamwork." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/teamwork_crm_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Teamwork CRM [Teamwork CRM](https://www.teamwork.com/) is a CRM tool that helps users effectively manage their sales, focused on ease of use and visibility. Data integration : Skyvia supports importing data to and from Teamwork CRM, exporting Teamwork CRM data to CSV files, replicating Teamwork CRM data to relational databases, and synchronizing Teamwork CRM data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Teamwork CRM. Query : Skyvia Query supports Teamwork CRM. 
Establishing Connection To create a connection to Teamwork CRM, you need to sign in with your Teamwork credentials. Creating Connection To connect to Teamwork CRM, perform the following steps: Click Sign In with Teamwork CRM . Enter your Teamwork CRM credentials and click Log in . Additional Connection Parameters Use Custom Fields Select this checkbox to make Teamwork CRM custom fields available in Skyvia. Connector Specifics Object Peculiarities Users The Users object displays active users by default. If you need to get Users records with other statuses, use a filter by the State field with the following valid values: Active, Inactive, Invited and Deleted . Contacts When importing data to the Contacts object, you must map either EmailAddresses or PhoneNumbers in addition to the required FirstName field. \nAn example of the valid EmailAddresses field value is the following: [\n {\n \"address\": \"user1@example.com\",\n \"isMain\": true\n },\n {\n \"address\": \"user2@example.com\",\n \"isMain\": false\n }\n] An example of the valid PhoneNumbers field is the following: [\n {\n \"code\": \n {\n \"id\": 0,\n \"type\": \"phone-codes\"\n },\n \"isMain\": true,\n \"number\": \"string\"\n }\n] LostReasons and WonReasons The DELETE operation against the LostReasons and WonReasons objects performs a soft delete. It keeps the record itself and sets the Deleted field value to true . If, after that, you try to insert a new record with the same name as the deleted one, the Deleted field value will change back to false . Custom Fields The Companies, Deals, Contacts objects contain custom fields. The custom fields list is displayed in the CustomFields object. The CustomFields object supports the INSERT, UPDATE, and DELETE operations. The fields Company, Contact, Users in the CustomFields object contain the Id of the related record in the corresponding object. 
Teamwork CRM custom fields may be of the following types: Teamwork CRM Data Type Skyvia Data Type Company Int32 Contact Int32 Users Int32 Date Date Duration Int16 Email String Multiple options String Number Double Single option String Text(Short) String Text(Long) String Url String Time Time Each object\u2019s custom field values are displayed in the Custom field in the Deals, Contacts and Companies objects, respectively. \nThe Custom field is available for mapping in the integration tasks with the Insert and Update operations. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for Activities, ActivityTypes, Companies, Contacts, Currencies, CustomFields, Deals, Files, LostReasons, Notes, Pipelines, Products, Stages, Users, WonReasons . Skyvia supports Synchronization for the objects Activities, ActivityTypes, Companies, Contacts, CustomFields, Deals, LostReasons, Notes, Pipelines, Products, Stages, WonReasons . DML Operations Support Operation Object INSERT, UPDATE, DELETE Activities, ActivityTypes, Companies, Contacts, CustomFields, Deals, LostReasons, Notes, Pipelines, Products, Stages, WonReasons UPDATE, DELETE Files, Users Stored Procedures Skyvia represents part of the supported Teamwork CRM functionality as stored procedures.\nYou can call a stored procedure , for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . InitiateUserInvite The following command initiates an invitation for a user. call InitiateUserInvite(:email,:firstname,:lastname,:title) ActivateUser To activate an invited user, use the following command. call ActivateUser(:userId) ReinviteUser To reinvite a user that has already been invited, use the following command. call ReinviteUser(:userId) Supported Actions Skyvia supports all the common actions for Teamwork CRM." 
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/teamwork_desk_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Teamwork Desk [Teamwork Desk](https://www.teamwork.com/desk/) is a Teamwork integrated help desk tool for customer communication. Data integration : Skyvia supports importing data to and from Teamwork Desk, exporting Teamwork Desk data to CSV files, replicating Teamwork Desk data to relational databases, and synchronizing Teamwork Desk data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Teamwork Desk. Query : Skyvia Query supports Teamwork Desk. Establishing Connection To create a connection to Teamwork Desk, you need to log in to your Teamwork Desk account. Creating Connection To connect to Teamwork Desk, perform the following steps: Click Sign In with Teamwork Desk in the Connection Editor. Enter your credentials and click Log in . Additional Connection Parameters Use Custom Fields Select this checkbox to make Teamwork Desk custom fields available in Skyvia. Connector Specifics Object Peculiarities Tickets When you import data to the Tickets object, you must map either CustomerId or CustomerEmail . Otherwise, Teamwork Desk API returns an error: \u201cdetail\u201d: \u201cmust be set\u201d, \u201csource\u201d: \u201ccustomer\u201d. Teamwork Desk supports creating scheduled tickets (ticket message is sent at a specified time in the future, not immediately). For that, map the nested fields of the ScheduledEmail object ( ScheduledEmail_TimeZoneId, ScheduledEmail_SendAt and ScheduledEmail_StatusId ) together with the required ticket fields. A newly created record appears in the Tickets object after the scheduled email is sent. Customers To successfully insert records to the Customers object, map the Contacts array field that must contain at least one contact with email. 
An example of a valid Contacts field value:
[
  {
    "isMain": true,
    "type": "email",
    "value": "andrew_johnson@decs.com"
  },
  {
    "isMain": false,
    "type": "email",
    "value": "donna_beitz@decs.com"
  }
]
Timelogs The TicketId, UserId , and TimelogsId fields are foreign keys. Teamwork Desk API allows inserting any values into these fields. Inserting incorrect values into TicketId, UserId , or TimelogsId may cause invalid object relations. CreatedById, UpdatedById, DeletedById Fields The CreatedById, UpdatedById , and DeletedById fields are not used as foreign keys for object relations. In some objects, these fields store system information and display values which don't exist in the Users object, which may lead to errors in Replication. Custom Fields Skyvia supports custom fields for the Tickets object. Teamwork Desk supports the following custom field types:
Teamwork Desk Data Type -> Skyvia Type
Single-Line Text -> String
Multi-Line Text -> String
Dropdown -> String
Checkbox -> String
Date -> Date
Numbers -> Double
Teamwork Desk allows users to have multiple Inboxes for a single account. The set of custom fields may vary for tickets from different Inboxes. When querying data from the Tickets object, you get all the existing ticket records from all Inboxes and all the existing custom fields. Each Tickets record in the query result belongs to some Inbox and displays the custom field values corresponding to this Inbox. All other custom fields, which don't belong to this Inbox, return empty results. Custom field values support the Insert and Update operations. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all the objects EXCEPT TicketTags, TicketTasks .
Skyvia supports Synchronization for the Companies, Customers, CustomFields, HelpdocArticles, HelpdocArticlesKeywords, HelpdocsCategories, HelpdocsSites, Inboxes, Tags, TicketPriorities, Tickets, TicketSources, TicketStatuses, TicketTypes, and Timelogs objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Companies, CustomFields, Customers, HelpdocArticles, HelpdocArticlesKeywords, HelpdocCategories, HelpdocSites, Inboxes, Tickets, Tags, TicketPriorities, TicketSources, TicketStatuses, TicketTypes, Timelogs UPDATE, DELETE Users, Messages Stored Procedures Skyvia represents part of the supported Teamwork Desk features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . ReplyToTicket Use the following command to reply to a ticket. call ReplyToTicket(:ticketId, :threadType, :message)
PARAMETER NAME -> DESCRIPTION
TicketId -> The identifier of the ticket you reply to
ThreadType -> Valid values are Note, Message, Forward
Message -> The text of the reply to a ticket
Supported Actions Skyvia supports all the common actions for Teamwork Desk." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/tempo_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Tempo [Tempo](https://www.tempo.io/) is a time management platform offering various valuable tools such as timesheets, planner, cost tracking, resource management, etc. Data integration : Skyvia supports importing data to and from Tempo, exporting Tempo data to CSV files, replicating Tempo data to relational databases, and synchronizing Tempo data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Tempo. Query : Skyvia Query supports Tempo.
Establishing Connection To create a connection to Tempo, specify the Jira Cloud instance, Client ID, and Client Secret, and sign in with Tempo. This connector is compatible with Jira Cloud only. Getting Credentials To get a Client ID and Client Secret, register an app in Jira: Go to your Jira Cloud instance. Click Apps -> Tempo . In the left menu, click the gear icon, select OAuth 2.0 applications , and click + New Application . In the form, specify your app name. Specify the https://api.skyvia.com/oauthcallback/tempo URL in the Redirect URIs box. Select the client type and click Create Application . Copy the Client ID and Client Secret . Creating Connection To connect to Tempo, perform the following steps. Specify your Jira Cloud instance name. Specify only the instance name, not the whole Jira URL. For example, if your Jira URL is https://myinstance.atlassian.net/, the instance name is myinstance . Enter the Client Id. Enter the Client Secret. Click Sign In with Tempo . Click Authorize Access . Click Onwards! . Connector Specifics Object Peculiarities Holidays When querying, this object returns the holidays for the current year by default. To get holidays for another year, filter by the Year field. This field displays empty results when querying; it exists for filtering only. Worklogs To increase the query performance for this object, use a filter by the UpdatedDate field with the >= operator. If you perform the INSERT or UPDATE operation against records in a closed timesheet, you get the error "The timesheet status must be open for the period". PermissionRoles To import data to this object, you must map the AccessEntityIds and the PermissionKeys fields in JSON format. For example, [1] for the AccessEntityIds , and ["permissions.worklog.view", "permissions.worklog.manage", "permissions.plan.view"] for the PermissionKeys .
If you don't map the AccessEntityIds and PermissionKeys for the UPDATE operation, their values will be reset to nulls. Objects with Required Period Some Tempo objects require specifying a date range when querying. If you omit filters by date fields when querying data from these objects, you get an error like "The 'PeriodFrom' field of object 'TimesheetApprovalForTeam' is required for select operation. You must use it in the WHERE clause." The following objects require using filters when querying.
Object -> Filter by
TimesheetApprovalForTeam -> PeriodFrom, PeriodTo
LoggedUserSchedule -> StartDate, EndDate
CurrentTimesheetApproval -> Only a filter by PeriodFrom is required; a filter by PeriodTo is optional
WorkloadSchemes The Days field stores complex structured data in JSON format. You can use our Nested Objects mapping feature in Import to insert or update the nested values in such fields. Select the Nested Objects checkbox in Import to enable this feature. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates and Synchronization for the Worklogs and Plans objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE AccountCategories, Accounts, Customers, Holidays, HolidaySchemes, PermissionRoles, Plans, Programs, Roles, Skills, TeamMemberships, Teams, WorkAttributes, WorkloadSchemes, Worklogs Stored Procedures Skyvia represents part of the supported Tempo features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . SetDefaultHolidayScheme To set the default holiday scheme, use the following command. call SetDefaultHolidayScheme(:schemeId) ApproveTimesheet The following command approves a Timesheet for the given User in the given period and returns the approved timesheet.
call ApproveTimesheet(:accountId, :from, :to, :comment, :reviewerAccountId) PARAMETER NAME DESCRIPTION AccountId Required. The user identifier From Required. The Start Date in format yyyy-mm-dd To Required. The End Date in format yyyy-mm-dd Comment The comment about the timesheet for approval ReviewerAccountId The reviewer identifier RejectTimesheet The following command rejects a Timesheet for the given User in the given period and returns the rejected Timesheet. call RejectTimesheet(:accountId, :from, :to, :comment, :reviewerAccountId) PARAMETER NAME DESCRIPTION AccountId Required. The user identifier From Required. The Start Date in format yyyy-mm-dd To Required. The End Date in format yyyy-mm-dd Comment The comment about the timesheet for rejection ReviewerAccountId The reviewer identifier ReopenTimesheet The following command reopens a Timesheet for the given User in the given period and returns the reopened Timesheet. call ReopenTimesheet(:accountId, :from, :to, :comment, :reviewerAccountId) PARAMETER NAME DESCRIPTION AccountId Required. The user identifier From Required. The Start Date in format yyyy-mm-dd To Required. The End Date in format yyyy-mm-dd Comment The comment about the timesheet to reopen ReviewerAccountId The reviewer identifier SubmitTimesheet The following command submits a Timesheet for the given User in the given period and returns the submitted Timesheet. call SubmitTimesheet(:accountId, :from, :to, :comment, :reviewerAccountId) PARAMETER NAME DESCRIPTION AccountId Required. The user identifier From Required. The Start Date in format yyyy-mm-dd To Required. The End Date in format yyyy-mm-dd Comment The comment about the timesheet to submit ReviewerAccountId The reviewer identifier SetDefaultWorkloadScheme To set the given Workload Scheme as default, use the following command. call SetDefaultWorkloadScheme(:schemeId) Supported Actions Skyvia supports all the common actions for Tempo." 
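The timesheet procedures above all take From/To dates in yyyy-mm-dd format. A small sketch for deriving a calendar-month period to pass as `:from` and `:to`; the helper is ours, not part of the connector:

```python
from datetime import date, timedelta

def month_period(day):
    """Return (from, to) strings covering day's calendar month in yyyy-mm-dd."""
    start = day.replace(day=1)
    # Jump safely past the end of the month, then snap back to its last day.
    next_month = (start + timedelta(days=32)).replace(day=1)
    end = next_month - timedelta(days=1)
    return start.isoformat(), end.isoformat()

frm, to = month_period(date(2024, 2, 15))
print(frm, to)  # 2024-02-01 2024-02-29
```

The two strings can then be supplied as the `:from` and `:to` arguments of, e.g., ApproveTimesheet.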
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/thinkific_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Thinkific [Thinkific](https://www.thinkific.com/) is a platform where you can create, manage and sell education products like online courses and learning communities. Data integration : Skyvia supports importing data to and from Thinkific, exporting Thinkific data to CSV files, replicating Thinkific data to relational databases, and synchronizing Thinkific data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Thinkific. Query : Skyvia Query supports Thinkific. Establishing Connection To create a connection to Thinkific, you need to specify the subdomain and sign in with Thinkific using your credentials. Getting Credentials To get the Thinkific subdomain, perform the following steps: Log in to Thinkific. Click Settings on the left. Go to the Code & Analitycs tab and click API on the left. Scroll down and copy the Subdomain value. Creating Connection To connect to Thinkific, perform the following steps. Enter the obtained Subdomain value to the corresponding box in the Connection Editor. Click Sign in with Thinkific . Enter your credentials. Connector Specifics Object Peculiarities Users To successfully import data into the Roles field, you must pass its values in the array format, for example, [\u201caffiliate\u201d,\u201dgroup_analyst\u201d] . Valid values for the Roles field are: affiliate, course_admin, group_analyst, site_admin . If you use the filter by Roles with the = (equals) operator when querying, you will get all the records equal to or containing the value specified in the filter. For example, if there is a record with the Roles value [\u201ccourse_admin\u201d,\u201dgroup_analyst\u201d] and a record with the Roles value [\u201caffiliate\u201d,\u201dgroup_analyst\u201d] . 
If you perform the query SELECT t.* FROM Users AS t WHERE (t.Roles = 'group_analyst') you will get both records in the query result. The Password field values are not displayed by default when querying. However, this field is available for mapping when importing data to this object. Bundles To select data from the Bundles object, you must use the filter by record Id. Otherwise, the query won't return any records. Promotions The ProductIds field values are not displayed by default when querying. However, this field is available for mapping when importing data to this object. Custom Fields You can add custom fields to the Users object. The custom fields can have the Text(string) and Country(string) types. Skyvia supports Thinkific custom fields and displays them as separate fields in the Users object. These fields support the INSERT and UPDATE operations. Stored Procedures Skyvia represents part of the supported Thinkific features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . AddUserToGroups To add an existing user to an existing group, use the command call AddUserToGroups(:user_id, :group_names)
PARAMETER NAME -> DESCRIPTION
User_id -> The existing user identifier
Group_names -> A list of group names to add the user into, in the following format: ["Group_001", "Group_002"]
DML Operations Support Operation Object INSERT, UPDATE, DELETE Categories, Coupons, Instructors, Promotions, and Users INSERT, UPDATE Enrollments INSERT, DELETE Groups INSERT CourseReviews Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Categories, Coupons, CourseReviews, Enrollments, Groups, Instructors, Orders, Products, Promotions, and Users objects.
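The equals-behaves-like-contains semantics of the Roles filter described above can be mimicked locally; a sketch with made-up sample records:

```python
# Sketch of the filter semantics described above: Roles = 'group_analyst'
# matches any record whose Roles array contains that value. Sample data only.
records = [
    {"Id": 1, "Roles": ["course_admin", "group_analyst"]},
    {"Id": 2, "Roles": ["affiliate", "group_analyst"]},
    {"Id": 3, "Roles": ["site_admin"]},
]

def roles_equals(rows, value):
    # A record matches when its Roles array equals or contains the value.
    return [r for r in rows if value in r["Roles"]]

matches = roles_equals(records, "group_analyst")
print([r["Id"] for r in matches])  # [1, 2]
```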
Replication tracks only the new records for the Categories, Coupons, CourseReviews, Groups, Instructors, Orders, Products, Promotions, and Users objects. Synchronization cannot delete the records from the Enrollments object because this object does not support the Delete operation. Synchronization can track only the new records for the Categories, Coupons, Instructors, Promotions, and Users objects. Supported Actions Skyvia supports all the common actions for Thinkific." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/tiktokads_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources TikTok Ads [TikTok Ads](https://ads.tiktok.com/i18n/home) is a tool for TikTok business users designed for targeting, ad creation, insight reports, and ad management. Data integration : Skyvia supports importing data to and from TikTok Ads, exporting TikTok Ads data to CSV files, replicating TikTok Ads data to relational databases, and synchronizing TikTok Ads data with other cloud apps and relational databases. Backup : Skyvia Backup does not support TikTok Ads. Query : Skyvia Query supports TikTok Ads. Establishing Connection Skyvia supports both TikTok Ads Production and Sandbox environments. To establish a [connection](https://docs.skyvia.com/connections/#creating-connections) to TikTok Ads, you must select the environment. Skyvia uses OAuth authentication for the TikTok Ads Production environment. To connect Skyvia to the Sandbox environment, you need the Token and AccountId. Getting Credentials To obtain the Token and AccountId for connecting to TikTok Ads Sandbox , you need to create a new app in the TikTok for Business UI. Perform the following steps: Log in to your TikTok account, go to My Apps and click Create New . Fill in the fields and click Confirm . After the app is approved, log in to TikTok again and go to My Apps .
Click on your app name, go to the Sandbox Ad Account menu and create the sandbox there. After that, the AdvertiserId will be generated automatically. Generate the Access Token manually by clicking the Generate button. For the Production environment, you don\u2019t need to obtain any additional parameters. Creating Connection Production environment Click Sign In with TikTok . In the opened window, enter your TikTok for Business credentials and click Log In . Grant Skyvia the requested permissions by clicking Confirm . Sandbox environment In the Environment box, select Sandbox . Enter Token and Account Id . Connector Specifics Skyvia has the following TikTok Ads connector specifics: Read-only Objects The tables AdGroupActions, Identities, Locations, DeviceModels, InterestCategories and all the Reports (tables containing the -Report suffix in their names) are read-only in the TikTok Ads connector. Sandbox Environment The following objects are supported in the Sandbox environment: Ads, AdsDailyReport, AdsReport, AdsSKANDailyReport, AdsSKANReport, AdGroups, AdGroupsDailyReport, AdGroupsReport, AdGroupsSKANDailyReport, AdGroupsSKANReport, AdGroupActions, Audiences, Campaigns, CampaignsReport, CampaignsDailyReport, CampaignsSKANDailyReport, CampaignsSKANReport . DML Operations Support Operations Objects INSERT, UPDATE, DELETE AdGroups, Ads, Campaigns DELETE Audiences Reports The reports are present in the TikTok Ads connector as tables with the -Report suffix in their names. \n All reports are divided into separate tables according to the report levels: AdAccount, Campaign, AdGroup, or Ad with the corresponding suffix in the table names. There are several types of reports. Simple reports contain the regular reports, daily reports with the basic set of metrics, and the SKAN reports with additional metrics set. The full metrics set is available in the extended reports. Simple Reports The tables with the -Daily suffix in their names contain the metrics grouped by days. 
The data selection period is set by default if you do not set filters. The default report end date is equal to the current date. The default report start date is determined by the maximal allowed period (30 days). The tables without the -Daily suffix contain the metrics for the whole period or for the selected period. These tables contain the StartDate/EndDate fields, and you can set their values via filters. The data selection period is set by default if you do not set filters. The default report end date is equal to the current date. The default report start date is determined by the maximal allowed period (365 days). Simple reports are convenient when you need to query a small data volume. They may time out when querying significant data volumes. Extended Reports The tables with the -ExtendedReport suffix in their names are read in the asynchronous API mode. The extended reports contain the full list of the fields available in the API. Extended reports are not limited in the data selection period. The minimal possible start date for such reports is 01.01.2012. Incremental Replication and Synchronization The Campaigns, AdGroups, Ads, and Audiences objects contain the UpdatedDate or CreatedDate fields; thus, Replication with Incremental Updates and Synchronization are supported for these objects. The -Report objects may contain either the StartDate and EndDate fields or the Date (daily report) field with the DATE data type, not DATETIME. This means that replication with Incremental Updates enabled does not retrieve the data for the current date, to avoid duplications. For example, running the replication today will return the report with the data selection as of yesterday. Supported Actions Skyvia supports all the common actions for TikTok Ads.
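The default report windows described above differ only in span: 30 days back for the -Daily tables, 365 days for the others, both ending on the current date. A sketch of that logic (the helper name is ours):

```python
from datetime import date, timedelta

# Sketch of the default windows described above: -Daily report tables default
# to a 30-day window ending today; non-daily tables default to 365 days.
def default_report_window(daily, today=None):
    today = today or date.today()
    span = 30 if daily else 365
    return today - timedelta(days=span), today

start, end = default_report_window(daily=True, today=date(2024, 3, 31))
print(start, end)  # 2024-03-01 2024-03-31
```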
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/timely_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Timely [Timely](https://timelyapp.com/) is a fully automatic time-tracking tool for individuals and companies. It records your activity and time spent in different work apps and helps track everything your team works on with minimal effort. Data integration : Skyvia supports importing data to and from Timely, exporting Timely data to CSV files, replicating Timely data to relational databases, and synchronizing Timely data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Timely backup. Query : Skyvia Query supports Timely. Establishing Connection When creating a connection to Timely, you have to log in to Timely via OAuth 2.0. Creating Connection To connect to Timely, perform the following steps: Click Sign In with Timely in the Connection Editor. In the opened window, enter your credentials. Click Authorize . Connector Specifics Object Peculiarities Events The Events object returns data for the period from 2000-01-01 to 2099-01-01 by default, when querying. \nYou can change this period using filters by the Since and Upto fields. You can also select All or Logged events using a filter by the Filter field. The mentioned fields do not return data when querying by default. Use them for filtering only. DML Operations Support Operation Object INSERT, UPDATE, DELETE Events, Forecasts, Labels, Projects, Teams INSERT, UPDATE Clients, Users Incremental Replication Skyvia supports Synchronization for the Clients, Events, Projects, Users, Forecasts objects. Skyvia supports Incremental Replication for the AccountActivities, Accounts, Clients, Events, Projects, Users, Forecasts objects. Supported Actions Skyvia supports all the common actions for Timely." 
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/tmetric_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources TMetric [TMetric](https://tmetric.com/) is a powerful time management tool that helps companies to increase productivity, monitor team performance, manage projects and tasks. Data integration : Skyvia supports importing data to and from TMetric, exporting TMetric data to CSV files, replicating TMetric data to relational databases, and synchronizing TMetric data with other cloud apps and relational databases. Backup : Skyvia Backup does not support TMetric. Query : Skyvia Query supports TMetric. Establishing Connection To create a connection to TMetric, you need to sign in with TMetric using your credentials and specify the Account Id. Getting Credentials To get the TMetric Account Id, log in to TMetric and copy the Account Id value from the URL.\nFor example, for this URL https://app.tmetric.com/#/tracker/21917/ , the Account Id is 21917 . Creating Connection To connect to TMetric, perform the following steps: Click Sign In with TMetric in the Connection Editor. Enter your credentials and click Log In . Specify the Account Id. Additional Connection Parameters Suppress Extended Requests For the Projects object, TMetric API returns only part of the fields when querying multiple records. In order to query values of additional fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of this object. However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Projects Members, Groups, WorkTypes, PersonalRates, CommonBillableRate_Amount, CommonBillableRate_Currency To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. 
Connector Specifics Object Peculiarities Time Entries There are three fields that are used for filtering only and are not returned in query results by default: StartDate, EndDate , and UserId . If you use the filter by StartDate in your query, you get the records from the start date to the latest time entry. If you don't use a filter by UserId , the query returns time entries for the logged-in user by default. If you insert a time entry that overlaps an existing entry, the added entry will overwrite the existing one. When you import a time entry without providing the StartTime value, its StartTime value is automatically set to its creation time. When you import a time entry without providing the EndTime value, it is added as an active (unfinished) entry. Tasks If you use the filter AssigneeId = 0 when querying, you get the unassigned tasks in query results. The Source field is empty in the query results by default. It is used for filtering only. You can set a filter with the following values: Internal (default), External, All . Records in other objects may be related to the internal and external tasks by foreign keys. To preserve such relationships during replication, you must filter tasks by Source = All . You can import the Internal tasks only. Accounts TMetric users may have several workspaces (Accounts). When you query the Accounts object, you get only the logged-in user account in the query result. However, you can perform insert, update, or delete operations against all the existing accounts. DML Operations Support Operation Object INSERT, UPDATE, DELETE Accounts, Integrations, Invoices, Projects, Tags, Tasks, TimeEntries, TimeOffPolicies, Clients UPDATE, DELETE AccountMembers Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Tasks, TimeEntryLatest, and TimeOffPolicies objects. Replication tracks only the new records for the TimeOffPolicies object.
Skyvia supports Synchronization for the Tasks and TimeOffPolicies objects. Synchronization can track only the new records for the TimeOffPolicies object. Stored Procedures To add a break to a specific time entry, you can use a stored procedure: call AddBreak(:startTime, :endTime) Supported Actions Skyvia supports all the common actions for TMetric." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/todoist_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Todoist [Todoist](https://Todoist.com/) is a to-do list app for task management with task prioritization, reminders, activity history, progress visualization, collaboration, and more. Skyvia and Skyvia integration with Todoist are not created by, associated with, or supported by Doist. Data integration : Skyvia supports importing data to and from Todoist, exporting Todoist data to CSV files, and replicating Todoist data to relational databases. Backup : Skyvia Backup does not support Todoist. Query : Skyvia Query supports Todoist. Establishing Connection To create a connection to Todoist, you have to sign in with your Todoist credentials. Creating Connection To connect to Todoist, perform the following steps: Click Sign In with Todoist in the Connection Editor. Sign in to Todoist. Disconnecting Skyvia from Todoist You can disconnect from Todoist on the Skyvia side by deleting all the Todoist connections you have created and all the objects that depend on the connections: integrations, endpoints, queries. Connector Specifics DML Operations Support Skyvia supports the following DML operations for Todoist. Operation Object INSERT, UPDATE, DELETE Labels, ProjectComments, Projects, Sections, TaskComments, Tasks Incremental Replication and Synchronization Skyvia does not support Synchronization and Replication with Incremental Updates for Todoist.
Stored Procedures Skyvia represents part of the supported Todoist features as stored procedures. You can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . CloseTask Use the following command to close a task. call CloseTask(:TaskId) ReopenTask Use the following command to reopen a closed task. call ReopenTask(:TaskId) Supported Actions Skyvia supports all the common actions for Todoist." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/toggl_track_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Toggl Track [Toggl Track](https://toggl.com/) is a time-tracking and reporting solution that helps individuals and teams manage projects. Data integration : Skyvia supports importing data to and from Toggl Track, exporting Toggl Track data to CSV files, replicating Toggl Track data to relational databases, and synchronizing Toggl Track data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Toggl Track. Query : Skyvia Query supports Toggl Track. Establishing Connection To create a connection to Toggl Track, specify the API Token. Getting Credentials To locate an API Token, go to Toggl Track and do the following. Click the user icon on the bottom left. Select Profile Settings . Scroll to the API Token section and press Click to Reveal . Copy the API Token. Creating Connection To connect to Toggl Track, enter the obtained API Token in the Connection Editor. Connector Specifics Object Peculiarities Clients When querying, the Clients object returns records with active and archived status. Use a filter by the Status field to select records of a particular status. TimeEntries The TimeEntries object returns the records for the last two weeks when querying.
To query other records, use filters by the Before field with < or <= operators or a pair of fields StartDate and EndDate with > or >= operators. Groups The Users field value returns in the following JSON format when querying: [{\"user_id\":11058573,\"name\":\"dataloader.io@gmail.com\",\"avatar_url\":\"\",\"inactive\":false,\"joined\":true}] However, to insert or update data in this field, map this field to the array of user IDs in the following format: [123, 456, 789] . Filtering Specifics The Tags, Tasks, Workspaces, and TimeEntries objects support filtering by the UpdatedDate field using the > and >= operators. The maximum period for filtering is three months. Due to Toggl Track API specifics, you can select data for the last three months. If you select data for a date earlier than three months ago, you will get an error message saying \u201cSince cannot be older than 3 months.\u201d DML Operations Support Operation Object INSERT, UPDATE, DELETE Approvals, Clients, Groups, Projects, Tags, Tasks, TimeEntries, WorkspaceprojectUsers INSERT, UPDATE Organizations, Workspaces Incremental Replication The following Toggl Track objects support Incremental Updates: Organizations, Projects, User, Clients, Groups, Tags, Tasks, TimeEntries, TrackReminders, WebTimerTimeEntries, WorkspaceProjectsUsers, Workspaces .\nSkyvia detects only new records for the TrackReminders object. Only the Projects and Organizations objects support Synchronization. The Organizations object supports the INSERT and UPDATE operations but doesn\u2019t support the DELETE operation. Deleted records will fail during Synchronization. Stored Procedures Skyvia represents part of the supported Toggl Track features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . StopTimeEntry The following command stops a workspace time entry. 
call StopTimeEntry(:workspace_id,:time_entry_id) ArchiveClient The following command archives a workspace client and related projects. call ArchiveClient(:workspace_id,:client_id) UpdateStartDate To update a timesheet start date, use the following command. call UpdateStartDate (:workspace_id, :setup_id, :start_date, :rejection_comment, :status) Parameter Description Workspace_id Numeric ID of the workspace. Setup_id Numeric ID of the timesheet setup. Start_date The time sheet start date in the YYYY-MM-DD format. Rejection_comment String comment Status String Supported Actions Skyvia supports all the common actions for Toggl Track." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/trello_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Trello [Trello](https://trello.com/en) is a web-based task tracking and project management tool, which makes collaboration inside the team and between teams and projects fast and easy. Data integration : Skyvia supports importing data to and from Trello, exporting Trello data to CSV files, replicating Trello data to relational databases. Backup : Skyvia Backup does not support Trello. Query : Skyvia Query supports Trello. Establishing Connection To create connection with Trello, sign in using your credentials. Creating Connection To connect to Trello, perform the following actions: In the Connection Editor click Sign In with Trello . Click Log In . Specify Trello credentials or use another available sign in option: Allow Skyvia access your Trello account Additional Connection Parameters Customized Board Cards Use this parameter to enable board cards custom fields . Suppress Extended Requests For some objects, Trello API returns only part of the fields when querying multiple records. To query values of lacking fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such object. 
However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD BoardCheckItems Type, CreationMethod BoardMembers Prefs_ColorBlind, Prefs_Locale, Prefs_MinutesBetweenSummaries, Prefs_MinutesBeforeDeadlineToNotify, Prefs_SendSummaries, Prefs_TimezoneInfo, Prefs_Privacy, Prefs_Timezone, Prefs_TwoFactor, Email To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities BoardChecklists When you select data from the BoardChecklists object, the IdChecklistSource field returns empty values. BoardMembers When updating the BoardMembers object, map the Id and BoardId values.\nOne member can belong to several boards. If you update data in this object, the changes apply to all boards the member belongs to. \nFor example, suppose you want to update the data of Member1 , which is a member of Board1 and Board2 . You specify the Member1 ID and the Board2 ID in the Mapping Definition tab. When the operation is completed, the data is updated for both the Board1 and Board2 records related to Member1 . When performing Import with the Insert operation to BoardMembers , the integration log won\u2019t display the Ids of the inserted records. Custom Fields Each board can have a unique set of custom fields. If you have custom fields in your board, Skyvia creates an additional object with a name containing the board name prefix and the *BoardCards suffix. \nSuch an object contains the standard fields plus the custom fields. \nFor example, if you have a board named ClientBoard which contains Text and Checkbox custom fields, Skyvia clones the BoardCards object by creating the ClientBoard_BoardCards object. \nThis object contains the BoardCards object fields plus the custom fields. Trello supports the following custom field types.
Trello Type DbType Text String Number Decimal Dropdown Enum string Checkbox Boolean Date DateTime Incremental Replication and Synchronization Trello objects don\u2019t support Incremental Replication and Synchronization. DML Operations Support Skyvia supports DML operations for the following Trello objects: Operation Object INSERT, UPDATE, DELETE Boards, BoardCards , *BoardCards , BoardChecklists, BoardLabels, BoardMembers, Organizations, OrganizationMembers INSERT, UPDATE BoardLists INSERT, DELETE BoardCheckItems, CardAttachments, CardLabels, CardMembers UPDATE, DELETE CardCheckItems UPDATE BoardMemberships, Notifications Stored Procedures Skyvia represents part of the supported Trello features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in Data Integration or in Query . AddCommentToCard To add a record into the CardActions object where the Type = \u201ccommentCard\u201d , use the command below. Comment text will be added into the Data_Text field. call AddCommentToCard(:cardId,:TextComment) UpdateCommentOnCard To update the Data_Text field in the CardActions object, use the following command. call UpdateCommentOnCard(:cardId,:actionId,:TextComment) DeleteCommentOnCard To delete the Data_Text field in the CardActions object, use the following command. call DeleteCommentOnCard(:cardId,:actionId) Supported Actions and Actions Specifics Skyvia supports all the common actions for Trello." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/twilio_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Twilio [Twilio](https://www.twilio.com/) is a service that provides programmable communication tools for making and receiving phone calls, sending and receiving text messages, and performing other communication functions using its web service APIs.
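The per-board custom-fields cloning described in the Trello section above can be sketched in a few lines; the dictionary and helper below are illustrative only, not part of Skyvia:

```python
# Illustrative sketch (not Skyvia code): the Trello custom field type table
# and the <BoardName>_BoardCards naming rule described in the docs above.

TRELLO_TO_DB_TYPE = {
    "Text": "String",
    "Number": "Decimal",
    "Dropdown": "Enum string",
    "Checkbox": "Boolean",
    "Date": "DateTime",
}

def board_cards_object(board_name: str) -> str:
    """A board with custom fields gets a clone of BoardCards named
    <BoardName>_BoardCards that also exposes those custom fields."""
    return f"{board_name}_BoardCards"

print(board_cards_object("ClientBoard"))  # ClientBoard_BoardCards
```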
Data integration : Skyvia supports importing data to and from Twilio, exporting Twilio data to CSV files, replicating Twilio data to relational databases, and synchronizing Twilio data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Twilio. Query : Skyvia Query supports Twilio. Establishing Connection To create a connection to Twilio, you need to get an Account SID and Auth Token . Getting Credentials To locate the Account SID and Auth Token, perform the following steps: Log in to your [Twilio account](https://www.twilio.com/login) . In your console dashboard, scroll down until you see the Account Info section. Copy the Account SID and Auth Token . Creating Connection To connect to Twilio, specify the Account SID and Auth Token . Connector Specifics Object Peculiarities Calls To import data to this object, map the values for either Url , Twiml , or ApplicationSid in addition to the required fields. Messages To import data to this object, map the values for the Body or MediaUrl and the From or MessagingServiceSid fields, in addition to the required ones. ConversationParticipants To import data to this object, map the Identity value for the Chat field, and MessagingBinding.Address and MessagingBinding.ProxyAddress for the SMS field, in addition to the required ones. IncomingPhoneNumbers To import data to this object, map the values for either PhoneNumber or AreaCode in addition to the required fields. ConferenceParticipants You can add new records using an existing ConferenceSid if the conference does not have a Completed status. To create a new conference, set any value for the ConferenceSid field. Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all objects, except for CallEvents, Configuration, ConnectApps, Events, Members, MonitorEvents, UsageRecord .
Skyvia supports Synchronization for all objects, except for ConnectedApps, UsageRecords, Keys, MonitorAlerts, MonitorEvents, CallEvents, Conferences, Members, RecordingTranscriptions, Medias, PhoneNumbers, ShortCodes, AlphaSenders, Configuration, ConversationMessageReceipts, UserConversations, ConversationServices, FlowRevisions, ExecutionSteps, Events, SIPCredentialListMappings, SIPIpAccessControlListMappings . DML Operations Support Operation Object INSERT, UPDATE, DELETE Activities, AddressConfiguration, Addresses, Applications, AutopilotAssistants, AutopilotTasks, Calls, ConferenceParticipants, ConversationMessages, ConversationParticipants, Conversations, ConversationUsers, Credentials, Executions, Flows, IncomingPhoneNumbers, Lists, Messages, MessagingServices, OutgoingCallerIds, Queues, Roles, Services, SIPCredentialLists, SIPCredentials, SIPDomains, SIPIpAccessControlLists, SIPIpAddresses, UsageTriggers, Workers, Workflows, Workspaces INSERT, UPDATE Accounts, Feedbacks, Recordings INSERT, DELETE AlphaSenders, ConversationServices, PhoneNumbers, ShortCodes, SIPCredentialListMappings, SIPIpAccessControlListMappings UPDATE Conferences, Members UPDATE, DELETE Keys DELETE Medias, RecordingTranscriptions, UserConversations Filtering Specifics Twilio supports the following native filters : Object Field Accounts Status, FriendlyName Addresses FriendlyName, CustomerName, IsoCountry Applications FriendlyName MonitorAlerts LogLevel, StartDate, EndDate MonitorEvents EventType, ResourceSid, SourceIpAddress, StartDate, EndDate Calls To, From, ParentCallSid, Status, StartTime, EndTime Conferences FriendlyName, Status Messages From, To AddressConfiguration Type Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Supported Actions Skyvia supports all the common actions for Twilio." 
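The native-filter trade-off in the Twilio table above can be illustrated with a small sketch: filters on natively supported fields are pushed into the API request, while anything else must be applied client-side after fetching, which costs extra API calls. The function below is a hypothetical helper, not part of Skyvia:

```python
# Hypothetical sketch of splitting a filter into the part Twilio's API can
# apply natively (per the table above) and the part that would have to be
# filtered client-side at extra API-call cost.
NATIVE_FILTERS = {
    "Calls": {"To", "From", "ParentCallSid", "Status", "StartTime", "EndTime"},
    "Messages": {"From", "To"},
    "Conferences": {"FriendlyName", "Status"},
}

def split_filters(obj: str, filters: dict) -> tuple[dict, dict]:
    supported = NATIVE_FILTERS.get(obj, set())
    native = {k: v for k, v in filters.items() if k in supported}
    client_side = {k: v for k, v in filters.items() if k not in supported}
    return native, client_side

native, rest = split_filters("Calls", {"Status": "completed", "Duration": 60})
# Status is pushed to the API; Duration would be filtered client-side.
```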
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/twitterads_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources X Ads [X Ads](https://business.twitter.com/en/advertising.html) (formerly Twitter Ads) is a service for gathering direct responses, increasing website traffic, downloading content or any app, or simply increasing views and followers on X. The X Ads service allows you to sponsor your message directly in your target audience\u2019s feed or large consumer segments. Data integration : Skyvia supports importing data to and from X Ads, exporting X Ads data to CSV files, replicating X Ads data to relational databases, and synchronizing X Ads data with other cloud apps and relational databases. Backup : Skyvia Backup does not support X Ads. Query : Skyvia Query supports X Ads. Establishing Connection Log in to X Ads using your credentials to create a connection with X Ads. Creating Connection To create an X Ads connection, perform the following steps: In the Connection Editor, click Sign In with X . In the opened window, enter your X credentials and click Authorize app . Connector Specifics Incremental Replication and Synchronization Skyvia supports Incremental Replication for the following X Ads objects: Accounts, Campaigns, FundingInstruments, AdGroups, PromotedAccounts, PromotedTweets, PromotableUsers, ScheduledPromotedTweets, AccountMedia, ImageAppDownloadCards, ImageConversationCards, VideoConversationCards, Cards, ScheduledTweets, VideoAppDownloadCards, VideoWebsiteCards, WebsiteCards, PrerollCallToActions, ImageDirectMessageCards, VideoDirectMessageCards, MediaLibrary, PollCards, DraftTweets, TailoredAudiences, TailoredAudiencePermissions, AccountsDailyReport, CampaignsDailyReport, FundingInstrumentsDailyReport, AdGroupsDailyReport, PromotedAccountsDailyReport, PromotedTweetsDailyReport, MediaCreativesDailyReport . 
Skyvia supports Replication with Incremental Updates for objects that have UpdatedDate and CreatedDate fields and for objects with Report and View in their names, which have Date field. This field stores dates without the time part. Thus, when performing Incremental Replication, Skyvia queries updates up to the previous date. Skyvia doesn\u2019t track the updates made on the current date. DML Operations Support Operation Object INSERT, UPDATE, DELETE Campaigns, AdGroups, ScheduledPromotedTweets, ImageAppDownloadCards, ImageConversationCards, ScheduledTweets, VideoAppDownloadCards, VideoWebsiteCards, WebsiteCards, PrerollCallToActions, PublishedTweets , ImageDirectMessageCards, VideoDirectMessageCards, MediaLibrary, DraftTweets, TailoredAudiences, TailoredAudiencePermissions INSERT, DELETE MediaCreatives, PromotedAccounts, PromotedTweets, PollCards UPDATE Accounts, FundingInstruments DELETE AccountMedia Supported Actions Skyvia supports all the common actions for X Ads." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/typeform_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Typeform [Typeform](https://www.typeform.com/) is a cloud-based solution for online form building and online surveys. The solution is designed for companies of all sizes. Typeform features survey design, where users can design customized surveys simply via the drag-and-drop interface. Data integration : Skyvia supports importing data to and from Typeform, exporting Typeform data to CSV files, replicating Typeform data to relational databases, and synchronizing Typeform data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Typeform. Query : Skyvia Query supports Typeform. Establishing Connection To create a connection to Typeform, sign in to Typeform via OAuth 2.0. 
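The date-only incremental window for the X Ads report objects described above (the Date field has no time part, so an incremental run only safely covers rows through the previous date) can be sketched as:

```python
from datetime import date, timedelta

# Sketch of the date-only incremental behavior described above: because the
# report objects' Date field stores no time part, an incremental run queries
# updates only up to the previous date; today's rows are not tracked yet.
def incremental_cutoff(today: date) -> date:
    return today - timedelta(days=1)

cutoff = incremental_cutoff(date(2024, 2, 7))  # replicate through 2024-02-06
```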
Creating Connection To connect to Typeform, sign in with your Typeform credentials and give Skyvia your permission to access your Typeform account. Click Sign In with Typeform . Enter the email and password used when signing up to Typeform. Click Accept to grant Skyvia permissions to access your account. Additional Connection Parameters Suppressing Extended Requests For the Forms object, the API returns only part of the fields when querying multiple records. To query values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. The additional field is Workspace_Href . To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities Images When loading data to the Images object, you can either specify the image URL in the Url field or specify its content in the base64 encoding in the Image field. The base64-encoded value must not contain descriptors, such as data:image/png;base64 . Include only the base64 code, for example: iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAIAAAAC64paAAAAG0lEQVR42mOccuMbA7mAcVTzqOZRzaOaB1YzABKjL70rq/b4AAAAAElFTkSuQmCC Webhooks When inserting data to the Webhooks object, if a record with the specified value in the Tag field exists, Skyvia updates this record. If there is no record with this Tag value yet, Skyvia inserts such a record. Forms Due to Typeform API specifics, the UPDATE operation works in the following way: it updates all the mapped fields and assigns nulls to the unmapped fields. When performing the INSERT operation to the Forms object, you need to specify the value for the Fields field as a JSON array.
For example: [{\"title\":\"SampleTitle\",\"ref\":\"01FWEFASVQD8C8WMZGYZN7FRD8\",\"properties\":{\"randomize\":false,\"allow_multiple_selection\":false,\"allow_other_choice\":false,\"vertical_alignment\":true,\"choices\":[{\"ref\":\"01FWEFASVQY5BY4V0FSFKWMA4T\",\"label\":\"Choice A\"},{\"ref\":\"01FWEFASVQDQDHKWQ9614GKSWM\",\"label\":\"Choice B\"},{\"ref\":\"46b5f210-c95b-440a-aef1-13f155c2efa4\",\"label\":\"Choice CC\"}]},\"validations\":{\"required\":false},\"type\":\"multiple_choice\",\"layout\":{\"type\":\"split\",\"attachment\":{\"type\":\"image\",\"href\":\"https://images.typeform.com/images/WMALzu59xbXQ\"}}}] Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Forms and Webhooks objects only. \nSkyvia supports Synchronization for the Forms object only. DML Operations Support Skyvia supports DML operations for such Typeform objects Operation Object INSERT, UPDATE, DELETE Forms, Themes INSERT, DELETE Images, Webhooks, Workspaces Supported Actions Skyvia supports all the common actions for Typeform." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/unbounce_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Unbounce [Unbounce](https://unbounce.com/) is an AI-powered marketing platform that enables businesses to quickly build personalized landing pages, popups, and sticky bars, enhancing visitor experiences and driving increased sales and signups. Data integration : Skyvia supports importing data to and from Unbounce, exporting Unbounce data to CSV files, and replicating Unbounce data to relational databases. Backup : Skyvia Backup does not support Unbounce backup. Query : Skyvia Query supports Unbounce. Establishing Connection To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Unbounce, do the following: Click Sign In with Unbounce . 
Log in with your Unbounce credentials and allow Skyvia to access your Unbounce data. Connector Specifics Object Peculiarities PageLeads The FormData field is required for mapping during import. Unbounce stores data in the FormData field in the JSON format. Make sure that the mapped field, constant, or expression returns data in the JSON format too. For example: {\n \"first_name\": \"John\",\n \"last_name\": \"Doe\",\n \"email\": \"jdoe@unbounce.com\"\n} Incremental Replication and Synchronization Unbounce objects do not support synchronization. The Accounts, PageLeads, Pages, SubAccountPages, and SubAccounts objects support incremental replication. However, as there is no UpdatedDate field in these objects, only new records will be replicated. DML Operations Support Operations Objects INSERT, DELETE PageLeads Supported Actions Skyvia supports all the common actions for Unbounce." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/uservoice_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources UserVoice [UserVoice](https://uservoice.com/) is a product feedback management solution that collects and organizes feedback from multiple sources to provide a clear, actionable view of user feedback for product teams. Data integration : Skyvia supports importing data to and from UserVoice, exporting UserVoice data to CSV files, replicating UserVoice data to relational databases, and synchronizing UserVoice data with other cloud apps and relational databases. Backup : Skyvia Backup does not support UserVoice. Query : Skyvia Query supports UserVoice. Establishing Connection To create a connection to UserVoice, you need to get a key and a secret. Getting Credentials To obtain a key and a secret, perform the following steps: Sign in to your [UserVoice account](https://app.uservoice.com/signin) . Click on the gear icon in the menu and move to the Integrations tab.
Click UserVoice API keys , then click Create API key . Enter the name of your key and select the Trusted checkbox. Copy your key and secret. Creating Connection To connect to UserVoice, specify the Subdomain , Key and Secret . Connector Specifics Object Peculiarities Comments When you perform a DELETE operation on the Comments object, the records are not actually removed. The State field value is changed to deleted instead. CurrentSubdomains When you perform a SELECT operation on the CurrentSubdomains object, you only receive the current subdomain.\nYou can execute the UPDATE operation on any subdomain by specifying its ID. Views The Views object can be accessed only via its parent Users object. To query Views records, the UserVoice API requires the IDs of the corresponding User records. Skyvia does not require the ID of the parent object from the user. If you don\u2019t specify the IDs of the parent objects (for example, in a filter), Skyvia queries all the parent object records first, takes their IDs, and then queries child object records for each parent object record. This allows querying child objects without knowing their parents, but this method takes much time and consumes many API calls. It uses at least one API call for every parent object record, and can be slow. Because of this, it is recommended to use filters on the parent object fields when querying data from such child objects. This reduces the number of parent object records for which child object data must be queried. StatusUpdates When you perform an UPDATE operation, the UpdatedDate field stays unchanged.\nAs a result, Skyvia excludes records you update through the UPDATE operation from Replication with Incremental Updates and Synchronization. The DELETE operation does not delete the record from the table. It clears the values in the Body and HtmlBody fields and updates the UpdatedDate field. \nThese records can be included in Replication with Incremental Updates and Synchronization.
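The parent-then-child querying pattern described for the Views object above is a classic N+1 situation: one API call per parent record. The in-memory model below is purely illustrative (not the UserVoice API) and shows why filtering on parent fields cuts the call count:

```python
# Hypothetical in-memory model of the Views-via-Users pattern above:
# querying a child object without a parent filter costs one API call
# per parent (User) record; a parent filter reduces that count.
users = [{"id": i, "active": i % 2 == 0} for i in range(10)]

def query_views(user_filter=None):
    calls = 0
    parents = [u for u in users if user_filter is None or user_filter(u)]
    views = []
    for u in parents:
        calls += 1                       # one API call per parent record
        views.append({"user_id": u["id"]})
    return views, calls

_, unfiltered_calls = query_views()                     # one call per user
_, filtered_calls = query_views(lambda u: u["active"])  # only active users
```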
Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for all objects, except for CurrentSubdomains, ImportanceResponses, Permissions, Teams . Skyvia supports Synchronization for the following objects: Categories, Comments, CustomFields, Features, FeedbackRecordsWithoutLinks, FeedbackRecordsWithLinks, Labels, Notes, StatusUpdates, Suggestions, Views . DML Operations Support Operation Object INSERT, UPDATE, DELETE Categories, Comments, CustomFields, Features, FeedbackRecords, FeedbackRecordsWithLinks, FeedbackRecordsWithoutLinks, Labels, Notes, Permissions, StatusUpdates, Suggestions, Teams, Views INSERT, DELETE ForumInvitations, NPSRatings INSERT, UPDATE InternalStatusUpdates UPDATE CurrentSubdomains INSERT ExternalAccounts, ExternalUsers, InternalFlags, SupporterMessages Stored Procedures Skyvia represents part of the supported UserVoice features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . RejectSuggestedMerge To reject a suggested merge in the SuggestedMerges object, use the command: call RejectSuggestedMerge(:id, :reason) After you call this procedure, the record will receive a \u201crejected\u201d status and will not be displayed in the SuggestedMerges object. ClearInternalFlag To clear the internal flag in the InternalFlags object, use this command: call ClearInternalFlag(:id, :cleared_message) Supported Actions Skyvia supports all the common actions for UserVoice." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/woocommerce_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources WooCommerce [WooCommerce](https://woocommerce.com/) is a customizable, open-source eCommerce platform built on WordPress that enables users to create and manage online stores. 
Data integration : Skyvia supports importing data to and from WooCommerce, exporting WooCommerce data to CSV files, replicating WooCommerce data to relational databases, and synchronizing WooCommerce data with other cloud apps and relational databases. Backup : Skyvia Backup does not support WooCommerce. Query : Skyvia Query supports WooCommerce. Establishing Connection To create a connection to WooCommerce, you need to specify the Domain , Consumer Key and Consumer Secret . Getting Credentials To obtain the Consumer Key and the Consumer Secret, perform the following steps: Open the WooCommerce Settings. Click Advanced -> REST API -> Add Key . Specify the Description , select the User and select Read/Write Permissions . Click Generate API Key . Copy the Consumer Key and Consumer Secret. Read more about Consumer Key and Secret in the [WooCommerce documentation](https://woocommerce.com/document/woocommerce-rest-api/) . Creating Connection To connect to WooCommerce, specify the Domain URL, Consumer Key , and Consumer Secret . Additional Connection Parameters Skyvia uses a secure HTTPS protocol to connect to WooCommerce. If you need to connect to an unsecure WooCommerce domain or use an untrusted certificate, select the Allow Untrusted Certificate and Allow Certificate Name Mismatch checkboxes. Connector Specifics The following objects are read-only: CouponCustomFields, CustomerCustomFields, CustomerDownloads, OrderCustomFields, OrderLineItems, OrderLineItemCustomFields, OrderShippings, OrderTaxes, OrderFees, OrderCoupons, OrderRefunds, RefundLineItems, ProductCustomFields, ProductImages, ProductDownloads, ProductAttributes, ProductCategories, ProductTags, ProductDefaultAttributes, TopSellersReport, CouponsTotalsReport, CustomersTotalsReport, OrdersTotalsReport, ProductsTotalsReport, ReviewsTotalsReport . 
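The Consumer Key/Secret connection described above boils down to HTTP Basic authentication against the WooCommerce REST API over HTTPS. A minimal illustrative sketch (the domain and keys are placeholders; Skyvia builds such requests for you):

```python
import base64

# Illustrative only: a WooCommerce REST API request authenticated with the
# Consumer Key/Secret pair described above. Domain and keys are placeholders.
DOMAIN = "https://example-store.com"
CONSUMER_KEY = "ck_xxxxxxxxxxxxxxxx"
CONSUMER_SECRET = "cs_xxxxxxxxxxxxxxxx"

def orders_endpoint() -> tuple[str, str]:
    """Return the Orders endpoint URL and the Basic-auth header value."""
    url = f"{DOMAIN}/wp-json/wc/v3/orders"
    creds = f"{CONSUMER_KEY}:{CONSUMER_SECRET}".encode()
    return url, "Basic " + base64.b64encode(creds).decode()

url, auth_header = orders_endpoint()
```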
Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for such WooCommerce objects as Customers, Coupons, Orders, OrderNotes, Refunds, Products, ProductVariations, ProductReviews, Webhooks . Skyvia supports Synchronization for the objects\tthat support the INSERT and UPDATE operations and have the UpdatedDate or CreatedDate fields. Filtering Specifics Skyvia supports the following native filters : Object Operator Field All objects (Not supported for objects Customers, Webhooks, OrderNotes, ProductImages ) < , <= , > , >= CreatedDate All objects (Not supported for objects ProductVariations, ProductImages, ProductTags, ProductCategories, ProductDownloads, RefundLineItems, OrderTaxes, OrderShippings, OrderRefunds, OrderCoupons, OrderFees, OrderLineItems ) = Id CouponCustomFields, CustomerCustomFields, OrderCustomFields, OrderLineItems, OrderLineItemCustomFields, OrderShippings, OrderTaxes, OrderFees, OrderCoupons, OrderRefunds, ProductCustomFields, ProductImages, ProductDownloads, ProductAttributes, ProductCategories, ProductTags, ProductDefaultAttributes = OrderId, ProductId , etc. Coupons = Code Customers = Email, Role Orders = ParentOrderId, Status, CustomerId Products = Slug, Sku, Type, Status, OnSale, Featured, TaxClass, StockStatus, ShippingClass ProductVariations = Sku, Status, OnSale, TaxClass, StockStatus, ShippingClass ProductReviews = ProductId, ReviewerEmail, Status AttributeTerms = Slug Categories = ParentId, Slug ShippingClasses = Slug Tags = Slug TaxRates = Class Webhooks = Status TopSellersReport = Period Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. DML Operations Support Operation Object INSERT, UPDATE, DELETE Attributes, AttributeTerms, Categories, Coupons, Customers, Orders, Products, ProductVariations, ProductReviews, ShippingClasses, Tags, TaxRates, Webhooks . 
INSERT, DELETE OrderNotes, Refunds, TaxClasses Supported Actions Skyvia supports all the common actions for WooCommerce." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/wordpress_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources WordPress [WordPress](https://wordpress.org/) is a free and open-source content management system based on PHP and MySQL. Data integration : Skyvia supports importing data to and from WordPress, exporting WordPress data to CSV files, replicating WordPress data to relational databases, and synchronizing WordPress data with other cloud apps and relational databases. Backup : Skyvia Backup supports WordPress. Query : Skyvia Query supports WordPress. Establishing Connection To create a connection to WordPress, you need to install the [Basic-Auth](https://github.com/WP-API/Basic-Auth) plugin on your WordPress instance. Creating Connection To connect to WordPress, specify the following parameters: Enter the Domain where WordPress is running. Use http instead of https when specifying the domain URL. Enter the User account name or email used to log in. Enter the WordPress application Password . Use a WordPress application password, not your regular WordPress account password. [See this post](https://make.wordpress.org/core/2020/11/05/application-passwords-integration-guide/) about application passwords to learn more. Connector Specifics Incremental Replication and Synchronization Skyvia supports Replication with Incremental Updates for the Pages , Posts , and Blocks objects. Skyvia supports Synchronization for the Pages , Posts , and Blocks objects. DML Operations Support Operation Object INSERT, UPDATE, DELETE Categories, Comments, Blocks, Pages, Posts, Tags, Users UPDATE, DELETE Media DELETE PostRevisions Supported Actions Skyvia supports all the common actions for WordPress." 
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/wrike_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Wrike [Wrike](https://www.wrike.com/) is a digital work management tool that helps users track and coordinate projects. Data integration : Skyvia supports importing data to and from Wrike, exporting Wrike data to CSV files, replicating Wrike data to relational databases, and synchronizing Wrike data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Wrike. Query : Skyvia Query supports Wrike. Establishing Connection To create a connection with Wrike, log in to your Wrike account. Creating Connection To start creating a connection, follow the steps below: In the Connection Editor, click Sign In with Wrike . enter the email you used when signing in to Wrike in the opened window. Enter your password. Additional Connection Parameters Use Custom Fields Behavior This parameter defines how the connector will detect custom fields. None \u2014 connector represents custom fields as a JSON Array in a single CustomFields field. JoinAll \u2014 \u00a0connector represents custom fields belonging to all spaces as separate fields of specific types. Common \u2014 connector represents custom fields belonging to the account as separate fields of specific types. More details on how to work with custom fields in Wrike are available in Custom Fields block. Suppress Extended Requests Wrike API returns only part of the fields for some objects when querying multiple records. Skyvia performs additional extended requests to query values of missing fields. Skyvia performs such API requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. 
The additional fields are the following: OBJECT FIELD Timelogs Finance_Currency, Finance_ActualFees Finance_ActualCost Projects UserAccessRoles You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Connector Specifics Object Peculiarities AuditLog The AuditLog object data is available for the Wrike users on the Enterprise plan. Comments The Comments object returns data for the last seven days by default if you don\u2019t use filters. To query data for other periods, filter by the PeriodDate field. Its values in the filter should look like this {\"start\":\"2023-06-28T14:50:31Z\",\"end\":\"2023-06-30T14:50:31Z\"} . Invitations When deleting records from this object, the delete operation performs a soft delete. It changes the record\u2019s status to Canceled instead of physically deleting it. Spaces Use the Members, SuggestedProjectWorkflowIds, SuggestedTaskWorkflowIds fields to query data from the Spaces object. Fields MembersAdd, MembersUpdate, MembersRemove, SuggestedProjectWorkflowsAdd, SuggestedProjectWorkflowsRemove, SuggestedTaskWorkflowsAdd, SuggestedTaskWorkflowsRemove are used for import and return empty results by default when querying. When performing the INSERT operation, assign the Members field mapping in the following format: [{\"id\":\"KUFK5PNB\",\"accessRoleId\":\"IEC7PGVEND777777\",\"isManager\":true}] .\n\u00a0 \nWhen performing an Update operation, the MembersAdd, MembersUpdate fields should have the same format. \nWhen deleting records, select the Update operation in the integration and map the MembersRemove field to the list of Ids you want to remove (for example, [\"KUFK5PNB\", \"KUFK5PN8\"] ). Tasks The AddParents, AddShareds, AddResponsibles, AddFollowers, AddSuperTasks, RemoveParents, RemoveShareds, RemoveResponsibles, RemoveSuperTasks fields return empty results when querying. They are used for the Update operation. Timelogs When you query this object, it returns an empty value for the BillingType field. 
To get values from this field, use the following filter by the Fields field: Fields = ["billingType"]. This filter is available for users with Enterprise subscriptions. Users on other subscription plans will receive the error "Error occurred while reading 'Timelogs' object: Operation is not allowed due insufficient user rights or account license limitations".
To select records for a specific period, specify the period start and end in JSON format using filters by the CreatedDateFilter or UpdatedDateFilter field. For example, to select time logs created from June 29, 2023 to June 30, 2023, add a filter by the CreatedDateFilter field with the value {"start":"2023-06-29T00:00:00Z","end":"2023-06-30T23:59:05Z"}. It will look like this in an SQL query:

SELECT t.*
FROM Timelogs AS t
WHERE (t.CreatedDateFilter = '{"start":"2023-06-29T00:00:00Z","end":"2023-06-30T23:59:05Z"}')

You can set the same filter in the Query Builder. You can set only the starting or the ending time for the period. For example, to select all the time logs from June 29, 2023 to the current moment, set the following filter: t.CreatedDateFilter = '{"start":"2023-06-29T00:00:00Z"}'.

Users
You can only get data from this object by the specified Id.

WorkSchedule
When querying data from the WorkSchedule object, the Workweek field returns values in the following format: [{"workDays":["Mon","Tue","Wed","Thu","Fri"]}].
When performing the INSERT and UPDATE operations, the Workweek field should look like below:

[{"dayOfWeek":"Mon","isWorkDay":true},{"dayOfWeek":"Tue","isWorkDay":true},
{"dayOfWeek":"Wed","isWorkDay":true},{"dayOfWeek":"Thu","isWorkDay":false},
{"dayOfWeek":"Fri","isWorkDay":false},{"dayOfWeek":"Sat","isWorkDay":false},
{"dayOfWeek":"Sun","isWorkDay":false}]

The AddUsers and RemoveUsers fields are used for import and return empty results when querying. The AddUsers field is used for the INSERT and UPDATE operations. To delete a record, select the Update operation in the integration and specify the user Id you want to delete in the RemoveUsers field mapping. For example, ["KUAHVFPE"].

Custom Fields
Custom fields support the INSERT and UPDATE operations. Wrike supports custom fields of the following types:

| WRIKE TYPE | DB TYPE |
| --- | --- |
| Text | String |
| Number | Decimal |
| Currency | Decimal |
| Percent | Decimal |
| Single select | String |
| Multiple select | String |
| People | String. You can import multiple values to such fields; map them in the array format. There are two valid formats for such fields, for example: '["IEAGE6BYI5H5LIMS", "IEAGE6BYJUAF3E5G", "IEAGE6BYJUAF3E5V"]' or '[IEAGE6BYI5H5LIMS, IEAGE6BYJUAF3E5G, IEAGE6BYJUAF3E5V]' |
| Date | Date |
| Duration | String |
| Checkbox | Boolean |

The Wrike API doesn't validate custom fields when you insert or update their values. It allows inserting invalid values into custom fields, which may cause errors in further querying. You can fix invalid values in the Wrike UI or using an UPDATE query in Skyvia.
There are two aspects of custom field visibility in Wrike: Belongs to and Applies to.

Belongs to
Custom fields may belong to an account or a space. Account fields are commonly available fields. Space fields are available for specific spaces. Skyvia represents this aspect of field visibility by the Custom Fields Behavior connection parameter.
This is a drop-down list in the Connection Editor with the following options available:
None: the connector represents custom fields as a JSON array in a single CustomFields field.
JoinAll: the connector represents custom fields belonging to all spaces as separate fields of specific types.
Common: the connector represents custom fields belonging to the account as separate fields of specific types.
For example, suppose you have two spaces in Wrike, Space1 and Space2. Space1 has the AccField_Space1 field belonging to the account and the SpaceField_Space1 field belonging to the space. Space2 has the AccField_Space2 field belonging to the account and the SpaceField_Space2 field belonging to the space.
If you select the JoinAll option in the connection, the object to which the fields are applied will contain all four custom fields.
If you select the Common option, the object to which the fields are applied will contain the AccField_Space1 and AccField_Space2 fields.

Applies to
Custom fields apply to:
Tasks, projects, and folders.
Projects and folders.
Only projects.
Skyvia represents this aspect of field visibility by a corresponding object. For example, if a field applies to tasks, projects, and folders, it will be available in the Tasks, Projects, and FoldersAndSpaces objects.

Incremental Replication and Synchronization
Skyvia supports Replication with Incremental Updates for the following Wrike objects: Account, Approvals, Attachments, Comments, Tasks, Timelogs, FoldersAndSpaces, Projects.
Skyvia supports Synchronization for the following Wrike objects: Approvals, Projects, Tasks, Timelogs.
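The JSON values used above for the Members mapping, the period filters (PeriodDate, CreatedDateFilter, UpdatedDateFilter), and the Workweek field can be generated programmatically when you prepare mappings outside Skyvia. The following Python helpers are an illustrative sketch only; the function names are ours, not part of Skyvia, and the sample Ids are the ones used on this page:

```python
import json
from datetime import datetime, timezone

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def members_value(members):
    """Members mapping for INSERT (and MembersAdd/MembersUpdate for UPDATE):
    a JSON array of {id, accessRoleId, isManager} objects."""
    return json.dumps(
        [{"id": i, "accessRoleId": r, "isManager": m} for i, r, m in members],
        separators=(",", ":"),
    )

def period_filter(start=None, end=None):
    """Period filter value such as {"start":"...Z","end":"...Z"};
    either bound may be omitted."""
    iso = lambda dt: dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    value = {k: iso(v) for k, v in (("start", start), ("end", end)) if v}
    return json.dumps(value, separators=(",", ":"))

def workweek_value(work_days):
    """Workweek value for INSERT/UPDATE: one entry per day of the week,
    with isWorkDay set for the days listed in work_days."""
    return json.dumps(
        [{"dayOfWeek": d, "isWorkDay": d in set(work_days)} for d in DAYS],
        separators=(",", ":"),
    )

print(members_value([("KUFK5PNB", "IEC7PGVEND777777", True)]))
print(period_filter(end=datetime(2023, 6, 30, 14, 50, 31, tzinfo=timezone.utc)))
print(workweek_value(["Mon", "Tue", "Wed"]))
```

The compact `separators` keep the output byte-for-byte close to the examples in this documentation, though Wrike accepts standard JSON spacing as well.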
DML Operations Support
Skyvia supports DML operations for the following Wrike objects:

| OPERATION | OBJECT |
| --- | --- |
| INSERT, UPDATE, DELETE | Approvals, Comments, Folders, FoldersAndSpaces, Groups, Invitations, Projects, Spaces, Tasks, Timelogs, UserScheduleExclusions, WorkScheduleExclusions, Workschedules |
| INSERT, UPDATE | CustomFields, Workflows |
| UPDATE | Contacts, Users |
| DELETE | Attachments |

Stored Procedures
Skyvia represents part of the supported Wrike features as stored procedures. You can call a stored procedure, for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

CreateTaskComment
Use the following command to add a comment to a task:

call CreateTaskComment(:task1, :text)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| Task1 | Related task Id |
| Text | The comment text itself |

You can update the existing comments only within 5 minutes after their creation. When you try to update comments later, you will get an error.

CreateFolderApproval
To add an approval record to the specific folder, use the following command:

call CreateFolderApproval(:FolderId, :Description, :DueDate, :Approvers, :Attachments, :AutoFinishOnApprove, :AutoFinishOnReject)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| FolderId | The identifier of the specific folder |
| Description | The approval description |
| DueDate | Due date in the yyyy-MM-dd format |
| Approvers | Array of approver contact Ids |
| Attachments | Array of the root attachments to set in the approval |
| AutoFinishOnApprove | True or False. Whether to finish the approval automatically after all approvers have completed their approvals |
| AutoFinishOnReject | True or False. Whether to finish the approval when someone rejects it |

UpdateDependencies
To update and delete dependencies, use the following command:

call UpdateDependencies(:dependencyId, :RelationType)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| DependencyId | The dependency identifier |
| RelationType | Valid values are StartToStart, StartToFinish, FinishToStart, FinishToFinish |
DeleteDependencies
Use the following command to delete the specific dependency:

call DeleteDependencies(:dependencyId)

Supported Actions
Skyvia supports all the common actions for Wrike." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/xero_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Xero
[Xero](https://www.Xero.com/) is an accounting app with automatic bank feeds, invoicing, accounts payable, expense claims, fixed asset depreciation, purchase orders, bank reconciliations, and other features.
Data integration: Skyvia supports importing data to and from Xero, exporting Xero data to CSV files, replicating Xero data to relational databases, and synchronizing Xero data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Xero backup.
Query: Skyvia Query supports Xero.

Table of Contents
What Skyvia Can Do for Xero Users
Getting Started
FAQ
Xero Specifics
Supported Actions

What Skyvia Can Do for Xero Users
Skyvia consists of several products that can help you work with Xero data.
Data Integration can help you implement different integration scenarios: data migration from other accounting tools to Xero, integrating Xero with other cloud apps, copying Xero data to a database for data analysis or archiving purposes, syncing Xero data with other cloud apps and databases, designing complex data pipelines including Xero and other data sources, etc.
Query allows you to build quick reports on Xero data by querying it with SQL commands or Visual Query Builder. You can export the result to CSV or PDF formats. Skyvia Query also provides add-ins for Excel and Google Sheets, allowing you to query Xero data directly from the corresponding spreadsheet applications. In addition to SELECT, Skyvia Query supports simple DML statements, allowing you to perform mass update and delete operations.
Connect allows publishing Xero data via web API endpoints.
You can create OData endpoints and make your Xero data available via the OData protocol. You can use the endpoints to connect other tools supporting the OData protocol, like Salesforce Connect or Power BI. Additionally, Connect allows you to create SQL endpoints that allow SQL to be executed over the web. Skyvia also provides a ready-to-use ODBC driver, ADO.NET provider, and Looker Studio connector for SQL endpoints.
All Skyvia tools are designed for non-tech users. They allow users to build their custom data flows, and there are no predefined data flows. You can load any Xero data that is available via the Xero API.

Getting Started
To create a connection to Xero in Skyvia, sign in to Xero via OAuth 2.0. Skyvia stores the OAuth authentication token; Skyvia does not store your Xero credentials.
In the Connection Editor, click Connect to Xero. In the opened window, enter the email and password you used when signing up to Xero. Click Allow access. The authentication token is generated. Then select the Tenant to connect to from the list. Use the Test Connection button to make sure that the connection is successful, and then click Create Connection.
After you have created a Xero connection, you can start integrating Xero with other sources, querying Xero with the Query tool, and publishing Xero data via web API endpoints. Please see the following topics of our documentation to learn how to use different Skyvia tools:
Import (loading data in one direction)
Export (exporting data to CSV files)
Replication (copying cloud data to a database)
Synchronization (synchronizing data in both directions)
Data Flow (designing complex data pipelines)
Control Flow (designing more advanced data pipelines with custom logics)
Query (querying data from a web browser)
Google Sheets Add-On (querying data from Google Sheets)
Excel Add-in (querying data from Excel)
OData Endpoints
SQL Endpoints

FAQ
How do I disconnect from Xero?
You can disconnect from Xero on the Skyvia side by deleting all the Xero connections you have created, together with all the objects that depend on these connections: integrations, endpoints, queries.

Does Skyvia store my data?
Skyvia does not store your Xero login and password. For the details, see the Retention of User Data section on our [Security page](https://skyvia.com/security). Note that Skyvia Backup does not support Xero, and thus, Xero backups cannot be stored on Skyvia.

Will I be notified in case of any data loading errors?
Skyvia has the email notification feature, but it is not enabled by default. Please read the Email Notification topic to find out how to configure it.

Xero Specifics
Object Peculiarities

*Attachment Objects
When importing data to the *Attachment objects, don't use special characters in a file name. The FileName field cannot contain any of the following characters: <, >, :, ", /, \, |, ?, *, \0.

DML Operations Support
Skyvia supports the following DML operations for Xero objects:

| OPERATION | OBJECT |
| --- | --- |
| INSERT, UPDATE, DELETE | Account, ContactGroup, Contact, Employee, Item, LinkedTransaction, ProjectTask, ProjectTime |
| INSERT, UPDATE | BankTransaction, CreditNote, Invoice, ManualJournal, Payment, Project, PurchaseOrder, Quote |
| INSERT | AccountAttachment, BankTransfer, BankTransactionAttachment, BankTransferAttachment, ContactAttachment, CreditNoteAttachment, InvoiceAttachment, ManualJournalAttachment, PurchaseOrderAttachment, QuoteAttachment, RepeatingInvoiceAttachment |
| UPDATE | Overpayment, Prepayment |

Incremental Replication
Skyvia supports Incremental Replication for the following Xero objects: Account, BankTransaction, BankTransfer, Contact, CreditNote, Employee, Invoice, Item, LinkedTransaction, ManualJournal, Payment, PurchaseOrder, Quote, Overpayment, Prepayment.
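The file-name restriction for the *Attachment objects above can be checked client-side before running an import. A minimal sketch in Python; the helper name is ours, not a Skyvia feature:

```python
# Characters Xero rejects in the FileName field of *Attachment objects
FORBIDDEN_CHARS = set('<>:"/\\|?*\0')

def is_valid_attachment_filename(name: str) -> bool:
    """Return True if the file name contains none of the forbidden characters."""
    return not any(ch in FORBIDDEN_CHARS for ch in name)

print(is_valid_attachment_filename("invoice_2024.pdf"))  # True
print(is_valid_attachment_filename("Q1/report.pdf"))     # False
```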
Skyvia supports Synchronization for the following Xero objects: Account, BankTransaction, Contact, CreditNote, Employee, Invoice, Item, LinkedTransaction, ManualJournal, Payment, PurchaseOrder, Quote.

Supported Actions and Actions Specifics
Skyvia supports all the common actions for Xero." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/yotpo_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Yotpo
[Yotpo](https://www.yotpo.com/) is a cloud-based content marketing platform for e-commerce businesses that enables users to collect user-generated content and accelerate direct-to-consumer growth.
Data integration: Skyvia supports importing data to and from Yotpo, exporting Yotpo data to CSV files, replicating Yotpo data to relational databases, and synchronizing Yotpo data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Yotpo backup.
Query: Skyvia Query supports Yotpo.

Establishing Connection
To [create a connection](https://docs.skyvia.com/connections/) with Yotpo, you need to use the App Key and Secret Key. To work with Yotpo Loyalty objects, also specify the Api Key and Guid.

Getting Credentials
App Key and Secret Key
To get your authentication credentials:
Log in to your [Yotpo account](https://login.yotpo.com/).
Click the user icon and choose Settings.
Look for the API Credentials section.
Copy the App Key from the corresponding field.
Click Get Secret Key. A confirmation code will be sent to your email. Enter the code in the box that appears and click Submit to get the Secret Key.

Loyalty API Credentials
If you use Yotpo Loyalty, obtain the Api Key and Guid:
Log in to your Yotpo Loyalty account.
Click Settings -> General Settings.
Copy the API Key and GUID values from the corresponding boxes.

Creating Connection
To create a connection to Yotpo, enter your App Key and Secret Key.
If you want to work with Yotpo Loyalty objects, also specify the Api Key and Guid. If you don't use Yotpo Loyalty, you can omit these parameters.

Additional Connection Parameters
Suppress Extended Requests
The Suppress Extended Requests parameter disables the use of additional requests for getting ProductsApps values from the Reviews object.

Connector Specifics
Object Peculiarities

Customers
This object supports only the Insert operation in Import. The ExternalId field is required. If you set an ExternalId value during the mapping setup and such a record does not exist in the Customers object, a new record is added. If you set a value that already exists, the existing record is updated.

Reviews
When you load data into the Reviews object, you cannot get the Ids of the resulting records. Records will not be displayed in the Log or in the [Returning](https://docs.skyvia.com/data-integration/import/#returning) feature. It takes approximately 2-3 minutes for the updated data to appear in the Reviews object after a successful Insert execution.

LoyaltyRecentCustomers
When querying, this object displays a list of customers updated within the last 7 days and customer data registered in the Yotpo Loyalty database.

DML Operations Support
Skyvia supports the following DML operations for Yotpo objects:

| OPERATIONS | OBJECTS |
| --- | --- |
| INSERT, UPDATE | Collections, Orders, Products, ProductVariants |
| INSERT, DELETE | OrderFulfillments |
| INSERT | Customers, Reviews, Unsubscribers |

Incremental Replication and Synchronization
Skyvia supports Incremental Replication for the following objects: Collections, CollectionProducts, LoyaltyCampaigns, Orders, OrderFulfillments, Products, ProductVariants, QuestionAnswers, Questions, Reviews, TopThree5StarReviews. Skyvia tracks only the new records for the QuestionAnswers, Questions, and TopThree5StarReviews objects.
Skyvia supports Synchronization for the Collections, Orders, Products, and ProductVariants objects. These objects support only the Insert and Update operations.
Thus, only inserted and updated records will be affected. Running a Synchronization which needs to delete records from these objects will cause the error "Table does not support DELETE statement" in the log.

Stored Procedures
Skyvia represents part of the supported Yotpo features as stored procedures. You can call a stored procedure, for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

AddProductToCollection and RemoveProductFromCollection
Use the following procedures to add or delete records from the CollectionProducts object:

call AddProductToCollection(:yotpoCollectionId, :externalProductId)
call RemoveProductFromCollection(:yotpoCollectionId, :externalProductId)

CreateCustomerRecord
Use the following command to create a customer Loyalty record:

call CreateCustomerRecord(:email, :first_name, :last_name, :phone_number, :has_account, :opted_in, :pos_account_id, :tags, :opted_in_at)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| Email | The customer's email address |
| First_name | The customer's first name |
| Last_name | The customer's last name |
| Phone_number | The customer's phone number in the +XXXXXXXXX format |
| Has_account | Whether the customer has an account with the platform |
| Opted_in | Whether the customer should be opted in to the loyalty program. If null, relies on the opt-in account settings |
| Pos_account_id | The Point of Sale unique account identifier |
| Tags | A comma-separated list of tags (or collections) this customer belongs to. Note: this will overwrite existing tags |
| Opted_in_at | Date and time when the customer was opted in to the loyalty program |

For example, call CreateCustomerRecord('georg.born@gmail.com', 'Georg', 'Born', '+380501234567', true, true, '', 'Tag1', '2023-10-30')

UpdateCustomerBirthdays
Use the following command to update the customer's date of birth:
call UpdateCustomerBirthdays(:customer_email, :day, :month, :year)

UpdateCustomerAnniversary
Use the following command to update the customer's anniversary:

call UpdateCustomerAnniversary(:customer_email, :day, :month, :year)

RemoveCustomerAnniversary
The following command removes the existing anniversary value for the specific customer:

call RemoveCustomerAnniversary(:customer_email)

Supported Actions
Skyvia supports all the common actions for Yotpo." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zammad_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Zammad
[Zammad](https://zammad.com/) is a web-based open source helpdesk/customer support system.
Data integration: Skyvia supports importing data to and from Zammad, exporting Zammad data to CSV files, replicating Zammad data to relational databases, and synchronizing Zammad data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zammad.
Query: Skyvia Query supports Zammad.

Establishing Connection
To create a connection to Zammad, specify the subdomain and access token.

Getting Credentials
Access Token
To obtain the Access Token, perform the following steps:
Log in to Zammad.
Click the User Icon in the bottom left corner of the page.
Click Profile -> Token Access -> Create.
Name the new token, set the expiry date if necessary, and select the permissions to grant.
Copy the token that appears.

Creating Connection
To connect to Zammad, paste your Zammad domain and access token into the corresponding boxes in the Connection Editor.

Connector Specifics
Object Peculiarities

Tickets
Skyvia displays ticket data in the Tickets and the related TicketArticles objects. The TicketArticles object stores the ticket's body, subject, and other data. The group of the Tickets fields with the Article* prefix in their names displays empty values when querying.
However, these fields are used for the INSERT and UPDATE operations. If you update any of the Article* fields, you must also map the Article_Body field. You can insert the body, subject, and other ticket data in two ways:
By mapping the Article* fields of the Tickets object.
By mapping the fields of the Tickets and the related TicketArticles objects.

Custom Fields
Skyvia supports custom attributes for the Tickets, Users, Organizations, and Groups objects as custom fields. Custom fields support the INSERT and UPDATE operations. Possible custom field types are the following:

| ZAMMAD TYPE | DB TYPE |
| --- | --- |
| Text | String |
| Textarea | String |
| Boolean | Boolean |
| Integer | Int32 |
| Date | Date |
| Date & Time | Datetime |
| Single selection | String |
| Multiple selection | String. Stores a path to the element |
| Single tree selection | String |
| Multiple tree selection | String |
| External data source | String. Skyvia represents this type of field by two separate fields with the *Label and *Value suffixes in their names |

Incremental Replication and Synchronization
Skyvia supports Replication with Incremental Updates for the Groups, Organizations, TicketArticles, TicketPriorities, Tickets, TicketStates, UserAccessPermissions, UserAccessTokens, and Users objects.
Skyvia supports Synchronization for the Groups, Organizations, TicketPriorities, Tickets, TicketStates, and Users objects.

DML Operations Support

| OPERATION | OBJECT |
| --- | --- |
| INSERT, UPDATE, DELETE | Groups, Organizations, TicketPriorities, Tickets, TicketStates, Users |
| UPDATE, DELETE | OnlineNotifications, Tags |
| INSERT | TicketArticles |
| DELETE | UserAccessTokens |

Stored Procedures
Skyvia represents part of the supported Zammad features as stored procedures. You can call a stored procedure, for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.
AddTicketTag
To add a new tag to the specific ticket, use the command:

call AddTicketTag(item, object, o_id)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| Item | Tag name |
| Object | The valid value is 'Ticket' |
| O_id | Ticket Id |

The procedure doesn't insert a whole record; it just adds a new tag to the array of tags in the TicketTags object. Due to Zammad API specifics, the successfully executed procedure in Skyvia returns the result with the following error message: Error occurred while executing 'AddTicketTag' procedure: true.

RemoveTicketTag
To delete a particular tag from the specific ticket, use the command:

call RemoveTicketTag(item, object, o_id)

| PARAMETER NAME | DESCRIPTION |
| --- | --- |
| Item | Tag name |
| Object | The valid value is 'Ticket' |
| O_id | Ticket Id |

The procedure doesn't delete the whole record; it just deletes a particular tag from the array of tags in the TicketTags object. Due to Zammad API specifics, the successfully executed procedure in Skyvia returns the result with the following error message: Error occurred while executing 'RemoveTicketTag' procedure: true.

CreateTag
To create a new tag in the Tags object, use the command:

call CreateTag(name)

Supported Actions
Skyvia supports all the common actions for Zammad." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zendesksell_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Zendesk Sell
[Zendesk Sell](https://www.zendesk.com/sell/) is a sales CRM software tool that enhances productivity, processes, and pipeline visibility for sales teams.
Data integration: Skyvia supports importing data to and from Zendesk Sell, exporting Zendesk Sell data to CSV files, replicating Zendesk Sell data to relational databases, and synchronizing Zendesk Sell data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zendesk Sell.
Query: Skyvia Query supports Zendesk Sell.
Establishing Connection
To create a connection to Zendesk Sell, sign in with your Zendesk Sell credentials:
Click Sign In with Zendesk Sell.
Enter your login and password.
Click Authorize to provide Skyvia with access permissions.
Click Create Connection.

Connector Specifics
Object Peculiarities

Read-only Objects
The Accounts, CallOutcomes, LeadUnqualifiedReasons, Pipelines, TextMessages, Users, and Stages objects are read-only.

Contacts
The Contacts object contains Persons and Companies records. The LastName field is required for a person record, and the Name field is required for a company record. Set either the LastName or the Name value depending on the contact type.
There are two fields that reference parent organizations: ParentOrganizationId and PersonParentOrganizationId. Set ParentOrganizationId to define the parent organization of another organization; set PersonParentOrganizationId to define the parent organization of a person. If you try to set ParentOrganizationId in a "Person" record, you will receive an error.

Orders
In Zendesk Sell, one order corresponds to one deal only. You can insert only records with a unique DealId into the Orders object.

Synchronization and Incremental Replication
Skyvia does not support Synchronization and Replication with Incremental Updates for the AssociatedContacts object.

DML Operations Support
Skyvia supports the following DML operations for Zendesk Sell objects:

| OPERATIONS | OBJECTS |
| --- | --- |
| INSERT, UPDATE, DELETE | Calls, Contacts, Deals, DealSources, DealUnqualifiedReasons, Leads, LeadSources, LossReasons, Notes, Orders, Products, Tags, Tasks |
| INSERT, DELETE | AssociatedContacts, Collaborations, LineItems |

Supported Actions and Actions Specifics
Skyvia supports all the common actions for Zendesk Sell.
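The Contacts rules above (LastName vs Name, and the two parent-organization fields) lend themselves to a pre-import check. The sketch below is purely illustrative; the record shape and function name are our own assumptions, not the Zendesk Sell schema or a Skyvia feature:

```python
def check_contact(record, is_person):
    """Return a list of problems for a Contacts record before import:
    persons require LastName and must not set ParentOrganizationId;
    companies require Name."""
    problems = []
    if is_person:
        if not record.get("LastName"):
            problems.append("LastName is required for a person record")
        if record.get("ParentOrganizationId"):
            problems.append(
                "use PersonParentOrganizationId for a person's parent organization")
    elif not record.get("Name"):
        problems.append("Name is required for a company record")
    return problems

print(check_contact({"LastName": "Born"}, is_person=True))  # []
print(check_contact({"ParentOrganizationId": 1}, is_person=True))
```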
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zendesk_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zendesk [Zendesk](https://www.zendesk.com) is a cloud customer support ticketing system with customer satisfaction prediction. Data integration : Skyvia supports importing data to and from Zendesk, exporting Zendesk data to CSV files, replicating Zendesk data to relational databases, and synchronizing Zendesk data with other cloud apps and relational databases. Backup : Skyvia Backup supports Zendesk backup. Query : Skyvia Query supports Zendesk. Establishing Connection Getting Credentials While [creating a connection](https://docs.skyvia.com/connections/#creating-connections) to Zendesk, depending on the authentication type you choose, you might be asked to either enter your API Token or Subdomain. API Token API Token is an automatically generated REST API authentication token that is used to connect to Zendesk. To get your API Token, login to [Zendesk](https://www.zendesk.com/) and go to Admin > Channels > API. Subdomain Subdomain is a string in the browser address bar that comes before .zendesk.com . Creating Connection To create a connection to Zendesk choose an authentication type. There are three authentication types available: OAuth 2.0 , API Token , Email & Password . Depending on the authentication type you choose, the process of creating connection may differ. OAuth Authentication Select OAuth 2.0 from the Authentication drop-down. Enter your Subdomain . Click Sign In with Zendesk and enter your login credentials. API Token Authentication Select API Token from the Authentication drop-down. Specify the URL to connect to. Enter the email address of your Zendesk user. Enter your API Token . Email & Password Authentication Select Email & Password from the Authentication drop-down. Enter your Subdomain . Enter your Zendesk user email and password . 
Additional Connection Parameters
Use Custom Fields
Use Custom Fields specifies whether Skyvia will work with custom Zendesk fields via this connection. By default, this checkbox is selected, and you can access custom Zendesk fields via Skyvia. If you clear this checkbox, Skyvia won't try to access custom fields data. This is useful when your subscription does not allow access to custom Zendesk fields, as accessing Zendesk objects storing custom fields data results in access errors.

Incremental Export
Incremental Export defines whether to use the [Incremental Export API](https://developer.zendesk.com/api-reference/ticketing/ticket-management/incremental_exports/#incremental-ticket-export) or the Standard Zendesk API for retrieving data from the Tickets object.

Incremental Export API vs Standard API
A ticket in Zendesk becomes archived 120 days after its closure. The Standard Zendesk API does not return archived tickets. To work with all tickets, select the Incremental Export checkbox.
The Incremental Export API supports native ticket filtering by the Updated field only. The Standard Zendesk API supports native ticket filtering by the Status, Type, Priority, RequesterId, OrganizationId, and other fields, but it does not support native filtering by the Updated field. Native filtering allows you to reduce the number of API calls needed to query data.
To reduce the number of API calls used during ticket replication, synchronization, and when applying an Updated state filter in Import, use the Incremental Export API. Use the Standard Zendesk API when you apply filters by other Ticket fields.
You can create several connections to the same Zendesk account with different settings and use them for different purposes.
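The guidance above reduces to a simple rule of thumb when deciding, per integration, which API a connection should use. The helper below is purely illustrative (not a Skyvia setting or API), and it encodes only what this page states:

```python
def prefer_incremental_export(filter_fields):
    """Return True if the Incremental Export API fits the query better:
    it is the right choice when there are no ticket filters (e.g. full
    replication, which also needs archived tickets) or when filtering
    natively by the Updated field only."""
    fields = set(filter_fields)
    return not fields or fields == {"Updated"}

print(prefer_incremental_export(["Updated"]))             # True
print(prefer_incremental_export(["Status", "Priority"]))  # False
```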
Describe Custom Objects
You can choose between two versions of the Custom Objects API: the [legacy custom objects](https://developer.zendesk.com/api-reference/custom-data/custom-objects-api/custom-objects-api/) API and the new [custom objects](https://developer.zendesk.com/api-reference/custom-data/custom-objects/custom_objects/) API. Each version handles metadata and data storage for custom objects differently. Be aware that you cannot access legacy custom objects when the new API is selected. If you need access to both the legacy and the new custom objects, you can create several connections to the same Zendesk account with different API versions.

Connector Specifics
Custom Fields Naming
Skyvia does not support custom Zendesk fields with double quotation marks in their names.

Filtering Specifics
Zendesk supports the following native filters:

| OBJECT | FIELDS AND OPERATORS |
| --- | --- |
| Address, CustomObject, DefaultGreeting, DigitalLine, Greeting, GreetingCategory, IVR, PhoneNumber | Id ( = ) |
| AccountOverview | PhoneNumberId ( = ) |
| AgentAvailabilityState | AgentId ( = ) |
| Call, CallLeg | Updated ( = , > , >= ) |
| CurrentQueueActivity | PhoneNumberId ( = ) |
| CustomData | Name ( = ), External Id ( = ), Created ( > , >= ), Updated ( > , >= ) |
| CustomData (custom fields) | Text ( = ), Multiline ( = ), Date ( = , > , >= , < , <= ), Decimal ( = , > , >= , < , <= , in ), Number ( = , > , >= , < , <= , in ), Lookup ( = ) |
| CustomObjectField | Id ( = ), CustomerObjectId ( = ) |
| IVRMenu | Id ( = ), IVRId ( = ) |
| IVRRoute | Id ( = ), IVRMenuId ( = ), IVRId ( = ) |
| Article | SectionId ( = ), AuthorId ( = ) |
| Section | CategoryId ( = ) |

DML Operations Support

| OPERATION | OBJECT |
| --- | --- |
| INSERT, UPDATE, DELETE | Address, CustomObject, Greeting, DigitalLine, IVR, IVRMenu, IVRRoute |

Supported Actions
Skyvia supports all the common actions for Zendesk.
}, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohoanalytics_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Analytics [Zoho Analytics](https://www.zoho.com/analytics/) is a self-service business intelligence platform and analytics software by Zoho. Data integration : Skyvia supports importing data to and from Zoho Analytics, exporting Zoho Analytics data to CSV files, and replicating Zoho Analytics data to relational databases. Backup : Skyvia Backup does not support Zoho Analytics. Query : Skyvia Query supports Zoho Analytics. Establishing Connection To create a connection to Zoho Analytics, select the data center, sign in with Zoho and select the workspace. Creating Connection To connect to Zoho Analytics, perform the following steps: Select the data center and Click Sign in with Zoho . Enter your credentials. Allow Skyvia to access data in Zoho account. Additional Connection Parameters Asynchronous Export This parameter enables the asynchronous export for connector. Primary Key Columns Here you can manually define primary key fields for connector objects. To define a primary key for a specific object, use the object_name.field_name format. For example, to make the Account ID a primary key for the Accounts object, enter Accounts.Account ID . To define a primary key for all objects in the connector, specify the field name only. For example, to make the Account ID a primary key for all the objects, enter Account ID . Connector Specifics Zoho Analytics has no predefined objects. All the objects and fields are custom. 
Zoho Analytics supports the following field types: Zoho Analytics Type DbType and description PLAIN String MULTI_LINE String (length is 4000 characters or more up to CLOB) EMAIL String URL String (2048 characters) BOOLEAN Boolean AUTO_NUMBER Int64 NUMBER Int64 by default or Double if Decimal Places > 0 POSITIVE_NUMBER Int64 by default or Double if Decimal Places > 0 DECIMAL_NUMBER Decimal CURRENCY String (255 characters) PERCENT Double GEO String (4000 characters) GEO_NUM String (255 characters) altitude or longitude DURATION String (255 characters) DATETIME DateTime DATE Date Export Peculiarities Zoho Analytics supports synchronous and asynchronous data export. \nSynchronous data export is useful for reading small data volumes. This approach is used by default in our connector. Asynchronous export is more helpful for: Tables having more than one million rows. Tables and Views from live connect workspaces. Dashboard and Querytable view types. To use this approach, enable the Asynchronous Export parameter in Connection Editor. \nIt allows only 5 simultaneous export jobs for an organization. Filtering Specifics Zoho Analytics API supports the following native filters: Field Types Operators PLAIN, MULTI_LINE, EMAIL, URL IS NULL, IS NOT NULL, = , != , LIKE BOOLEAN IS NULL, IS NOT NULL, = AUTO_NUMBER, NUMBER, POSITIVE_NUMBER, DECIMAL_NUMBER, PERCENT, DURATION, DATETIME, DATE IS NULL, IS NOT NULL, = , != , > , >= , < , <= , BETWEEN, IN CURRENCY IS NULL, IS NOT NULL GEO IS NULL, IS NOT NULL, = , != , LIKE GEO_NUM IS NULL, IS NOT NULL Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Incremental Replication and Synchronization Skyvia doesn\u2019t support Replication with Incremental Updates and Synchronization for Zoho Analytics. DML Operations Support All objects of the Table type support INSERT, UPDATE, and DELETE operations. QueryTable objects are read-only. 
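The native filter table above can be expressed as plain data. The sketch below is illustrative only (not Skyvia code; the helper name is an assumption) and shows how one might decide whether a given filter can be pushed to the Zoho Analytics API natively, saving API calls:

```python
# Native-filter table from the documentation above, as data:
# field type -> operators Zoho Analytics can evaluate server-side.
NULL_OPS = {"IS NULL", "IS NOT NULL"}
NATIVE_FILTERS = {
    **dict.fromkeys(["PLAIN", "MULTI_LINE", "EMAIL", "URL"],
                    NULL_OPS | {"=", "!=", "LIKE"}),
    "BOOLEAN": NULL_OPS | {"="},
    **dict.fromkeys(
        ["AUTO_NUMBER", "NUMBER", "POSITIVE_NUMBER", "DECIMAL_NUMBER",
         "PERCENT", "DURATION", "DATETIME", "DATE"],
        NULL_OPS | {"=", "!=", ">", ">=", "<", "<=", "BETWEEN", "IN"}),
    "CURRENCY": NULL_OPS,
    "GEO": NULL_OPS | {"=", "!=", "LIKE"},
    "GEO_NUM": NULL_OPS,
}

def is_native(field_type, operator):
    """True if the filter can be applied natively by the Zoho Analytics API."""
    return operator.upper() in NATIVE_FILTERS.get(field_type, set())

print(is_native("DATE", ">="))      # True  - pushed down to the API
print(is_native("CURRENCY", "="))   # False - evaluated on the client side
```

Filters that fall outside this table still work, but, as noted above, they may increase API call usage.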
Supported Actions Skyvia supports all the common actions for Zoho Analytics." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohobooks_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Books [Zoho Books](https://www.zoho.com/books/) is a cloud accounting software for small businesses developed by ZOHO Corporation. Data Integration : Skyvia supports importing data to and from Zoho Books, exporting Zoho Books data to CSV files, replicating Zoho Books data to relational databases, and synchronizing Zoho Books data with other cloud apps and relational databases. Backup : Skyvia Backup supports Zoho Books. Query : Skyvia Query supports Zoho Books. Establishing Connection To create a connection to Zoho Books, select your Data Center, sign in with Zoho, and specify your Organization ID. Getting Credentials Organization ID To get your Organization ID: Log in to your Zoho Books account. Click your Organization Name at the top right. Copy your Organization ID . Creating Connection To create a connection to Zoho Books, do the following: Select a data center location from the Data Center dropdown. Enter your Organization ID if there are multiple organizations in your Zoho Books account. Click Sign In with Zoho and fill out your Zoho login credentials. Additional Connection Parameters Suppress Extended Requests For some objects, Zoho Books API returns only part of the fields when querying multiple records. To retrieve missing fields, Skyvia sends additional extended requests for each record, which can increase API usage and affect performance. 
The following objects contain fields that require extended requests: OBJECT FIELD Estimates LineItems, BillAddr_*, ShipAddr_*, CustDefBillAddr_* Contacts BillAddr_*, ShipAddr_* SalesOrders LineItems, BillAddr_*, ShipAddr_* Invoices LineItems, BillAddr_*, ShipAddr_*, CustDefBillAddr_* RecurringInvoices LineItems, BillAddr_*, ShipAddr_* CreditNotes LineItems, BillAddr_*, ShipAddr_* CreditNoteRefunds FromAccountId CustomerPayments Invoices CustomerPaymentRefunds FromAccountId Expenses LineItems, ProjectId, ProjectName RecurringExpenses AccountId, Amount, StartDate, LineItems RetainerInvoices LineItems, BillAddr_* PurchaseOrders LineItems, BillAddr_*, DeliveryAddress_* Bills LineItems, BillAddr_* VendorCredits VendorId, LineItems VendorCreditRefunds AccountId VendorPaymentRefunds ToAccountId Journals LineItems , CreatedDate , UpdatedDate To reduce API calls, you can enable the Suppress Extended Requests option. However, this means certain fields will not be available in Skyvia and will return empty values, as Zoho Books API does not return them without extended requests. Since the CreatedDate and UpdatedDate fields are used for incremental update replication, enabling Suppress Extended Requests will prevent incremental updates for objects that include these fields. Connector Specifics Objects With Complex-Structured Data Bills, Invoices, Estimates, Expenses, CreditNotes, PurchaseOrders, SalesOrders, Journals, RecurringInvoices, RecurringExpenses, RetainerInvoices , and VendorCredits objects store complex-structured data. Skyvia represents this information as a LineItems JSON field. 
Here is an example of the LineItems field value from the Invoice table: [\n {\n \"item_id\": \"369175000000122053\",\n \"account_id\": \"369175000000000388\",\n \"account_name\": \"Sales\",\n \"sku\": \"\",\n \"salesorder_item_id\": \"\",\n \"project_id\": \"\",\n \"time_entry_ids\": [],\n \"expense_id\": \"\",\n \"item_type\": \"sales_and_purchases\",\n \"expense_receipt_name\": \"\",\n \"name\": \"ServiceItem\",\n \"description\": \"sample description\",\n \"item_order\": 0,\n \"bcy_rate\": 148.92,\n \"rate\": 148.92,\n \"quantity\": 1,\n \"unit\": \"kg\",\n \"discount_amount\": 0,\n \"discount\": 0,\n \"tax_id\": \"369175000000044076\",\n \"tax_name\": \"Tax_our\",\n \"tax_type\": \"tax\",\n \"tax_percentage\": 15,\n \"item_total\": 148.92,\n \"tags\": [\n {\n \"tag_option_id\": \"369175000000085213\",\n \"is_tag_mandatory\": false,\n \"tag_name\": \"testTag\",\n \"tag_id\": \"369175000000000333\",\n \"tag_option_name\": \"option1\"\n }\n ],\n \"documents\": [],\n \"item_custom_fields\": [],\n \"bill_id\": \"\",\n \"bill_item_id\": \"\"\n }\n] For user convenience, lines of these objects are also available as separate records via read-only tables with names containing the LineItems suffix, like InvoiceLineItems, ExpenseLineItems, etc. They allow you to view these lines in a tabular form with Query , export them to CSV with Export , import them from Zoho Books to another cloud application or database, where these lines should be stored in a separate table, etc. Since these tables (that have the LineItems suffix in their name) are read-only, they are not available in import or synchronization integrations as a target. To modify lines of an invoice, expense, etc., you need to provide values in JSON format for the LineItems field of the corresponding main table - Invoices, Expenses, etc. 
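As a rough illustration of this mapping, the sketch below (plain Python, not Skyvia functionality; the invoice columns and values are hypothetical) builds a CSV row whose LineItems column holds a JSON array of lines like the one above, ready to be mapped to the LineItems field of the main table:

```python
import csv
import io
import json

# Hypothetical sketch: preparing a CSV for import into a Zoho Books main
# table. Invoice lines go into one CSV column as a JSON array; that column
# is then mapped to the LineItems field. Field names follow the example
# above; the CSV layout itself is an assumption for illustration.
line_items = [{
    "item_id": "369175000000122053",
    "name": "ServiceItem",
    "description": "sample description",
    "rate": 148.92,
    "quantity": 1,
    "unit": "kg",
}]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["CustomerName", "InvoiceNumber", "LineItems"])
writer.writerow(["Acme Corp", "INV-00001", json.dumps(line_items)])
print(buf.getvalue())
```

The csv module quotes the JSON column automatically, so the embedded commas and quotation marks survive the round trip.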
For example, to import invoices from a CSV file to Zoho Books, you need to specify invoice lines in the JSON format, like in the example above, in one of the CSV file columns, and import this file to the Invoices table. You need to map the LineItems field of the Invoices table to this column of the CSV file. The tables with the LineItems suffix in their name are available in backups, but you cannot restore data to these tables, because they are read-only. Since they store the same information as the LineItems field of the corresponding main tables, you don’t actually need to back them up. All the information in LineItems tables is present in the corresponding main tables, and you can back up and restore data in the main table only. Auto-generation of Numbers Zoho Books offers auto-generation of numbers when creating records in transaction objects, like Bills, CreditNotes, Invoices, Estimates, SalesOrders and others. If this feature is enabled, it may cause errors when performing the Insert action in Import and Data Flow or the Restore operation in Backup. Backup and Restore When you restore data in the Bills, CreditNotes, Invoices, Estimates, SalesOrders and other transaction objects, Skyvia inserts all the backed up records into the source, including the number field values. Thus, when the auto-generation is enabled, Skyvia tries to insert the records with existing numbers. Zoho Books returns an error when trying to generate a new record number for such records automatically. To avoid errors while restoring records of transaction objects, disable the auto-generation of numbers for the objects you restore. Import or Data Flow If you insert records in the Bills, CreditNotes, Invoices, Estimates, SalesOrders and other transaction objects by Import or Data Flow, and map the number fields while the auto-generation is enabled, Zoho Books returns an error when trying to generate a new record number for such records automatically. 
To avoid errors, remove the mapping for the fields whose values are auto-generated, or disable the auto-generation for the object you insert data to. DML Operations Support Skyvia supports the following DML operations for Zoho Books objects: Operation Object INSERT, UPDATE, DELETE BankAccounts, BankTransactions, Bills, ChartOfAccounts, ContactAddresses, ContactPersons, Contacts, CreditNoteRefunds, CreditNotes, Currency, CustomerPaymentRefunds, CustomerPayments, EstimateComments, Estimates, Expenses, Invoices, Items, Journals, OpeningBalance, Projects, PurchaseOrderComments, PurchaseOrders, RecurringExpenses, RecurringInvoices, RecurringInvoiceComments, RetainerInvoices, SalesOrderComments, SalesOrders, Tasks, TaxAuthorities, Taxes, TaxExemption, TimeEntries, Users, VendorCreditRefunds, VendorCredits, VendorPaymentRefunds, VendorPayments INSERT, DELETE BaseCurrencyAdjustment, BillComments, BillsCredited, CreditedInvoices, CreditNoteComments, Employees, ProjectComments, VendorCreditComments DELETE InvoicePayments Zoho Books API Calls Limit By default, Zoho Books has a 1000 API calls per day and 60 API calls per minute limit. Performing backup, replication, or querying all the data from complex objects may quickly use all the API calls available. For example, querying LineItems field values uses additional requests. Filtering Specifics Native filtering is supported for the following fields in the Invoices object: Object Operator Field Invoices = InvoiceNumber, CustomerName, ReferenceNumber > , >= , < , <= Date, DueDate, UpdatedDate Fields with Nested Objects The following objects: Contacts, Invoices, RecurringInvoices, Estimates, SalesOrders, Expenses, and PurchaseOrders have fields such as LineItems that contain an array of nested objects. Skyvia represents properties of these nested objects as separate fields, to ease the mapping process during Import and other tasks. To enable nested objects support during import, select the Nested Objects checkbox. 
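Given the daily and per-minute quotas above, a client that reads Zoho Books data directly may want to throttle itself rather than run into the 60-calls-per-minute ceiling. Below is a minimal sliding-window limiter sketch in plain Python; it is not part of Skyvia, and only the 60-per-minute figure comes from the limit stated above:

```python
import time
from collections import deque

class RateLimiter:
    """Minimal sliding-window throttle for a calls-per-period API limit."""
    def __init__(self, max_calls=60, period=60.0):
        self.max_calls, self.period = max_calls, period
        self.calls = deque()          # monotonic timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()      # drop calls that left the window
        if len(self.calls) >= self.max_calls:
            time.sleep(self.period - (now - self.calls[0]))
            self.calls.popleft()      # the oldest call is now outside the window
        self.calls.append(time.monotonic())

limiter = RateLimiter()               # 60 calls per 60 seconds
for _ in range(3):
    limiter.wait()                    # place the actual API call here
```

The daily 1000-call quota is better handled by a counter persisted across runs; this sketch covers only the per-minute window.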
Nested Objects DML Support Object Name Field Supported Operations Contacts CustomFields, ContactPersons Insert, Update Invoices, RecurringInvoices, SalesOrders, PurchaseOrders LineItems (required during Insert) Insert, Update Expenses LineItems Insert, Update Invoices, RecurringInvoices, SalesOrders, PurchaseOrders Tags inside LineItems Insert, Update Invoices, RecurringInvoices, SalesOrders, PurchaseOrders LineItemTaxes inside LineItems Read-Only Contacts, Invoices, Estimates, Expenses Tags, LineItemTaxes inside LineItems Read-Only Supported Actions Skyvia supports all the common actions for Zoho Books." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohodesk_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Desk [Zoho Desk](https://www.zoho.com/desk/) is a cloud helpdesk software for managing customer conversations across multiple channels, such as email, chat, phone, and social media, developed by ZOHO Corporation. Data integration : Skyvia supports importing data to and from Zoho Desk, exporting Zoho Desk data to CSV files, replicating Zoho Desk data to relational databases, and synchronizing Zoho Desk data with other cloud apps and relational databases. Backup : Skyvia Backup does not support Zoho Desk. Query : Skyvia Query supports Zoho Desk. Establishing Connection To create a connection to Zoho Desk, select your Data Center, sign in with Zoho, and specify your Organization ID. Getting Credentials Organization ID To get your Organization ID: Log in to your [Zoho Desk](https://www.zoho.com/desk/login.html) account. Click the ⚙ icon to open the Setup menu. Select APIs . Look for OrgId at the bottom of the API page. Creating Connection To [create a Zoho Desk connection](https://docs.skyvia.com/connections/) : Select a Zoho Desk data center location from the Data Center dropdown. Enter your Organization ID . 
Click Sign In with Zoho and enter your Zoho login credentials. Click Create Connection . Connector Specifics Object Peculiarities Threads The Content field is an extended request field. When you include it in your query, an additional request is sent for each row. Exclude this field from your query unless it is necessary. Tickets and TicketsExtended The Tickets object contains the standard tickets fields. \nTo perform the Insert operation on this object, map either ContactId or ContactEmail together with the required fields. The TicketsExtended object contains all the Tickets fields and the additional fields. Due to Zoho Desk API specifics, Skyvia performs additional API requests to read the additional fields for each record of this object, which may affect performance. Tags and TicketTags. Both objects represent a Tags entity. However, if you add a record with TicketId = null to the Tags object, it won’t be added to TicketTags . Filtering Specifics Zoho Desk supports the following native filters: Object Field Operator AccountAttachments, AccountFollowers AccountId = AccountComments Id, AccountId = Accounts, ArticleFeedbacks, BusinessHours, Contacts, HelpCenterGroups, HelpCenterLabels, Organizations, Teams, Tickets, TicketsExtended, TicketTimeEntries Id = AccountTimeEntries Id, AccountId, IsBillable, InvoiceId = Agents Id, RolePermissionType, Status, IsConfirmed = AgentTimeEntries Id, AgentId, IsBillable, InvoiceId = ArchivedTicketAttachments, ArchivedTicketFollowers ArchivedTicketId, DepartmentId = ArchivedTicketComments Id, ArchivedTicketId, DepartmentId = ArchivedTickets, ArchivedTicketsExtended, KBCategories, Tags, TicketTags DepartmentId = ArchivedTicketThreads Id, TicketId, DepartmentId = ArchivedTicketTimeEntries Id, ArchivedTicketId, IsBillable, InvoiceId, DepartmentId = ArticleComments Id, ArticleId = Articles Id, CategoryId, Status, Permission, AuthorId = CallAttachments CallId = CallComments Id, CallId = Calls Id, DepartmentId, StartTime = 
ContactAttachments, ContactFollowers ContactId = ContactComments Id, ContactId = ContactTimeEntries Id, ContactId, IsBillable, InvoiceId = ContractComments Id, ContractId = Contracts Id, AccountId, DepartmentId, OwnerId = CustomerHappiness DepartmentId, AgentId, ContactId, TicketId, AccountId, Rating = Departments Id, ChatStatus = EventAttachments EventId = EventComments Id, EventId = Events Id, DepartmentId, StartTime = HelpCenters, HelpCenterUsers Id, Status = JiraIssues ZohoTicketId = ProductAttachments ProductId = Products Id, DepartmentIds, OwnerId = Profiles Id, IsVisible, Default = Roles Id, IsDefault, IsVisible = TaskAttachments TaskId = TaskComments Id, TaskId = Tasks Id, DepartmentId, DueDate = TaskTimeEntries Id, TaskId, IsBillable, InvoiceId = ThreadAttachments ThreadId, TicketId = Threads, TicketComments Id, TicketId = TicketActivities, TicketApprovals Id, TicketId = TicketAttachments TicketId, IsPublic = TicketFollowers, TicketMetrics TicketId = Tickets, TicketsExtended DepartmentId, Channel, Status, Priority =, In TicketTimeEntries TicketId, IsBillable, InvoiceId = Incremental Replication Skyvia supports Incremental Replication for the following objects: AccountAttachments, AccountComments, Accounts, AccountTimeEntries, AgentTimeEntries, ArchivedTicketAttachments, ArchivedTicketComments, ArchivedTickets, ArchivedTicketsExtended, ArchivedTicketThreads, ArchivedTicketTimeEntries, ArticleComments, ArticleFeedbacks, Articles, BusinessHours, CallAttachments, CallComments, Calls, ContactAttachments, ContactComments, Contacts, ContactTimeEntries, ContractComments, Contracts, CustomerHappiness, Departments, EventAttachments, EventComments, Events, HelpCenterGroups, HelpCenterLabels, ProductAttachments, Products, TaskAttachments, TaskComments, Tasks, TaskTimeEntries, Threads, TicketActivities, TicketAttachments, TicketComments, Tickets, TicketsExtended, TicketTimeEntries . 
Skyvia tracks only new records for the following objects: AccountAttachments, AccountTimeEntries, AgentTimeEntries, ArchivedTicketAttachments, ArchivedTickets, ArchivedTicketThreads, ArchivedTicketTimeEntries, ArticleFeedbacks, CallAttachments, ContactAttachments, ContactTimeEntries, CustomerHappiness, Departments, EventAttachments, ProductAttachments, TaskAttachments, TaskTimeEntries, Threads, TicketAttachments, TicketTimeEntries . DML Operations Operations Objects INSERT, UPDATE, DELETE AccountComments, Accounts, Agents, ArticleComments, Articles, BusinessHours, CallComments, Calls, ContactComments, Contacts, ContractComments, Contracts, EventComments, Events, HelpCenterGroups, HelpCenterLabels, KBCategories, Products, TaskComments, Tasks, TaskTimeEntries, TicketAttachments, TicketComments, Tickets, TicketsExtended, TicketTimeEntries INSERT, UPDATE Departments, Roles, Teams INSERT, DELETE AccountAttachments, CallAttachments, ContactAttachments, EventAttachments, ProductAttachments, TaskAttachments UPDATE, DELETE ArchivedTickets INSERT TicketApprovals UPDATE HelpCenterUsers, Organizations, Profiles DELETE AccountFollowers, ArticleFeedbacks, ContactFollowers, TicketFollowers Nested Objects Zoho Desk fields store complex structured data in JSON format. You can use our Nested Objects mapping feature in the Import integrations to insert or update the nested values in such fields. Select the Separate Tables for the Unwind Nested Objects option when using the new replication runtime to replicate the nested data into separate tables. 
The complex structured objects are the following: Object Field - Nested Object AccountComments Attachments - AttachmentsType Mention - MentionsType Articles AvailableLocaleTranslations - AvailableLocaleTranslationsType Attachments - AttachmentsType ArticleComments Attachments - AttachmentsType Mention - MentionsType BusinessHours BusinessTimes - BusinessTimesType AssociatedHolidayList - AssociatedHolidayListType Calls Reminder - RemindersType CallComments Attachments - AttachmentsType Mention - MentionsType ContactComments Attachments - AttachmentsType Mention - MentionsType ContractComments Attachments - AttachmentsType Mention - MentionsType ArchivedTickets SharedDepartments - SharedDepartmentsType ArchivedTicketComments Attachments - AttachmentsType Mention - MentionsType ArchivedTicketThreads Actions - ActionsType ArchivedTicketsExtended SharedDepartments - SharedDepartmentsType Events Reminder - RemindersType EventComments Attachments - AttachmentsType Mention - MentionsType HelpCenters HelpCenterLocales - HelpCenterLocalesType Domains - DomainsType Tasks Reminder - RemindersType TaskComments Attachments - AttachmentsType Mention - MentionsType Tickets SharedDepartments - SharedDepartmentsType Threads Actions - ActionsType TicketComments Attachments - AttachmentsType Mention - MentionsType TicketMetrics StagingData - StagingDataType AgentsHandled - AgentsHandledType TicketsExtended SharedDepartments - SharedDepartmentsType Stored Procedures Skyvia represents part of the supported Zoho Desk features as stored procedures.\nYou can call a stored procedure , for example, as a text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query . GetDeleted To get IDs of deleted objects, use the command: call GetDeleted(:name, :startDate, :endDate, :excludeUserId) PARAMETER NAME REQUIREMENT DESCRIPTION Name Required Name of the object. 
This parameter accepts the following values: Accounts, Contacts, Contracts, Tickets, TicketsExtended, Tasks, Calls, Events, Products, Topics . StartDate Optional Filters records starting from the specified date. EndDate Optional Filters records up to the specified date. ExcludeUserId Optional Excludes records with the specified user ID. SendEmailReply To send a reply for the selected ticket, use the command: call SendEmailReply(:ticketId, :fromEmailAddress, :to, :content, :contentType, :isForward, :cc, :bcc) PARAMETER NAME REQUIREMENT DESCRIPTION TicketId Required ID of the ticket. FromEmailAddress Required Email address of the sender. To Required Email address of the recipient. Content Required Body of the email message. ContentType Required Defines the format of the email content. Accepts the following values: html and plaintext . IsForward Optional Defines if the email is a forwarded message. CC Optional Defines the email address to be CC’d on the reply. BCC Optional Defines the email address to be BCC’d on the reply. Supported Actions Skyvia supports the SendEmailReply action in addition to all the common actions for Zoho Desk." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohodesk_connections/send-email-reply-action.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Desk SendEmailReply This action sends a reply for the selected Zoho Desk ticket. Action Settings Setting Requirement Description TicketId Required ID of the ticket. FromEmailAddress Required Email address of the sender. To Required Email address of the recipient. Content Required Body of the email message. ContentType Required Defines the format of the email content. Accepts the following values: html and plaintext . IsForward Optional Defines if the email is a forwarded message. CC Optional Defines the email address to be CC’d on the reply. BCC Optional Defines the email address to be BCC’d on the reply. 
Action Parameters SendEmailReply action parameters correspond to the fields of the target object. You must map at least the required parameters. Result The reply is sent to the specified recipient." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohoinventory_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Inventory [Zoho Inventory](https://www.zoho.com/inventory/) is a cloud inventory management software for growing businesses with order management, end-to-end tracking, warehouse management, etc., developed by ZOHO Corporation. Data Integration : Skyvia supports importing data to and from Zoho Inventory, exporting Zoho Inventory data to CSV files, replicating Zoho Inventory data to relational databases, and synchronizing Zoho Inventory data with other cloud apps and relational databases. Backup : Skyvia Backup supports Zoho Inventory. Query : Skyvia Query supports Zoho Inventory. Establishing Connection To create a connection to Zoho Inventory, select your Data Center, sign in with Zoho, and specify your Organization ID. Getting Credentials Organization ID To get your Organization ID: Log in to your Zoho Inventory account. Click your Organization Name at the top right. Copy your Organization ID . Creating Connection To create a connection to Zoho Inventory, perform the following steps: Select a Zoho Inventory data center location from the Data Center dropdown. Enter your Organization ID . Click Sign In with Zoho and enter your Zoho login credentials. Click Create Connection . Additional Connection Parameters Suppress Extended Requests For some objects, Zoho Inventory API returns only part of the fields when querying multiple records. To query values of missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. 
However, this can decrease performance and significantly increase the number of API calls used. The additional fields are the following: OBJECT FIELD Bills LineItems Bundles LineItems CompositeItems MappedItems Contacts BillAddr_, ShipAddr_, Twitter, Facebook, Notes CreditNotes LineItems Invoices LineItems Items CustomFields, CustomFieldHash, Batches ItemAdjustments LineItems Packages LineItems Pricebooks PricebookItems PurchaseOrders LineItems PurchaseReceives BillAddr_, ShipAddr_, ContactCategory, SubmittedDate, SubmittedBy, SubmittedByName, SubmittedByEmail, SubmittedByPhotoUrl, SubmitterId, ApproverId, ApprovalReason, ApproversList, CurrencyId, CurrencyCode, CurrencySymbol, IsInclusiveTax, ContactPersons, Documents, IsBilled, IsPOMarkedAsReceived, PaymentTerms, PaymentTermsLabel, Bills, Notes, LineItems, IsTrackingSupported, TrackingNumber, TrackingLink, ExpectedDeliveryDate, ShipmentDeliveredDate, Carrier, DeliveryMethod, DeliveryMethodId, TrackingStatuses, CustomFields, CustomFieldHash, CreatedById, LastModifiedById SalesOrders LineItems SalesReturns LineItems ShipmentOrders BillAddr_, ShipAddr_, SalesOrderDate, SalesorderFulfilmentStatus, ShipmentStatus, DetailedStatus, StatusMessage, TrackingCarrierCode, Service, DeliveryDays, SourceId, SourceName, DeliveryGuarantee, ReferenceNumber, CustomerId, ContactPersons, CurrencyId, CurrencyCode, CurrencySymbol, ExchangeRate, Discount, IsDiscountBeforeTax, DiscountType, EstimateId, DeliveryMethod, DeliveryMethodId, TrackingLink, LastTrackingUpdateDate, ExpectedDeliveryDate, ShipmentDeliveredDate, MultipieceShipments, ShipmentType, IsTrackingEnabled, IsFormsAvailable, IsEmailNotificationEnabled, TrackingStatuses, Invoices, LineItems, SubTotal, TaxTotal, Total, Taxes, PricePrecision, IsEmailed, Notes, CustomFields, CustomFieldHash, TemplateId, TemplateName, TemplateType, Packages, Documents, AssociatedPackagesCount, CreatedById, LastModifiedById TransferOrders LineItems, CustomFields, CustomFieldHash To reduce the 
number of API calls, you can select the Suppress Extended Requests checkbox. Connector Specifics Object Peculiarities ShipmentOrders This object requires mapping the SalesOrderId and the PackageIds fields for the Update operation.\nYou can list several values, separated by commas, when mapping the PackageIds field. Filtering Specifics The UpdatedDate field in the Items, SalesOrders, Contacts, Invoices, PurchaseOrders , and Bills objects supports filters with >= and > operators. Use these filters to improve performance and save API calls. You can use filters with other fields or operators, but it may increase API call usage. Complex Structured Data Invoices, Bills, ItemAdjustments, PurchaseOrders, SalesOrders , and TransferOrders objects store complex-structured data. For example, an invoice or a purchase order can have several lines. Skyvia represents this information as a LineItems JSON field. 
Here is an example of the LineItems field value from the Invoice object: [\n {\n \"documents\": [],\n \"item_id\": \"369175000000122053\",\n \"is_combo_product\": false,\n \"account_id\": \"369175000000000388\",\n \"account_name\": \"Sales\",\n \"sku\": \"\",\n \"salesorder_item_id\": \"\",\n \"project_id\": \"\",\n \"time_entry_ids\": [],\n \"expense_id\": \"\",\n \"expense_receipt_name\": \"\",\n \"name\": \"ServiceItem\",\n \"description\": \"sample description\",\n \"item_order\": 0,\n \"bcy_rate\": 148.92,\n \"rate\": 148.92,\n \"quantity\": 1,\n \"unit\": \"kg\",\n \"discount\": 0,\n \"tax_id\": \"369175000000044076\",\n \"tax_name\": \"Tax_our\",\n \"tax_type\": \"tax\",\n \"tax_percentage\": 15,\n \"item_total\": 148.92,\n \"tags\": [\n {\n \"tag_option_id\": \"369175000000085213\",\n \"is_tag_mandatory\": false,\n \"tag_name\": \"testTag\",\n \"tag_id\": \"369175000000000333\",\n \"tag_option_name\": \"option1\"\n }\n ],\n \"item_custom_fields\": [],\n \"item_type\": \"sales_and_purchases\"\n }\n] For user convenience, data stored in the lines is also available as separate records in objects with the *LineItems suffix in their names ( InvoiceLineItems , BillLineItems , etc.). Since the objects with the *LineItems suffix in their names are read-only, they cannot be used in Synchronization and as a Target in Import. To make changes to the line in a Zoho Inventory object, you need to have your updated line data in a JSON format and map it to the LineItems field. The objects with the *LineItems suffix in their names can be found in Backup. You cannot restore data to these objects, as they are read-only. Instead, make changes to the corresponding main (suffixless) objects and those changes will be applied to the objects with the *LineItems suffix automatically. 
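Since the line-item objects are read-only, a line change goes through the LineItems JSON field of the main object. The sketch below (plain Python, not Skyvia code; the load-edit-reserialize workflow and the shortened JSON are illustrative, with values taken from the example above) shows how such an updated value might be prepared before mapping it back to the LineItems field:

```python
import json

# Illustrative sketch: edit one invoice line by rewriting the LineItems JSON.
# The shortened line below keeps only a few fields from the example above.
line_items_json = '[{"item_id": "369175000000122053", "rate": 148.92, "quantity": 1}]'

lines = json.loads(line_items_json)
lines[0]["quantity"] = 3                                  # change the line
lines[0]["item_total"] = round(lines[0]["rate"] * 3, 2)   # keep the total consistent

updated = json.dumps(lines)   # map this string back to the LineItems field
print(updated)
```

The same pattern applies to the other complex-structured objects (Bills, PurchaseOrders, SalesOrders, etc.): read the JSON, modify the array, and write the serialized result to LineItems in an update mapping.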
DML Operations Support Operations Objects INSERT, UPDATE, DELETE Bills, ContactPersons, Contacts, Currency, CustomerPayments, InvoiceCreditsApplied, Invoices, ItemGroups, Items, Package, PurchaseOrders, SalesOrders, TaxAuthorities, Taxes, Users, PurchaseReceives, ShipmentOrders INSERT, UPDATE Organizations INSERT, DELETE TransferOrders, ItemAdjustments, InvoiceCreditsApplied Zoho Inventory API Calls Limit By default, Zoho Inventory has a 1000 API calls per day and 60 API calls per minute limit. Performing backup, replication, or querying all the data from complex objects may quickly use all the API calls available. For example, querying LineItems field values uses additional requests. Supported Actions Skyvia supports all the common actions for Zoho Inventory." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zohoinvoice_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoho Invoice [Zoho Invoice](https://www.zoho.com/invoice/) is a cloud invoicing and billing software for freelancers and small business owners developed by ZOHO Corporation. Data Integration : Skyvia supports importing data to and from Zoho Invoice, exporting Zoho Invoice data to CSV files, replicating Zoho Invoice data to relational databases and synchronizing Zoho Invoice data with other cloud apps and relational databases. Backup : Skyvia Backup supports Zoho Invoice. Query : Skyvia Query supports Zoho Invoice. Establishing Connection To create a connection to Zoho Invoice, select your Data Center, sign in with Zoho and specify your Organization ID. Getting Credentials Organization ID To get your Organization ID: Log in to your Zoho Invoice account. Click your Organization Name at the top right. Copy your Organization ID . Creating Connection To create a Zoho Invoice connection : Select a Zoho Invoice data center location from the Data Center list. Enter your Organization ID . 
Click Sign In with Zoho and enter your Zoho login credentials. Click Create Connection . Additional Connection Parameters Suppress Extended Requests For the Contacts, Estimates, Invoices, Items, RecurringInvoices, RetainerInvoices, CreditNotes, Expenses , and RecurringExpenses objects, Zoho Invoice API returns only part of the fields when querying multiple records. To query values of missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. However, please note that some of the fields in such objects will not be available in Skyvia (will return empty values) even if they have values in Zoho Invoice because Zoho Invoice API does not return them without extended requests. Connector Specifics Objects With Complex-Structured Data Invoices, Estimates, Expenses, CreditNotes, RecurringInvoices, RecurringExpenses, and RetainerInvoices objects store complex-structured data. Skyvia represents this information as a LineItems JSON field. 
Here is an example of the LineItems field value from the Invoice object:

```json
[
  {
    "item_id": "1305024000000065053",
    "salesorder_item_id": "",
    "project_id": "",
    "time_entry_ids": [],
    "expense_id": "",
    "item_type": "sales",
    "expense_receipt_name": "",
    "name": "DGERH35435",
    "description": "sample description",
    "item_order": 0,
    "bcy_rate": 467,
    "rate": 467,
    "quantity": 7,
    "unit": "qty",
    "discount": 0,
    "tax_id": "1305024000000065051",
    "tax_name": "testtax",
    "tax_type": "tax",
    "tax_percentage": 2,
    "item_total": 3269,
    "documents": [],
    "item_custom_fields": []
  },
  {
    "item_id": "",
    "salesorder_item_id": "",
    "project_id": "1305024000000065096",
    "time_entry_ids": [],
    "expense_id": "",
    "item_type": "",
    "expense_receipt_name": "",
    "name": "TestProject",
    "description": "sample description 2",
    "item_order": 1,
    "bcy_rate": 123,
    "rate": 123,
    "quantity": 1,
    "unit": "",
    "discount": 0,
    "tax_id": "",
    "tax_name": "",
    "tax_type": "tax",
    "tax_percentage": 0,
    "item_total": 123,
    "documents": [],
    "item_custom_fields": []
  }
]
```

For user convenience, data stored in the lines is also available as separate records in objects with the LineItems suffix in their names (InvoiceLineItems, BillLineItems, etc.). Since the objects with the LineItems suffix are read-only, they cannot be used in synchronization or as a target in import. To make changes to a line in a Zoho Invoice object, you need to have your updated line data in JSON format and map it to the LineItems field. The objects with the LineItems suffix in their names can be found in Backup.
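To update a line, you map a JSON string to the LineItems field, as described above. A minimal sketch of preparing such a value (the field names come from the example above; which fields you include depends on your integration):

```python
import json

# Build an updated line for the LineItems field. Illustrative only:
# include the fields your integration actually needs to change.
updated_lines = [
    {
        "item_id": "1305024000000065053",
        "name": "DGERH35435",
        "description": "updated description",
        "rate": 467,
        "quantity": 9,  # changed quantity
        "unit": "qty",
        "discount": 0,
    }
]

# The LineItems field expects a JSON value, so serialize the list
# before mapping it to the LineItems field in an Import task.
line_items_value = json.dumps(updated_lines)
print(line_items_value)
```

The serialized string is what you would map to the LineItems field of the main (suffixless) object.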
You cannot restore data to these objects, as they are read-only. Instead, make changes to the corresponding main (suffixless) objects, and those changes will be applied to the objects with the LineItems suffix automatically.

DML Operations Support

The CreditedInvoices, CreditNoteComments, Employees, and ProjectComments objects do not support the UPDATE operation.

Zoho Invoice API Calls Limit

By default, Zoho Invoice has a limit of 1000 API calls per day and 60 API calls per minute. Performing backup, replication, or querying all the data from complex objects may quickly use up all the available API calls. For example, querying LineItems field values uses additional requests.

Supported Actions

Skyvia supports all the common actions for Zoho Invoice.

Zoho People

[Zoho People](https://www.zoho.com/people/) is a cloud HR process automation software with easy time and attendance tracking, developed by ZOHO Corporation.

Data Integration: Skyvia supports importing data to and from Zoho People, exporting Zoho People data to CSV files, replicating Zoho People data to relational databases, and synchronizing Zoho People data with other cloud apps and relational databases.
Backup: Skyvia Backup supports Zoho People.
Query: Skyvia Query supports Zoho People.

Establishing Connection

Creating Connection

To create a Zoho People connection:

- Select a Zoho People data center location from the Data Center dropdown.
- Click Sign In with Zoho and enter your Zoho login credentials.
- Click Create Connection.

Additional Connection Parameters

Date Format

Date Format is used to properly display values in the Date and DateTime type fields. The Date Format parameter in your connection should match the Date Format parameter in your Zoho People account. The default value is dd-mm-yyyy.
To check the Date Format parameter in Zoho People, sign in to your Zoho People account and go to Settings > Company Details > Display Settings.

Connector Specifics

When loading data to the Jobs object, you must provide values for the Assignees or AssignedTo fields. Note that the Assignees field stores values in JSON format.

The Expense field in the TravelExpenses object and the Medical field in the Benefits object are read-only in Skyvia.

Skyvia does not support restoring data from backup for the Jobs object and objects referencing Employee, such as ADA, Assets, Benefits, Disciplinaries, EEO, FMLA, Irca, Leaves, OSHA, Separations, TrainingFeedbacks, TrainingRegistrations, TravelExpenses, and VETS100.

By default, Zoho People has a limit of 1000 API calls per day and 60 calls per minute. Performing backup, replication, or querying all data from objects may quickly use up all the available API calls.

Supported Actions

Skyvia supports all the common actions for Zoho People.

Zoho Sprints

[Zoho Sprints](https://sprints.zoho.com/) is a cloud-based solution for agile teams designed for collaborative project planning. Zoho Sprints provides tracking solutions with drag-and-drop planning tools, Scrum boards, timers and timesheets, meeting scheduling, dashboards, reports, and a team activity feed.

Data Integration: Skyvia supports importing data to and from Zoho Sprints, exporting Zoho Sprints data to CSV files, replicating Zoho Sprints data to relational databases, and synchronizing Zoho Sprints data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zoho Sprints.
Query: Skyvia Query supports Zoho Sprints.

Establishing Connection

To create a connection to Zoho Sprints, sign in with Zoho and select your data center and team.
Creating Connection

To connect to Zoho Sprints, perform the following steps:

- Select the Data Center.
- Click Sign in with Zoho.
- Enter your Zoho credentials.
- Click Allow to permit Skyvia to access your Zoho data.
- Select a team from the drop-down list.

Additional Connection Parameters

Metadata Cache

You can specify the time after which the Metadata Cache expires.

Custom Layouts

This is a group of parameters that includes Projects Custom Layouts, Sprints Custom Layouts, Items Custom Layouts, Epics Custom Layouts, Releases Custom Layouts, and Meetings Custom Layouts. Each parameter corresponds to the custom field layout of the specific Zoho Sprints object.
These parameters are optional. Use them to work with the custom fields of the corresponding object.
If you don't specify any custom layout, no custom fields will be available in Skyvia.

Connector Specifics

Object Peculiarities

Read-only Objects

The following Zoho Sprints objects are read-only: ItemActivities, ItemTimers, ProjectGroups, ProjectItemTypes, ProjectPriorities, Teams, Users.

Webhooks

The CustomHeaders, CustomParams, and KeyValParams fields have a complex structure and store values in JSON array format.

Performance Specifics

Most of Zoho Sprints' objects are related to each other. To query most of them, Skyvia must first query their parent objects. Some parent objects may also have other objects as parents. Thus, it takes more time and API calls to query such objects. See the list of related objects below.

Parent Objects | Child Objects
--- | ---
ChecklistGroups | ChecklistItems
Items | ChecklistGroups, ItemActivities, ItemAttachments, ItemComments, ItemFollowers, ItemReminders, ItemTags, ItemTimers
Projects | Epics, ProjectItemTypes, ProjectPriorities, ProjectStatuses, Releases, ReleaseStages, Sprints
Sprints | Items, Meetings, SprintComments, SprintUsers

Use filters by foreign key fields in your integrations and queries to increase performance.
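Fields such as CustomHeaders in the Webhooks object store JSON arrays, as noted above. As a rough sketch of preparing such a value (the "name"/"value" key names here are hypothetical, not the documented Zoho Sprints schema):

```python
import json

# Hypothetical header entries for a Webhooks record; the "name"/"value"
# key names are illustrative only, not taken from the Zoho Sprints API.
custom_headers = [
    {"name": "X-Request-Source", "value": "skyvia"},
    {"name": "X-Environment", "value": "staging"},
]

# Serialize to a JSON array string before mapping it to the field.
print(json.dumps(custom_headers))
```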
Custom Fields

Skyvia supports custom fields for the Projects, Sprints, Items, Epics, Releases, and Meetings objects. Each object may have different sets of custom fields. Every set of custom fields inside a single object belongs to a separate layout. For example, project1 custom fields belong to the Project_1_layout, and project2 custom fields belong to the Project_2_layout. To work with custom fields, specify the required corresponding layout in your Zoho Sprints connection.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following objects: ChecklistItems, Epics, ItemAttachments, ItemComments, ItemReminders, Items, LinkTypes, Meetings, ProjectItemTypes, Projects, ProjectStatuses, Releases, SprintComments, Sprints, Users.
Skyvia detects only new records for the Epics, ItemAttachments, ItemReminders, Items, Meetings, Projects, ProjectStatuses, Releases, Sprints, and Users objects.

Skyvia supports Synchronization for the following objects: ChecklistItems, Epics, ItemComments, ItemReminders, Items, LinkTypes, LogHours, Meetings, ProjectItemTypes, Projects, ProjectStatuses, Releases, SprintComments, Sprints, Users.

DML Operations Support

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | ChecklistGroups, ChecklistItems, Epics, ItemComments, ItemReminders, Items, LinkTypes, LogHours, Meetings, Projects, ProjectStatuses, ProjectUsers, Releases, ReleaseStages, SprintComments, Sprints, SprintUsers, Tags, Webhooks
INSERT, DELETE | ItemFollowers
DELETE | ItemAttachments
INSERT | ItemTags

Supported Actions

Skyvia supports all the common actions for Zoho Sprints.

Zoho Billing

[Zoho Billing](https://billing.zoho.com/) is a comprehensive tool from Zoho that combines invoicing, expense management, project billing, and recurring billing.
Data Integration: Skyvia supports importing data to and from Zoho Billing, exporting Zoho Billing data to CSV files, replicating Zoho Billing data to relational databases, and synchronizing Zoho Billing data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zoho Billing.
Query: Skyvia Query supports Zoho Billing.

Establishing Connection

To create a connection to Zoho Billing, sign in with Zoho, select your Data Center, and specify your Organization ID.

Getting Credentials

To obtain the Organization ID, perform the following actions:

- Go to the [Zoho Billing](https://billing.zoho.com/) UI.
- Click the user name in the top right corner of the page.
- Copy the Organization ID value.

Creating Connection

To connect to Zoho Billing, perform the following steps:

- Click Sign In with Zoho Billing in the Connection Editor.
- Enter your Zoho credentials.
- Grant Skyvia the requested permission.
- Select your Zoho Data Center if it differs from the default one. By default, it is set to US.
- Specify the Organization Id.

Additional Connection Parameters

Suppress Extended Requests

For some objects, the Zoho Billing API returns only part of the fields when querying multiple records. To query the values of the missing fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used.
The additional fields are the following:

Object | Fields
--- | ---
ContactPersons | Fax, Department, Designation, Skype, IsAddedInPortal, CanInvite, CommunicationPreference_IsEmailEnabled, CommunicationPreference_IsSmsEnabled, CommunicationPreference_IsWhatsappEnabled, CreatedDate, UpdatedDate
Coupons | CouponsetId, CouponName, IsActive, DiscountType, MaxRedemptionCount, MaxRedemptionCountPerUser, ExpiryTime, ApplyToPlans, ApplyToAddons, CreatedTimeFormatted, CouponCodes, MinimumOrderValue
CreditNotes | CreditnoteItems, LineItems
Customers | Salutation, BillingAddress*, ShippingAddress*, Twitter, Facebook, Skype, CustomFields, PrimaryContactpersonId, CanAddCard, CanAddBankAccount, Notes*
Events | Payload, Webhooks
Products | ProductDigest, ItemsAssociated, AutonumberEnabled, PrefixString, NextNumber, UpdatedDate
UnbilledCharges | UnbilledChargeItems, Coupons

You can select the Suppress Extended Requests checkbox to reduce the number of API calls. Skyvia sends additional web requests when querying custom fields, so custom fields are unavailable when the Suppress Extended Requests checkbox is selected. Unselect the Suppress Extended Requests checkbox to query custom fields.

Connector Specifics

Object Peculiarities

Objects with Complex Structure

The following Zoho Billing objects contain text fields in JSON array format:

Object | Fields
--- | ---
Plans | Tags, Addons
Customers | Tags, Documents
Addons | PriceBrackets, Tags, Plans, ItemTaxPreferences
Coupons | Plans, Addons
Invoices | InvoiceItems, Coupons, Credits, Payments, Comments, Documents
Subscriptions | Addons, Taxes, ContactPersons, Notes, PaymentGateways
Payments | Invoices
UnbilledCharges | UnbilledChargeItems, Coupons
CreditNotes | CreditnoteItems, Invoices, Taxes
Events | Webhooks

You can use our Nested Objects mapping feature in Import integrations to insert or update the nested values in such fields. Select the Nested Objects checkbox in Import to enable this feature.
Subscriptions

If you insert subscription records for an existing customer, map the CustomerId field. If the customer doesn't exist, map the fields with the Customer* prefixes in their names. The UPDATE operation requires mapping the Card_CardId field.

Addons

The PriceBrackets object stores an array of nested fields that are required for mapping. The requirements change based on the PricingScheme field value:

PricingScheme | PriceBrackets fields required for mapping | PriceBrackets fields you should not map
--- | --- | ---
Volume | Price, StartQuantity, EndQuantity |
Tier | Price, StartQuantity, EndQuantity |
Package | Price, EndQuantity | StartQuantity
Unit | Price | StartQuantity, EndQuantity

The StartQuantity and EndQuantity fields will have null values if you map them when the PricingScheme equals Package or Unit. In this case, marking them for mapping may cause an "Invalid value passed for start_quantity" error. To avoid it, create a separate Import task for each PricingScheme.

Custom Fields

Skyvia supports custom fields for the following objects: Plans, Customers, Addons, Invoices, Subscriptions, Payments, CreditNotes. Custom fields may have the following types:

Zoho Billing Data Type | DbType
--- | ---
String, Email, Url, Phone, Autonumber, Dropdown, Multiselect, Multiline | String
Date | Date
Date_time | DateTime
Check_box | Boolean
Number, Lookup | Int64
Decimal | Decimal
Percent | Double
Amount | In the Zoho Billing UI, such a field value contains both the currency and the amount itself. Skyvia represents such custom fields as two separate fields of the Double and String types. For example, if the Amount custom field value looks like UAH595.59, the connector represents it by a String field with the value UAH595.59 and a Double field with the value 595.59.
Attachment | Int64. A custom field of the Attachment type contains the Id of the corresponding record in the Documents object. You should add a record to the Documents object first. Then you can use this Documents record Id as a value for custom fields of the Attachment type.
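The split representation of Amount custom fields described above can be sketched as follows (a minimal illustration of separating the currency prefix from the numeric part; this is not Skyvia's actual parsing code):

```python
import re

def split_amount(value: str) -> tuple[str, float]:
    """Split an Amount value such as 'UAH595.59' into its currency prefix
    and numeric part, mirroring the String/Double field pair described above."""
    match = re.match(r"([A-Za-z]+)([\d.]+)$", value)
    if not match:
        raise ValueError(f"Unexpected Amount format: {value!r}")
    currency, amount = match.groups()
    return currency, float(amount)

print(split_amount("UAH595.59"))  # → ('UAH', 595.59)
```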
Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the following objects: Addons, Cards, ContactPersons, Coupons, CreditNotes, Customers, Documents, Invoices, Plans, Products, Subscriptions, Taxes, UnbilledCharges.

Skyvia supports Synchronization for the ContactPersons, Customers, Invoices, Products, and Subscriptions objects.

DML Operations Support

Operation | Object
--- | ---
INSERT, UPDATE, DELETE | Addons, ContactPersons, Coupons, Customers, Invoices, Payments, Plans, Products, Subscriptions
INSERT, DELETE | CreditNotes, Documents
DELETE | Cards, UnbilledCharges

Stored Procedures

Skyvia represents part of the supported Zoho Billing features as stored procedures.
You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

ConvertInvoiceToOpen

The following command changes the status of an invoice to open:

call ConvertInvoiceToOpen(:invoice_id)

VoidAnInvoice

The following command marks an invoice status as void:

call VoidAnInvoice(:invoice_id)

EmailAnInvoice

The following command sends an invoice in an email:

call EmailAnInvoice(:invoice_id, :to_mail_ids, :cc_mail_ids, :subject, :body)

- Invoice_id: The invoice identifier
- To_mail_ids: The recipients' email addresses in array format. For example, ["jone.smith@somedomain.com", "jane.smith@somedomain.com"]
- Cc_mail_ids: The email addresses to send invoice copies to, in array format. For example, ["jone.smith@somedomain.com", "jane.smith@somedomain.com"]
- Subject: The email subject
- Body: The message body

CollectChargeViaBankAccount

The following command charges a customer for an invoice using an existing bank account:

call CollectChargeViaBankAccount(:invoice_id, :account_id)

WriteOffInvoice

The following command writes off an invoice:

call WriteOffInvoice(:invoice_id)

CancelWriteOff

The following command cancels the write-off amount of an invoice:
call CancelWriteOff(:invoice_id)

CancelSubscription

Use the following command to cancel a subscription:

call CancelSubscription(:subscription_id, :cancel_at_end)

You can cancel a subscription immediately or at the end of the current term, based on the value of the cancel_at_end parameter. If cancel_at_end is set to true, the subscription status is changed to non_renewing; if it is false, the status will be canceled.

ReactivateSubscription

Use the following command to reactivate a subscription that currently has the non_renewing status:

call ReactivateSubscription(:subscription_id)

ApplyCouponToSubscription

The following command applies a coupon to an existing subscription:

call ApplyCouponToSubscription(:subscription_id, :coupon_id)

RemoveCouponFromSubscription

To remove the coupon associated with a subscription, use the following command:

call RemoveCouponFromSubscription(:subscription_id)

UpdateCustomerCardDetails

Use the following command to update the card details of a customer. Once updated, the previous card will no longer be charged:

call UpdateCustomerCardDetails(:subscription_id, :card_id, :auto_collect)

RemoveCardFromSubscription

To delete a card associated with a subscription, use the following command. The subscription will become an offline subscription:

call RemoveCardFromSubscription(:subscription_id)

ConvertUnbilledChargeToInvoice

The following command converts unbilled charges to an invoice manually instead of waiting for the next renewal:

call ConvertUnbilledChargeToInvoice(:unbilled_charge_id)

EmailCreditNote

The following command sends a credit note in an email:

call EmailCreditNote(:creditnote_id, :to_mail_ids, :cc_mail_ids, :subject, :body)

- Creditnote_id: The credit note identifier
- To_mail_ids: The recipients' email addresses in array format. For example, ["jone.smith@somedomain.com", "jane.smith@somedomain.com"]
- Cc_mail_ids: The email addresses to send credit note copies to, in array format.
  For example, ["jone.smith@somedomain.com", "jane.smith@somedomain.com"]
- Subject: The email subject
- Body: The message body

VoidCreditNote

The following command marks a credit note status as void:

call VoidCreditNote(:creditnote_id)

OpenVoidedCreditNote

To convert a voided credit note to open, use the following command:

call OpenVoidedCreditNote(:creditnote_id)

Supported Actions

Skyvia supports all the common actions for Zoho Billing.

Zoho CRM

[Zoho CRM](https://www.zoho.com/crm/) is a customer relationship management software that helps businesses manage their customer interactions and improve sales and marketing efforts. It provides tools for sales automation, lead management, pipeline management, marketing automation, and analytics.

Data Integration: Skyvia supports importing data to and from Zoho CRM, exporting Zoho CRM data to CSV files, replicating Zoho CRM data to relational databases, and synchronizing Zoho CRM data with other cloud apps and relational databases.
Backup: Skyvia Backup supports Zoho CRM.
Query: Skyvia Query supports Zoho CRM.

Establishing Connection

Creating Connection

To create a Zoho CRM connection:

- Select the API Version to use. It is recommended to use the most recent version.
- Select your Zoho CRM domain from the Domain dropdown.
- Select the preferred Environment.
- Click Sign In with Zoho and enter your Zoho login credentials.
- If asked, select the CRM organization to connect with Skyvia.
- Click Accept to allow Skyvia to access your data.
- Click Create Connection.

Additional Connection Parameters

Query API

The Query API allows querying Zoho CRM data using COQL (CRM Object Query Language). The Query API increases the execution speed of queries that contain filters and boosts the performance of operations that use filters.
Before using the Query API, consider the following:

- COQL has limitations: not all fields are natively supported in SELECT, and not all fields are natively supported in WHERE.
- COQL is supported for the Leads, Accounts, Contacts, Deals, Campaigns, Tasks, Cases, Events, Calls, Solutions, Products, Vendors, PriceBooks, Quotes, SalesOrders, PurchaseOrders, and Invoices objects, and for custom objects.
- If a query can be executed via COQL, but at least one field in SELECT is not natively supported by COQL, a query to obtain only a list of Ids is executed; after that, all other fields are requested for these Ids using the classic method. In this case, an additional request is generated for each row.
- If a query cannot be executed via COQL, it is executed using the classic method.
- Enable Non-Approved Records is not supported in COQL.

Suppress Extended Requests

Suppress Extended Requests disables the use of additional requests for getting ExtendedRequest fields. This includes the Attachments and Photo fields in the Leads, Contacts, Accounts, Products, and Vendors objects, and in custom objects. Suppress Extended Requests is enabled by default and causes these fields to be returned empty.

Use Display Name for Tables

Use Display Name for Tables defines what names are used to access Zoho CRM modules for API v4 and custom Zoho CRM modules for API v2. You can switch between display names and module names.

Enable Non-Approved Records

The Enable Non-Approved Records checkbox defines whether Skyvia is able to obtain records that are not approved. This option is not supported by the Query API.

Metadata Cache

Metadata Cache defines the period of time after which the Metadata Cache is considered expired.

Connector Specifics

Performance Optimization

Here are a few tips for API v4 connections:

- If Suppress Extended Requests is disabled and the query includes the Photo field, an additional request is made for each row to read this field.
  Enabling this option or excluding the field from the query can improve query performance.
- If Suppress Extended Requests is disabled and the query includes the Content field of the Attachments object, an additional request is made for each row to read this field. Enabling this option or excluding the field from the query can improve query performance.
- If Suppress Extended Requests is disabled and at least one ExtendedRequest field, such as Subforms or Related Lists, is being queried, an additional request is generated for each row. If Suppress Extended Requests is enabled, Skyvia ignores these fields, which significantly speeds up the query.

Importing Binary Data (Attachments)

Skyvia supports [Importing Binary Data](https://docs.skyvia.com/data-integration/import/how-to-guides/importing-binary-data.html) for Zoho CRM. Importing binary data is supported for the Photo field of the Contacts and Leads objects and the Content field of the attachment objects: LeadAttachments, AccountAttachments, CampaignAttachments, CaseAttachments, ContactAttachments, DealAttachments, MeetingAttachments, InvoiceAttachments, PriceBookAttachments, ProductAttachments, PurchaseOrderAttachments, QuoteAttachments, SalesOrderAttachments, SolutionAttachments, TaskAttachments, VendorAttachments.

Note that the Attachments object does not support importing data into it and allows only the DELETE operation. Binary data import is supported only via API v4.

Many-to-many Relations

Many-to-many relations are not supported.

Supported Actions

The Zoho CRM connector supports the following actions:

- Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode.
- Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode.
- Lookup in Lookup Data Flow components.
- Insert in Target Data Flow components.
- Update in Target Data Flow components.
- Delete in Target Data Flow components.
- Upsert in Target Data Flow components.

Zoho CRM Delete Action

This action deletes records in the data source. Skyvia can use either External ID fields or the ID field to find the record to delete. An External ID in Zoho CRM is a custom field that has the "External ID" attribute and uniquely identifies records. There can be multiple External ID fields in a Zoho CRM object.

Action Settings

- Table: The target table to delete records from.
- External ID: The object External ID field to use for matching records. This setting is optional; leave it empty to use the ID field for matching.

Action Parameters

Delete action parameters correspond to the target table External ID or ID/primary key fields.

Result

The action deletes a record in the specified table in the connector.

Example

Here is an example of the Delete action in the Target component of a Data Flow.

Zoho CRM Update Action

This action updates existing records in the data source.

Action Settings

- Table: The target table to update records in.
- External Id: The object External ID field to use for matching records. You need to specify either the External ID or the Keys setting. The External Id setting has priority, so if you specify both, External ID is used.
- Keys: If the object does not have an External ID field, you can manually specify the fields to use for matching records in this list.

Action Parameters

Update action parameters correspond to the target table fields that allow updating data and to the External ID or Keys fields.
You must map the parameters corresponding to the External ID or Keys fields and the fields that you want to update.

Result

The action updates a record with the matching Keys or External ID in the specified table.

Example

Here is an example of the Update action in the Target component of a Data Flow. This example shows the Update action that updates Accounts in Zoho CRM.

Zoho CRM Upsert Action

This action uses the Zoho CRM native UPSERT mechanism, which inserts a record if it does not exist in Zoho CRM and updates the record if it is found. For Zoho CRM, Skyvia can use either External ID fields or user-specified fields to check whether a matching record already exists in Zoho CRM. An External ID in Zoho CRM is a custom field that has the "External ID" attribute and uniquely identifies records. There can be multiple External ID fields in a Zoho CRM object.

Action Settings

- Table: An object to load records to.
- External Id: The object External ID field to use for matching records. You need to specify either the External ID or the Keys setting. The External Id setting has priority, so if you specify both, External ID is used.
- Keys: If the object does not have an External ID field, you can manually specify the fields to use for matching records in this list. If non-null values are passed to the parameters corresponding to the Keys fields (at least to one of them), Skyvia tries to update a matching record and fails if there is no such record. It tries to insert a record only if null values are provided for all the parameters corresponding to the Keys fields.

Action Parameters

Upsert action parameters correspond to the fields of the target table.
You must map at least the parameters corresponding to the required target table fields and the parameter corresponding to the selected External ID or Keys fields.

Result

The records are inserted or updated in the target table.

Example

Here is an example of the Upsert action in the Target component of a Data Flow. This example loads Contacts to Zoho CRM.

Zoho Projects

[Zoho Projects](https://www.zoho.com/projects/) is a cloud-based solution for planning and tracking projects, for team collaboration, and for achieving project goals.

Data Integration: Skyvia supports importing data to and from Zoho Projects, exporting Zoho Projects data to CSV files, replicating Zoho Projects data to relational databases, and synchronizing Zoho Projects data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zoho Projects.
Query: Skyvia Query supports Zoho Projects.

Establishing Connection

To [create a connection](https://docs.skyvia.com/connections/#creating-connections), log in with your Zoho account, then specify the Data Center and Portal Name. Skyvia stores only the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token. Skyvia does not store your credentials.

Getting Credentials

To obtain the Portal Name, perform the following steps:

- Sign in to [Zoho Projects](https://projects.zoho.com).
- Click your User Profile Icon in the top right corner.
- Select My Portals.
- Copy your Portal Name.

Creating Connection

To connect to Zoho Projects, perform the following steps:

- Click Sign In with Zoho Projects in the connection editor in Skyvia.
- Enter your Zoho credentials to log in.
- Give Skyvia permission to perform actions in Zoho Projects.
- Specify the Data Center. By default, the value is set to US.
- Enter the Portal Name.
- Click Create Connection.

Additional Connection Parameters

Suppress Extended Requests

For the ProjectLayouts object, the Zoho Projects API returns only part of the fields when querying multiple records. To query the values of additional fields, Skyvia performs additional extended requests. Such API requests can be performed for each record of such an object. However, this can decrease performance and significantly increase the number of API calls used. Here is the list of such fields:

Object | Fields
--- | ---
ProjectLayouts | IsShowNote, IdNameMap, CustomStatus, CustomFields, RunningSchedule, Picklists, ModuleName, UserData, Sections

To reduce the number of API calls, you can select the Suppress Extended Requests checkbox. However, please note that some of the fields in such objects will then not be available in Skyvia (they will return empty values) even if they have values in Zoho Projects, because its API does not return them without extended requests.

Connector Specifics

Object Peculiarities

Fields with DATE Datatype

Skyvia works with date fields (StartDate, EndDate, DueDate, and LogDate in all tables storing such fields) as String because of datatype conversion specifics in the Zoho Projects API. Pass the dates in the MM-DD-YYYY format when importing data to such fields.

InternalTaskList and ExternalTaskList

The TaskList object in Skyvia is represented as two objects: InternalTaskLists and ExternalTaskLists. The InternalTaskLists table returns the records where the Flag field is set to Internal or AllFlag. The ExternalTaskLists table returns the records with Flag set to External or AllFlag.

Events

Add a leading 0 to single-digit values when performing the INSERT operation on the Hour, Minutes, DurationHour, and DurationMinutes fields. For example, you have to pass the values 1, 2, ..., 9 as 01, 02, ..., 09.

Documents

This table has empty records with a single value Split = true.
Such records visually divide the table content by upload date, for example, by Today, Previous 7 days, Earlier.

API Limit

Zoho Projects allows performing 100 API calls per 2 minutes against one endpoint. When this limit is exceeded, all further calls to the same endpoint are blocked for 30 minutes; however, you can still perform API calls to other Zoho Projects endpoints. For example, you can import 100 records at once for the Tasks table. All other records fail with the error message: Cannot execute more than 100 requests per API in 2 minutes. Try again after 29 minutes. In this case, import is blocked for the Tasks table, but you can still select data from this table.

Custom Fields

This field returns an array in the following format when querying: [{"column_name":"UDF_LONG1","label_name":"CF_Number","value":"12345"},{"column_name":"UDF_BOOLEAN1","label_name":"CF_Checkbox","value":"true"}]

When importing data to the CustomFields field, you don't have to map the whole array. Map only a JSON object with the custom field API name and its value. For example, to insert the value 777 into the numeric custom field with the API name UDF_LONG1, set the mapping in the following format: {"UDF_LONG1":"777"}.

Incremental Replication and Synchronization

Replication with Incremental Updates is supported for the following objects: BugComments, Bugs, Documents, Events, ExternalTaskLists, Forums, InternalTaskLists, Milestones, MyBugs, MyMilestones, MyTasks, Projects, Subtasks, TaskComments, Tasks, TaskTimeLogs.

Incremental Replication considers only new records for the Events, Projects, and TaskComments tables. These tables contain only the CreatedDate field and have no UpdatedDate field, which would have allowed detecting updated records.

Synchronization is supported for the following objects: Bugs, ExternalTaskLists, InternalTaskLists, Milestones, TaskTimeLogs.
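The Zoho Projects import value formats described above ('MM-DD-YYYY' date strings, zero-padded time fields, and the CustomFields JSON mapping) can be sketched with a few helpers. This is a minimal illustration; the function names are ours and are not part of Skyvia or the Zoho Projects API:

```python
import json
from datetime import date

def to_zoho_date(d: date) -> str:
    # Date fields (StartDate, EndDate, DueDate, LogDate) are handled
    # as strings in the 'MM-DD-YYYY' format.
    return d.strftime("%m-%d-%Y")

def to_zoho_time_part(value: int) -> str:
    # Hour, Minutes, DurationHour, and DurationMinutes must be
    # zero-padded on INSERT: 1 -> '01', ..., 9 -> '09'.
    return f"{value:02d}"

def custom_field_mapping(api_name: str, value) -> str:
    # CustomFields import mapping: a JSON object with only the custom
    # field API name and its value, e.g. {"UDF_LONG1": "777"}.
    return json.dumps({api_name: str(value)})

print(to_zoho_date(date(2024, 2, 7)))   # 02-07-2024
print(to_zoho_time_part(9))             # 09
print(custom_field_mapping("UDF_LONG1", 777))
```

Values prepared this way can be mapped to the corresponding Skyvia import columns.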
DML Operations Support

INSERT, UPDATE, DELETE: Bugs, ExternalTaskLists, InternalTaskLists, Milestones, Projects, TaskComments, Tasks, TaskTimeLogs
INSERT, DELETE: BugComments, Events
INSERT: BugFollowers, Clients, ProjectClients, ProjectGroups, Subtasks

Stored Procedures

Skyvia represents part of the supported Zoho Projects features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

AddUserToPortal

To add existing Zoho One account users to a portal, use the command call AddUserToPortal(:email, :profileId, :roleId, :workProjects)

Email: email IDs of users, separated by commas.
ProfileId: the ID of the user profile to assign to the user. Note that this should be a user profile, not a client profile.
RoleId: the ID of the role to assign to the user. Note that this should be a user role, not a client role.
WorkProjects: a JSON array of project IDs.

UpdateUserInPortal

To update a user on the portal, use the command call UpdateUserInPortal(:userId, :profileId, :roleId, :projectIds)

UserId: the ID of the user to update.
ProfileId: the ID of the user profile to assign to the user. Note that this should be a user profile, not a client profile.
RoleId: the ID of the role to assign to the user. Note that this should be a user role, not a client role.
WorkProjects: a JSON array of project IDs.

DeleteUserFromPortal

To delete a user from a portal, use the command call DeleteUserFromPortal(:userId)

CreateContacts

To create new client contacts in the portal, use the command call CreateContacts(:portalId, :clientId, :contacts, :workProjects)

PortalId: the ID of the portal.
ClientId: the ID of the client company.
Contacts: the contacts to add. Specify one or more contacts as an array of JSON objects. For example: [{ "first_name": "somename", "last_name": "somelastname", "email": "email@domain.com", "invoice_rate": 45 }]
WorkProjects: a JSON array of project IDs. For example: [170876000011141285, 170876000011141005]

Supported Actions

Skyvia supports all the common actions for Zoho Projects." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zoom_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zoom

This connector is currently in development and is not available yet.

[Zoom](https://zoom.us/) is a cloud-based video conferencing and communication platform for companies of all sizes, providing online meetings, webinars, chat, and conference room solutions.

Data integration: Skyvia supports importing data to and from Zoom, exporting Zoom data to CSV files, and replicating Zoom data to relational databases.
Backup: Skyvia Backup does not support Zoom.
Query: Skyvia Query supports Zoom.

Establishing Connection

Creating Connection

To [create a connection](https://docs.skyvia.com/connections/#creating-connections) to Zoom, sign in to Zoom via OAuth 2.0: click Sign In with Zoom, enter your Zoom email and password in the window that appears, click Allow, and then click Create Connection.

Additional Connection Parameters

Suppress Extended Requests

The Zoom API returns only part of the fields for some objects when querying multiple records. To query the values of additional fields, Skyvia performs additional extended requests for each record of such an object. However, this can decrease performance and significantly increase the number of API calls. The affected fields are the following:

OBJECT: RoomLocations. FIELDS: Description, Timezone, SupportEmail, SupportPhone, RoomPasscode, and RequiredCodeToExt.

To reduce the number of API calls, select the Suppress Extended Requests checkbox.
However, please note that some fields in such objects are unavailable in Skyvia (they return empty values) even if they have values in Zoom, because its API does not return them without extended requests.

Disconnecting Skyvia from Zoom

If you no longer want Skyvia to connect to your Zoom data, you can remove the Skyvia app from your Zoom account and delete the Zoom connection and all objects related to it on Skyvia. Removing the Skyvia app from your Zoom account invalidates the access token stored in the Zoom connection on Skyvia, so Skyvia won't be able to connect to your Zoom data until you edit the connection and re-obtain the token. Deleting the connection object and all related objects on Skyvia ensures that no traces of your data remain on Skyvia, for example, in error or success logs.

To remove the Skyvia app from your Zoom account, perform the following steps: log in to your Zoom account and navigate to the Zoom App Marketplace; click Manage > Added Apps or search for the Skyvia app; click the Remove button. See our User Interface Basics to learn how to delete the connection and the related objects from Skyvia.

Connector Specifics

Incremental Replication

Skyvia supports Incremental Replication for the Users, Meetings, MeetingRegistrants, Webinars, and WebinarRegistrants objects only. Incremental updates only add new records; they do not update or delete existing records if those are modified or deleted in Zoom.

Synchronization

Skyvia does not support Synchronization for Zoom.

Meetings

The Meetings object supports only scheduled meetings. The Zoom API currently does not allow working with instant meetings.

UserCloudRecording

The UserCloudRecording object has from and to fields. These fields don't return data for queried records. Instead, they must be used in filters to specify the period for which to obtain records. By default (without filters), the object returns records for the current day.
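Since the from and to fields act only as filters, a query needs an explicit window, and the API caps that window at one month. A small sketch of building and validating such a window (the helper name is ours, not a Skyvia or Zoom API):

```python
from datetime import date, timedelta

def recording_window(start: date, end: date) -> dict:
    # Build a from/to filter pair for UserCloudRecording queries.
    # The queried period may not exceed the one-month maximum.
    if end < start:
        raise ValueError("'to' must not precede 'from'")
    if end - start > timedelta(days=31):
        raise ValueError("window exceeds the one-month maximum")
    return {"from": start.isoformat(), "to": end.isoformat()}

print(recording_window(date(2024, 1, 1), date(2024, 1, 31)))
```

Larger ranges would have to be split into consecutive one-month windows and queried separately.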
The maximum period for querying records is one month. When querying data from this object, you must specify the period using the from and to fields, either in filters or in the WHERE clause.

Zoom Scopes Needed

When you connect to Zoom using Skyvia, it gets access to the scopes listed below.

View all users' contacts/contact:read:admin

This scope is needed to allow the user to perform the following actions in Skyvia: query the Contacts object in the Query tool; export Contacts data with Export integrations; replicate the Contacts data to databases and cloud data warehouses; load data from the Contacts object to other cloud apps and databases with import and data flow integrations; and perform GET API calls against web API endpoints created in Skyvia Connect for the Contacts object.

View all users' meetings information on Dashboard/dashboard_meetings:read:admin

This scope allows the same actions for the DashboardMetricMeetings object: querying it in the Query tool, exporting its data with Export integrations, replicating its data to databases and cloud data warehouses, loading its data to other cloud apps and databases with import and data flow integrations, and performing GET API calls against web API endpoints created in Skyvia Connect for it.

View all users' webinar information on Dashboard/dashboard_webinars:read:admin

This scope allows the same actions (querying, exporting, replicating, loading, and GET API calls in Skyvia Connect) for the DashboardMetricWebinars object.

View and manage groups/group:write:admin

This scope covers the Groups, GroupAdmins, and GroupMembers objects. It allows the user to query, export, replicate, and load data from all three objects; execute INSERT SQL statements against all three objects and UPDATE and DELETE SQL statements against the Groups and GroupMembers objects in the Query tool; insert records to all three objects and update and delete records in the Groups and GroupMembers objects with import and data flow integrations; and perform GET and POST API calls against web API endpoints created in Skyvia Connect for all three objects and PATCH and DELETE API calls for the Groups and GroupMembers objects.

View and manage all user meetings/meeting:write:admin

This scope covers the Meetings, MeetingRegistrants, MeetingPolls, MeetingLiveStream, MeetingRegistrationQuestions, MeetingInvitation, and MeetingTemplates objects. It allows the user to query, export, replicate, and load data from all of these objects; execute INSERT SQL statements and insert records to the Meetings, MeetingRegistrants, MeetingPolls, and MeetingRegistrationQuestions objects; execute UPDATE SQL statements and update records in the Meetings and MeetingPolls objects; execute DELETE SQL statements and delete records from the Meetings, MeetingRegistrants, and MeetingPolls objects; and perform GET API calls against web API endpoints created in Skyvia Connect for all of these objects, POST calls for the Meetings, MeetingRegistrants, MeetingPolls, and MeetingRegistrationQuestions objects, PATCH calls for the Meetings and MeetingPolls objects, and DELETE calls for the Meetings, MeetingRegistrants, and MeetingPolls objects.

View and manage all user recordings/recording:write:admin

This scope covers the UserCloudRecordings, MeetingRecordings, and MeetingRecordingRegistrants objects. It allows the user to query, export, replicate, and load data from these objects; execute INSERT SQL statements and insert records to them; and perform GET and POST API calls against web API endpoints created in Skyvia Connect for them.

View report data/report:read:admin

This scope covers the WebinarDetailReports and MeetingDetailReports objects. It allows the user to query these objects in the Query tool, replicate their data to databases and cloud data warehouses, load their data to other cloud apps and databases with import and data flow integrations, and perform GET API calls against web API endpoints created in Skyvia Connect for them.

View and manage all users' Zoom Rooms information/room:write:admin

This scope covers the Rooms, RoomDevices, RoomDigitalSignageFolder, RoomDigitalSignageContent, RoomAccountProfile, RoomLocations, RoomMeetingSettings, RoomAlertSettings, LocationAlertSettings, LocationMeetingSettings, LocationSignageSettings, AccountAlertSettings, and AccountSignageSettings objects. It allows the user to query, export, replicate, and load data from all of these objects; execute INSERT and DELETE SQL statements and insert and delete records for the Rooms and RoomLocations objects; execute UPDATE SQL statements and update records for the Rooms object; and perform GET API calls against web API endpoints created in Skyvia Connect for all of these objects, POST and DELETE calls for the Rooms and RoomLocations objects, and PATCH calls for the Rooms object.

View and manage TSP info/tsp:write:admin

This scope covers the TSP and UserTSP objects. It allows the user to query these objects and execute DML SQL statements against them in the Query tool; export their data with Export integrations; replicate their data to databases and cloud data warehouses; load their data to other cloud apps and databases with import and data flow integrations; insert, update, and delete records in the UserTSP object with import and data flow integrations; and perform GET, POST, PATCH, and DELETE API calls against web API endpoints created in Skyvia Connect for the UserTSP object.

View users information and manage users/user:write:admin

This scope covers the Users, UserAssistants, UserSchedulers, UserSettings, and UserToken objects. It allows the user to query, export, replicate, and load data from all of these objects; execute INSERT SQL statements and insert records to the Users and UserAssistants objects; execute UPDATE SQL statements and update records in the Users and UserSettings objects; execute DELETE SQL statements and delete records from the Users, UserAssistants, and UserSchedulers objects; and perform GET API calls against web API endpoints created in Skyvia Connect for all of these objects, POST calls for the Users and UserAssistants objects, PATCH calls for the Users and UserSettings objects, and DELETE calls for the Users, UserAssistants, and UserSchedulers objects.

View and manage all user Webinars/webinar:write:admin

This scope covers the Webinars, WebinarPanelists, WebinarPolls, WebinarRegistrants, WebinarRegistrantQuestions, WebinarRegistrantCustomQuestions, WebinarTemplates, WebinarLiveStream, WebinarTrackingSources, PastWebinarInstances, PastWebinarParticipants, and PastWebinarQuestionAndAnswer objects. It allows the user to query, export, replicate, and load data from all of these objects; execute INSERT SQL statements and insert records to the Webinars, WebinarPanelists, WebinarPolls, and WebinarRegistrants objects; execute UPDATE SQL statements and update records in the Webinars and WebinarPolls objects; execute DELETE SQL statements and delete records from the Webinars, WebinarPanelists, WebinarPolls, and WebinarRegistrants objects; and perform GET API calls against web API endpoints created in Skyvia Connect for all of these objects, POST and DELETE calls for the Webinars, WebinarPanelists, WebinarPolls, and WebinarRegistrants objects, and PATCH calls for the Webinars and WebinarPolls objects.

DML Operations Support

Skyvia supports the following DML operations for Zoom objects:

INSERT, UPDATE, DELETE: GroupMembers, Groups, MeetingPolls, Rooms, UserTSP, Users, WebinarPolls, Webinars
INSERT, DELETE: MeetingRegistrants, UserAssistants, WebinarPanelists, WebinarRegistrants
INSERT, UPDATE: RoomLocations
INSERT: GroupAdmins, MeetingRecordingRegistrants
UPDATE: UserSettings
DELETE: UserSchedulers

Supported Actions

Skyvia supports all the [common actions](https://docs.skyvia.com/connectors/actions/#common-actions) for Zoom." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zulip_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zulip

[Zulip](https://zulip.com/) is a cloud chat app for managing live and asynchronous conversations.

Data integration: Skyvia supports importing data to and from Zulip, exporting Zulip data to CSV files, replicating Zulip data to relational databases, and synchronizing Zulip data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zulip.
Query: Skyvia Query supports Zulip.

Establishing Connection

To create a connection to Zulip, specify the URL, bot email, and API key.

Getting Credentials

To locate the bot email and API key, go to Zulip, click the gear icon in the top right corner, and select Personal Settings. Click Bots in the menu that appears and copy the BOT EMAIL and API KEY values.

Creating Connection

To connect to Zulip, enter the Zulip URL in the Connection Editor, then paste the obtained bot email and bot API key into the corresponding boxes.

Connector Specifics

Object Peculiarities

Messages

By default, the Messages object returns records with Anchor = newest when querying. To get other records, you can set filters with Anchor = oldest or Anchor = first_unread. By default, the To, Topic, QueueId, and LocalId fields return empty results when querying; these fields are used for import only. The To field mapping differs by message type. For stream messages, the To field can be either the name or the integer ID of the stream. For direct messages, the To field can be either a list of integer user IDs or string Zulip API email addresses, for example, [642087, 642088] or some_email@gmail.com.
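The To field rules for Messages can be made concrete with a small sketch. The helper is hypothetical (not part of Skyvia or the Zulip API); it only normalizes the value a mapping would send:

```python
def build_to_field(msg_type: str, recipients):
    # Stream messages: To is the stream name or its integer ID.
    if msg_type == "stream":
        return recipients
    # Direct messages: To is a list of integer user IDs or of Zulip
    # API email addresses; always a list, even for one recipient.
    if msg_type == "direct":
        return recipients if isinstance(recipients, list) else [recipients]
    raise ValueError(f"unknown message type: {msg_type}")

print(build_to_field("direct", [642087, 642088]))
print(build_to_field("direct", "some_email@gmail.com"))
print(build_to_field("stream", "general"))
```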
Drafts

When importing data to the Drafts object, the integration log doesn't display the IDs of the inserted records.

Streams

By default, the Streams object returns only public streams when querying. To select other records, you can set filters by the IncludePublic, IncludeWebPublic, IncludeSubscribed, IncludeAllActive, IncludeDefault, and IncludeOwnerSubscribed fields. When querying, these fields return empty results; they are used for filtering only.

Incremental Replication and Synchronization

Skyvia supports Replication with Incremental Updates for the Messages and Streams objects. Skyvia supports Synchronization for the Messages object.

DML Operations Support

INSERT, UPDATE, DELETE: Drafts, Messages, ScheduledMessages, Users
UPDATE, DELETE: Streams

Stored Procedures

Skyvia represents part of the supported Zulip features as stored procedures. You can call a stored procedure, for example, as the text of the command in the ExecuteCommand action in a Target component of a Data Flow or in Query.

AddEmojiReaction

To add an emoji reaction to a message, use the command call AddEmojiReaction(:message_id, :emoji_name)

RemoveEmojiReaction

To delete an emoji reaction, use the command call RemoveEmojiReaction(:message_id, :emoji_name)

UpdatePersonalMessageFlags

To update personal message flags, use the command call UpdatePersonalMessageFlags(:messages, :op, :flag)

Messages: a list of message IDs for which to update the flags, in array format. For example, [4, 5, 9].
Op: possible values are Add and Remove.
Flag: the flag to update or delete. Valid values are Read, Starred, Collapsed, Mentioned, Wildcard Mentioned, Has Alert Word, and Historical.

MarkAllAsRead

To mark all messages as read, use the command call MarkAllAsRead()

ReactivateUser

To reactivate a specific user, use the command call ReactivateUser(:user_id)

DeactivateOwnUser

To deactivate the current user, use the command call DeactivateOwnUser()

SetTypingStatus

To inform other users that the current user is typing a message, use the command call SetTypingStatus(:type, :op, :to, :topic)

Type: valid values are direct, stream, or private.
Op: whether the user has started (start) or stopped (stop) typing.
To: for the direct type, the user IDs of the message recipients, sent as a JSON-encoded list (use a list even if there is only one recipient); for the stream type, a single-element list containing the ID of the stream in which the message is being typed, for example, [642089].
Topic: the topic to which the message is being typed. Required for the stream type; ignored for the direct type.

Supported Actions

Skyvia supports all the common actions for Zulip." }, { "url": "https://docs.skyvia.com/connectors/cloud-sources/zuora_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Cloud Sources Zuora

[Zuora](https://www.zuora.com) is a leading monetization platform for businesses to launch and manage subscription-based services.

Data integration: Skyvia supports importing data to and from Zuora, exporting Zuora data to CSV files, and replicating Zuora data to relational databases. It does not support synchronizing Zuora data with other cloud apps and relational databases.
Backup: Skyvia Backup does not support Zuora.
Query: Skyvia Query supports Zuora.
Establishing Connection

To [create a connection](https://docs.skyvia.com/connections/#creating-connection) to Zuora, choose the appropriate Tenant from the list and fill in the Client Id and Client Secret.

Getting Credentials

To get the Client Id and Client Secret, create a new OAuth client by following the steps listed below. Log in to [Zuora](https://community.zuora.com/login) and click the user icon. Go to Administration > Manage Users > Your User > New OAuth Client. In the New OAuth Clients section, enter a name for the OAuth client. If you have the Multi-entity feature enabled, select which entities the OAuth client will be permitted to access. Click Create. Zuora displays the Client ID and Client Secret for the OAuth client. These values are displayed only once, so make sure to copy them. Click OK.

Creating Connection

To create a connection between Skyvia and Zuora, choose the appropriate Tenant value from the dropdown, enter the Client Id and Client Secret, and click Create Connection.

Additional Connection Parameters

Metadata Cache: the period of time after which the [metadata cache](https://docs.skyvia.com/connections/metadata-cache.html) is considered expired.
Entity: if Zuora Multi-entity is enabled, select an entity to connect to from the dropdown.

Connector Specifics

Synchronization and Incremental Replication

Synchronization and Replication with Incremental Updates are not supported for objects without the CreatedDate or UpdatedDate fields. Synchronization requires both fields to be present; Replication with Incremental Updates requires at least one of them.

Objects That Support Synchronization

Custom objects, AccountingCodes, AccountingPeriods, Accounts, Contacts, Features, Invoices, PaymentMethods, PaymentRuns, Payments, ProductRatePlans, Products, Refunds, TaxationItems, UnitOfMeasures, Usages.
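The field rule above reduces to a simple check on an object's fields. A hedged sketch (the helper and the field sets are illustrative, not real Skyvia metadata):

```python
def replication_support(fields: set) -> dict:
    # Synchronization needs both CreatedDate and UpdatedDate;
    # incremental replication needs at least one of them.
    tracked = {"CreatedDate", "UpdatedDate"} & fields
    return {
        "synchronization": len(tracked) == 2,
        "incremental_replication": len(tracked) >= 1,
    }

print(replication_support({"Id", "CreatedDate", "UpdatedDate"}))
print(replication_support({"Id", "CreatedDate"}))
print(replication_support({"Id"}))
```

An object with only CreatedDate can be replicated incrementally (new records only) but cannot be synchronized.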
Objects That Do Not Support Incremental Replication

EventTriggers, InvoiceFiles, InvoiceItems, InvoiceTaxationItems, OrderLineItems, PaymentGateways, PaymentMethodTransactionLogs, PaymentRunData, SequenceSets.

Object Peculiarities

Custom objects are fully supported.

The ProductRatePlanCharges object does not support the INSERT operation.

Some of the PaymentMethods object fields are masked to hide sensitive information, such as credit card numbers or bank account numbers. You can load data into these fields, but when you query their values, they are returned with asterisks instead of the actual characters. There is no way to retrieve the complete values via the Zuora API.

The following objects are considered deprecated in the Zuora API and are implemented as read-only in Skyvia: CreditBalanceAdjustments, InvoiceAdjustments, InvoiceItemAdjustments, InvoicePayments, RefundInvoicePayments.

In the CreditMemoItems and DebitMemoItems objects, the SourceItemId field is not implemented because it can refer to different objects, depending on the method used to create the credit memo item. Instead, there are the InvoiceId, ProductRatePlanChargeId, and RatePlanChargeId fields.

Some fields in Zuora can be obtained only with an additional request. If you query such a field, an additional API call is performed for each queried record, so it is recommended to exclude these fields if they are not needed. These fields are the InvoiceItems field of the Invoices object, the Body field of the InvoiceFiles object, and the Data field of the PaymentRuns object.

The following Zuora objects are not supported: Credit Taxation Items, Debit Taxation Items, AccountInvoices, AccountPayments, Catalog Groups, Payment Method Credit Cards, AccountAttachments, InvoiceAttachments, SubscriptionAttachments, CreditMemoAttachments, DebitMemoAttachments.

Supported Actions

Skyvia supports all the common actions for Zuora."
}, { "url": "https://docs.skyvia.com/connectors/databases/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Skyvia supports the following databases: AlloyDB Amazon Redshift Azure Synapse Analytics Google BigQuery MySQL Oracle PostgreSQL Snowflake SQL Server You can also find some advice on configuring your database server in order to access it from Skyvia in the How to Configure Local Database Server to Access It from Skyvia topic." }, { "url": "https://docs.skyvia.com/connectors/databases/alloydb_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases AlloyDB [AlloyDB](https://cloud.google.com/alloydb) is a fully managed PostgreSQL-compatible high-performance database service from Google Cloud. Skyvia supports usual TCP/IP AlloyDB connections and secure SSL and SSH connections. To connect to AlloyDB, you need to specify the database server host name or IP address, the port the AlloyDB server is running on, the user name and password to log in with, and the database name. For connecting to AlloyDB Omni (local AlloyDB servers), Skyvia offers two options: direct connections and agent connections . In order to use a direct connection, your AlloyDB Omni server must be available through the Internet. You can find some advice on configuring a database server in order to connect to it from Skyvia directly in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections don\u2019t have such requirements, but you need to install the Skyvia Agent application on a computer, from which the database server is available in order to use them. You need to specify the following parameters for TCP/IP AlloyDB connection: Name \u2014 connection name that will be used to identify the connection in the objects list and when selecting a connection for an integration. 
Server \u2014 name or IP address of the AlloyDB host to connect to. Port \u2014 AlloyDB connection port; default value is 5432. User Id \u2014 user name to log in with. Password \u2014 password to log in with. Database \u2014 name of the AlloyDB database you want to connect to. Schema \u2014 name of the AlloyDB schema you want to connect to. If you need to pass UTF8 characters to a AlloyDB database, click Advanced Settings and select the Unicode checkbox. If you want to use SSL or SSH connection, additionally you need to click Advanced Settings and select the required Protocol . After this, for SSL connection you need to specify: SSL Mode \u2014 this mode determines the priority of using secure SSL connection. You can select any of the following modes: Allow \u2014 try first a non-SSL connection, then if that fails, try an SSL connection. Disable \u2014 establish only an unencrypted SSL connection. If this mode is selected, SSL is not used, and other SSL parameters are not available. This mode is selected by default. Prefer \u2014 try first an SSL connection, then if that fails, try a none-SSL connection. Require \u2014 establish only a secure SSL connection. SSL CA Cert \u2014 authority certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Cert \u2014 client certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Key \u2014 client\u2019s private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. SSL TLS Protocol \u2014 preferred TLS protocol version reported to a server when establishing an SSL connection. The default value is 1.1 to avoid errors with older server versions, not supporting TLS 1.2. 
Acceptable values are 1.0 , 1.1 , 1.2 (without quotes). For SSH connection specify the following: SSH Authentication Type — type of SSH authentication to use: Password or PublicKey . When using the password authentication, you need to specify the SSH password. For public key authentication, you need to specify the passphrase for the private key and the private key. SSH Host — name or IP address of the SSH server. SSH Port — TCP/IP port to connect to the SSH Server. By default, it is 22. SSH User — user name on the machine where the SSH Server is running. It is a Windows user, not an AlloyDB user. SSH Password — password of a user account on the SSH Server. It is available if Password SSH Authentication Type is selected. SSH Passphrase — passphrase for a private key. You can set it while generating public and private key files through a key generator tool, for example [PuTTygen](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) . It is available if PublicKey SSH Authentication Type is selected. SSH Private Key — private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. It is available if PublicKey SSH Authentication Type is selected. In Advanced Settings , you can also select the Unicode checkbox to use Unicode encoding for the connection. Additionally, Advanced Settings include the Connection Timeout and Command Timeout parameters: Connection Timeout parameter determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. Command Timeout parameter specifies the wait time in seconds before terminating the attempt to execute a command and produce an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it does not include the time necessary to fetch data if the command selects some data. 
Generally, you do not need to modify these parameters, but in some specific cases, when a connection to the database server is not good or a command may take significant time to be executed, you may try increasing their values. Supported Actions and Actions Specifics AlloyDB connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/databases/google_bigquery_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Google BigQuery [Google BigQuery](https://cloud.google.com/bigquery) is a fully-managed, serverless data warehouse that enables scalable analysis over petabytes of data. Establishing Connection For Google BigQuery, Skyvia supports authentication via a user account or a service account. To create a Google BigQuery connection, you need either to sign in with the corresponding Google Account for User Account authentication or specify the Private Key JSON for the Service Account authentication. You also need to specify the Project Id and Dataset Id to connect to. We recommend specifying a Google Cloud Storage Bucket for bulk import and replication operations. You can [create such a bucket](https://cloud.google.com/storage/docs/creating-buckets) in the [Google Cloud Console](https://console.cloud.google.com/storage/browser) . Getting Credentials Project ID You can find [Project ID](https://support.google.com/googleapi/answer/7014113?hl=en) in the Google [API Console](https://console.developers.google.com/) . 
DataSet ID To obtain a [DataSet ID](https://cloud.google.com/bigquery/docs/listing-datasets) perform the following steps: In the Google [API Console](https://console.cloud.google.com/apis/dashboard) , in the navigation menu, point to BigQuery and click BigQuery Studio . Expand the required project node in the Explorer to see the datasets in that project and click the required dataset. This opens Dataset info , where you can find and copy the dataset ID. In the example skyvia-152416.ba , the dataset ID is prefixed with the project ID. You need to specify the dataset ID without this prefix. In this example, you need to specify only the ba part in Skyvia connection editor. Creating Service Account If you want to use a service account for authentication, and don't have one, you can create and configure it in the following way: In the Google [API Console](https://console.developers.google.com/) , in the navigation menu, select APIs & Services and click Credentials . Click Manage Service Accounts . Click + Create Service Account . Enter Service account name , for example, Skyvia service account, and click Create and Continue . Configure the service account privileges. You need to grant the BigQuery Data Editor and BigQuery User roles to the service account. Click Done . Obtaining Private Key JSON Here is how you can obtain the Private Key JSON file for an existing service account: Open the service account that you want to use. Switch to the Keys tab. Click Add Key and then Create new key . Make sure that Key type is set to JSON and click Create . After this, the JSON key is downloaded to your computer. Store it in a safe place. You won't be able to re-obtain it, and will need to create a new key if you lose this one. 
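Before pasting the downloaded key into Skyvia, it can help to confirm that the file really is a service-account key. A minimal sketch, assuming a hypothetical file name skyvia-key.json; the commented lines show how the same file would authenticate the official google-cloud-bigquery client:

```python
import json

def check_key_file(path: str) -> dict:
    """Confirm the JSON key looks like a service-account key before pasting it."""
    with open(path) as f:
        key = json.load(f)
    if key.get("type") != "service_account":
        raise ValueError("not a service-account key file")
    return key

# With the key verified, the same file authenticates a BigQuery client
# (requires the google-cloud-bigquery package; not executed here):
# from google.cloud import bigquery
# client = bigquery.Client.from_service_account_json("skyvia-key.json")
```

The same "type": "service_account" field is what distinguishes this key from other credential JSON files Google Cloud can produce.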
Creating Connection User Account Authentication To [create a BigQuery connection](https://docs.skyvia.com/connections/#creating-connections) with authentication via a user account, perform the following steps: In the Authentication list, select User Account . Click Sign In with Google . In the opened window, enter your email or phone and click Next . Enter your password and click Next once again. Click the Allow button. Specify the Project Id and DataSet Id to connect to. Also specify the name of the bucket ( Cloud Storage Bucket ) that will be used for temporary upload of CSV files when performing Import or Replication to Google BigQuery. The uploaded files will be deleted after the corresponding operations are finished. Service Account Authentication To [create a BigQuery connection](https://docs.skyvia.com/connections/#creating-connections) with authentication via a service account, perform the following steps: In the Authentication list, select Service Account . Click the three-dotted button in the Private Key JSON box. Paste all the content of the downloaded private key JSON file for the respective service account into the box and click Save . Specify the Project Id and DataSet Id to connect to. Also specify the name of the bucket ( Cloud Storage Bucket ) that will be used for temporary upload of CSV files when performing Import or Replication to Google BigQuery. The uploaded files will be deleted after the corresponding operations are finished. Additional Connection Parameters Command Timeout Specifies the wait time before terminating an attempt to execute a command and generating an error. Command Timeout doesn\u2019t affect the wait time for data fetching. Change this value if command execution takes too long, and this causes timeout errors. Use Bulk Import Enables bulk import. Selected by default. This setting affects only import integrations with the INSERT, UPDATE, and DELETE operation and with BigQuery as a target. 
By default, such integrations import data in the following way: Skyvia writes data into multiple temporary CSV files, uploads them to Google Cloud Storage and then tells Google BigQuery to import data from these CSV files. These actions are performed simultaneously, providing the best performance. After the CSV files are imported, they are deleted. However, when data are imported this way, it's not possible to obtain a per-record error log. If you disable bulk import, Skyvia will use the corresponding DML statements for importing data. This allows you to obtain a per-record error log , but provides far lower performance. Thus, disabling bulk import for BigQuery is not recommended and should only be considered if you need to import a small amount of data and need to have a per-record error log or you don't want to use Google Cloud Storage for temporary files. Flexible Column Names Enables accepting source object names containing any characters when replicating to Google BigQuery. Use Legacy SQL Enables the use of legacy SQL syntax. Note that Incremental Replication does not support the legacy SQL syntax and may return a syntax error when this checkbox is selected. Connector Specifics Since Google BigQuery does not have primary or unique keys, Skyvia has the following limitations for Google BigQuery: Synchronization is not supported for Google BigQuery. UPSERT operation in Import is not supported for Google BigQuery. When performing import with the UPDATE or DELETE operation, you need to manually specify columns, which will be considered a primary key. Supported Actions Google BigQuery connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. 
Update in Target Data Flow components. Delete in Target Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/databases/mysql_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases MySQL Skyvia supports usual TCP/IP MySQL connections and secure SSL and SSH MySQL connections. To connect to a MySQL server, you need to specify the database server host name or IP address, the port the MySQL server is running on, the user name and password to log in with, and the database name. For connecting to local MySQL servers, Skyvia offers two options: direct connections and agent connections . In order to use a direct connection, your MySQL server must be available through the Internet. If you are connecting to a MySQL server on your local computer directly from Skyvia, allow such connections in your firewall. See the IPs to allow in the How to Configure Local Database Server to Access It from Skyvia topic. If you are connecting to a computer in your local network, you should use port forwarding. Agent connections don't have such requirements, but you need to install the Skyvia Agent application in order to use them. You need to specify the following parameters for TCP/IP MySQL connection: Server — domain name or IP address of the MySQL Server host to connect to. Please note that your MySQL Server host must be accessible from the Internet by this IP address or domain name. If you use an IP address for the Server parameter, this must be the external IP address of the host, not the internal one. Port — MySQL Server connection port; default value is 3306. User Id — user name to log in with. Password — password to log in with. Database — name of the MySQL database you want to connect to. If you need to pass UTF8 characters to a MySQL database, click Advanced Settings and select the Unicode checkbox. 
If you want to use SSL or SSH connection, additionally you need to click Advanced Settings and select the required Protocol . After this, for SSL connection you need to specify: SSL CA Cert \u2014 authority certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Cert \u2014 client certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Key \u2014 client private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. SSL TLS Protocol \u2014 preferred TLS protocol version reported to a server when establishing an SSL connection. The default value is 1.1 to avoid errors with older server versions, not supporting TLS 1.2. Acceptable values are 1.0 , 1.1 , 1.2 (without quotes). For SSH connection specify the following: SSH Authentication Type \u2014 type of SSH authentication to use: Password or PublicKey . When using the password authentication you need to specify the SSH password. For public key authentication you need to specify the passphrase for the private key and the private key. SSH Host \u2014 name or IP address of the SSH server. SSH Port \u2014 TCP/IP port to connect to the SSH Server. By default, it is 22. SSH User \u2014 user name on the machine where the SSH Server is running. It is a Windows user, not a user of the MySQL Server. SSH Password \u2014 user account password on the SSH Server. It is available if Password SSH Authentication Type is selected. SSH Passphrase \u2014 passphrase for a private key. You can set it while generating public and private key files through a key generator tool, for example [PuTTygen](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) . 
It is available if PublicKey SSH Authentication Type is selected. SSH Private Key — private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. It is available if PublicKey SSH Authentication Type is selected. In Advanced Settings , you can also select the Unicode checkbox to use Unicode encoding for the connection and specify the Command Timeout interval. The latter specifies the wait time before terminating an attempt to execute a command and generating an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it doesn't include the time necessary to fetch data if the command selects some data. Additionally, if you want Skyvia to treat the MySQL TINYINT(1) data type as a numeric data type instead of boolean, with values 0 and 1 instead of False and True, you can clear the TINYINT(1) As Boolean check box. Advanced Settings also include the Connection Timeout parameter. This parameter determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. Generally, you don't need to modify it, but in some specific cases, when a connection to the database server is not good, you may try increasing its value. Supported Actions and Actions Specifics MySQL connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." 
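The TCP/IP parameters above map one-to-one onto the keyword arguments of a MySQL client driver. A hedged sketch using the PyMySQL driver as an example (an assumption; PyMySQL is not part of Skyvia, and any MySQL client takes equivalent parameters):

```python
# Hypothetical helper mirroring the fields of Skyvia's MySQL connection editor.
def connection_params(server: str, user: str, password: str, database: str,
                      port: int = 3306, unicode: bool = True) -> dict:
    """Translate the connection editor fields into driver keyword arguments."""
    params = {"host": server, "port": port, "user": user,
              "password": password, "database": database}
    if unicode:
        params["charset"] = "utf8mb4"  # rough equivalent of the Unicode checkbox
    return params

# Usage (requires the pymysql package; not executed here):
# import pymysql
# conn = pymysql.connect(**connection_params("db.example.com", "alice", "secret", "sales"))
```

As the documentation notes, the host must be reachable from the Internet (or through the Skyvia Agent) for the equivalent Skyvia connection to work.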
}, { "url": "https://docs.skyvia.com/connectors/databases/mysql_connections/google-cloud-mysql_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases MySQL Google Cloud SQL for MySQL Google Cloud SQL for MySQL is a fully managed cloud-based relational database service for MySQL. Skyvia supports usual TCP/IP MySQL connections and secure SSL connections. Your cloud MySQL instance must be available for external connections. How to Configure Google Cloud MySQL Instance for Direct External Connections Select your MySQL instance from your Google Cloud SQL dashboard. Click Connections in the left menu and select NETWORKING . Click ADD A NETWORK and specify the IP addresses from which Skyvia will access your server. Create a separate network for each IP address. Skyvia accesses your server from the IP addresses 40.118.246.204, 13.86.253.112, and 52.190.252.0. Click Overview in the left menu and find your server's public IP address. How to Get Certificates for SSL connection You can adjust the security settings in Google Cloud Console. An SSL server certificate is automatically created when you create your instance, and is required for SSL connections. Click Connections in the left menu and switch to the Security tab. To get the certificate values for the connection, scroll down to the Manage client certificate block. Click CREATE CLIENT CERTIFICATE . Name the certificate and click CREATE . Copy the certificate values or download the certificate files. Creating Connection You need to specify the following parameters for TCP/IP MySQL connection: Server — public IP address of the Google Cloud MySQL instance to connect to. Port — MySQL Server connection port. Default value is 3306. User Id — user name to log in with. You can find the user name by clicking Users in the left menu. Password — password to log in with. 
Database \u2014 name of the MySQL database you want to connect to. Additional Connection Parameters Google Cloud SQL for MySQL supports the SSL protocol. \nFor secure SSL connections, click Advanced Settings and specify the following parameters. Protocol \u2014 secure connection protocols. SSL CA Cert \u2014 server certificate (server-ca.pem file content). SSL Cert \u2014 client certificate (client-cert.pem file content). SSL Key \u2014 client private key (client-key.pem file content). SSL TLS Protocol \u2014 preferred TLS protocol version reported to a server when establishing an SSL connection. The default value is 1.1 to avoid errors with older server versions, not supporting TLS 1.2. Valid values are 1.0 , 1.1 , 1.2 . The details about other additional connection parameters are available in MySQL article. Supported Actions Skyvia supports all the common actions for Google Cloud SQL for MySQL." }, { "url": "https://docs.skyvia.com/connectors/databases/oracle_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Oracle To connect to an Oracle server, you need to specify the database server host name or IP address, the port the Oracle server is running on, the service name or SID, the user name and password to log in with. Skyvia supports both on-premise Oracle servers and Oracle Cloud. For connecting to local Oracle servers (in your local network), Skyvia offers two options: direct connections and agent connections . In order to use a direct connection, your Oracle server must be available through the Internet. If you are connecting to Oracle server in your local network, allow such connections in your firewall. See the IPs to allow in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections don\u2019t have such requirements, but you need to install the Skyvia Agent application in order to use them. 
You need to specify the following parameters for Oracle connection: Server \u2014 name or IP address of the Oracle server host to connect to. Port \u2014 Oracle server connection port; default value is 1521. Connection Syntax \u2014 this parameter determines whether to use Service Name or SID to connect. Service Name \u2014 alias to an Oracle database instance (or many instances) to use. SID \u2014 unique name for an Oracle database instance. User \u2014 user name to log in with. Password \u2014 password to log in with. Connect as \u2014 this parameter specifies how you want to connect to the Oracle server. This parameter is set to \u2018Normal\u2019 by default. Alternatively, you can select administrative privileges, such as SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM . Advanced Settings In Advanced Settings , you can configure processing of text and boolean data, enter timeout intervals, and set up SSL encryption for a connection. Unicode - select this checkbox to use Unicode encoding for the connection. Trim Fixed Char - select this checkbox if you want Skyvia to trim trailing spaces when reading data from fixed-length string data types (CHAR, NCHAR). NUMBER(1,0) As Boolean - select this checkbox if you want Skyvia to treat and display values of NUMBER(1,0) columns as Boolean values ( true or false ). Otherwise, they are treated and displayed as numbers. Command Timeout - specifies the wait time before terminating an attempt to execute a command and generating an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it doesn\u2019t include the time necessary to fetch data if the command selects some data. Connection Timeout - determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. 
Generally, you don't need to modify these parameters, but in some specific cases, when a connection to the database server is not good, you may try increasing their values. Protocol - select the protocol to use - TCP or secure SSL . For the SSL protocol, configure additional parameters: SSL Cert — client certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Key — client private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. SSL ServerCertDN — a set of parameters for Oracle server certificate check. For example, SslServerCertDN="C=UA,O=Devart,OU=DevartSSL,CN=TestSSL" . You can find more about it in [Oracle documentation](https://docs.oracle.com/cd/B28359_01/network.111/b28530/asossl.htm#i1006611) . Supported Actions and Actions Specifics Oracle connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/databases/postgresql_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases PostgreSQL [PostgreSQL](https://www.postgresql.org/) is one of the most popular and advanced free and open-source relational database management systems. Skyvia supports usual TCP/IP PostgreSQL connections and secure SSL and SSH connections. Skyvia supports synchronization of PostgreSQL servers of versions 8.3 and higher and Amazon Aurora PostgreSQL. 
To connect to a PostgreSQL server, you need to specify the database server host name or IP address, the port the PostgreSQL server is running on, the user name and password to log in with, and the database name. For connecting to local PostgreSQL servers, Skyvia offers two options: direct connections and agent connections . In order to use a direct connection, your PostgreSQL server must be available through the Internet. You can find some advice on configuring your PostgreSQL server in order to connect to it from Skyvia directly in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections don\u2019t have such requirements, but you need to install the Skyvia Agent application in order to use them. You need to specify the following parameters for TCP/IP PostgreSQL connection: Name \u2014 connection name that will be used to identify the connection in the objects list and when selecting a connection for an integration. Server \u2014 name or IP address of the PostgreSQL Server host to connect to. Port \u2014 PostgreSQL Server connection port; default value is 5432. User Id \u2014 user name to log in with. Password \u2014 password to log in with. Database \u2014 name of the PostgreSQL database you want to connect to. Schema \u2014 name of the PostgreSQL schema you want to connect to. If you need to pass UTF8 characters to a PostgreSQL database, click Advanced Settings and select the Unicode checkbox. If you want to use SSL or SSH connection, additionally you need to click Advanced Settings and select the required Protocol . After this, for SSL connection you need to specify: SSL Mode \u2014 this mode determines the priority of using secure SSL connection. You can select any of the following modes: Allow \u2014 try first a non-SSL connection, then if that fails, try an SSL connection. Disable \u2014 establish only an unencrypted connection. If this mode is selected, SSL is not used, and other SSL parameters are not available. 
This mode is selected by default. Prefer — try first an SSL connection, then if that fails, try a non-SSL connection. Require — establish only a secure SSL connection. SSL CA Cert — authority certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Cert — client certificate. Paste the content of the certificate file into the box or click the button to open a multiline editor and paste the content of the certificate file there. SSL Key — client's private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. SSL TLS Protocol — preferred TLS protocol version reported to a server when establishing an SSL connection. The default value is 1.1 to avoid errors with older server versions, not supporting TLS 1.2. Acceptable values are 1.0 , 1.1 , 1.2 (without quotes). For SSH connection specify the following: SSH Authentication Type — type of SSH authentication to use: Password or PublicKey . When using the password authentication, you need to specify the SSH password. For public key authentication, you need to specify the passphrase for the private key and the private key. SSH Host — name or IP address of the SSH server. SSH Port — TCP/IP port to connect to the SSH Server. By default, it is 22. SSH User — user name on the machine where the SSH Server is running. It is a Windows user, not a user of the PostgreSQL Server. SSH Password — password of a user account on the SSH Server. It is available if Password SSH Authentication Type is selected. SSH Passphrase — passphrase for a private key. You can set it while generating public and private key files through a key generator tool, for example [PuTTygen](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) . 
It is available if PublicKey SSH Authentication Type is selected. SSH Private Key \u2014 private key. Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. It is available if PublicKey SSH Authentication Type is selected. In Advanced Settings , you can also select the Unicode checkbox to use Unicode encoding for the connection. Additionally, Advanced Settings include the Connection Timeout and Command Timeout parameters: Connection Timeout parameter determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. Command Timeout parameter specifies the wait time in seconds before terminating the attempt to execute a command and produce an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it does not include the time necessary to fetch data if the command selects some data. Generally, you do not need to modify these parameters, but in some specific cases, when a connection to the database server is not good or a command may take significant time to be executed, you may try increasing their values. Supported Actions and Actions Specifics PostgreSQL connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." 
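The SSL Mode values above (Allow, Disable, Prefer, Require) correspond directly to libpq's sslmode keyword, so the same settings can be expressed in any libpq-based client. A minimal sketch, assuming the psycopg2 driver (not part of Skyvia):

```python
# Hypothetical helper composing a libpq connection string from the editor fields.
def build_dsn(server: str, database: str, user: str, password: str,
              port: int = 5432, sslmode: str = "disable") -> str:
    """Build a libpq DSN; sslmode mirrors Skyvia's SSL Mode setting."""
    if sslmode not in ("allow", "disable", "prefer", "require"):
        raise ValueError(f"unsupported sslmode: {sslmode}")
    return (f"host={server} port={port} dbname={database} "
            f"user={user} password={password} sslmode={sslmode}")

# Usage (requires the psycopg2 package; not executed here):
# import psycopg2
# conn = psycopg2.connect(build_dsn("pg.example.com", "crm", "alice", "secret",
#                                   sslmode="require"))
```

libpq also accepts stricter values (verify-ca, verify-full) that the editor does not expose; the four listed here match the documented SSL Mode options.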
}, { "url": "https://docs.skyvia.com/connectors/databases/postgresql_connections/google-cloud-pg_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases PostgreSQL Google Cloud SQL for PostgreSQL Google Cloud SQL for PostgreSQL is a fully managed cloud-based alternative to relational database service for PostgreSQL. Skyvia supports usual TCP/IP PostgreSQL connections and secure SSL connections. Your cloud PostgreSQL instance must be available for external connections. How to Google Cloud PostgreSQL Instance for External Direct Connections Select your PostgreSQL instance from your Google Cloud SQL dashboard. Click Connections in the left menu and select NETWORKING . Click ADD A NETWORK and specify the IP addresses from which Skyvia will access your server. Create a separate network for each IP address. Click Overview in the left menu and find your server\u2019s public IP address. How to Get Certificates for SSL connections You can adjust the security settings in Google Cloud Console. An SSL server certificate is automatically created when you create your instance, and is required for SSL connections Click Connections in the left menu and switch to the Security tab. To get the certificates values for connection, scroll down to the Manage client certificate block. Click CREATE CLIENT CERTIFICATE . Name the certificate and click CREATE . Copy the certificate values or download the certificate files. Creating Connection You need to specify the following parameters for TCP/IP PostgreSQL connection: Server \u2014 public IP address of the PostgreSQL instance to connect to. Port \u2014 PostgreSQL Server connection port. Default value is 5432. User Id \u2014 user name to log in with. Password \u2014 password to log in with.You can find the user name by clicking Users in the left menu. Database \u2014 name of the PostgreSQL database you want to connect to. 
Schema \u2014 name of the PostgreSQL schema you want to connect to. Additional Connection Parameters Google Cloud SQL for PostgreSQL supports the SSL protocol. \nFor secure SSL connections, click Advanced Settings and specify the following parameters. Protocol \u2014 secure connection protocols. SSL Mode \u2014 this mode determines the priority of using secure SSL connection. You can select any of the following modes:\n * Allow \u2014 try first a non-SSL connection, then if that fails, try an SSL connection.\n * Disable \u2014 establish only an unencrypted connection. If this mode is selected, SSL is not used, and other SSL parameters are not available. This mode is selected by default.\n * Prefer \u2014 try first an SSL connection, then if that fails, try a non-SSL connection.\n * Require \u2014 establish only a secure SSL connection. SSL CA Cert \u2014 server certificate (server-ca.pem file content). SSL Cert \u2014 client certificate (client-cert.pem file content). SSL Key \u2014 client private key (client-key.pem file content). SSL TLS Protocol \u2014 preferred TLS protocol version reported to a server when establishing an SSL connection. The default value is 1.1 to avoid errors with older server versions that do not support TLS 1.2. Valid values are 1.0 , 1.1 , 1.2 . The details about other additional connection parameters are available in the PostgreSQL article. Supported Actions Skyvia supports all the common actions for Google Cloud SQL for PostgreSQL." }, { "url": "https://docs.skyvia.com/connectors/databases/redshift_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Amazon Redshift [Amazon Redshift](https://aws.amazon.com/redshift/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) is a data warehouse product which forms part of the larger cloud-computing platform Amazon Web Services. 
Amazon Redshift-Specific Features and Limitations Skyvia has the following limitations for Amazon Redshift: Synchronization is not supported for Amazon Redshift. Amazon Redshift Connections To connect to Redshift, you need to specify the server host name or IP address, the port the server is running on, the user name and password to log in with, and the database name. If you need to import data to Redshift via Skyvia or perform replication , you will need to specify additional advanced parameters described below. You need to specify the following parameters for Amazon Redshift connection: Name \u2014 connection name that will be used to identify the connection in the objects list and when selecting a connection for an integration. Server \u2014 name or IP address of the Redshift host to connect to. Port \u2014 Redshift connection port; default value is 5432. User Id \u2014 the username used to log in. Password \u2014 password to log in with. Database \u2014 the name of the Redshift database to which you are connecting. Schema \u2014 the name of the Redshift schema to which you are connecting. To execute Import or Replication integrations via this connection, you must configure Advanced Settings and provide parameters for connecting to Amazon S3. This is required because Skyvia uses Redshift\u2019s data import from Amazon S3 for these operations. Specifically, Skyvia uploads data as CSV files to Amazon S3, instructs Redshift to import the data, and deletes the CSV files after the import is complete. Therefore, you must specify the S3 region to use and provide either AWS Security Token or AWS Access Key ID and AWS Secret Key . You may also optionally specify the S3 Bucket Name for file uploads. Below are descriptions of these parameters: AWS Access Key ID \u2014 first part of your Amazon Web Services access key. AWS Secret Key \u2014 second part of your Amazon Web Services access key. 
[Read more about AWS access keys\u2026](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html?icmpid=docs_iam_console) AWS Security Token \u2014 an alternative to the AWS Access Key ID and AWS Secret Key pair, the Amazon Web Services security token is a temporary limited-privilege credential. S3 Region \u2014 AWS region, where your S3 storage is hosted. S3 Bucket Name \u2014 here you may specify the name of your S3 bucket to temporarily load the file with imported or replicated data to. If you leave it empty, Skyvia will create a new bucket when importing or replicating data to Redshift and delete it after the operation is finished. SSL Connections To use an SSL connection, click Advanced Settings and select SSL under Protocol . As of January 10, 2025, all newly created Amazon Redshift clusters have the require_ssl parameter set to true by default. If a connection attempt over TCP fails, check your cluster\u2019s configuration settings. If the require_ssl parameter is set to true , make sure to connect via SSL or change the require_ssl value to false . SSL Mode \u2014 this mode determines the priority of using secure SSL connection. You can select any of the following modes: Allow \u2014 try first a non-SSL connection, then if that fails, try an SSL connection. Disable \u2014 establish only an unencrypted connection. If this mode is selected, SSL is not used, and other SSL parameters are not available. This mode is selected by default. Prefer \u2014 try first an SSL connection, then if that fails, try a non-SSL connection. Require \u2014 establish only a secure SSL connection. SSL CA Cert \u2014 authority certificate. Click the button with three dots to open a multiline editor and paste the content of the certificate file there. SSH Connections To use an SSH connection, click Advanced Settings and select SSH under Protocol : SSH Authentication Type \u2014 type of SSH authentication to use: Password or PublicKey . 
When using password authentication, you need to specify the SSH password. For public key authentication, you need to specify the private key and its passphrase. SSH Host \u2014 name or IP address of the SSH server. SSH Port \u2014 TCP/IP port to connect to the SSH Server. By default, it is 22. SSH User \u2014 user name on the machine where the SSH Server is running. It is an operating system user, not a user of the Redshift database. SSH Password \u2014 password of a user account on the SSH Server. It is available if the Password SSH Authentication Type is selected. SSH Passphrase \u2014 passphrase for a private key. You can set it while generating public and private key files through a key generator tool, for example [PuTTYgen](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) . It is available if the PublicKey SSH Authentication Type is selected. SSH Private Key \u2014 private key. Click the button with three dots to open a multiline editor and paste the content of the key file there. It is available if the PublicKey SSH Authentication Type is selected. Other Advanced Parameters Unicode \u2014 select this checkbox to use Unicode encoding for the connection. Command Timeout \u2014 this parameter specifies the wait time before terminating an attempt to execute a command and generating an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it doesn\u2019t include the time necessary to fetch data if the command selects some data. Connection Timeout \u2014 this parameter determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. Generally you don\u2019t need to modify it, but in some specific cases when a connection to the database server is not good you may try increasing its value. 
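The S3 staging flow described earlier (Skyvia uploads CSV files to S3, then has Redshift import them) boils down to Redshift's COPY command. A rough sketch of the statement that would be issued, with hypothetical table, bucket, and credential values:

```python
def build_copy_statement(table, bucket, key, access_key_id, secret_key):
    """Assemble a Redshift COPY statement that loads a staged CSV from S3.

    All argument values used below are hypothetical placeholders.
    """
    creds = f"aws_access_key_id={access_key_id};aws_secret_access_key={secret_key}"
    return (
        f"COPY {table} FROM 's3://{bucket}/{key}' "
        f"CREDENTIALS '{creds}' FORMAT AS CSV"
    )

stmt = build_copy_statement("public.orders", "skyvia-temp", "orders-0001.csv",
                            "AKIAEXAMPLE", "secretexample")
print(stmt)
```

This is only an illustration of the load pattern; Skyvia builds and runs the actual statements itself and removes the staged files afterward.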
Use Bulk Import \u2014 this parameter affects import integrations with the INSERT, UPDATE and DELETE operations and with Amazon Redshift as a target. By default, these integrations import data as follows: Skyvia writes data into multiple temporary CSV files, uploads them to Amazon S3, and then instructs Redshift to import the data from these files. These actions occur simultaneously to ensure optimal performance. Once the CSV files are imported, they are deleted. However, when data is imported this way, it is not possible to generate a per-record error log. If you disable bulk import, Skyvia will use standard INSERT, UPDATE, and DELETE statements instead. This allows you to obtain a per-record error log but significantly reduces performance. Therefore, disabling bulk import is not recommended unless you: Are importing a small number of records and require a per-record error log, or Do not want to use Amazon S3 for temporary file storage. Supported Actions and Actions Specifics Amazon Redshift connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/databases/snowflake_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Snowflake [Snowflake](https://www.snowflake.com/) is a scalable cloud-based data platform, which enables data storage, processing and analytic solutions that are faster, more flexible and easier to use. 
The Snowflake data platform is not built on any existing database or \u201cbig data\u201d technology; instead, it combines a completely new SQL query engine with an innovative architecture natively designed for the cloud. Snowflake-Specific Features and Limitations Skyvia has the following limitations for Snowflake: Skyvia supports the following types of import operations for Snowflake: INSERT, UPDATE, DELETE. To create a Replication , specify the Schema and Files Storage for Bulk fields. Synchronization is not supported for Snowflake. Establishing Connection To create a connection to Snowflake, select the authentication type and specify the required connection parameters. \nSkyvia supports the following authentication types for Snowflake connection: OAuth authentication Basic authentication Key-pair authentication OAuth Authentication To connect to Snowflake using OAuth, you have to create an OAuth security integration in Snowflake first and obtain the Client Id and Client Secret . Creating Snowflake OAuth Security Integration If you want to use OAuth authentication in Skyvia, create a Snowflake OAuth Security Integration first. \nTo do that, run the following script in Snowflake. USE ROLE ACCOUNTADMIN ; CREATE SECURITY INTEGRATION Skyvia TYPE = OAUTH ENABLED = TRUE OAUTH_CLIENT = CUSTOM OAUTH_CLIENT_TYPE = 'CONFIDENTIAL' OAUTH_REDIRECT_URI = 'https://app.skyvia.com/oauthcallback/snowflake' OAUTH_ISSUE_REFRESH_TOKENS = TRUE OAUTH_REFRESH_TOKEN_VALIDITY = 7776000 ; The refresh token validity default value is 7776000 seconds (90 days). If you need to change the refresh token validity value, ask your Snowflake account administrator to send a request to Snowflake Support. 
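As a quick check on the number above: OAUTH_REFRESH_TOKEN_VALIDITY is given in seconds, and 7776000 is exactly the stated 90 days.

```python
# 90 days expressed in seconds, matching OAUTH_REFRESH_TOKEN_VALIDITY = 7776000
validity_seconds = 90 * 24 * 60 * 60
print(validity_seconds)  # 7776000
```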
Viewing OAuth Security Integration To view the created integration, run the following command: USE ROLE ACCOUNTADMIN;\n\nDESC SECURITY INTEGRATION SKYVIA; See more information about Snowflake security integrations [here](https://docs.snowflake.com/en/sql-reference/sql/create-security-integration-oauth-snowflake) . Getting Client Id and Client Secret To obtain the Client Secret and Client Id, run the following command. USE ROLE ACCOUNTADMIN;\n\nSELECT system$show_oauth_client_secrets('SKYVIA'); Creating OAuth Connection To connect to Snowflake using OAuth 2.0, create the OAuth Security Integration and perform the following steps: Specify the Snowflake account domain. Enter Client Id . Enter Client Secret . Click Sign In with Snowflake and enter your credentials. Specify the name of the database you want to connect to. Basic Authentication To connect to Snowflake using basic authentication, specify the following required connection parameters: Domain is a Snowflake account domain. User is a user name to log in with. Password is a password to log in with. Database is a database name. Key-pair Authentication To connect to Snowflake using the key pair, enter your Snowflake domain, user name, private key and passphrase. Getting Credentials for Key-pair Authentication To obtain the public-private key pair, perform the following steps: Generate a private key using the OpenSSL tool. Generate the public key for your private key. Assign the public key to your Snowflake user . Generating a Private Key You can use an encrypted or unencrypted private key.\nTo generate an unencrypted private key, run the following command. openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt To generate an encrypted private key, run the command below and set an encryption password, the passphrase that you need to provide while connecting to Snowflake using the key pair authentication. 
openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8 Replace the rsa_key.p8 placeholder in the commands above with your own key name. The private key is generated in the PEM format. -----BEGIN ENCRYPTED PRIVATE KEY-----\nMIIFHDBOBgk...\n-----END ENCRYPTED PRIVATE KEY----- Save the private key file in a secure location. Generating a Public Key To use key-pair authentication, generate a public key for the private key created earlier. \nTo do that, open a terminal window, and on the command line, run the following command. openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub Replace the rsa_key.p8 and rsa_key.pub placeholders in the command above with your own key file names. \nIf you are generating a public key for an encrypted private key, you will need to provide the encryption password used to create the private key.\nThe public key is generated in the PEM format. -----BEGIN PUBLIC KEY-----\nMIIBIjANBgk...\n-----END PUBLIC KEY----- Save the public key file in a secure location. Associating Public Key with Snowflake User To associate the created public key with the Snowflake user, go to Snowflake and run the following command. ALTER USER your_user SET RSA_PUBLIC_KEY='MIIBIjANBgk...'; Replace the your_user placeholder with your user name. \nSpecify the public key value omitting the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- delimiters. Creating Key-pair Connection To connect to Snowflake using the key-pair authentication, specify the following required connection parameters. Domain is a Snowflake account domain. User is a user name to log in with. Private Key is an RSA private key . If you use the encrypted private key, also specify the Passphrase - the password for the RSA encrypted private key. Additional Connection Parameters If you need to perform import or replication , you will also have to specify some additional parameters described below. Schema is a current schema name in the database. 
Although this parameter is optional, you need to specify it when using replication or import in the Bulk Load mode. Warehouse is a name of the warehouse used for a database. Role is a role name used to connect. Advanced Settings Command Timeout Specifies how long to wait for a server response after sending a command, excluding the time needed to fetch any selected data. STRING(38) As Guid Specifies whether values in columns of type STRING (38) should be interpreted as GUIDs. File Storage for Bulk Allows you to choose a file storage service for bulk loads: None, Internal, Amazon S3, or Azure. Depending on the selected service, additional parameters may be required for connection. Option Description None When None is selected, Skyvia uses INSERT, UPDATE, and DELETE statements for importing data. This enables per-record error logging but significantly reduces performance. Using this option is not recommended unless you\u2019re importing a small number of records, need detailed error logs, or prefer not to use external storage for temporary files. Internal When Internal is selected, Skyvia uses built-in storage for temporary files during bulk import. This option offers good performance without requiring external storage configuration. Amazon S3 When selecting Amazon S3 Storage from Files Storage for Bulk , you need to specify the following additional parameters: AWS Access Key ID \u2014 first part of your Amazon Web Services access key. AWS Secret Key \u2014 second part of your Amazon Web Services access key. Read more about AWS access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html?icmpid=docs_iam_console) . AWS Security Token \u2014 an alternative option to the AWS Access Key ID and AWS Secret Key pair. The Amazon Web Services security token is a temporary limited-privilege credential. S3 Region \u2014 AWS region, where your S3 storage is hosted. S3 Bucket Name \u2014 it is an optional parameter. 
You may specify the name of your S3 bucket to temporarily load the file with imported or replicated data to. If you leave it empty, Skyvia will create a new bucket when importing or replicating data and delete it after the operation is finished. Azure When selecting Azure Blob Storage from Files Storage for Bulk , you need to specify the following additional parameters: Azure Storage Account \u2014 Azure storage account name. Azure Storage Account Key \u2014 your 512-bit storage access key. Azure Storage Endpoints Protocol \u2014 this parameter determines the protocol to use (HTTPS or HTTP). Use Bulk Import When selected, Skyvia imports data by creating temporary CSV files, uploading them to the storage of your choice, and then instructing Snowflake to load the data from those files. This process runs in parallel for better performance. Once done, the CSV files are deleted. However, this method doesn\u2019t provide detailed error logs for individual records. If you clear this checkbox, Skyvia will use standard SQL commands (INSERT, UPDATE, DELETE) to load data. Disabling bulk import provides per-record error logs , but significantly reduces integration performance. You can disable it if you\u2019re importing a small number of records or need a detailed error log. This option is not available when None is selected in File Storage for Bulk and affects only import integrations with the INSERT, UPDATE and DELETE operations and with Snowflake as a target." }, { "url": "https://docs.skyvia.com/connectors/databases/sqldatawarehouse_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases Azure Synapse Analytics [Azure Synapse Analytics](https://azure.microsoft.com/en-us/products/synapse-analytics/) is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Note that Skyvia supports only Dedicated SQL Pool. 
Azure Synapse Analytics-Specific Features and Limitations Skyvia has the following limitations for Azure Synapse Analytics: Synchronization is not supported for Azure Synapse Analytics. Skyvia supports only Dedicated SQL Pool. It does not support Serverless SQL Pool. Azure Synapse Analytics Connections To connect to Azure Synapse Analytics, you need to specify the database server host name or IP address, user name and password to log in with, and database name. If you need to import data to Azure Synapse Analytics or perform replication , you will also need to specify some advanced parameters described below. You need to specify the following parameters for Azure Synapse Analytics connection: Server \u2014 name or IP address of the Azure Synapse Analytics server to connect to. Leaving this field empty means using localhost. User Id \u2014 user name to log in with. Password \u2014 password to log in with. Database \u2014 name of the database you want to connect to. If you want to execute Import integrations that insert data to Azure Synapse Analytics, or Replication integrations via this connection, you need to click Advanced Settings and set parameters for connecting to the Azure Blob storage service. This is necessary because for these operations Skyvia uses PolyBase to ensure the fastest data loading to Azure Synapse Analytics. It loads data to Azure Blob Storage as CSV files, uses PolyBase to import data from these files to Azure Synapse Analytics, and then deletes the CSV files after the import. Thus, you need to specify the Storage Account and Storage Account Key to use. You may also optionally change the protocol from the default HTTPS to HTTP, but this is not recommended. Here are the descriptions of these parameters: Storage Account \u2014 Azure storage account name. Storage Account Key \u2014 your 512-bit storage access key. Storage Endpoints Protocol \u2014 this parameter determines the protocol to use (HTTPS or HTTP). 
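The three storage parameters above are the pieces of a standard Azure Storage connection string. A sketch with hypothetical account values:

```python
# Hypothetical account name and key; the three Skyvia parameters map onto the
# fields of a standard Azure Storage connection string.
storage_account = "mystorageacct"         # Storage Account
storage_account_key = "base64keyvalue=="  # Storage Account Key
endpoints_protocol = "https"              # Storage Endpoints Protocol

conn_str = (
    f"DefaultEndpointsProtocol={endpoints_protocol};"
    f"AccountName={storage_account};"
    f"AccountKey={storage_account_key}"
)
print(conn_str)
```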
You can also optionally set the following parameters in Advanced Settings : Command Timeout \u2014 this parameter specifies the wait time before terminating an attempt to execute a command and generating an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it doesn\u2019t include the time necessary to fetch data if the command selects some data. Connection Timeout \u2014 this parameter determines the time (in seconds) to wait while trying to establish a connection before terminating the attempt and reporting an error. Generally you don\u2019t need to modify it, but in some specific cases when a connection to the database server is not good you may try increasing its value. Use Bulk Import \u2014 this parameter affects import integrations with the INSERT, UPDATE and DELETE operations and with Azure Synapse Analytics as a target. By default, such integrations import data using PolyBase to quickly load large volumes of data to Azure Synapse Analytics. Skyvia writes data into multiple temporary CSV files, uploads them to Azure Blob Storage, and then tells Azure Synapse Analytics to import data from these CSV files. These actions are performed simultaneously, providing the best performance. After the CSV files are imported, they are deleted. However, when data is imported this way, it\u2019s not possible to obtain a per-record error log. If you disable bulk import, Skyvia will use usual INSERT, UPDATE, and DELETE statements for importing data. This allows you to obtain a per-record error log , but significantly reduces performance. Thus, disabling bulk import is not recommended and should only be considered if you are importing a small number of records and need a per-record error log, or you do not want to use Azure Blob Storage for temporary files. 
Supported Actions and Actions Specifics Azure Synapse Analytics connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." }, { "url": "https://docs.skyvia.com/connectors/databases/sqlserver_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases SQL Server or SQL Azure Skyvia supports usual unencrypted connections and secure SSL connections to SQL Server or SQL Azure. Note that for connecting both to SQL Server and to SQL Azure, you need to select the SQL Server connector when creating a connection. To connect to a SQL Server instance, you need to specify the database server host name or IP address, user name and password to log in with, and database name. For connecting to local SQL Server, Skyvia offers two options: direct connections and agent connections . In order to use a direct connection, your SQL Server must be available through the Internet. If you are connecting to SQL Server on your local computer directly from Skyvia, allow such connections in your firewall. If you are connecting to a computer in your local network, you should use port forwarding. You can find the Skyvia service IPs to allow in the firewall and some advice on configuring your SQL Server in order to access it directly from Skyvia in the How to Configure Local Database Server to Access It from Skyvia topic. Agent connections don\u2019t have such requirements, but you need to install the Skyvia Agent application in order to use them. 
You need to specify the following parameters for a TCP/IP SQL Server connection: Server \u2014 name or IP address of the SQL Server instance to connect to. Here you also specify the SQL Server named instance name and, if necessary, port, in one of the following formats: tcp:<IP>\<named instance> tcp:<IP>,<port> Here IP is the IP address or domain name of the SQL Server computer, named instance is the name of the corresponding SQL Server named instance, and port is the SQL Server port. The port part is necessary only if your SQL Server uses a non-default port number. Otherwise, the comma and port may be omitted. User Id \u2014 user name to log in with. Password \u2014 password to log in with. Database \u2014 name of the SQL Server database you want to connect to. If you want to use an SSL connection, you additionally need to click Advanced Settings and set the following parameters: Encrypt \u2014 this parameter determines whether to use SSL encryption for all data sent between the client and server if the server has a certificate installed. Trust Server Certificate \u2014 this parameter determines whether the channel is encrypted while bypassing the certificate chain to validate trust. Use DATETIME2 data type \u2014 this parameter determines whether Skyvia uses the datetime or datetime2 data type when creating the target table in the replication. In Advanced Settings , you can also set the Command Timeout interval. It specifies the wait time before terminating an attempt to execute a command and generating an error. Note that it is the time to wait for any server reply since the command was sent to a server, and it doesn\u2019t include the time necessary to fetch data if the command selects some data. Additionally, Advanced Settings include the Connection Timeout parameter. This parameter determines the time (in seconds) to wait while trying to establish a connection before terminating an attempt and reporting an error. 
Generally, you do not need to modify it, but in some specific cases, when a connection to the database server is not good, you may try increasing its value. Enabling Change Tracking (CT) in SQL Server If you plan to replicate data from SQL Server to another database or data warehouse using Log-Based Ingestion mode, you will need to activate Change Tracking (CT) in SQL Server. To do so, follow the steps below: 1. Enable Change Tracking (CT) at the Database Level First, enable change tracking for the entire database by using the ALTER DATABASE statement. USE master;\nGO\nALTER DATABASE YourDatabaseName\nSET CHANGE_TRACKING = ON\n(CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);\nGO CHANGE_RETENTION = 7 DAYS : This sets the retention period for change tracking information. You can adjust this to meet your requirements. The retention period should be longer than the interval between incremental replication runs for replication to work properly. AUTO_CLEANUP = ON : This setting ensures that SQL Server automatically cleans up the change tracking information after the retention period. 2. Enable Change Tracking (CT) on Specific Tables Then, enable change tracking for the specific tables you want to track. USE YourDatabaseName;\nGO\nALTER TABLE YourTableName\nENABLE CHANGE_TRACKING;\nGO Example Here\u2019s an example that puts it all together. Assume you have a database named SalesDB and a table named Customers . Enable Change Tracking on the Database: USE master;\nGO\nALTER DATABASE SalesDB\nSET CHANGE_TRACKING = ON\n(CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);\nGO Enable Change Tracking on the Customers Table: USE SalesDB;\nGO\nALTER TABLE Customers\nENABLE CHANGE_TRACKING;\nGO 3. 
Verify Change Tracking (CT) Configuration You can verify that change tracking is enabled by querying the system catalog views: -- Check if change tracking is enabled on the database\nSELECT name, is_auto_clean_on, retention_period, retention_period_units_desc\nFROM sys.change_tracking_databases\nWHERE database_id = DB_ID('SalesDB');\n\n-- Check if change tracking is enabled on the table\nSELECT name, object_id\nFROM sys.change_tracking_tables\nWHERE object_id = OBJECT_ID('SalesDB.dbo.Customers'); SQL Azure When connecting to SQL Azure, you need to select the SQL Server connector and specify the Server value in the following format: TCP:<Server_DNS_Name>,<Port> The port part (with the comma) is usually not necessary; it is required only when the port configured on the server differs from the default one. You can use the IP address instead of the server DNS name, but this is not recommended, because an IP address of an Azure virtual machine may change when Azure moves resources for redundancy or maintenance. You can get the server DNS name in the following way: In the Azure Management Portal, select VIRTUAL MACHINES. On the VIRTUAL MACHINE INSTANCES page, under the Quick Glance column, find and copy the DNS name for the virtual machine. The port can be found in the following way: In the Azure Management Portal, find the Virtual Machine. On the Dashboard, click ENDPOINTS and use the PUBLIC PORT assigned to MSSQL. Supported Actions and Actions Specifics SQL Server connector supports the following actions : Execute Command in Source, Lookup, and Target Data Flow components and in Import and Export tasks in the Advanced mode. Execute Query in Source Data Flow components and in Import and Export tasks in the Advanced mode. Lookup in Lookup Data Flow components. Insert in Target Data Flow components. Update in Target Data Flow components. Delete in Target Data Flow components." 
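The Server value formats described in this article can be sketched as a small helper; the host names below are hypothetical examples:

```python
def server_value(host, named_instance=None, port=None):
    """Build the SQL Server connection 'Server' value.

    host, named_instance, and port are hypothetical example values; the port
    is only needed when the server uses a non-default port number.
    """
    value = f"tcp:{host}"
    if named_instance:
        value += f"\\{named_instance}"  # tcp:<IP>\<named instance>
    if port:
        value += f",{port}"             # tcp:<host>,<port>
    return value

print(server_value("203.0.113.5", named_instance="SQLEXPRESS"))
print(server_value("example.cloudapp.net", port=1433))
```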
}, { "url": "https://docs.skyvia.com/connectors/databases/sqlserver_connections/google-cloud-sql-server_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Databases SQL Server or SQL Azure Google Cloud SQL for SQL Server Google Cloud SQL for SQL Server is a fully managed, cloud-based relational database service for SQL Server. Skyvia supports direct connection to Google Cloud SQL for SQL Server. Your cloud SQL Server instance must be available for external connections. How to Configure Google Cloud SQL Server Instance for External Connections Select your SQL Server instance from your Google Cloud SQL dashboard. Click Connections in the left menu and select NETWORKING . Click ADD A NETWORK and specify the IP addresses from which Skyvia will access your server. Create a separate network for each IP address. Skyvia accesses your server from the IP addresses 40.118.246.204, 13.86.253.112, and 52.190.252.0. Click Overview in the left menu and find your server’s public IP address. Creating Connection You need to specify the following parameters for a TCP/IP Google Cloud SQL for SQL Server connection: Server — public IP address of the Google Cloud SQL instance to connect to. User ID — user name to log in with. You can find the user name by clicking Users in the left menu. Password — user’s password to log in with. Database — the name of the cloud database you want to connect to. The details about additional connection parameters are available in the SQL Server or SQL Azure article. Supported Actions Skyvia supports all the common actions for Google Cloud SQL for SQL Server."
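Before filling in the User ID and Password connection parameters, it can be convenient to set up a dedicated SQL Server login for the integration with only the access it needs. The snippet below is a hedged sketch using standard T-SQL; the login name skyvia_user and database name demo_db are hypothetical placeholders, not names from the documentation.

```sql
-- Run against the Google Cloud SQL for SQL Server instance.
-- skyvia_user and demo_db are placeholder names.
CREATE LOGIN skyvia_user WITH PASSWORD = 'Use-A-Strong-Password-1!';
GO
USE demo_db;
GO
CREATE USER skyvia_user FOR LOGIN skyvia_user;
-- Grant only what the integration needs, e.g. read/write on table data:
ALTER ROLE db_datareader ADD MEMBER skyvia_user;
ALTER ROLE db_datawriter ADD MEMBER skyvia_user;
GO
```

The connection's User ID and Password parameters would then be skyvia_user and the password chosen above.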
}, { "url": "https://docs.skyvia.com/connectors/file-storages/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages File storages are used to import CSV files and [binary data](https://docs.skyvia.com/data-integration/import/how-to-guides/importing-binary-data.html) from, and to export CSV files to. Skyvia supports the following file storages: Amazon S3 Azure Blob Storage Azure Data Lake Storage Azure File Storage Box Dropbox Google Drive OneDrive FTP SFTP SharePoint Files Zoho WorkDrive" }, { "url": "https://docs.skyvia.com/connectors/file-storages/amazons3_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Amazon S3 To connect to an Amazon S3 server, you need to specify the S3 Region , the AWS Access Key ID , and the AWS Secret Key . For temporary credentials, you also need to specify the AWS Security Token . You also need to specify the S3 Bucket Name to upload files to. You need to specify the following parameters for Amazon S3 connection: Access Key ID — first part of your Amazon Web Services access key. Secret Key — second part of your Amazon Web Services access key. [Read more about AWS access keys…](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html?icmpid=docs_iam_console) Security Token — session token used with temporary security credentials. [Read more about temporary credentials…](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) Region — AWS region where your S3 storage is hosted. Bucket Name — the name of your S3 bucket to load CSV files from or to. In the Advanced Settings you can also specify Working Directory - a path in the Amazon S3 bucket to use as a root folder in Skyvia. You may use this parameter, for example, if you only have access to a specific folder in the Amazon S3 bucket, and cannot access the bucket root.
Note that you should not add the bucket name to this setting. For example, if you have access to bucket/folder/subfolder where bucket is the bucket name, you should set Working Directory to folder/subfolder , and set Bucket Name to bucket ." }, { "url": "https://docs.skyvia.com/connectors/file-storages/azureblobstorage_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Azure Blob Storage [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs) is a secure cloud object storage designed for cloud-native workloads, archives, data lakes, HPC, and machine learning. Establishing Connection Skyvia connects to Azure Blob Storage using OAuth 2.0 authentication. To create a connection , sign in with your Microsoft Azure account. Skyvia does not store your login credentials. Instead, it uses OAuth 2.0 to generate an authentication token, which is stored on the Skyvia server and bound to this connection. Required connection parameters: Storage Account Name — the name of your storage account in the Azure portal. Tenant ID — the ID of the current directory in the Azure portal. Refresh Token — automatically generated upon signing in with your Azure account. Getting Credentials To retrieve the Tenant ID , follow these steps: In your Azure account, click the User icon in the top-right corner and select Switch directory : Locate the ID of the current directory on the Directories + subscriptions page: Copy the directory ID and paste it into the Tenant ID field. Creating Connection To create a connection to Azure Blob Storage, perform the following steps: Enter the Storage Account Name and Tenant ID . Click Sign In with Microsoft . In the sign-in window, choose your preferred method (email, phone, or Skype) and click Next : Grant the necessary permissions by clicking Accept : Click Create Connection .
Connector Specifics The connector supports Azure Data Lake Storage accounts, although there is a dedicated Azure Data Lake Storage connector available for such use cases." }, { "url": "https://docs.skyvia.com/connectors/file-storages/azuredatalakestorage_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Azure Data Lake Storage [Azure Data Lake Storage](https://azure.microsoft.com/en-us/products/storage/data-lake-storage/) is a massively scalable and secure data lake for your high-performance analytics workloads. Establishing Connection To create a connection to Azure Data Lake Storage, you need to select the authentication method: OAuth 2.0 , Account Access Key or SAS Token . Getting Credentials Tenant ID for OAuth 2.0 To obtain the Tenant ID, follow these steps: In your Azure account, click the User icon in the top-right corner and select Switch directory : Locate the ID of the current directory on the Directories + subscriptions page: Copy the Directory ID and paste it into the Tenant ID field. Account Access Key To obtain the Account Access Key, perform the following steps: Under resources, select your storage account. From the resource menu, under Security + networking , select Access Keys . Click Show to reveal your access key. Copy it. SAS Token To obtain the SAS Token, perform the following steps: Under resources, select your storage account. From the resource menu, under Security + networking , select Shared access signature . You can adjust the permissions to your needs, but the following parameters should be enabled: Allowed services: Blob; Allowed resource types: Container; Allowed permissions: Read, Create; Click Generate SAS and connection string . Copy the token. Creating OAuth 2.0 Connection To connect to Azure Data Lake Storage, perform the following steps: Enter the Storage Account Name and Tenant ID . Click Sign in with Microsoft . 
In the sign-in window, choose your preferred method (email, phone, or Skype) and click Next : Grant the necessary permissions by clicking Accept : Creating Connection Using Account Access Key To create a connection, specify the Storage Account Name and Account Access Key . Creating Connection Using SAS Token To create a connection, specify the Storage Account Name and SAS Token . Connector Specifics You can use this connector to access Azure Blob Storage via OAuth 2.0. However, Skyvia offers a dedicated Azure Blob Storage connector specifically for this purpose." }, { "url": "https://docs.skyvia.com/connectors/file-storages/azurefilestorage_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Azure File Storage Skyvia supports both Account Key and SAS Token authorization for Azure File Storage. By default, the connection uses the Account Key authorization, but if you want to provide only limited access to Skyvia, just select the SAS Token authorization. You need to specify the following parameters for Azure File Storage connection: Account Name \u2014 Azure storage account name. Authorization Type - Determines authorization to use - Account Key , providing full access to the storage account, or SAS Token , providing limited access. Account Key \u2014 your 512-bit storage access key. This parameter is used only for the Account Key authorization. SAS Token \u2014 a shared access signature token. It is a part of a signed URL to Azure File Storage resources. Endpoints Protocol \u2014 this parameter determines the protocol to use (HTTPS or HTTP). Share Name \u2014 name of the Azure Storage share." 
}, { "url": "https://docs.skyvia.com/connectors/file-storages/box_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Box When creating a Box connection, simply log in with [Box](https://www.box.com/en-gb/home) , and the [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token is stored on the Skyvia server. Your Box account login and password are not stored on the Skyvia server. Files on Box are accessed not by a file name, but by their unique ids. This causes the following behavior: if you replace a file used in an import on Box, you will need to edit the integration and reload the source file in it. Otherwise, the import won\u2019t be able to find the new file with the same name on Box when running next time. To create a Box connection, perform the following steps: Click +NEW in the top menu. Open the Select Connector page by clicking Connection in the menu on the left. In the opened page, click Box . In the Connection Editor page, in the Untitled field, specify a connection name that will be used to identify the connection. Click Sign In with Box . In the opened window, enter your Box credentials and click the Authorize button. Click Grant access to Box . Click the Create Connection button to create the connection." }, { "url": "https://docs.skyvia.com/connectors/file-storages/dropbox_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Dropbox When creating a [Dropbox](https://www.dropbox.com/?landing=dbv2) connection, simply log in with Dropbox. The [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token is stored on the Skyvia server. Your Dropbox account login and password are not stored on the Skyvia server. To create a Dropbox connection, perform the following steps: Click +NEW in the top menu. Open the Select Connector page by clicking Connection in the menu on the left. 
In the opened page, click Dropbox . In the Connection Editor page, in the Untitled field, specify a connection name that will be used to identify the connection. Click Sign In with Dropbox . In the opened window, enter your Dropbox credentials and click Sign In . Click the Allow button. Click the Create Connection button to create the connection." }, { "url": "https://docs.skyvia.com/connectors/file-storages/ftp_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages FTP To connect to an FTP server, you need to specify the FTP host, user name to log in, password and, if necessary, port. You need to specify the following parameters for FTP connection: Host — this parameter specifies the domain name or IP address of the host to connect to. Port — this parameter specifies the TCP port the FTP server listens on. User — user name to log in with. Password — password to log in with. Encryption Mode — this parameter determines the secure protocol to use (TLS or SSL or none). It can have the following values: None — default value, no encryption is used. Implicit — SSL is used. Explicit — TLS is used. Skyvia will access your server from the following IPs: 40.118.246.204, 13.86.253.112, and 52.190.252.0." }, { "url": "https://docs.skyvia.com/connectors/file-storages/googledrive_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Google Drive Establishing Connection When creating a connection to Google Drive, log in with Google. The [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token is stored on the Skyvia server. Your Google account login and password are not stored on the Skyvia server. Files on Google Drive are accessed not by a file name, but by their unique ids.
This causes the following behavior: if you replace a file used in an import on Google Drive, you will need to edit the integration and reload the source file in it. Otherwise, the import won’t be able to find the new file with the same name on Google Drive when run next time. Creating Connection To create a Google Drive connection, perform the following steps: Click Sign In with Google . Enter your email. Enter your password. Click Allow to grant Skyvia permission to access your Google account. Click the Create Connection button to create the connection." }, { "url": "https://docs.skyvia.com/connectors/file-storages/onedrive_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages OneDrive When creating a [OneDrive](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage) connection, simply log in with OneDrive. The [OAuth](https://en.wikipedia.org/wiki/OAuth) authentication token is stored on the Skyvia server. Your OneDrive login and password are not stored on the Skyvia server. Files on OneDrive are accessed not by a file name, but by their unique ids. This causes the following behavior: if you replace a file used in an import on OneDrive, you will need to edit the integration and reload the source file in it. Otherwise, the import won’t be able to find the new file with the same name on OneDrive when run next time. To create a OneDrive connection, perform the following steps: Click +NEW in the top menu. Open the Select Connector page by clicking Connection in the menu on the left. In the opened page, click OneDrive . In the Connection Editor page, in the Untitled field, specify a connection name that will be used to identify the connection. If you want to connect to a corporate OneDrive account, in the Authentication box, select Microsoft Graph. Click Sign In with OneDrive . In the opened window, enter your email, phone, or Skype and click Next .
Enter your password and click Sign in . Click the Create Connection button to create the connection." }, { "url": "https://docs.skyvia.com/connectors/file-storages/sftp_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages SFTP To connect to an SFTP server, you need to specify the host, the authentication type to use, the port, the user name to log in, the password or the SSH Passphrase with SSH Private Key, depending on the Authentication type selected. You need to specify the following parameters for SFTP connection: Host — this parameter specifies the domain name or IP address of the host to connect to. Port — this parameter specifies the TCP port the SFTP server listens on. Working Directory - a path on the SFTP server to use as a root folder in Skyvia. You may use this parameter, for example, if you only have access to a specific folder on the SFTP server, and cannot access its root folder. User — user name to log in with. Authentication Type — type of SSH authentication to use: User Name & Password, Keyboard Interactive, Multi-Factor, or Public Key . When using the user name & password or keyboard interactive authentication, you need to specify the SSH password. For Multi-Factor authentication, specify the password, SSH passphrase, and SSH private key. For public key authentication, you need to specify the passphrase for the private key and the private key. Skyvia supports RSA and DSA private keys in both OpenSSH and ssh.com formats. Password — password to log in with. SSH Passphrase — passphrase for a private key. You can set it while generating public and private key files through a key generator tool, for example [PuTTygen](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) . Available if Public Key SSH Authentication Type is selected. SSH Private Key — private key.
Paste the content of the key file into the box or click the button to open a multiline editor and paste the content of the key file there. Available if Public Key SSH Authentication Type is selected. Skyvia will access your server from the following IPs: 40.118.246.204, 13.86.253.112, and 52.190.252.0." }, { "url": "https://docs.skyvia.com/connectors/file-storages/sharepointfiles_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages SharePoint Files [SharePoint](https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration) is a cloud-based content management and collaboration platform from Microsoft. Establishing Connection To create a connection to SharePoint Files, perform the following steps: Click Sign In with Microsoft . Enter your email. Enter your password, and click Sign in . Click Accept . Click Create connection ." }, { "url": "https://docs.skyvia.com/connectors/file-storages/zohoworkdrive_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors File Storages Zoho WorkDrive [Zoho WorkDrive](https://www.zoho.com/workdrive/) is a secure online file storage and collaboration platform for modern teams, small businesses, and large enterprises. Establishing Connection To create a connection to Zoho WorkDrive, perform the following steps: Click Sign In with Zoho . Enter your email, password, and click Sign In . Click Accept . Click Create connection ." }, { "url": "https://docs.skyvia.com/connectors/odbc_connections.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors ODBC Skyvia allows you to connect to different data sources via [ODBC](https://learn.microsoft.com/en-us/sql/odbc/reference/odbc-overview) - a standard API to access database management systems.
This allows connecting Skyvia to a number of data sources that are not supported directly, via the corresponding ODBC driver. ODBC connections can work only via Agents . The ODBC driver must be installed on the same computer on which the Agent application is installed. ODBC drivers exist for many major data sources and even for local files, like local Excel workbooks and CSV files. ODBC Connection Limitations ODBC connections are not supported in synchronization and replication integrations, because the data source you connect may have no fields storing record creation and modification time. ODBC connections also do not support Backup and Connect . Skyvia does not support using the Returning feature to return generated values for the result records in import integrations with the ODBC connection as a target. The same limitation applies to the Insert action . How to Create ODBC Connection Requirements Skyvia Agent bitness must correspond to the bitness of the respective ODBC driver. Note that Skyvia Agent installs both 64-bit and 32-bit versions on 64-bit systems, so you can run the corresponding Agent app version. Creating Connection To create an ODBC connection, you need to specify either a DSN or a Connection String (optionally, with Secret Connection String ). If you specify DSN , Connection String and Secret Connection String settings are ignored. DSN (Data Source Name) - the name of an ODBC data source (system or user data source, preconfigured on your computer). An ODBC data source is a combination of the data source itself (a database or a file) and the connection information required to access it. On Windows, you create data sources in the ODBC Data Source Manager. You can find more information on Data Source configuration in [Microsoft Documentation](https://support.microsoft.com/en-us/office/administer-odbc-data-sources-b19f856b-5b9b-48c9-8b93-07484bfab5a7#bm1) . User data sources can be accessed only if the Agent application runs on behalf of the corresponding user.
If the Agent app is running as a service, you can use only system DSNs. Connection String - the ODBC driver connection string with settings including the driver name and connection parameters required for establishing the connection. It consists of a list of keyword=value pairs, separated by semicolons. Here are several examples of connection strings: Driver={SQL Server};Server=(local);Trusted_Connection=Yes;Database=AdventureWorks;\n\nDriver={Microsoft ODBC for Oracle};Server=ORACLE8i7;Persist Security Info=False;Trusted_Connection=Yes\n\nDriver={Microsoft Access Driver (*.mdb)};DBQ=c:\\bin\\Northwind.mdb\n\nDriver={Microsoft Excel Driver (*.xls)};DBQ=c:\\bin\\book1.xls\n\nDriver={Microsoft Text Driver (*.txt; *.csv)};DBQ=c:\\bin\n\nDriver={Devart ODBC Driver for MySQL};User ID=root;Password=test;Data Source=db;Database=demobase;Port=3310\n\nDSN=dsnname Note that Connection String can be viewed and edited later by users who have access to the workspace containing the connection. If the connection string contains sensitive parameters, like passwords or tokens, it’s better to move them into Secret Connection String. Secret Connection String - the concealed part of the connection string for storing parameters like passwords, tokens, etc. Once it is specified, it cannot be obtained or viewed on Skyvia. While you can specify any connection parameters here, it is intended for storing sensitive parameters. For example, if your ODBC connection string includes a password, it’s better to move it to the Secret Connection String, so it will be like Password=test;" }, { "url": "https://docs.skyvia.com/connectors/triggers/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Triggers A trigger is set up to monitor a specific event or condition in the chosen data source. When the specified event occurs, the trigger detects it and starts the automation execution.
Connection triggers are connector-specific; trigger availability depends on the chosen connection. Common Triggers The following triggers are common for most of the cloud apps and databases: New Record New/Updated Record Deleted Record" }, { "url": "https://docs.skyvia.com/connectors/triggers/deleted-record.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Triggers Deleted Record The Deleted Record trigger checks whether records were deleted from the chosen object at the interval set in the trigger settings. The Deleted Record trigger is not available for use with database connections. Trigger Settings Setting Description Table The object in the cloud app to check for deleted records. Columns The columns in the object to check for deleted records. Poll Interval Time interval between checks for deleted records. Example Here is an example of the Deleted Record trigger in Salesforce:" }, { "url": "https://docs.skyvia.com/connectors/triggers/new-record.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Triggers New Record The New Record trigger checks if there are new records in the selected data source with the interval specified in the settings and starts the automation flow when it finds one. Trigger Settings Database Trigger Settings Setting Description Table The database table to look for new records Columns The columns in the database table to look for new records Unique Key Used to avoid row duplication. The value should be unique, incremental, and sortable. We recommend using a primary key.
Poll Interval Time interval in which Skyvia will check for new records Cloud App Trigger Settings Setting Description Table The object in the cloud app to look for new records Columns The columns in the object to look for new records Poll Interval Time interval in which Skyvia will check for new records Example Here is an example of the New Record trigger in Salesforce:" }, { "url": "https://docs.skyvia.com/connectors/triggers/updated-record.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Connectors Triggers New/Updated Record The New/Updated Record trigger checks if there are new or updated records in the selected data source with the interval specified in the settings and starts the automation flow when it finds any. Trigger Settings Database Trigger Settings Setting Description Table The database table to look for new records Columns The columns in the database table to look for new records Unique Key Used to avoid row duplication. The value should be unique, incremental, and sortable. We recommend using a primary key. Sort Column A column with a timestamp of the record update. Skyvia tracks these values, and when it finds an updated one, it counts it as a trigger event. Only columns with datetime and datetime2 types can be used. Poll Interval Time interval in which Skyvia will check for new records Cloud App Trigger Settings Setting Description Table The object in the cloud app to look for new records Columns The columns in the object to look for new records Poll Interval Time interval in which Skyvia will check for new records Example Here is an example of the New/Updated Record trigger in SQL Server:" }, { "url": "https://docs.skyvia.com/connect/monitoring-endpoint-activity.html", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect.
Documentation Connect Monitoring Endpoint Activity Skyvia allows you to view all the activity on your endpoints (all your data access and manipulation operations) as well as to view all users, their operations, and the IPs from which these operations are performed. You can also filter requests depending on their final result — failed, succeeded or all requests. Moreover, you can select different time periods to display your endpoint requests. To monitor your endpoint activity, click the Log tab in the toolbar and enjoy advanced functionality without any effort. Selecting a Period to Display Endpoint Requests This tool is very useful for those clients who have high endpoint activity each day. It provides easier visual perception of the results and allows you to view a request history for a specified period of time. You can set time periods to display your requests in the following way: By clicking the Today list, which allows selecting a period from the available options. You can display requests for today or yesterday only, or select a longer display period — for the whole week/month, for the last 30 days. By clicking Calendar , which allows selecting any period you want. You can either select certain dates from the calendar or change the time period manually by entering the required dates in the box. We do not limit you; you are free to change any parameter in the box — year, month, day, even minutes or seconds. Using the calendar, you can select days in the middle of the week, only weekends or select longer periods from several months to several years. For example, to specify a period of several months, navigate to the start date of the period you want to select by using the arrows at the top of the calendar to scroll through the months, and click it. Then, in the same way, navigate to the end date of the necessary period and click the end date.
By selecting a time span on the timeline using the mouse pointer, which allows narrowing down the period in order to see the endpoint activity in more detail. You can do it either by moving the mouse pointer along the time span and selecting the right interval or by clicking one of the activity bars on the chart. Clicking a bar displays only that period and splits days into hours, months into days, or years into months. Filtering Endpoint Requests by Status (Succeeded, Failed or All) On the Log tab, you can easily view all your endpoint requests filtered according to their result. For example, you can display only succeeded requests, only failed, or all of them. To do it, click the Failed or Succeeded buttons at the top left corner of the page. The All button displays all endpoint requests together in one place. To check how many requests were successful and how many failed within a day, hover over the corresponding bar in the bar chart, and the number of requests will be displayed. Successful requests are always highlighted in green, failed ones — in red. Information on Endpoint Requests The detailed log contains a list of requests for the selected day or specified period. It is represented as a table with certain parameters under the bar chart and displays the following information for each request: Request date; Request status; Transferred bytes; HTTP request method; Remote IP address of the request; Accessed URL. By clicking a specific endpoint request in the list, you open the History Details window with additional information, such as: The user who performed the request (if authentication is used for the endpoint); Request execution details, including data source access operations and SQL statements executed against the data source and the time these operations took (when applicable)."
Documentation Connect OData Endpoints Skyvia Connect is an OData-server-as-a-service product that allows you to quickly and easily expose your data from various sources via OData REST API and make it available in JSON or XML format over the web. It allows you to easily create OData endpoints in a convenient GUI, configure them visually, and get a ready-to-use OData endpoint URL, which you can connect to in OData consumer applications and services immediately. You don’t need to care about OData server hosting, deployment, and administration at all. OData Protocol OData is a widely accepted open standard protocol for data access over the Internet. It provides queryable and interoperable RESTful APIs for working with data. It allows bi-directional data access with full CRUD support over the HTTP protocol and provides a query language via specially constructed URLs and full metadata information. The OData standard is developed by [OASIS](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=odata) . You can find more information about the OData standard and more useful OData tools and resources on [www.odata.org](http://www.odata.org/) . Skyvia Connect allows creating OData endpoints compatible with OData v1 - v3 or with OData v4. You can see the list of supported OData features along with examples of OData protocol requests you can use against Skyvia Connect endpoints in the Supported OData Protocol Features topic. OData Server An OData server is software that provides access to data via the OData protocol. It accepts OData requests and returns or modifies data according to these requests. If the data of this server must be accessible from the Internet, the server must be hosted somewhere, have a static IP, a domain registered, and a certificate for secure data access. Being OData-server-as-a-service, Skyvia solves all these challenges without any need for user interaction.
All you need is to select what data to publish via an OData endpoint, and Skyvia takes care of the rest. It provides you with a ready-to-use OData endpoint link. The service is hosted on the endpoint.skyvia.com subdomain and uses this subdomain's certificate. All the deployment and administration tasks are performed by Skyvia engineers.

OData Endpoint Basics

In Skyvia Connect, you create OData endpoints by selecting objects to expose from a data source connection and selecting fields to expose data from. Data exposed by an OData endpoint is described by an entity data model. The main concepts of the OData Entity Data Model are entities, entity types, entity sets, relations, actions, and functions. Entities are instances of entity types. Entity types are named structured types with a key (a property or several properties that uniquely identify entities). Entity types have named properties and navigation properties, which represent relationships between entities. Entity sets are named collections of entities. Actions and functions are currently not supported in Skyvia.

Creating OData Endpoints

OData endpoints can be created using either Simple mode or Advanced mode. The Simple mode allows you to quickly and easily select data source objects to publish data from. If necessary, you can configure entities in more detail. It does not create any relationships between OData entities. See the detailed procedure of creating an OData endpoint in the How to Configure OData Endpoint in Simple Mode topic. In the Advanced mode, you drag data source objects from the object list to the diagram. Skyvia generates the corresponding entity set and entity type and adds the relations (associations) between added entity types based on the relations between data source objects. The Advanced mode allows not only adjusting entities, but also customizing associations between entity types.
See the detailed procedure of creating an OData endpoint in the How to Configure OData Endpoint in Advanced Mode topic. Note that the mode you select cannot be changed. For example, after you create an OData endpoint in the Simple mode, you won't be able to edit it in the Advanced mode and add associations to it. When you edit an existing endpoint, its Model tab uses the same mode that was used when creating the endpoint.

Working with Created Endpoints

After you create an OData endpoint, its details open automatically. The details consist of three tabs: Overview, Model, and Log. You can immediately copy the endpoint URL and use it in your OData consumer applications.

Using Created Endpoint

The Overview tab displays basic endpoint information, such as the endpoint kind (OData or SQL), the endpoint connection, and the numbers of endpoint users and IP address ranges, as well as the result endpoint URL. The endpoint URL leads to the OData endpoint with the selected version of the OData protocol. You can also access the endpoint via a specific OData protocol version by adding odata3/ or odata4/ to the result endpoint URL. You can quickly copy the URL or open it in a new tab using the corresponding buttons. If your endpoint is valid and does not use features unavailable in your Connect pricing plan, it is activated immediately, and you can copy and use the endpoint URL in your OData consumer applications. Here you can find some examples of working with data published via a Skyvia Connect OData endpoint using OData API calls in Postman. Whenever necessary, you may change the status of your endpoint to Inactive or back to Active in the endpoint title bar or via the corresponding quick action button in the object list. However, you can save an endpoint even if it is invalid (for example, it does not have any entity). Such an endpoint is considered a draft endpoint.
If your endpoint is invalid (a draft) or uses features unavailable in your pricing plan, it is created inactive and cannot be activated until you fix it (for a draft endpoint) or upgrade your Connect subscription to a plan that includes the necessary features.

Editing Created Endpoint

The Model tab allows you to edit the endpoint whenever necessary. You can add or remove entities, configure them, edit associations between them (if the endpoint was created in Advanced mode), and change the default OData version and endpoint access mode from the toolbar. The changes are applied when you click the Save button on the toolbar. To modify the endpoint security settings, switch to the Overview tab and click the down arrow buttons near the information about users and IP addresses.

Monitoring Endpoint Activity

The Log tab allows you to monitor all the data access via the endpoint. Check the Monitoring Endpoint Activity topic for more details.

Source: https://docs.skyvia.com/connect/odata-endpoints/adjusting-entities.html (Connect Documentation)

Adjusting Entities

Skyvia Connect allows configuring OData entities of your endpoints in both Simple and Advanced modes. You can change the generated names of entity types, their properties, and entity sets. Additionally, you can exclude data source columns from entities so that they are not available via the endpoint. To edit an entity, click the Edit Entity icon. In the Simple mode, this icon can be found near the selected checkbox of the corresponding object. In the Advanced mode, it is located in the header of an entity shape on the diagram. It opens the Edit Entity dialog box, displaying entity naming parameters and a table of entity fields with information about them. To delete an entity from the endpoint in the Advanced mode, click the Delete Entity button in the entity header.
In the Simple mode, simply clear the corresponding object checkbox.

Entity and Property Naming

By default, the entity name is generated based on the source table/object name. If the source table/object name is a plural form, it is singularized. The entity set name is generated by pluralizing the source table/object name if necessary. If the source table/object name contains characters not allowed in names by the OData standard, they are omitted. If the generated name coincides with an OData keyword, underscore characters are added on both sides of the name. If the default names don't suit you, you can edit the generated Entity Name and Entity Set Name in the Edit Entity dialog box. You can also specify the data source table to expose data from via this entity in the Source box. Property names are also generated by removing characters not allowed in names by the OData standard from the underlying column/field names. You can view the source column/field names in the Source column of the grid. If necessary, you may edit the generated property names in the Name column. This can be useful, for example, if the source has very long field names, and you want to create an OData endpoint for linking the source to Salesforce Connect. Salesforce Connect truncates property names over 50 characters, and sometimes this may lead to errors when two property names in the same entity are truncated to the same 50-character string.

Entity Key

An entity key is a set of properties that uniquely identifies an entity instance. Each entity must have an entity key. By default, it is generated from the primary key of the underlying database table or the id field of the underlying cloud object. The Edit Entity dialog box displays the properties included in the entity key with a key icon in the Key column. However, you may need to generate an OData entity from a database view or a BigQuery table, which does not have a primary key. If necessary, you may customize (or create) an entity key manually.
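Because an entity key must uniquely identify each entity instance, it can help to verify candidate key columns against sample data before selecting them. The sketch below is illustrative only; the rows and column names are invented, not taken from any real endpoint:

```python
def is_valid_entity_key(rows, key_fields):
    """Return True if key_fields are non-null and unique across all rows."""
    seen = set()
    for row in rows:
        key = tuple(row.get(field) for field in key_fields)
        if any(value is None for value in key):
            return False  # nullable key values are not allowed
        if key in seen:
            return False  # duplicate key values are not allowed
        seen.add(key)
    return True

# Invented rows mimicking a database view without a primary key.
rows = [
    {"deptno": 10, "dname": "Accounting"},
    {"deptno": 20, "dname": "Sales"},
    {"deptno": 20, "dname": "Sales2"},
]
print(is_valid_entity_key(rows, ["deptno"]))           # deptno repeats, so unsuitable
print(is_valid_entity_key(rows, ["deptno", "dname"]))  # composite key is unique
```

Running the same kind of check against a representative data sample (for example, a query over the source view) catches non-unique or nullable keys before they cause runtime errors in the endpoint.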
Just click the Key column for the necessary entity properties to toggle their key status. Please note that we don't check if the custom key you define is truly unique and non-nullable. If you customize an entity key, you must ensure this yourself. Using a non-unique or nullable entity key may lead to errors when working with the endpoint data. For example, if you request an entity by an entity key value, and this value is not unique in the data source, you will get the "Operation is not valid due to the current state of the object." error.

Excluding Source Fields from OData Endpoint

If you don't want some source fields to be available via your OData endpoint, you can easily hide them in the Edit Entity dialog box. For this, select the properties that you want to hide by clicking them (use the Ctrl or Shift keys to select several) and click Hide. To add a hidden field back to the entity as a property, select it and click Show. If you hide a non-nullable source field that does not have a default value, and its value is not autogenerated in the source, you won't be able to add new records to this entity via this endpoint.

Source: https://docs.skyvia.com/connect/odata-endpoints/how-to-configure-odata-endpoint-in-advanced-mode.html (Connect Documentation)

How to Configure OData Endpoint in Advanced Mode

An OData endpoint can be created using the +NEW menu, as any other object. With Skyvia's OData Endpoint Wizard, you can create an OData endpoint for your data source in just a few simple steps: Select Advanced mode to design the endpoint model. Select a connection to your data source. Define the endpoint model: select the data to publish, optionally adjust OData entities and associations between them. Configure endpoint security settings. Specify the endpoint name and some other settings and click the Save endpoint button.
Endpoint Connection

After you select the endpoint creation mode, the OData Endpoint Wizard page opens. On the first wizard step, you need to select a connection, or create a new one, to publish data from. You can create OData endpoints for all supported cloud applications, databases, and cloud data warehouses.

Selecting Data to Publish

On the second wizard step, you need to define which data to publish via the endpoint. The way you define it depends on the mode selected on the first step. In the Advanced mode, the left part of the wizard page displays data source tables/objects, and the right part displays the diagram with objects added to the endpoint. OData entities are presented on the diagram as rounded rectangle shapes, which consist of a header, a list of entity fields, and a list of navigation properties. Properties that are part of the entity key have a key icon on the left. To add an object to the endpoint, simply drag it from the list to the diagram. When you drag a table to the diagram, Skyvia automatically creates the corresponding entity set and entity type. Skyvia also automatically creates relationships (associations) with other entity types on the diagram, corresponding to tables with which there are relationships in the data source. If a table references itself in the data source, Skyvia creates a self-referencing association on the diagram. By default, Skyvia exposes all the table fields via the OData protocol. However, you can double-click entities on the diagram and tweak them in more detail, if necessary.

Customizing Associations

Associations can be added to the endpoint only in the Advanced mode. When dragging tables to an endpoint diagram, Skyvia automatically generates the corresponding OData entities and relationships (associations) between them, based on the data source metadata. They are displayed as dark blue lines on the diagram. Skyvia even supports self-referencing relationships, when an entity references itself.
For each relation, a navigation property is generated on each side of the relation. Navigation property names are generated either from a foreign key or foreign key field name in the data source, if possible, or simply from the related entity name. These names are then used in OData request URLs when selecting entities by relations. If necessary, you can edit or delete the generated relationships or even create your own custom ones.

Editing Associations

To edit an association, click Edit Association in an entity on either end of the association. The Edit Association dialog box opens. In this dialog box, you can set the Names for the corresponding navigation properties on both ends of the association. If needed, you can also select the entity columns on which the association is built and specify the relation Cardinality: One to Many, One to One, or One or Zero to One.

Adding Custom Associations

In addition to the associations automatically generated based on data source metadata, Skyvia allows adding custom associations to OData endpoints. To add a custom association, find the plus icon at the bottom of the entity that should become the parent in the new association. Drag this icon to the entity that should become the child in this association. This opens the Edit Association dialog box. In this dialog box, select the association Cardinality, specify the Names of the corresponding navigation properties, and select the data source Columns, based on which the association will be built. Please note that in case of custom or modified relationships, you are responsible for the referential integrity of the data in your data source.

Endpoint Settings: Security, OData Version, Write Access

On the third wizard step, you can change endpoint security settings. Namely, you can add user accounts with passwords to make your endpoint data available only for authenticated users. Additionally, you can allow access to your endpoint only for specific IP addresses.
Check Security Settings for more information. After this, click the Next step button at the bottom of the page. The last wizard step allows you to specify the new endpoint name and configure additional settings: the default OData protocol version and endpoint access mode. The following OData versions are available: oData Last, where the latest supported version is used (currently, this is OData v4); oData v4, where OData v4 is used, with JSON format for returned data and metadata; oData v1-v3, where OData v3 (backward compatible with OData v1) is used, with ATOM format for returned data and metadata by default. Please note that regardless of the selected version, Skyvia creates both OData v1-v3 and OData v4 endpoints, which are available by adding odata3/ or odata4/ to the result endpoint URL. The selected version is just the default version, available via the base endpoint URL without a version suffix. By default, Skyvia creates an endpoint with read/write access to data. Of course, data can actually be written via this endpoint only if the underlying data source allows writing into the corresponding tables/objects. You can optionally forbid writing to an endpoint by clicking the Read Only button.

Source: https://docs.skyvia.com/connect/odata-endpoints/how-to-configure-odata-endpoint-in-simple-mode.html (Connect Documentation)

How to Configure OData Endpoint in Simple Mode

An OData endpoint can be created using the +NEW menu, as any other object. With Skyvia's OData Endpoint Wizard, you can create an OData endpoint for your data source in just a few simple steps: Select Simple mode to define the endpoint model. Select a connection to your data source. Define the endpoint model: select the objects to publish and optionally adjust OData entities. Configure endpoint security settings.
Specify the endpoint name and some other settings and click the Save endpoint button.

Endpoint Connection

After you select the endpoint creation mode, the OData Endpoint Wizard page opens. On the first wizard step, you need to select a connection, or create a new one, to publish data from. You can create OData endpoints for all supported cloud applications, databases, and cloud data warehouses.

Selecting Data to Publish

On the second wizard step, you need to define which data to publish via the endpoint. In the Simple mode, the wizard displays the list of objects available via the selected connection. To add an object to the endpoint, simply select its checkbox. Select the checkbox in the header of the table with objects to add all of them. For added objects, Skyvia displays the generated Entity Type and Entity Set names. By default, Skyvia exposes all the table fields via the OData protocol. If you want to adjust an entity, modify the autogenerated names, or exclude some fields from the endpoint, click its Edit Entity icon next to the selected checkbox.

Endpoint Settings: Security, OData Version, Write Access

On the third wizard step, you can change endpoint security settings. Namely, you can add user accounts with passwords to make your endpoint data available only for authenticated users. Additionally, you can allow access to your endpoint only for specific IP addresses. Check Security Settings for more information. After this, click the Next step button at the bottom of the page. The last wizard step allows you to specify the new endpoint name and configure additional settings: the default OData protocol version and endpoint access mode. The following OData versions are available: oData Last, where the latest supported version is used (currently, this is OData v4); oData v4, where OData v4 is used, with JSON format for returned data and metadata; oData v1-v3, where OData v3 (backward compatible with OData v1) is used.
By default, it uses ATOM format for returned data and metadata. Please note that regardless of the selected version, Skyvia creates both OData v1-v3 and OData v4 endpoints, which are available by adding odata3/ or odata4/ to the result endpoint URL. The selected version is just the default version, available via the base endpoint URL without a version suffix. By default, Skyvia creates an endpoint with read/write access to data. Of course, data can actually be written via this endpoint only if the underlying data source allows writing into the corresponding tables/objects. You can optionally forbid writing to an endpoint by clicking the Read Only button.

Source: https://docs.skyvia.com/connect/odata-endpoints/how-to-link-external-data-to-salesforce-via-salesforce-connect-classic.html (Connect Documentation)

How to Link External Data to Salesforce via Salesforce Connect (Salesforce Classic)

Salesforce Connect allows you to link external data available via the OData protocol as external objects and then work with this data as with regular Salesforce objects. Thus, you can publish data from any supported cloud application or database via Skyvia Connect and then link this data to Salesforce via Salesforce Connect. This topic describes how to link an already configured OData endpoint to Salesforce via Salesforce Connect in Salesforce Classic. For Salesforce Lightning, see the following topic: How to Link External Data to Salesforce via Salesforce Connect (Salesforce Lightning).

Creating External Data Source and External Objects

First, you need to create an external data source in Salesforce. For this, perform the following steps: In the top right corner of the page, click your name and then click Setup. In the menu on the left, under App Setup, click Develop, and then click External Data Sources. Click New External Data Source.
Enter values in the External Data Source (a user-friendly name) and Name (a unique external data source identifier) boxes. In the Type list, select Salesforce Connect: OData 4.0. Paste the copied endpoint URL into the URL box. Select the Writable External Objects checkbox if you use an endpoint to a writable data source. You may specify other settings, such as High Data Volume, depending on your data in the data source. If you created user accounts with passwords for your endpoint in Skyvia, you also need to configure the Authentication settings. Set Identity Type to Named Principal, Authentication Protocol to Password Authentication, and specify the endpoint Username and Password. Click Save. Click Validate and Sync. Select the exposed tables you want to sync and click Sync. This will create the necessary external objects automatically.

Adding Tabs for External Objects

After defining the external data source and external objects, you may add tabs for external objects in order to work with them via the Salesforce UI. In the top right corner of the page, click your name and then click Setup. In the menu on the left, under App Setup, click Create, and then click Tabs. Click New. Select the required Object and set the Tab Style and Description. Click Next. Specify the tab visibility settings and click Next. Configure the tab availability for custom apps and click Save. That's all; your tab is ready, and now you can work with your external objects. By default, however, the list view on this tab does not display any useful fields. You can edit this view and select the fields from the external object it displays, if necessary.

Source: https://docs.skyvia.com/connect/odata-endpoints/how-to-link-external-data-to-salesforce-via-salesforce-connect-lightning.html (Connect Documentation)
How to Link External Data to Salesforce via Salesforce Connect (Salesforce Lightning)

Salesforce Connect allows you to link external data available via the OData protocol as external objects and then work with this data as with regular Salesforce objects. Thus, you can publish data from any supported cloud application or database via Skyvia Connect and then link this data to Salesforce via Salesforce Connect. This topic describes how to link an already configured OData endpoint to Salesforce via Salesforce Connect in Salesforce Lightning. For Salesforce Classic, see the following topic: How to Link External Data to Salesforce via Salesforce Connect (Salesforce Classic).

Creating External Data Source and External Objects

First, you need to create an external data source in Salesforce. For this, perform the following steps: In the top right corner of the page, click Setup, and then click the Setup menu item. In the menu on the left, under Platform Tools, click Integrations, and then click External Data Sources. Click New External Data Source. Enter values in the External Data Source (a user-friendly name) and Name (a unique external data source identifier) boxes. In the Type list, select Salesforce Connect: OData 4.0. Paste the copied endpoint URL into the URL box. Select the Writable External Objects checkbox if you use an endpoint to a writable data source. You may specify other settings, such as High Data Volume, depending on your data in the data source. If you created user accounts with passwords for your endpoint in Skyvia, you also need to configure the Authentication settings. Set Identity Type to Named Principal, Authentication Protocol to Password Authentication, and specify the endpoint Username and Password. Click Save. Click Validate and Sync. Select the endpoint tables you want to sync and click Sync. This will create the necessary external objects automatically.
Adding Tabs for External Objects

After defining the external data source and external objects, you may add tabs for external objects in order to work with them via the Salesforce UI. In the top right corner of the page, click Setup, and then click the Setup menu item. In the menu on the left, under Platform Tools, click User Interface, and then click Tabs. In the Custom Object Tabs pane, click New. Select the required Object and set the Tab Style and Description. Click Next. Specify the tab visibility settings and click Next. Configure the tab availability for custom apps and click Save. That's all; your tab is ready, and now you can work with your external objects. By default, however, the list view on this tab does not display any useful fields. You can edit this view and select the fields from the external object it displays, if necessary.

Source: https://docs.skyvia.com/connect/odata-endpoints/sending-requests-to-skyvia-connect-odata-endpoints.html (Connect Documentation)

Sending Requests to Skyvia Connect OData Endpoints

Here we provide basic examples of how you may interact with Skyvia Connect OData endpoints by sending requests via curl or Postman. For more information on the OData protocol features that Skyvia Connect OData endpoints support, see Supported OData Protocol Features.

OData Protocol Versions

Skyvia Connect supports creating both OData v1-v3 endpoints and OData v4 endpoints. You can select the default OData version, used when accessing the endpoint by its root URL, when you create or edit the endpoint. However, Skyvia actually makes the published data available via both protocol versions by adding odata3/ or odata4/ to the endpoint URL: https://endpoint.skyvia.com/********/ uses the OData protocol version specified in the endpoint editor; https://endpoint.skyvia.com/********/odata3/ uses OData protocol v3.
https://endpoint.skyvia.com/********/odata4/ uses OData protocol v4. When accessed via the OData v3 protocol, endpoints use ATOM format for sending and receiving data by default. They also support JSON format. You can switch to using the JSON format by using the $format Query Option. When accessed via the OData v4 protocol, endpoints use JSON format by default.

Sample Endpoint

In these examples, we will use an endpoint for an SQL Server database with the following table:

```sql
CREATE TABLE dept (
    deptno INT PRIMARY KEY,
    dname VARCHAR(14),
    loc VARCHAR(13)
);
INSERT INTO dept (deptno, dname, loc) VALUES (10, 'Accounting', 'New York');
INSERT INTO dept VALUES (20, 'Sales', 'Dallas');
INSERT INTO dept VALUES (30, 'Sales2', 'Chicago');
```

Authorization

If you have specified usernames and passwords for the endpoint, you need to use basic authentication. In Postman, you can configure it on the Authorization tab. In the Auth Type box, select Basic Auth, and then enter the endpoint Username and Password. In curl, you use the -u parameter with a value in the format "login:password", like this:

```shell
curl https://connect.skyvia.com/g7mwznp0 -u "testuser:testpassword"
```

Querying Data

To query data from an endpoint, you need to perform a GET request to an endpoint entity set. For example, for the OData v4 protocol:

```shell
curl https://connect.skyvia.com/g7mwznp0/odata4/depts/ -u "testuser:testpassword"
```

The response is:

```json
{
  "@odata.context": "https://connect.skyvia.com/g7mwznp0/odata4/$metadata#depts",
  "value": [
    { "deptno": 10, "dname": "Accounting", "loc": "New York" },
    { "deptno": 20, "dname": "Sales", "loc": "Dallas" },
    { "deptno": 30, "dname": "Sales2", "loc": "Chicago" }
  ]
}
```

For an OData v3 endpoint, the response is returned in the ATOM format.
```shell
curl https://connect.skyvia.com/g7mwznp0/odata3/depts/ -u "testuser:testpassword"
```

The response is:

```xml
<feed xml:base="https://connect.skyvia.com/g7mwznp0/odata3"
    xmlns="http://www.w3.org/2005/Atom"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <updated>2024-10-20T14:23:27Z</updated>
  <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts" />
  <entry>
    <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(10)</id>
    <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <link rel="edit" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(10)" />
    <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(10)" />
    <title />
    <updated>2024-10-20T14:23:27Z</updated>
    <author><name /></author>
    <content type="application/xml">
      <m:properties>
        <d:deptno m:type="Edm.Int32">10</d:deptno>
        <d:dname>Accounting</d:dname>
        <d:loc>New York</d:loc>
      </m:properties>
    </content>
  </entry>
  <entry>
    <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(20)</id>
    <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <link rel="edit" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(20)" />
    <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(20)" />
    <title />
    <updated>2024-10-20T14:23:27Z</updated>
    <author><name /></author>
    <content type="application/xml">
      <m:properties>
        <d:deptno m:type="Edm.Int32">20</d:deptno>
        <d:dname>Sales</d:dname>
        <d:loc>Dallas</d:loc>
      </m:properties>
    </content>
  </entry>
  <entry>
    <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(30)</id>
    <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
    <link rel="edit" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(30)" />
    <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(30)" />
    <title />
    <updated>2024-10-20T14:23:27Z</updated>
    <author><name /></author>
    <content type="application/xml">
      <m:properties>
        <d:deptno m:type="Edm.Int32">30</d:deptno>
        <d:dname>Sales2</d:dname>
        <d:loc>Chicago</d:loc>
      </m:properties>
    </content>
  </entry>
</feed>
```

Inserting Data

To create a new record via OData, you need to perform a POST request to the respective entity set URL. For the OData v4 protocol, use the following request:

```shell
curl 'https://connect.skyvia.com/g7mwznp0/odata4/depts/' \
  -u 'testuser:testpassword' \
  --header 'Content-Type: application/json' \
  --data '{
  "deptno": 40,
  "dname": "Development",
  "loc": "Los Angeles"
}'
```

Note that you may need to explicitly add the Content-Type header with the application/json value. The response is:

```json
{
  "@odata.context": "https://connect.skyvia.com/g7mwznp0/odata4/$metadata#depts/$entity",
  "deptno": 40,
  "dname": "Development",
  "loc": "Los Angeles"
}
```

For the OData v3 protocol, use the following request:

```shell
curl -X POST 'https://connect.skyvia.com/g7mwznp0/odata3/depts/' \
  -u "testuser:testpassword" \
  --header 'Content-Type: application/atom+xml' \
  --data-raw '<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <atom:content type="application/xml">
    <m:properties>
      <d:deptno m:type="Edm.Int32">40</d:deptno>
      <d:dname>Development</d:dname>
      <d:loc>Los Angeles</d:loc>
    </m:properties>
  </atom:content>
</atom:entry>'
```

Don't forget to specify a custom Content-Type header in Postman with the value
application/atom+xml. The response is:

```xml
<?xml version="1.0" encoding="utf-8"?>
<entry xml:base="https://connect.skyvia.com/g7mwznp0/odata3"
    xmlns="http://www.w3.org/2005/Atom"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
    xmlns:georss="http://www.georss.org/georss"
    xmlns:gml="http://www.opengis.net/gml">
  <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(40)</id>
  <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
  <link rel="edit" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(40)" />
  <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(40)" />
  <title />
  <updated>2024-10-23T08:07:16Z</updated>
  <author><name /></author>
  <content type="application/xml">
    <m:properties>
      <d:deptno m:type="Edm.Int32">40</d:deptno>
      <d:dname>Development</d:dname>
      <d:loc>Los Angeles</d:loc>
    </m:properties>
  </content>
</entry>
```

Updating Data

OData provides different ways to update entity data. You can use PUT requests to completely replace entity data, but you must specify all of its properties that do not have default values. Alternatively, you can perform PATCH requests with only the properties you want to update. Let's update the location of the new department to 'Miami' in these two ways.

PUT Request

OData v4 Protocol

Here is how you perform the request via the OData v4 protocol:

```shell
curl -X PUT 'https://connect.skyvia.com/g7mwznp0/odata4/depts(40)' \
  -u "testuser:testpassword" \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "deptno": 40,
  "dname": "Development",
  "loc": "Miami"
}'
```

As a result, you get the 204 No Content response, and the record is updated in the data source.
#### OData v3 Protocol

Here is how you perform the request via the OData v3 protocol:

```shell
curl -X PUT 'https://connect.skyvia.com/g7mwznp0/odata3/depts(40)' \
  -u "testuser:testpassword" \
  --header 'Content-Type: application/atom+xml' \
  --data-raw '<?xml version="1.0" encoding="utf-8"?>
<entry xml:base="https://connect.skyvia.com/g7mwznp0/odata3"
    xmlns="http://www.w3.org/2005/Atom"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
    xmlns:georss="http://www.georss.org/georss"
    xmlns:gml="http://www.opengis.net/gml">
  <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(40)</id>
  <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
  <link rel="edit" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(40)" />
  <link rel="self" href="https://connect.skyvia.com/g7mwznp0/odata3/depts(40)" />
  <title />
  <author>
    <name />
  </author>
  <content type="application/xml">
    <m:properties>
      <d:deptno m:type="Edm.Int32">40</d:deptno>
      <d:dname>Development</d:dname>
      <d:loc>Miami</d:loc>
    </m:properties>
  </content>
</entry>'
```

### PATCH Request

With a PATCH request, you can update values of one or more entity properties.

#### OData v4 Protocol

Here is how you perform the request via the OData v4 protocol:

```shell
curl -X PATCH 'https://connect.skyvia.com/g7mwznp0/odata4/depts(40)' \
  -u "testuser:testpassword" \
  --header 'Content-Type: application/json' \
  --data-raw '{
  "loc": "Miami"
}'
```

As a result, you get the 204 No Content response, and the record is updated in the data source.

#### OData v3 Protocol

Here is how you perform the request via the OData v3 protocol:

```shell
curl -X PATCH 'https://connect.skyvia.com/g7mwznp0/odata3/depts(40)' \
  -u "testuser:testpassword" \
  --header 'Content-Type: application/atom+xml' \
  --data-raw '<?xml version="1.0" encoding="utf-8"?>
<entry xml:base="https://connect.skyvia.com/g7mwznp0/odata3"
    xmlns="http://www.w3.org/2005/Atom"
    xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
    xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
    xmlns:georss="http://www.georss.org/georss"
    xmlns:gml="http://www.opengis.net/gml">
  <id>https://connect.skyvia.com/g7mwznp0/odata3/depts(40)</id>
  <category term="NS.dept" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
  <title />
  <author>
    <name />
  </author>
  <content type="application/xml">
    <m:properties>
      <d:loc>Miami</d:loc>
    </m:properties>
  </content>
</entry>'
```

## Deleting Data

You can delete an OData entity by performing a DELETE request to the URL of this entity, regardless of the protocol version used:

```shell
curl -X DELETE https://connect.skyvia.com/g7mwznp0/depts(40) -u "testuser:testpassword"
```
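The four operations above differ only in HTTP method, URL shape, and payload. As a minimal sketch, the hypothetical helper below (names like `build_request` and the `depts` example are assumptions, not part of Skyvia's API) assembles the pieces of an OData v4 CRUD request without sending anything, so they can be inspected or handed to any HTTP client:

```python
import json

# Endpoint base from the examples above; substitute your own workspace URL.
BASE = "https://connect.skyvia.com/g7mwznp0/odata4"

def build_request(action, entity_set, key=None, data=None):
    """Return (method, url, headers, body) for an OData v4 CRUD call.

    action: "insert" (POST to the entity set), "replace" (PUT),
    "update" (PATCH), or "delete" (DELETE); the last three address
    a single entity, so they need a key.
    """
    methods = {"insert": "POST", "replace": "PUT",
               "update": "PATCH", "delete": "DELETE"}
    url = f"{BASE}/{entity_set}"
    if key is not None:
        url += f"({key})"  # entity URLs carry the key in parentheses
    headers = {"Content-Type": "application/json"} if data is not None else {}
    body = json.dumps(data) if data is not None else None
    return methods[action], url, headers, body

# Example: the PATCH request from the article, as request parts.
method, url, headers, body = build_request(
    "update", "depts", key=40, data={"loc": "Miami"}
)
```

The tuple can then be passed to any client, e.g. `requests.request(method, url, headers=headers, data=body, auth=(user, password))`.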
For example: https://connect.skyvia.com/********/$metadata. These metadata features are fully supported by Skyvia for both OData v1 – v3 and OData v4 endpoints.

## Data Selection via Resource Paths

This is basic OData functionality. It is supported by Skyvia for both OData v1 – v3 and OData v4 endpoints. You can select entities in the following ways:

### Selecting Entities from Entity Sets

Adding an entity set name after the endpoint URL returns all the entities from this entity set:

https://endpoint.skyvia.com/********/ProductCategories

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
```

### Selecting Separate Entities by Entity Keys

You can also select entities by their key values:

https://endpoint.skyvia.com/********/ProductCategories(1)

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
WHERE (t.CategoryID = 1)
```

Please note that in this example we used an entity with a numeric key. If an entity has a string key, the value must be quoted with single quotation marks. For OData v1 – v3 endpoints, unquoted string values may work; for OData v4 endpoints, they will not. If a string literal contains a single quotation mark, add a second single quotation mark next to it. See the [OData documentation](http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part2-url-conventions/odata-v4.0-errata03-os-part2-url-conventions-complete.html#_Toc444868647) for examples of this case.

If an entity has a composite key, the syntax is the following:

https://endpoint.skyvia.com/********/OrderDetails(OrderID=1,ProductID=7101)

Specifying property names is mandatory. Generated SQL is as follows:

```sql
SELECT
  t.OrderID,
  t.ProductID,
  t.Price,
  t.Quantity
FROM dbo.[Order Details] AS t WITH (NOLOCK)
WHERE (t.OrderID = 1 AND t.ProductID = 7101)
```

### Selecting Multiple or Single Related Entities via Navigation Properties

When a resource path returns a single entity, you can add the name of a navigation property to the path to get the corresponding related entity or entities:

https://endpoint.skyvia.com/********/ProductCategories(1)/Products/

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.CategoryID = 1)
```

If a navigation property returns a collection of entities, you may select separate entities from the collection by their key values:

https://endpoint.skyvia.com/********/ProductCategories(1)/Products(7319)

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.CategoryID = 1 AND t.ProductID = 7319)
```

Then you can use navigation properties of the returned entity to navigate further:

https://endpoint.skyvia.com/********/ProductCategories(1)/Products(7319)/OrderDetails

Generated SQL is as follows:

```sql
SELECT
  OrderDetails.OrderID,
  OrderDetails.ProductID,
  OrderDetails.Price,
  OrderDetails.Quantity
FROM dbo.Products WITH (NOLOCK)
INNER JOIN dbo.[Order Details] AS OrderDetails WITH (NOLOCK)
  ON dbo.Products.ProductID = OrderDetails.ProductID
WHERE (dbo.Products.CategoryID = 1 AND dbo.Products.ProductID = 7319)
```

You can actually navigate by relations to any depth, but please note that queries generated for such multi-level navigation use JOINs and thus may not be optimal:

https://endpoint.skyvia.com/********/ProductCategories(1)/Products(7319)/OrderDetails(OrderID=10,ProductID=7319)/OrderDetails_Order/Orders_Company/

Generated SQL is as follows:

```sql
SELECT
  Orders_Company.CompanyID,
  Orders_Company.CompanyName,
  Orders_Company.PrimaryContact,
  Orders_Company.Web,
  Orders_Company.Email,
  Orders_Company.AddressTitle,
  Orders_Company.Address,
  Orders_Company.City,
  Orders_Company.Region,
  Orders_Company.PostalCode,
  Orders_Company.Country,
  Orders_Company.Phone,
  Orders_Company.Fax
FROM dbo.Products WITH (NOLOCK)
LEFT OUTER JOIN dbo.[Order Details] AS OrderDetails WITH (NOLOCK)
  ON dbo.Products.ProductID = OrderDetails.ProductID
INNER JOIN dbo.Orders AS OrderDetails_Order WITH (NOLOCK)
  ON OrderDetails.OrderID = OrderDetails_Order.OrderID
INNER JOIN dbo.Company AS Orders_Company WITH (NOLOCK)
  ON OrderDetails_Order.CompanyID = Orders_Company.CompanyID
WHERE (dbo.Products.CategoryID = 1 AND dbo.Products.ProductID = 7319
  AND OrderDetails.OrderID = 10 AND OrderDetails.ProductID = 7319)
```

## $select and $expand Query Options

In OData, the $select query option allows requesting a specific set of properties. The $expand query option specifies the related entities to be included in the returned data. It is somewhat similar to a SQL JOIN: it accepts the names of entity navigation properties.

### Features Supported for Both OData v1 – v3 and OData v4

To query separate columns, list them as the value of the $select query option, separated by commas. The asterisk character means all fields.
https://endpoint.skyvia.com/********/ProductCategories?$select=CategoryID,CategoryName

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
```

Querying all the columns from the related entity looks like the following:

https://endpoint.skyvia.com/********/ProductCategories?$select=Products&$expand=Products

Generated SQL is as follows:

```sql
SELECT
  dbo.[Product Categories].CategoryID,
  dbo.[Product Categories].CategoryName,
  dbo.[Product Categories].ParentCategory,
  Products.ProductID,
  Products.ProductName,
  Products.CategoryID,
  Products.UnitName,
  Products.UnitScale,
  Products.InStock,
  Products.Price,
  Products.DiscontinuedPrice,
  Products.Discontinued
FROM dbo.[Product Categories] WITH (NOLOCK)
LEFT OUTER JOIN dbo.Products AS Products WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = Products.CategoryID
```

A more useful case is to query all the data from related entities and some (or all) fields from the main entity:

https://endpoint.skyvia.com/********/ProductCategories?$select=*,Products&$expand=Products

Generated SQL is as follows:

```sql
SELECT
  dbo.[Product Categories].CategoryID,
  dbo.[Product Categories].CategoryName,
  dbo.[Product Categories].ParentCategory,
  Products.ProductID,
  Products.ProductName,
  Products.CategoryID,
  Products.UnitName,
  Products.UnitScale,
  Products.InStock,
  Products.Price,
  Products.DiscontinuedPrice,
  Products.Discontinued
FROM dbo.[Product Categories] WITH (NOLOCK)
LEFT OUTER JOIN dbo.Products AS Products WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = Products.CategoryID
```

While the OData protocol supports selecting only a part of the fields from related entities, Skyvia does not support such requests.
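The resource-path and $select/$expand patterns above can be captured in a small URL builder. This is a minimal sketch under assumptions: `build_url` and `format_key` are hypothetical helper names, and the `********` endpoint ID is the masked placeholder used throughout this page.

```python
ENDPOINT = "https://endpoint.skyvia.com/********"  # masked endpoint ID placeholder

def format_key(key):
    """Format an entity key: numbers stay as-is; strings are single-quoted,
    with embedded single quotes doubled, per the URL conventions above."""
    if isinstance(key, str):
        return "'" + key.replace("'", "''") + "'"
    return str(key)

def build_url(entity_set, key=None, select=None, expand=None):
    """Assemble an OData query URL with an optional entity key and
    optional $select/$expand options (lists of property names)."""
    url = f"{ENDPOINT}/{entity_set}"
    if key is not None:
        if isinstance(key, dict):
            # Composite key: specifying property names is mandatory.
            url += "(" + ",".join(f"{k}={format_key(v)}" for k, v in key.items()) + ")"
        else:
            url += f"({format_key(key)})"
    options = []
    if select:
        options.append("$select=" + ",".join(select))
    if expand:
        options.append("$expand=" + ",".join(expand))
    return url + ("?" + "&".join(options) if options else "")

# Reproduces the example URLs shown above.
url1 = build_url("OrderDetails", key={"OrderID": 1, "ProductID": 7101})
url2 = build_url("ProductCategories", select=["*", "Products"], expand=["Products"])
```

Note that real query strings may still need percent-encoding for spaces and special characters before being sent.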
### OData v4 Specific Features

The following features were introduced by the fourth version of the OData protocol and, thus, are not supported by Skyvia's OData v1 – v3 endpoints.

In OData v4, $expand can be used without $select. The following query returns all the ProductCategories together with their Products:

https://endpoint.skyvia.com/********/ProductCategories?$expand=Products

Generated SQL is as follows:

```sql
SELECT
  dbo.[Product Categories].CategoryID,
  dbo.[Product Categories].CategoryName,
  dbo.[Product Categories].ParentCategory,
  Products.ProductID,
  Products.ProductName,
  Products.CategoryID,
  Products.UnitName,
  Products.UnitScale,
  Products.InStock,
  Products.Price,
  Products.DiscontinuedPrice,
  Products.Discontinued
FROM dbo.[Product Categories] WITH (NOLOCK)
LEFT OUTER JOIN dbo.Products AS Products WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = Products.CategoryID
```

The $expand query option in OData v4 also accepts the asterisk character as a value, which means expanding all the navigation properties:

https://endpoint.skyvia.com/********/ProductCategories?$expand=*

Generated SQL is as follows:

```sql
SELECT
  dbo.[Product Categories].CategoryID,
  dbo.[Product Categories].CategoryName,
  dbo.[Product Categories].ParentCategory,
  ProductCategories_Parent.CategoryID,
  ProductCategories_Parent.CategoryName,
  ProductCategories_Parent.ParentCategory,
  ProductCategories.CategoryID,
  ProductCategories.CategoryName,
  ProductCategories.ParentCategory,
  Products.ProductID,
  Products.ProductName,
  Products.CategoryID,
  Products.UnitName,
  Products.UnitScale,
  Products.InStock,
  Products.Price,
  Products.DiscontinuedPrice,
  Products.Discontinued
FROM dbo.[Product Categories] WITH (NOLOCK)
LEFT OUTER JOIN dbo.[Product Categories] AS ProductCategories_Parent WITH (NOLOCK)
  ON dbo.[Product Categories].ParentCategory = ProductCategories_Parent.CategoryID
LEFT OUTER JOIN dbo.[Product Categories] AS ProductCategories WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = ProductCategories.ParentCategory
LEFT OUTER JOIN dbo.Products AS Products WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = Products.CategoryID
```

Skyvia supports the following query options inside $expand:

#### $filter

This query option allows filtering query results. It is described in more detail below, in the $filter Query Option section. The following query returns all ProductCategories, each expanded with only its discontinued Products:

https://endpoint.skyvia.com/********/ProductCategories?$expand=Products($filter=Discontinued%20eq%20true)

Generated SQL is as follows:

```sql
SELECT
  dbo.[Product Categories].CategoryID,
  dbo.[Product Categories].CategoryName,
  dbo.[Product Categories].ParentCategory,
  Products.ProductID,
  Products.ProductName,
  Products.CategoryID,
  Products.UnitName,
  Products.UnitScale,
  Products.InStock,
  Products.Price,
  Products.DiscontinuedPrice,
  Products.Discontinued
FROM dbo.[Product Categories] WITH (NOLOCK)
LEFT OUTER JOIN dbo.Products AS Products WITH (NOLOCK)
  ON dbo.[Product Categories].CategoryID = Products.CategoryID
WHERE (Products.Discontinued = 1)
```

#### $search

This query option searches for the specified string (or strings) in all the textual fields of the returned data. It is described in more detail below, in the $search Query Option section.
https://endpoint.skyvia.com/********/Companies?$select=*,PersonContacts&$expand=PersonContacts($search=godwin)

Generated SQL is as follows:

```sql
SELECT
  dbo.Company.CompanyID,
  dbo.Company.CompanyName,
  dbo.Company.PrimaryContact,
  dbo.Company.Web,
  dbo.Company.Email,
  dbo.Company.AddressTitle,
  dbo.Company.Address,
  dbo.Company.City,
  dbo.Company.Region,
  dbo.Company.PostalCode,
  dbo.Company.Country,
  dbo.Company.Phone,
  dbo.Company.Fax,
  PersonContacts.ContactID,
  PersonContacts.Title,
  PersonContacts.FirstName,
  PersonContacts.MiddleName,
  PersonContacts.LastName,
  PersonContacts.CompanyID,
  PersonContacts.HomePhone,
  PersonContacts.MobilePhone,
  PersonContacts.AddressTitle,
  PersonContacts.Address,
  PersonContacts.City,
  PersonContacts.Region,
  PersonContacts.PostalCode,
  PersonContacts.Country,
  PersonContacts.Phone,
  PersonContacts.Fax
FROM dbo.Company WITH (NOLOCK)
LEFT OUTER JOIN dbo.[Person Contact] AS PersonContacts WITH (NOLOCK)
  ON dbo.Company.CompanyID = PersonContacts.CompanyID
WHERE (LOWER(PersonContacts.Title) LIKE '%godwin%'
  OR LOWER(PersonContacts.FirstName) LIKE '%godwin%'
  OR LOWER(PersonContacts.MiddleName) LIKE '%godwin%'
  OR LOWER(PersonContacts.LastName) LIKE '%godwin%'
  OR LOWER(PersonContacts.HomePhone) LIKE '%godwin%'
  OR LOWER(PersonContacts.MobilePhone) LIKE '%godwin%'
  OR LOWER(PersonContacts.AddressTitle) LIKE '%godwin%'
  OR LOWER(PersonContacts.Address) LIKE '%godwin%'
  OR LOWER(PersonContacts.City) LIKE '%godwin%'
  OR LOWER(PersonContacts.Region) LIKE '%godwin%'
  OR LOWER(PersonContacts.PostalCode) LIKE '%godwin%'
  OR LOWER(PersonContacts.Country) LIKE '%godwin%'
  OR LOWER(PersonContacts.Phone) LIKE '%godwin%'
  OR LOWER(PersonContacts.Fax) LIKE '%godwin%')
```

#### $select

https://endpoint.skyvia.com/********/Companies?$select=CompanyName,PersonContacts&$expand=PersonContacts($select=FirstName,LastName)

Generated SQL is as follows:

```sql
SELECT
  dbo.Company.CompanyID,
  dbo.Company.CompanyName,
  PersonContacts.FirstName,
  PersonContacts.LastName
FROM dbo.Company WITH (NOLOCK)
LEFT OUTER JOIN dbo.[Person Contact] AS PersonContacts WITH (NOLOCK)
  ON dbo.Company.CompanyID = PersonContacts.CompanyID
```

Please note that such query options as $levels, $orderby, $skip, $top, and $count are not supported inside $expand.

## $orderby Query Option

The $orderby query option allows sorting query results. You can specify one or more fields to sort the results by, use 'asc' and 'desc' modifiers to change the sort order, sort by field ordinals, etc. For example:

https://endpoint.skyvia.com/********/PersonContacts?$orderby=LastName

Generated SQL is as follows:

```sql
SELECT
  t.ContactID,
  t.Title,
  t.FirstName,
  t.MiddleName,
  t.LastName,
  t.CompanyID,
  t.HomePhone,
  t.MobilePhone,
  t.AddressTitle,
  t.Address,
  t.City,
  t.Region,
  t.PostalCode,
  t.Country,
  t.Phone,
  t.Fax
FROM dbo.[Person Contact] AS t WITH (NOLOCK)
ORDER BY t.LastName
```

https://endpoint.skyvia.com/********/PersonContacts?$orderby=Country%20desc,%20LastName%20asc

Generated SQL is as follows:

```sql
SELECT
  t.ContactID,
  t.Title,
  t.FirstName,
  t.MiddleName,
  t.LastName,
  t.CompanyID,
  t.HomePhone,
  t.MobilePhone,
  t.AddressTitle,
  t.Address,
  t.City,
  t.Region,
  t.PostalCode,
  t.Country,
  t.Phone,
  t.Fax
FROM dbo.[Person Contact] AS t WITH (NOLOCK)
ORDER BY t.Country DESC, t.LastName
```

https://endpoint.skyvia.com/********/ProductCategories?$orderby=2

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
ORDER BY 2
```

Please note that the ordinal in the third example is 1-based; the first result field has ordinal 1.

## $skip and $top Query Options

These query options provide paging functionality. The $top query option limits the result to the specified number of entities. The $skip query option skips the specified number of entities, which are first in the order, and starts with the next entity in sequence.

https://endpoint.skyvia.com/********/ProductCategories?$top=2

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
ORDER BY (SELECT NULL)
OFFSET 0 ROWS
FETCH FIRST 2 ROWS ONLY
```

https://endpoint.skyvia.com/********/ProductCategories?$skip=3

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
ORDER BY (SELECT NULL)
OFFSET 3 ROWS
```

https://endpoint.skyvia.com/********/ProductCategories?$skip=3&$top=2

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
ORDER BY (SELECT NULL)
OFFSET 3 ROWS
FETCH FIRST 2 ROWS ONLY
```

## $filter Query Option

This query option allows filtering the result data using various conditions and expressions, united with logical operators, etc. Skyvia Connect supports most features of the OData v1 – v3 protocols for this query option, and some of the OData v4 protocol features.
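Since $skip and $top map directly to OFFSET/FETCH, reading a large entity set page by page is just a matter of stepping the $skip value. A minimal sketch (the `page_urls` helper name is an assumption, not a Skyvia API):

```python
def page_urls(endpoint, entity_set, total, page_size):
    """Return the $skip/$top URLs needed to read `total` records in
    pages of `page_size`, mirroring the OFFSET ... FETCH pagination
    in the generated SQL above."""
    urls = []
    for skip in range(0, total, page_size):
        urls.append(f"{endpoint}/{entity_set}?$skip={skip}&$top={page_size}")
    return urls

# Three pages of 2 cover 5 records.
urls = page_urls("https://endpoint.skyvia.com/********", "ProductCategories", 5, 2)
```

In practice the total is usually not known up front; a client would instead keep requesting pages until a page comes back with fewer than `page_size` entities (or empty).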
### Comparison Operators

OData supports the following comparison operators:

| Operator | Operator Name | Operator in OData |
| --- | --- | --- |
| = | equals | eq |
| != | not equal | ne |
| > | greater than | gt |
| >= | greater than or equal | ge |
| < | less than | lt |
| <= | less than or equal | le |

https://endpoint.skyvia.com/********/ProductCategories?$filter=ParentCategory eq 1

Generated SQL is as follows:

```sql
SELECT
  t.CategoryID,
  t.CategoryName,
  t.ParentCategory
FROM dbo.[Product Categories] AS t WITH (NOLOCK)
WHERE (t.ParentCategory = 1)
```

### Logical Operators

OData and Skyvia Connect support the AND, OR, and NOT logical operators.

https://endpoint.skyvia.com/********/PersonContacts?$filter=Fax ne null and Country eq 'UK'

Generated SQL is as follows:

```sql
SELECT
  t.ContactID,
  t.Title,
  t.FirstName,
  t.MiddleName,
  t.LastName,
  t.CompanyID,
  t.HomePhone,
  t.MobilePhone,
  t.AddressTitle,
  t.Address,
  t.City,
  t.Region,
  t.PostalCode,
  t.Country,
  t.Phone,
  t.Fax
FROM dbo.[Person Contact] AS t WITH (NOLOCK)
WHERE (t.Fax IS NOT NULL AND t.Country = 'UK')
```

https://endpoint.skyvia.com/********/PersonContacts?$filter=Country eq 'UK' or Country eq 'United Kingdom'

Generated SQL is as follows:

```sql
SELECT
  t.ContactID,
  t.Title,
  t.FirstName,
  t.MiddleName,
  t.LastName,
  t.CompanyID,
  t.HomePhone,
  t.MobilePhone,
  t.AddressTitle,
  t.Address,
  t.City,
  t.Region,
  t.PostalCode,
  t.Country,
  t.Phone,
  t.Fax
FROM dbo.[Person Contact] AS t WITH (NOLOCK)
WHERE (t.Country = 'UK' OR t.Country = 'United Kingdom')
```

https://endpoint.skyvia.com/********/PersonContacts?$filter=not(Country eq 'UK' or Country eq 'United Kingdom')

Generated SQL is as follows:

```sql
SELECT
  t.ContactID,
  t.Title,
  t.FirstName,
  t.MiddleName,
  t.LastName,
  t.CompanyID,
  t.HomePhone,
  t.MobilePhone,
  t.AddressTitle,
  t.Address,
  t.City,
  t.Region,
  t.PostalCode,
  t.Country,
  t.Phone,
  t.Fax
FROM dbo.[Person Contact] AS t WITH (NOLOCK)
WHERE (NOT (t.Country = 'UK' OR t.Country = 'United Kingdom'))
```

### Arithmetic Operations

Skyvia supports the following arithmetic operations:

| Operation | OData Operator |
| --- | --- |
| Addition | add |
| Subtraction | sub |
| Multiplication | mul |
| Division | div |
| Modulus* | mod |

\* Please note that the modulus operation is supported only for database endpoints (endpoints publishing data from a database or cloud data warehouse).

https://endpoint.skyvia.com/********/Products?$filter=InStock div 2 gt 10

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE ((t.InStock / 2) > 10)
```

https://endpoint.skyvia.com/********/Products?$filter=InStock mul Price gt 10000

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE ((t.InStock * t.Price) > 10000)
```

### OData Canonical Functions

#### String Functions

The OData protocol and Skyvia Connect support the following string functions:

- startswith - checks whether the string specified as the first parameter starts with the string specified as the second parameter.
- substringof/contains - checks whether one of the specified strings contains another. Note that substringof is supported only for OData v1 – v3 endpoints, and contains is supported only for OData v4 endpoints. They also have the opposite parameter order: substringof checks whether the second parameter contains the first, and contains - vice versa.
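The reversed parameter order of substringof versus contains is a common source of subtle bugs when a client targets both endpoint versions. As a minimal sketch (the `contains_filter` helper name is an assumption), one small function can pick the right function and argument order:

```python
def contains_filter(field, needle, odata_version=4):
    """Build a $filter expression checking that `field` contains `needle`.

    OData v4 uses contains(field, value); OData v1-v3 use
    substringof(value, field), i.e. with the arguments swapped.
    Single quotes in the literal are doubled, per OData string rules.
    """
    literal = "'" + needle.replace("'", "''") + "'"
    if odata_version >= 4:
        return f"contains({field},{literal})"
    return f"substringof({literal},{field})"

# v4 form, matching the example URL above:
f4 = contains_filter("ProductName", "Twain")
# v3 form of the same check:
f3 = contains_filter("ProductName", "Twain", odata_version=3)
```

Both forms translate to the same `LIKE '%Twain%'` condition in the generated SQL.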
- endswith - checks whether the string specified as the first parameter ends with the string specified as the second parameter.
- length - returns the length of the passed string in characters.
- indexof - returns the zero-based position of the substring, passed as the second parameter, in the string, passed as the first parameter. If the first parameter value does not include the second, returns -1.
- replace - replaces the substring, passed as the second parameter, with the string, passed as the third parameter, in the string, passed as the first parameter.
- substring - returns a substring of the string, passed as the first parameter, which starts at the position specified in the second parameter and has the length specified as the third parameter. If the provided length is larger than the length of the remaining string, or is not specified at all, all of the remaining string is returned.
- tolower - returns the passed string in lower case.
- toupper - returns the passed string in upper case.
- trim - trims leading and trailing spaces in the passed string.
- concat - concatenates the passed strings.

https://endpoint.skyvia.com/********/Products?$filter=contains(ProductName,'Twain')

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.ProductName LIKE '%Twain%')
```

#### Datetime Functions

The following datetime functions are available in OData v1 – v3 and are supported in Skyvia Connect for both OData v1 – v3 endpoints and OData v4 endpoints:

- year - returns the year of a passed datetime value.
- month - returns the number of the month of a passed datetime value.
- day - returns the day of the month of a passed datetime value.
- hour - returns the hours portion of a passed datetime value in the 24-hour format.
- minute - returns the minutes portion of a passed datetime value.
- second - returns the seconds portion of a passed datetime value.

Please note that you can apply these functions only to Date or Datetime types or their analogs. Applying them to Time types (which don't have the date part) is not supported.

https://endpoint.skyvia.com/********/Orders?$filter=year(OrderDate) eq 2018 and month(OrderDate) eq 12

Generated SQL is as follows:

```sql
SELECT
  t.OrderID,
  t.CompanyID,
  t.ContactID,
  t.OrderDate,
  t.Freight,
  t.ShipDate,
  t.ShipCompanyID,
  t.Discount
FROM dbo.Orders AS t WITH (NOLOCK)
WHERE (YEAR(t.OrderDate) = 2018 AND MONTH(t.OrderDate) = 12)
```

The following OData v4 datetime functions are also supported in Skyvia Connect for both OData v1 – v3 endpoints and OData v4 endpoints:

- now - returns the current datetime of the server.
- fractionalseconds - returns the fractional seconds portion of a passed datetime value.

https://endpoint.skyvia.com/********/Orders?$filter=year(OrderDate) eq year(now()) and month(OrderDate) eq month(now())

Generated SQL is as follows:

```sql
SELECT
  t.OrderID,
  t.CompanyID,
  t.ContactID,
  t.OrderDate,
  t.Freight,
  t.ShipDate,
  t.ShipCompanyID,
  t.Discount
FROM dbo.Orders AS t WITH (NOLOCK)
WHERE (YEAR(t.OrderDate) = YEAR(GETDATE()) AND MONTH(t.OrderDate) = MONTH(GETDATE()))
```

The following OData v4 datetime functions are not supported by Skyvia Connect: totaloffsetminutes, mindatetime, maxdatetime, totalseconds, date, time.

#### Mathematical Functions

The OData protocol and Skyvia Connect support the following mathematical functions:

- round - rounds the passed numeric value to the nearest integer value.
- floor - rounds the passed numeric value down to the nearest integer value.
- ceiling - rounds the passed numeric value up to the nearest integer value.
However, the ceiling function is supported only for database endpoints. Correct work of the ceiling function for cloud endpoints is not guaranteed.

https://endpoint.skyvia.com/********/Orders?$filter=round(Discount) eq 2

Generated SQL is as follows:

```sql
SELECT
  t.OrderID,
  t.CompanyID,
  t.ContactID,
  t.OrderDate,
  t.Freight,
  t.ShipDate,
  t.ShipCompanyID,
  t.Discount
FROM dbo.Orders AS t WITH (NOLOCK)
WHERE (ROUND(t.Discount, 0) = 2)
```

### Special Values (true, false, null)

These special values are fully supported in filters in Skyvia Connect.

https://endpoint.skyvia.com/********/Products?$filter=Discontinued eq true and UnitName ne null

Generated SQL is as follows:

```sql
SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.Discontinued = 1 AND t.UnitName IS NOT NULL)
```

### Not Supported $filter Features

Skyvia Connect does not support some OData v4 features of the $filter query option. For example, it does not support geo functions, type functions, lambda operators, the $root keyword, etc. It also does not support the OData type functions cast and isof.

## Selecting a Single Property Value

Skyvia supports selecting a single property value, both via the $value resource path and with the $value keyword omitted.

https://endpoint.skyvia.com/********/Products(7807)/Price/$value

https://endpoint.skyvia.com/********/Products(7807)/Price

Generated SQL:

```sql
SELECT t.Price
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.ProductID = 7807)
```

## $count

The $count keyword can be used as a resource path or as a query option. When used as a resource path, it returns the number of records. Skyvia Connect supports $count as a resource path for both OData v1 – v3 endpoints and OData v4 endpoints.
However, using the $filter query option with this resource path is supported only for OData v1 – v3 endpoints.

https://endpoint.skyvia.com/********/Products/$count

Generated SQL is as follows:

```sql
SELECT COUNT_BIG(*)
FROM dbo.Products AS t WITH (NOLOCK)
```

Skyvia Connect supports $count (for OData v4 endpoints) and $inlinecount (for OData v1 – v3 endpoints) as a query option. In this case, Skyvia Connect executes two queries: one for the count and one for the data.

https://endpoint.skyvia.com/********/Products?$filter=contains(ProductName,'Twain')&$count=true

Generated SQL is as follows:

```sql
SELECT COUNT_BIG(*)
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.ProductName LIKE '%Twain%')

SELECT
  t.ProductID,
  t.ProductName,
  t.CategoryID,
  t.UnitName,
  t.UnitScale,
  t.InStock,
  t.Price,
  t.DiscontinuedPrice,
  t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.ProductName LIKE '%Twain%')
```

Please note that $count is not supported in $filter and in $orderby.

## $format Query Option

The $format query option allows explicitly specifying the format in which you want to retrieve the request results. For OData v1 – v3 endpoints, Skyvia uses the ATOM format by default. For OData v4 endpoints, the JSON format is used by default. The $format query option accepts the following formats:

ATOM format (supported for OData v1 – v3 endpoints only):

```
$format=atom
```

JSON format:

```
$format=json
```

JSON format with controllable metadata:

```
$format=application/json;odata.metadata=none
$format=application/json;odata.metadata=minimal
$format=application/json;odata.metadata=full
```

The latter format was introduced in OData v4 and, thus, is supported for OData v4 endpoints only.

## $search Query Option

This query option was introduced in OData v4 and, thus, is supported for OData v4 endpoints only.
It allows searching for the specified string (or strings) in all the textual fields of the returned data. $search can be used together with $filter. It can include several values combined with the AND or OR logical operators. The NOT logical operator can also be used. However, please note that the NOT operator is supported only for database endpoints in Skyvia Connect. Using it for cloud endpoints may lead to incorrect results.

https://endpoint.skyvia.com/********/Products?$search=(Twain OR King) AND NOT Finn

Generated SQL is as follows:

```sql
SELECT t.ProductID, t.ProductName, t.CategoryID, t.UnitName, t.UnitScale,
       t.InStock, t.Price, t.DiscontinuedPrice, t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (((LOWER(t.ProductName) LIKE '%twain%' OR LOWER(t.UnitName) LIKE '%twain%')
     OR (LOWER(t.ProductName) LIKE '%king%' OR LOWER(t.UnitName) LIKE '%king%'))
    AND (NOT (LOWER(t.ProductName) LIKE '%finn%' OR LOWER(t.UnitName) LIKE '%finn%')))
```

Please note that $search results in a query with many LIKE clauses over all textual columns, and thus it may be slow.

$apply Query Option

This query option was introduced in OData v4 and, thus, is supported for OData v4 endpoints only. It allows applying one or more transformations, separated by slash characters, to the returned data. Skyvia Connect supports only a subset of the transformations included in the OData v4 standard: aggregate (which applies aggregation expressions) and filter (which filters data). As for multiple transformations, Skyvia supports $apply with multiple filter transformations, separated by slash characters, optionally followed by one aggregate transformation (which must be the last one).

Filter Transformation

This transformation takes a boolean expression, like the $filter query option, and returns the instances for which this expression evaluates to true.
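As the generated SQL above shows, $search lowercases each term and matches it with LIKE against every textual column, then combines the terms with AND, OR, and NOT. For illustration only, here is a minimal Python sketch of that matching rule applied client-side; the records and field names are made up, and Skyvia itself performs this translation server-side:

```python
def matches_search(record, include_any, exclude=()):
    """Rough emulation of the SQL generated for $search=(Twain OR King) AND NOT Finn:
    a term matches if it occurs, case-insensitively, in any textual field."""
    text_fields = [v.lower() for v in record.values() if isinstance(v, str)]

    def term_hit(term):
        t = term.lower()
        return any(t in field for field in text_fields)

    return any(term_hit(t) for t in include_any) and not any(term_hit(t) for t in exclude)

# Made-up sample rows standing in for the Products table.
rows = [
    {"ProductName": "Tom Sawyer by Mark Twain", "UnitName": "pcs"},
    {"ProductName": "Huckleberry Finn by Mark Twain", "UnitName": "pcs"},
    {"ProductName": "Carrie by Stephen King", "UnitName": "pcs"},
]

hits = [r for r in rows if matches_search(r, ["Twain", "King"], exclude=["Finn"])]
# Tom Sawyer and Carrie match; Huckleberry Finn is excluded by the NOT term.
```

This also makes it visible why $search can be slow: every term is tested against every textual column of every row.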
Aggregate Transformation

The aggregate transformation takes one or more aggregation expressions, separated by commas, as arguments. An aggregation expression consists of the following parts:

An expression returning a single value, like in the $filter query option. In a simple case, this can be just a path to an aggregatable property. This expression is aggregated.
The with keyword, followed by the aggregation method.
The as keyword, followed by an alias for the result value. The alias must not coincide with property names, aggregates, keywords, or other aliases.

Skyvia supports the following aggregation methods: sum, min, max, average, countdistinct. Skyvia does not support aggregating over related entities or using their properties via navigation properties in aggregation expressions. Note that Skyvia does not support the virtual $count property in the aggregate transformation.

$apply Example

https://endpoint.skyvia.com/********/Products?$apply=filter(Discontinued eq true)/filter(Price gt 35)/aggregate(ProductName with countdistinct as UniqueItems, Price with average as AveragePrice, Price mul InStock with sum as TotalStock)

Generated SQL is as follows:

```sql
SELECT COUNT(DISTINCT t.ProductName) AS UniqueItems,
       AVG(t.Price) AS AveragePrice,
       SUM((t.Price * t.InStock)) AS TotalStock
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.Discontinued = 1 AND (t.Price > 35))
```

Parameter Aliases

OData v4 supports parameter aliases. Skyvia Connect supports them in $filter and $orderby for OData v4 endpoints. Parameter aliases start with the @ character.

https://endpoint.skyvia.com/********/ProductCategories(1)/Products?$orderby=@p1&$filter=InStock gt @p2&@p1=ProductName&@p2=15

Generated SQL is as follows:

```sql
SELECT t.ProductID, t.ProductName, t.CategoryID, t.UnitName, t.UnitScale,
       t.InStock, t.Price, t.DiscontinuedPrice, t.Discontinued
FROM dbo.Products AS t WITH (NOLOCK)
WHERE (t.CategoryID = 1 AND (t.InStock > 15))
ORDER BY t.ProductName
```

OData Protocol Versions

Skyvia Connect supports creating both OData v1 – v3 endpoints and OData v4 endpoints. You can select the default OData version, used when accessing the endpoint by its root URL, when you create or edit the endpoint. In fact, however, Skyvia makes the published data available via both protocol versions, by adding odata3/ or odata4/ to the endpoint URL:

https://endpoint.skyvia.com/********/ uses the OData protocol version specified in the endpoint editor.
https://endpoint.skyvia.com/********/odata3/ uses OData protocol v3.
https://endpoint.skyvia.com/********/odata4/ uses OData protocol v4." }, { "url": "https://docs.skyvia.com/connect/security-settings.html", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect.

Security Settings

Skyvia allows both public and private access to exposed data. By specifying user accounts and passwords, you allow access to the exposed data only to authenticated users. Skyvia supports HTTP basic authentication with username and password. It does not support privileges or customizing access rights to different exposed objects. When creating a new endpoint, you define security settings on the Check Security page of the endpoint wizard. For existing endpoints, you can access these settings from the overview tab of the endpoint details.

Public and Private Access to Endpoints

Having created endpoints, you can view their list and information on their security on the OBJECTS page. For this, use either the All tab or the Endpoints tab. Security icons on the right show whether each endpoint's access is public or private. When you click a certain endpoint, you are transferred to its details. To change security settings, switch to the Model tab.
Please note that security icons are displayed only if you select List view or List view with grouping. Information on security settings is not displayed when Grid view or Grid view with grouping is selected.

User Accounts

You can view the number of user accounts created for an existing endpoint on the Overview tab of the endpoint details, to the right of the endpoint protocol. If no user accounts are created, it shows "Public". Click the arrow button to the right of the number of users to manage users for the endpoint. The Users dialog box opens. To add a user, click +Add new, then enter the username and password. To modify a user, edit their username and password in this dialog box. If you need different users to be able to access different sets of data, you can create different endpoints exposing these sets of data. If you don't specify any users, the endpoint data is publicly available to anyone without authentication.

IP Addresses

Skyvia allows you to limit the IP addresses from which the data of the endpoint can be accessed. You can specify one or more allowed IP address ranges. The number of specified IP ranges for an existing endpoint is displayed on the Overview tab of the endpoint details, to the right of the number of endpoint users. To manage IP addresses for the endpoint, click the arrow button to the right of the IP ranges number. The IP Addresses dialog box opens. To add a range of IP addresses, click +Add new, then type in a meaningful range name and enter the Start IP and End IP addresses of the range. Access from each of the IP addresses within the range is allowed. To specify a single IP address, enter the same address for the start and end addresses. If you don't specify any IP address ranges, the endpoint data is available from any IP." }, { "url": "https://docs.skyvia.com/connect/sql-endpoints/", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect.
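The Start IP / End IP fields described in the IP Addresses section above amount to a simple inclusive range check. Here is a minimal Python sketch of that rule; the addresses are made-up examples, and this is an illustration of the semantics, not Skyvia's implementation:

```python
from ipaddress import ip_address

# Hypothetical allowed range, mirroring the Start IP and End IP fields of the dialog box.
START = ip_address("203.0.113.0")
END = ip_address("203.0.113.255")

def is_allowed(addr: str) -> bool:
    """True if addr falls inside the inclusive Start IP .. End IP range."""
    return START <= ip_address(addr) <= END

# A single-address "range" simply uses the same value for both ends.
SINGLE = ip_address("198.51.100.7")

def is_single_allowed(addr: str) -> bool:
    return SINGLE <= ip_address(addr) <= SINGLE
```

For example, is_allowed("203.0.113.42") is true, while an address outside the range, such as "198.51.100.7", is rejected unless it is covered by its own range.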
SQL Endpoints

Skyvia Connect allows you to create API endpoints for running SQL statements against your data over the Internet. You can create SQL endpoints visually, in just a few simple steps, and you don't need to care about service hosting, deployment, or administration at all. The service is hosted on the endpoint.skyvia.com subdomain and uses this subdomain's certificate. All the deployment and administration tasks are performed by Skyvia engineers, who take care of everything needed. Skyvia Connect allows you to use SQL against both databases and cloud applications. It also provides an additional security layer with its own user accounts and passwords, so you don't need to share your actual data source credentials in order to share your data. All the API calls to the endpoint are logged, and whenever necessary, you may monitor and analyze your endpoint activity. Please note that when you query a database, you should use the database-specific SQL syntax. To learn more about SQL syntax for cloud sources, visit the Supported SQL for Cloud Sources page.

SQL Endpoint Protocol

Skyvia uses its own protocol for access to SQL endpoints. You send POST HTTP requests to the /execute resource with a JSON object, containing SQL and parameter values, as the body, and retrieve the returned data and metadata in JSON format. You can see more details on the protocol and other available calls by adding /help to your SQL endpoint URL. However, usually there is no need to study the protocol details, because we provide API clients for SQL endpoints, so you may work with them via well-known interfaces. You can download them from the details of a created SQL endpoint.

Creating SQL Endpoints

An SQL endpoint can be created using the +Create New menu, like any other object. SQL endpoints are created in a convenient SQL Endpoint Wizard, in just three simple steps:

Select a connection to your data source.
Optionally configure security settings (user accounts and IP ranges from which access is allowed).
Enter a name for the new endpoint.

That's all. An SQL endpoint that provides access to all the data available via the selected connection is created.

Working with Created Endpoints

After you create an SQL endpoint, its details open automatically. The details consist of two tabs: Overview and Log. On the Overview tab, you can find the endpoint URL, which you can use to work with the created SQL endpoint. Here you can also click Skyvia SQL API Clients and download the client to use. Skyvia offers the following SQL API clients:

ADO.NET Provider - a provider to access data from .NET applications.
ODBC Driver - a driver for data access via the well-known ODBC interface, supported in different applications, like Excel, and development platforms.
Looker Studio connector - allows you to connect the Looker Studio BI tool from Google to your endpoints and analyze their data.

The Overview tab also displays basic endpoint information (the endpoint kind, OData or SQL; the endpoint connection; the numbers of endpoint users and IP address ranges) and allows accessing and changing security settings. If your endpoint is valid and does not use features unavailable in your Connect pricing plan, it is activated immediately, and you can copy and use the endpoint URL in your applications. Whenever necessary, you may change the status of your endpoint to Inactive or back to Active in the endpoint title bar or via the corresponding quick action button in the object list. If your endpoint is invalid or uses features not available in your pricing plan, it is created inactive, and it cannot be activated until you fix it or upgrade your Connect subscription to a plan that includes the necessary features. The Log tab allows you to monitor all the data access via the endpoint. Check the Monitoring Endpoint Activity topic for more details."
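The SQL endpoint protocol described above (a POST request to the /execute resource with a JSON body carrying the SQL and parameter values) can be sketched in Python as follows. The body field names ("sql", "parameters") are illustrative guesses, not the documented shape; add /help to your endpoint URL for the actual protocol. The endpoint URL below is a placeholder:

```python
import json
from urllib.request import Request

# Placeholder endpoint URL; copy the real one from the Overview tab of your endpoint details.
endpoint_url = "https://connect.skyvia.com/ENDPOINT_ID"

# Hypothetical body shape: a JSON object carrying the SQL text and parameter values.
body = json.dumps({
    "sql": "SELECT ContactName, Phone FROM dbo.Customers",
    "parameters": {},
}).encode("utf-8")

request = Request(
    endpoint_url + "/execute",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would then return the result data and metadata as JSON.
```

In practice the provided API clients (ADO.NET, ODBC, Looker Studio) wrap this protocol for you, so raw requests like this are only needed from environments without a client.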
}, { "url": "https://docs.skyvia.com/connect/sql-endpoints/ado.net-provider.html", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect.

ADO.NET Provider

dotConnect for Skyvia Connect, an ADO.NET provider for SQL endpoints, offers the standard ADO.NET interface for accessing data of your SQL endpoints. ADO.NET is Microsoft's data access technology for .NET Framework, very widely used in .NET projects, and it easily integrates with other .NET data-related technologies and solutions. You can use dotConnect for Skyvia Connect to create software that works with data published via your SQL endpoints. dotConnect for Skyvia Connect includes the standard ADO.NET classes:

SkyviaConnectConnection
SkyviaConnectConnectionStringBuilder
SkyviaConnectCommand
SkyviaConnectCommandBuilder
SkyviaConnectParameter
SkyviaConnectParameterCollection
SkyviaConnectDataReader
SkyviaConnectDataAdapter
SkyviaConnectProviderFactory

The provider requires .NET Framework 4.5 or higher and is compatible with all Windows platforms (Windows Vista or higher) that support .NET Framework 4.5. It can be used with any .NET development environment that supports .NET Framework 4.5 or higher. To use dotConnect for Skyvia Connect in your projects, you can either download the dotConnect for Skyvia Connect installer from your SQL endpoint details or get it as a NuGet package from [NuGet](https://www.nuget.org/packages/Devart.Data.SkyviaConnect/). For example, in Visual Studio, you can execute the Install-Package Devart.Data.SkyviaConnect command in the NuGet Package Manager Console to add it to your project.
Adding References to Projects

If you install the provider via the installer, you will need to add the following assemblies to the references of your projects using dotConnect for Skyvia Connect:

Devart.Data.dll
Devart.Data.SqlShim.dll
Devart.Data.SkyviaConnect.dll

You can find these assemblies in the folder where the provider is installed or in the GAC. By default, it's _%ProgramFiles(x86)%\Devart\dotConnect\SkyviaConnect_

Connecting to SQL Endpoint

Connecting to an SQL endpoint is easy; you only need to provide the following connection string parameters:

Endpoint URL - The URL of an SQL endpoint created on Skyvia. You may copy it from the Overview tab of the endpoint details.
User - A user ID to connect to a secure SQL endpoint. Required if you added user accounts for your endpoint.
Password - A password to connect to a secure SQL endpoint. Required if you added user accounts for your endpoint.

To connect to an SQL endpoint, you need to create an instance of the Devart.Data.SkyviaConnect.SkyviaConnectConnection class and pass the required connection string to it. You can either assign the whole connection string to the ConnectionString property of SkyviaConnectConnection or pass it to the SkyviaConnectConnection constructor. You can also construct the connection string from separate connection parameter values, using the Devart.Data.SkyviaConnect.SkyviaConnectConnectionStringBuilder class.

C#

```csharp
SkyviaConnectConnectionStringBuilder connectionStringBuilder = new SkyviaConnectConnectionStringBuilder();
connectionStringBuilder.EndpointUrl = "https://connect.skyvia.com/4v6en3d0";
connectionStringBuilder.User = "TestUser";
connectionStringBuilder.Password = "TestPassword";

SkyviaConnectConnection connection = new SkyviaConnectConnection(connectionStringBuilder.ConnectionString);
```

Visual Basic

```vb
Dim connectionStringBuilder As SkyviaConnectConnectionStringBuilder = New SkyviaConnectConnectionStringBuilder()
connectionStringBuilder.EndpointUrl = "https://connect.skyvia.com/4v6en3d0"
connectionStringBuilder.User = "TestUser"
connectionStringBuilder.Password = "TestPassword"

Dim connection As SkyviaConnectConnection = New SkyviaConnectConnection(connectionStringBuilder.ConnectionString)
```

You may also set additional, optional parameters in the connection string to tweak the provider behavior. You can find the list of supported parameters and their descriptions below.

Retrieving Data

ADO.NET data providers serve as a bridge between an application and a data source and allow you to execute commands as well as to retrieve data using a DataReader or a DataAdapter. Updating data involves using Command and DataAdapter objects. To retrieve or update data in database endpoints, use the SQL syntax of the source database. For cloud applications, use SQLite SQL syntax. In this sample, we use SkyviaConnectCommand and SkyviaConnectDataReader to retrieve data. SkyviaConnectDataReader retrieves data in pages: while you read data from it, it automatically queries the next pages from an endpoint. SkyviaConnectDataReader offers higher performance than SkyviaConnectDataAdapter, especially when you query a lot of data. This and other samples use an SQL endpoint that publishes data from Microsoft's [Northwind sample SQL Server database](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/linq/downloading-sample-databases).

C#

```csharp
using Devart.Data.SkyviaConnect;
...

class Program
{
    static void Main(string[] args)
    {
        const string connectionString = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword";
        const string sql = "SELECT ContactName, Phone FROM dbo.Customers";

        using (SkyviaConnectConnection connection = new SkyviaConnectConnection(connectionString))
        {
            connection.Open();
            using (SkyviaConnectCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                using (SkyviaConnectDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}\t{1}", reader.GetValue(0), reader.GetValue(1));
                    }
                }
            }
        }
        Console.ReadKey();
    }
}
```

Visual Basic

```vb
Imports Devart.Data.SkyviaConnect
...

Class Program
    Private Shared Sub Main(ByVal args As String())
        Const connectionString As String = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword"
        Const sql As String = "SELECT ContactName, Phone FROM dbo.Customers"

        Using connection As SkyviaConnectConnection = New SkyviaConnectConnection(connectionString)
            connection.Open()
            Using command As SkyviaConnectCommand = connection.CreateCommand()
                command.CommandText = sql
                Using reader As SkyviaConnectDataReader = command.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine("{0}" & vbTab & "{1}", reader.GetValue(0), reader.GetValue(1))
                    End While
                End Using
            End Using
        End Using
        Console.ReadKey()
    End Sub
End Class
```

Updating Data

You can update the endpoint data either by modifying data returned by the SkyviaConnectDataAdapter class and then calling its Update method or by executing the corresponding DML statements (INSERT, DELETE, UPDATE) via SkyviaConnectCommand. Here is an example showing how to update endpoint data using SkyviaConnectDataAdapter.
C#

```csharp
using System.Data;
using Devart.Data.SkyviaConnect;
...

class Program
{
    static void Main(string[] args)
    {
        const string connectionString = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword";
        const string sql = "SELECT CategoryId, CategoryName, Description FROM dbo.Categories";

        using (SkyviaConnectConnection connection = new SkyviaConnectConnection(connectionString))
        {
            connection.Open();
            DataTable table = new DataTable("dbo.Categories");
            using (SkyviaConnectCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                using (SkyviaConnectDataAdapter adapter = new SkyviaConnectDataAdapter(command))
                {
                    adapter.Fill(table);
                    adapter.UpdateCommand = new SkyviaConnectCommand(
                        "UPDATE dbo.Categories SET CategoryName = @name, Description = @description WHERE CategoryId = @id",
                        connection);
                    adapter.UpdateCommand.Parameters.Add("id", DbType.Int32).SourceColumn = "CategoryId";
                    adapter.UpdateCommand.Parameters["id"].SourceVersion = DataRowVersion.Original;
                    adapter.UpdateCommand.Parameters.Add("name", DbType.String).SourceColumn = "CategoryName";
                    adapter.UpdateCommand.Parameters.Add("description", DbType.String).SourceColumn = "Description";

                    DataRow firstrow = table.Rows[0];
                    firstrow["CategoryName"] = "sample name 1";
                    firstrow["Description"] = "sample description";
                    Console.WriteLine(adapter.Update(table));
                }
            }
            Console.WriteLine("Rows after update.");
            foreach (DataRow row in table.Rows)
            {
                Console.WriteLine("{0}\t{1}\t{2}", row[0], row[1], row[2]);
            }
        }
        Console.ReadKey();
    }
}
```

Visual Basic

```vb
Imports System.Data
Imports Devart.Data.SkyviaConnect
...

Class Program
    Private Shared Sub Main(ByVal args As String())
        Const connectionString As String = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword"
        Const sql As String = "SELECT CategoryId, CategoryName, Description FROM dbo.Categories"

        Using connection As SkyviaConnectConnection = New SkyviaConnectConnection(connectionString)
            connection.Open()
            Dim table As DataTable = New DataTable("dbo.Categories")
            Using command As SkyviaConnectCommand = connection.CreateCommand()
                command.CommandText = sql
                Using adapter As SkyviaConnectDataAdapter = New SkyviaConnectDataAdapter(command)
                    adapter.Fill(table)
                    adapter.UpdateCommand = New SkyviaConnectCommand(
                        "UPDATE dbo.Categories SET CategoryName = @name, Description = @description WHERE CategoryId = @id",
                        connection)
                    adapter.UpdateCommand.Parameters.Add("id", DbType.Int32).SourceColumn = "CategoryId"
                    adapter.UpdateCommand.Parameters("id").SourceVersion = DataRowVersion.Original
                    adapter.UpdateCommand.Parameters.Add("name", DbType.String).SourceColumn = "CategoryName"
                    adapter.UpdateCommand.Parameters.Add("description", DbType.String).SourceColumn = "Description"

                    Dim firstrow As DataRow = table.Rows(0)
                    firstrow("CategoryName") = "sample name 1"
                    firstrow("Description") = "sample description"
                    Console.WriteLine(adapter.Update(table))
                End Using
            End Using
            Console.WriteLine("Rows after update.")
            For Each row As DataRow In table.Rows
                Console.WriteLine("{0}" & vbTab & "{1}" & vbTab & "{2}", row(0), row(1), row(2))
            Next
        End Using
        Console.ReadKey()
    End Sub
End Class
```

The following example updates the endpoint data using SkyviaConnectCommand.

C#

```csharp
using Devart.Data.SkyviaConnect;
...

class Program
{
    static void Main(string[] args)
    {
        const string connectionString = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword";
        const string sql = "UPDATE dbo.Categories SET CategoryName = 'sample name 2' WHERE CategoryName = 'sample name 1'";

        using (SkyviaConnectConnection connection = new SkyviaConnectConnection(connectionString))
        {
            connection.Open();
            using (SkyviaConnectCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                Console.WriteLine(command.ExecuteNonQuery());
            }
        }
        Console.ReadKey();
    }
}
```

Visual Basic

```vb
Imports Devart.Data.SkyviaConnect
...

Class Program
    Private Shared Sub Main(ByVal args As String())
        Const connectionString As String = "Endpoint Url=https://connect.skyvia.com/4v6en3d0;User=testuser;Password=testpassword"
        Const sql As String = "UPDATE dbo.Categories SET CategoryName = 'sample name 2' WHERE CategoryName = 'sample name 1'"

        Using connection As SkyviaConnectConnection = New SkyviaConnectConnection(connectionString)
            connection.Open()
            Using command As SkyviaConnectCommand = connection.CreateCommand()
                command.CommandText = sql
                Console.WriteLine(command.ExecuteNonQuery())
            End Using
        End Using
        Console.ReadKey()
    End Sub
End Class
```

Metadata

dotConnect for Skyvia Connect supports getting the endpoint metadata using the GetSchema method of the SkyviaConnectConnection class. This method retrieves detailed information about the endpoint objects as a DataTable object. It allows obtaining endpoint schema information without writing queries and parsing the output.
GetSchema Overloads

The GetSchema method is available in three overloads, each of which serves its own purpose:

If you call the GetSchema method without parameters, or with the single parameter "MetaDataCollections" (which is actually the same), the table returned by the method contains three columns. The first field of every row is a keyword allowed to be passed to the method (as the collectionName argument). The second field is the number of restriction values for this keyword (passed through the restrictionValues argument). The third field is not used in dotConnect for Skyvia Connect.

GetSchema with one argument returns general information about the collection queried. For example, GetSchema("Tables") returns the list of the tables (objects) available via your SQL endpoint and information about them. Instead of specifying the metadata collection name as a string constant, you may use members of System.Data.DbMetaDataCollectionNames and Devart.Data.SkyviaConnect.SkyviaConnectMetadataCollectionNames as the first GetSchema argument. The members of these classes are string fields, each of which stores the corresponding metadata collection name. It is recommended to use these fields rather than typing the collection names manually as string constants: with these fields, misspellings are caught at compile time, and IntelliSense shows you all the available metadata collection names.

Finally, the third GetSchema overload allows you to specify the collection name and an array of restrictions. These collection-specific restrictions allow you to request information only for those objects of the specified collection that match the restrictions, or even for one specific object with a matching name.

Collections

| Collection Name | Returned Information | Number of Restrictions | Available Restrictions |
| --- | --- | --- | --- |
| Columns | Returns the list of columns, their types, and some extra information. | 3 | Schema name to search in; table name to search in; column name |
| DataSourceInformation | Returns information about the data source. | 0 | |
| DataTypes | Returns information about data types supported by the data source. | 0 | |
| ForeignKeyColumns | Returns the list of columns that participate in foreign keys. | 5 | Schema name to search in; table name to search in; foreign key name; referenced table schema; referenced table name |
| ForeignKeys | Returns the list of foreign keys. | 5 | Schema name to search in; table name to search in; foreign key name; referenced table schema; referenced table name |
| IndexColumns | Returns the list of columns that participate in indexes. | 3 | Schema name to search in; table name to search in; index name |
| Indexes | Returns the list of indexes. | 3 | Schema name to search in; table name to search in; index name |
| MetadataCollections | Returns this list. Same as calling GetSchema() without parameters. | 0 | |
| PrimaryKeyColumns | Returns the list of columns that participate in primary keys. | 3 | Schema name to search in; table name to search in; primary key name |
| PrimaryKeys | Returns the list of primary keys. | 3 | Schema name to search in; table name to search in; primary key name |
| ReservedWords | Lists all reserved words used in the server. | 0 | |
| Tables | Returns the list of the SQL endpoint tables (objects). | 2 | Schema name to search in; table name |
| UniqueKeyColumns | Returns the list of columns that participate in unique keys. | 3 | Schema name to search in; table name to search in; unique key name |
| UniqueKeys | Returns the list of unique keys. | 3 | Schema name to search in; table name to search in; unique key name |

There are a few more acceptable collections, but they are intended for internal use.

Examples

The following code fragment is an elegant way to detect the existence of a table.

C#

```csharp
string tableName = "Products";
if (myConnection.GetSchema("Tables", new string[] { tableName }).Rows.Count > 0)
{
    Console.WriteLine("Table " + tableName + " exists in the endpoint.");
}
```

Visual Basic

```vb
Dim tableName As String = "Products"
Dim restrictions() As String = { tableName }
If (myConnection.GetSchema("Tables", restrictions).Rows.Count > 0) Then
    Console.WriteLine("Table " + tableName + " exists in the endpoint.")
End If
```

The next sample shows how to retrieve column information from a table and render it to the console.

C#

```csharp
static void GetTableInfo(SkyviaConnectConnection myConnection, string tableName, string schemaName)
{
    myConnection.Open();
    DataTable myDataTable = myConnection.GetSchema("Columns", new string[] { schemaName, tableName });
    for (int i = 0; i < myDataTable.Columns.Count; i++)
    {
        Console.Write(myDataTable.Columns[i].Caption + "\t");
    }
    Console.WriteLine();
    foreach (DataRow myRow in myDataTable.Rows)
    {
        foreach (DataColumn myCol in myDataTable.Columns)
        {
            Console.Write(myRow[myCol] + "\t");
        }
        Console.WriteLine();
    }
    myConnection.Close();
}
```

Visual Basic

```vb
Private Shared Sub GetTableInfo(ByVal myConnection As SkyviaConnectConnection, ByVal tableName As String, ByVal schemaName As String)
    myConnection.Open()
    Dim myDataTable As DataTable = myConnection.GetSchema("Columns", New String() { schemaName, tableName })
    For i As Integer = 0 To myDataTable.Columns.Count - 1
        Console.Write(myDataTable.Columns(i).Caption & vbTab)
    Next
    Console.WriteLine()
    For Each myRow As DataRow In myDataTable.Rows
        For Each myCol As DataColumn In myDataTable.Columns
            Console.Write(myRow(myCol) & vbTab)
        Next
        Console.WriteLine()
    Next
    myConnection.Close()
End Sub
```

Deployment

To deploy applications written with dotConnect for Skyvia Connect, you should register the run-time assemblies Devart.Data.SkyviaConnect.dll, Devart.Data.SqlShim.dll, and Devart.Data.dll in the Global Assembly Cache (GAC) for the appropriate framework or place them in the folder of your application (the Bin folder for web projects). These assemblies must be available to all applications written with dotConnect for Skyvia Connect. When your code uses dotConnect for Skyvia Connect via a factory-based class, you should register configuration information in the DbProviderFactories section of the *.config file to inform your environment about the existence of the provider factory. The provider factory is described either in machine.config (globally) or in app.config or web.config (just for your application), but not in both files.
This is done as follows:

```xml
<system.data>
  <DbProviderFactories>
    <remove invariant="Devart.Data.SkyviaConnect" />
    <add name="dotConnect for Skyvia Connect" invariant="Devart.Data.SkyviaConnect" description="Devart dotConnect for Skyvia Connect" type="Devart.Data.SkyviaConnect.SkyviaConnectProviderFactory, Devart.Data.SkyviaConnect, Version=1.0.0.0, Culture=neutral, PublicKeyToken=09af7300eec23701" />
  </DbProviderFactories>
</system.data>
```

Replace 1.0.0.0 here with your actual version.

Additional Connection String Parameters

- Connection Lifetime: When a connection is returned to the pool, its creation time is compared with the current time, and the connection is destroyed if that time span (in seconds) exceeds the value specified by Connection Lifetime. The default value is 0 (the connection always returns to the pool).
- Connect Timeout (or Connection Timeout): The length of time (in seconds) to wait for a connection to the server before terminating the attempt and generating an error. The default value is 30.
- Default Command Timeout: The time in seconds to wait while trying to execute a command before terminating the attempt and generating an error. A value of 0 indicates no limit. The default value is 60.
- Initialization Command: Specifies a data source-specific command that should be executed immediately after establishing the connection.
- Max Pool Size: The maximum number of connections allowed in the pool. Setting the Max Pool Size value in the connection string can affect performance. The default value is 100.
- Min Pool Size: The minimum number of connections allowed in the pool. Setting the Min Pool Size value in the connection string can affect performance. The default value is 0.
- Persist Security Info: Indicates whether security-sensitive information, such as the password, is returned as part of the connection string if the connection is open or has ever been in an open state.
- Pooling: If true (the default), the SkyviaConnectConnection object is drawn from the appropriate pool or is created and added to the appropriate pool.
- Proxy Host: If you connect to the Internet via a proxy server, specify its address in this parameter. To find your proxy server address, open Internet Options in the Control Panel, switch to the Connections tab, and click LAN settings.
- Proxy Port: If you connect to the Internet via a proxy server, specify its port in this parameter. You can find it in the same way as the address, as described above.
- Proxy User: If proxy authorization is used, specify the proxy user name (ID) in this parameter.
- Proxy Password: If proxy authorization is used, specify the proxy password in this parameter.
- Readonly: Determines whether the connection is read-only (allows only SELECT statements).
- UTC Dates: Specifies whether the datetime values retrieved from the data source are returned as UTC values or converted to local time, and whether the date values specified on the application side (e.g., in SQL statements) are considered UTC or local. The default value is false." }, { "url": "https://docs.skyvia.com/connect/sql-endpoints/google-data-studio-connector.html", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect. Documentation Connect SQL Endpoints Looker Studio Connector Skyvia offers a Looker Studio connector for SQL endpoints. It allows you to analyze data of supported data sources, including cloud apps and cloud and on-premises databases, with [Looker Studio](https://lookerstudio.google.com/) (formerly, Google Data Studio) via SQL endpoints. To analyze your data, you need to create an SQL endpoint first and obtain its URL. Then you can use this URL in the connector. Looker Studio Connector Modes Our Looker Studio connector can work in two modes: table and query.
Table mode selects all the data from an SQL endpoint entity (which corresponds to an underlying database table, view, or cloud object). Query mode allows you to enter a custom SQL SELECT statement and link the returned data to a Looker Studio report. Note that you should use the underlying database's syntax for databases and data warehouses; for cloud app SQL endpoints, use SQLite SQL syntax.

Connecting Looker Studio to SQL Endpoints

In a Looker Studio report, perform the following steps to add the SQL endpoint as a data source:

1. Click the Add data button on the Looker Studio toolbar.
2. In the Search box, type Skyvia and press Enter.
3. Click the Skyvia connector.
4. Enter the necessary connection parameters:
   - Endpoint URL: the URL of your endpoint, which you can find and copy in the endpoint details on Skyvia.
   - Username: the name of a user created in the endpoint security settings. If you haven't created any users for the endpoint (the endpoint is public), leave this parameter empty.
   - Password: the password of the user created in the endpoint security settings. If you haven't created any users for the endpoint, leave this parameter empty.
5. Select Type: whether to use all the data from an endpoint table for the report or to use the result of an SQL query, and click Next.
6. Either select the required Object or enter the required SQL query text.
7. Click Add. Now the data source is added to your report." }, { "url": "https://docs.skyvia.com/connect/sql-endpoints/odbc-driver.html", "product_name": "Connect", "content_type": "Documentation", "content": "Product: Connect. Documentation Connect SQL Endpoints ODBC Driver ODBC Driver for Skyvia Connect provides the widely used ODBC interface for SQL endpoints. ODBC is supported in a wide range of data-related applications, tools, and technologies: Microsoft Excel and Access, OpenOffice and LibreOffice, PHP, Python, SSIS, Power BI, Tableau, and many more.
To connect to an SQL endpoint, you need to install the driver and register an ODBC data source for it. ODBC Driver for Skyvia Connect requires .NET Framework 4.5 or higher and is compatible with all Windows platforms (Windows Vista or higher) that support .NET Framework 4.5.

Registering Data Source

After you install the driver, you can register your SQL endpoints as ODBC data sources. When you then need to connect to them via ODBC on this computer, you only need to provide the data source name. To register an ODBC data source, open Administrative Tools in the Control Panel and find the ODBC Data Sources tool of the necessary bitness. By default, ODBC Driver for Skyvia Connect installs both the 32-bit and 64-bit drivers, so select the tool that you need. Here you may either register the data source name system-wide (for all user accounts on this computer) on the System DSN tab, or only for the current user on the User DSN tab. You may also store a DSN in a text file with the .dsn extension and reuse it on other computers. Here we show how to register a data source name for all users on this computer. On the System DSN tab, click Add…. In the Create New Data Source dialog, select Devart ODBC Driver for Skyvia and click Finish. Now you need to specify the parameters to connect to your endpoint. Specify a meaningful data source name and description. Then specify the three required parameters: your Endpoint Url, User ID, and Password. The latter two are required only if you added user accounts for your endpoint. You can also optionally specify proxy parameters if you connect to the Internet via a proxy. Additionally, you may switch to the Advanced settings tab and tweak the ODBC driver behavior. These settings are described below. After you finish the data source configuration, click OK. After this, you may connect to your endpoint via ODBC by selecting the corresponding data source name.
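Once the data source name is registered, any ODBC-capable environment can use it. As an illustration, here is a minimal Python sketch that connects through the third-party pyodbc module; the DSN name "SkyviaEndpoint", the credentials, and the Products table are placeholders for values from your own configuration, not names the driver predefines:

```python
def build_connection_string(dsn, user=None, password=None):
    """Assemble an ODBC connection string from a registered DSN
    and the optional endpoint credentials."""
    parts = ["DSN=" + dsn]
    if user is not None:
        parts.append("UID=" + user)
    if password is not None:
        parts.append("PWD=" + password)
    return ";".join(parts)


def fetch_products(dsn, user, password):
    """Open an ODBC connection via the registered DSN and return
    all rows from a (hypothetical) Products table."""
    import pyodbc  # third-party: pip install pyodbc

    connection = pyodbc.connect(build_connection_string(dsn, user, password))
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT * FROM Products")
        return cursor.fetchall()
    finally:
        connection.close()
```

A call such as `fetch_products("SkyviaEndpoint", "user1", "secret")` would then query the endpoint exactly as any other ODBC data source; for a public endpoint, pass None for both credentials.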
Connection Parameters

Main Parameters

- Endpoint URL: The URL of an SQL endpoint created on Skyvia. You may copy it from the Overview tab of the endpoint details.
- User ID: A user ID to connect to a secure SQL endpoint.
- Password: A password to connect to a secure SQL endpoint.

Proxy Parameters

- Proxy Host: If you connect to the Internet via a proxy server, specify its address in this parameter. To find your proxy server address, open Internet Options in the Control Panel, switch to the Connections tab, and click LAN settings.
- Proxy Port: If you connect to the Internet via a proxy server, specify its port in this parameter. You can find it in the same way as the address, as described above.
- Proxy User: If proxy authorization is used, specify the proxy user name (ID) in this parameter.
- Proxy Password: If proxy authorization is used, specify the proxy password in this parameter.

Advanced Settings

- Allow NULL strings: According to MSDN, not all metadata retrieval parameters can accept a null value; if NULL is passed, the driver should return an error. However, some third-party tools pass NULL to such parameters. Enable this parameter for compatibility with such tools.
- Empty strings as NULL: According to MSDN, not all metadata retrieval parameters can accept a null value; if NULL is passed, the driver should return an error. However, some third-party tools pass NULL to such parameters. Enable this parameter for compatibility with such tools.
- ODBC Behavior: This parameter enables the behavior corresponding to a specific ODBC specification version for third-party tools that expect the behavior of that version. The behavior of the ODBC driver can be changed by setting a value for the SQL_ATTR_ODBC_VERSION attribute by calling the SQLSetEnvAttr function.
But some third-party tools expect the driver to exhibit ODBC 2.x behavior yet forget to call SQLSetEnvAttr with the specified version, or pass an incorrect value in this call. In this case, the required behavior can be explicitly specified in the connection string by setting the ODBC Behavior parameter. The possible values are:
  - Default: default ODBC behavior determined by a third-party tool.
  - Ver 2.x: ODBC 2.x behavior is explicitly set.
  - Ver 3.x: ODBC 3.x behavior is explicitly set.
- String Types: Sets the string value types returned by the driver as Default, Ansi, or Unicode.
  - Default: the driver defines the string types.
  - Ansi: all string types will be returned as SQL_CHAR, SQL_VARCHAR, and SQL_LONGVARCHAR.
  - Unicode: all string types will be returned as SQL_WCHAR, SQL_WVARCHAR, and SQL_WLONGVARCHAR.
  It is recommended to change this parameter value only if the third-party tool that you are going to use with the ODBC driver supports only Ansi string types or only Unicode ones.
- RegionalNumberSettings: Enables using local regional settings when converting numbers to strings.
- RegionalDateTimeSettings: Enables using local regional settings when converting dates and times to strings.
- ConnectionTimeout: The time to wait while trying to establish a connection before terminating the attempt and generating an error.
- QueryTimeout: The time to wait for a query execution result before terminating and generating an error." }, { "url": "https://docs.skyvia.com/data-integration/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Skyvia Data Integration is a cloud ETL (extract-transform-load) solution for no-coding data integration between cloud applications, databases, etc. In this section, you can get familiar with the following topics: Import — this section describes import integrations in Skyvia.
An import loads data from source to target in one direction. A source can be CSV files, a database/data warehouse, or a cloud application. A target can be either a database/data warehouse or a cloud application. Export — this section describes export integrations in Skyvia. An export extracts data from a database or cloud application to CSV files. Replication — this section describes replication integrations in Skyvia. A replication creates a copy of cloud app data in a relational database and keeps it up to date. Synchronization — this section describes synchronization integrations in Skyvia. A synchronization performs bi-directional synchronization between cloud applications and/or databases. Data Flow — this section describes data flow integrations in Skyvia. A data flow allows building integrations with powerful transformations between multiple data sources. In one data flow, you can use multiple data transformations and transfer modified data to multiple targets. Control Flow — this section describes control flow integrations in Skyvia. A control flow allows running data flows or other integrations in a specific order or depending on specific conditions. It allows you to perform pre- and post-integration actions and even set up some automatic error processing logic within your integration. Common Integration Features — this section describes Skyvia features used for different integration kinds. Integration Run History — this section describes the log of integration run results (where to find it, and how the results are displayed). Scheduling Integrations for Automatic Execution — this section describes how to automate integrations to run on a schedule and the available schedule settings. Tutorials — this section contains import/export/replication/synchronization tutorials."
}, { "url": "https://docs.skyvia.com/data-integration/common-package-features/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features This section describes features shared between different types of integrations. Mapping allows you to map source and target fields/columns in Import and Synchronization tasks. Expressions allow you to map fields or columns using mathematical, string, and other expressions and functions. Filter Settings allow you to specify filtering conditions for exported data in Replication and Export tasks. Data Types and Limitations describes the field data types used and their limitations." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/data-types-and-limitations.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features Data Types and Limitations Skyvia supports the following data types: Data type Description DT_AUTO This type is used to specify that the source CSV column type must be determined automatically based on the target field type. DT_BOOL A Boolean value. When reading a CSV file, the following are valid DT_BOOL values: true, false, 0, 1. DT_BYTES A binary data value. The length is variable and can be up to 8000 bytes. Check Importing Binary Data for more details on importing such values from a CSV file. DT_CY* A currency value. An eight-byte signed integer with a scale of 4 and a maximum precision of 19 digits. The minimal value is -922,337,203,685,477.5808. The maximal value is 922,337,203,685,477.5807. DT_DBDATE* A date structure, consisting of year, month, and day. The minimal value is 30 December 1899. The maximal value is 30 December 9999. DT_DBTIME* A time structure, consisting of hour, minute, and second. Can hold values that are in the range of 00:00:00 through 23:59:59.
DT_DBTIME2* A time structure, consisting of hour, minute, second, and fractional seconds. The fractional seconds have a maximum scale of 7 digits. Can hold values that are in the range of 00:00:00.0000000 through 23:59:59.9999999. DT_DBTIMESTAMP* A timestamp structure, consisting of year, month, day, hour, minute, second, and fractional seconds. The fractional seconds have a maximum scale of 3 digits. Can hold values that are in the range of January 1, 1 A.D. through December 31, 9999. DT_DBTIMESTAMP2* A timestamp structure, consisting of year, month, day, hour, minute, second, and fractional seconds. The fractional seconds have a maximum scale of 7 digits. Can hold values that are in the range of January 1, 1753 A.D. through December 31, 9999. DT_DBTIMESTAMPOFFSET* A timestamp structure, consisting of year, month, day, hour, minute, second, and fractional seconds with a time zone offset. The fractional seconds have a maximum scale of 7 digits. The offset specifies the number of hours and minutes that the time is offset from the Coordinated Universal Time (UTC) and is used by the system to obtain the local time. The time zone offset must include a sign, plus or minus, to indicate whether the offset is added or subtracted from the UTC. The valid number of hours offset is between -14 and +14. Can hold values that are in the range of 0001-01-01 00:00:00 -14:00 through 9999-12-31 23:59:59.9999999 +14:00. DT_DECIMAL* An exact numeric value with a fixed precision and a fixed scale. This data type is a 12-byte unsigned integer with a separate sign, a scale of 0 to 28, and a maximum precision of 29. DT_FILETIME A 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601. The fractional seconds have a maximum scale of 3 digits. DT_GUID A globally unique identifier (GUID). DT_I1 A one-byte, signed integer. The minimal value is -128, the maximal value is 127. DT_I2 A two-byte, signed integer. The minimal value is -32768, the maximal is 32767. 
DT_I4 A four-byte, signed integer. The minimal value is -2,147,483,648, the maximal is 2,147,483,647. DT_I8 An eight-byte, signed integer. The minimal value is -9,223,372,036,854,775,808, the maximal is 9,223,372,036,854,775,807. DT_NUMERIC* An exact numeric value with a fixed precision and scale. This data type is a 16-byte unsigned integer with a separate sign, a scale of 0 to 38, and a maximum precision of 38. DT_R4 A single-precision floating-point value. The minimal value is -3.402823466 E + 38, the maximal value is 3.402823466 E + 38. The smallest normalized value that can be stored in this type is 1.175494351 E - 38. DT_R8 A double-precision floating-point value. The minimal value is -1.7976931348623158 E + 308, the maximal value is 1.7976931348623158 E + 308. The smallest normalized value that can be stored in this type is 2.2250738585072014 E - 308. DT_STR A null-terminated ANSI/MBCS character string with a maximum length of 8000 characters. If a column value contains additional null terminators, the string will be truncated at the occurrence of the first null. DT_UI1 A one-byte, unsigned integer. The minimal value is 0, the maximal value is 255. DT_UI2 A two-byte, unsigned integer. The minimal value is 0, the maximal value is 65,535. DT_UI4 A four-byte, unsigned integer. The minimal value is 0, the maximal value is 4,294,967,295. DT_UI8 An eight-byte, unsigned integer. The minimal value is 0, the maximal value is 18,446,744,073,709,551,615. DT_WSTR A null-terminated Unicode character string with a maximum length of 4000 characters. If a column value contains additional null terminators, the string will be truncated at the occurrence of the first null. DT_IMAGE A binary value with a maximum size of 2^31 - 1 (2,147,483,647) bytes. Check Importing Binary Data for more details on importing such values from a CSV file. DT_NTEXT A Unicode character string with a maximum length of 2^30 - 1 (1,073,741,823) characters.
When reading a DT_NTEXT value from a CSV file, the value length must not exceed 65536 characters. DT_TEXT An ANSI/MBCS character string with a maximum length of 2^31 - 1 (2,147,483,647) characters. When reading a DT_TEXT value from a CSV file, the value length must not exceed 65536 characters. * When exporting values of this type to CSV files, they are exported in the format determined from the locale settings specified in the integration CSV options. When importing CSV files, the format of a value must correspond to the locale specified in the CSV options of the import task in order for the value to be parsed correctly as a valid value of such a type. If a database or a CRM uses types that cannot be mapped to these simple types, Skyvia treats them as strings. When importing data from a CSV file, the values for columns of such types must have a format understandable to the target. They will be sent to the target as the same strings that are stored in the CSV file." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/filter-settings.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features Filter Settings Filters define what data to extract from a Source, querying only those Source records that match specific conditions. You can use filters in different Skyvia data integration tools. CSV files don't support filters. To apply a filter, specify a filter condition in the Source settings in your integration. Configuring Filter To configure a filter, add a filter condition. A filter condition is a logical expression that compares a field of a queried object with a specified value. A simple filter condition includes an object and a field to which the filter is applied, a logical operator, and a value.
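To make the condition structure concrete, here is a small sketch of evaluating a simple condition (field, operator, value) against a set of records. This is plain illustrative Python, not Skyvia's implementation, and it covers only a subset of the supported operators; the record field names are hypothetical:

```python
# Illustrative only: mimics how a simple filter condition selects records.
OPERATORS = {
    "equals": lambda field, value: field == value,
    "not equals": lambda field, value: field != value,
    "greater than or equal to": lambda field, value: field >= value,
    "contains": lambda field, value: value in field,
    "is null": lambda field, value: field is None,
}

def apply_condition(records, field_name, operator, value=None):
    """Return the records whose field satisfies the condition."""
    check = OPERATORS[operator]
    return [r for r in records if check(r[field_name], value)]

entries = [
    {"Name": "Green tea", "IsActive": True},
    {"Name": "Coffee", "IsActive": True},
    {"Name": "Black tea", "IsActive": False},
]

# Joining two conditions with AND corresponds to applying them in sequence.
active_teas = apply_condition(
    apply_condition(entries, "IsActive", "equals", True),
    "Name", "contains", "tea",
)
# active_teas now holds only the "Green tea" record
```

Nesting the calls models an AND condition group: a record survives only if it passes every condition, which is exactly the "Match all of the following conditions" behavior described below.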
Logical operator

The list of supported operators depends on the connector and may vary for different connectors and field types. Skyvia supports the following logical operators in filters:

- Numeric: equals (=), not equals (!=), less than (<), less than or equal to (<=), greater than (>), greater than or equal to (>=), is null, is not null
- Text: equals, not equals, contains, does not contain, starts with, does not start with, is null, is not null
- Date or Datetime: equals (=), not equals (!=), less than (<), less than or equal to (<=), greater than (>), greater than or equal to (>=), is null, is not null
- Boolean: equals (=), not equals (!=), is null, is not null

Filter Value

A filter value is a constant compared with the filtered field. Skyvia supports the value and relative types of filter values for Data Integration tools. Value — the filtered field is compared with the constant. Skyvia supports it for all data types. Relative — Skyvia generates complex expressions on the backend for filters by Date and DateTime fields and represents these expressions with easy-to-understand keywords, such as YESTERDAY, TODAY, TOMORROW, LAST_WEEK, THIS_WEEK, NEXT_WEEK, LAST_MONTH, THIS_MONTH, NEXT_MONTH, LAST_YEAR, THIS_YEAR, NEXT_YEAR, LAST_RUN. Relative filters involve the cache if a Source API doesn't support a native filter for a field. This may affect API call usage.

Condition Group

You can use more than one filter condition. Using the AND (All) or OR (Any) logical operators, you can join multiple filter conditions into groups. Match all of the following conditions (AND) — Skyvia selects records with values that meet all the specified conditions. Match any of the following conditions (OR) — Skyvia selects records with values that meet at least one specified condition.

Specifics

When you use filters, Skyvia's behavior depends on whether the Source API supports a filter for the specified field.
Cloud Source APIs may support specific filter operations for particular fields. We call such filters native filters. When you use a native filter, the Source API returns the already filtered data directly. You can find details about the supported native filters for each connector in the specific connector topic. If a Source API doesn't support a native filter for a field or an operator, Skyvia writes the source data into a cache and then applies the filter to this cache. Cached filters take time and consume extra API calls. Example Say you want to export active Salesforce price book entries that were modified during the last month and contain tea in their Name field. To do that, prepare the export first. Select Salesforce as the Source and add the Export task. On the Source Definition tab, select the PricebookEntry object. To apply the filter, do the following. Select the And logical operator for the root condition group. Click +Condition and select the IsActive field in the drop-down list. Select the equals operator and set the value to True. Add one more condition and select the LastModifiedDate field. Select the >= operator and choose the needed date in the calendar box. Let it be the 1st of April. Add the last condition and select the Name field. Select contains and enter tea in the text box. The result includes records matching all three conditions at once." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features Mapping With Skyvia you can load data from cloud application objects, relational database tables (and views), and CSV files. In this topic, we will refer to them as “source objects”. The source cloud application object fields, table columns, and CSV file columns will be referred to as “source columns”.
Data can be loaded to relational database tables and cloud application objects. In this topic, they will be referred to as “target objects”, and their fields/columns will be referred to as “target columns”. On the mapping page of the task editor, you should map the target columns to source columns. For synchronization, you configure mapping in both directions: source columns to target columns and target columns to source columns. To map a target column, click it in the table, then click the tab corresponding to the required type of mapping in the source column, click Column and select the mapping type used, and then specify the mapping depending on the type used. The result value of the mapping should have the same data type as the mapped column. If you use column mapping, the column must have the same type in Skyvia. If you use expression mapping, its result should have the same type as the mapped column, etc. If you need to map a column to a column of a different data type, you should use expression mapping and specify a type conversion expression. Check [https://msdn.microsoft.com/en-us/library/ms141704.aspx](https://msdn.microsoft.com/en-us/library/ms141704.aspx) for more information on type conversion expressions. Mapping Types You can use the following types of mapping for target columns: Column — you simply map a target column to a source column. Expression — you map a target column to an expression, which can include source columns, different functions, and logical and mathematical operators. Source Lookup or Target Lookup or Lookup — you can obtain a value for a target column from any target or source object. You need to specify the object and its column to get the value from, the lookup key column, and the corresponding source column or constant. Skyvia matches this constant or value from the source column with the values from the lookup key column to find the corresponding record in the lookup object.
Constant — you can set a target column to a constant. External ID — this mapping type is available for Salesforce only, for foreign key fields. It maps object references using the referenced object's External ID field values. Relation — this mapping type is available only for foreign key fields if you load several related source objects. You need to specify the relation between the source data, and Skyvia will automatically build the corresponding relation between the target data. Zip File — this mapping type can be used when importing a CSV file together with a zip file with imported binary data. It is available for fields of base64 types (Salesforce) or LOB fields (databases) for importing binary files from the uploaded zip archive to these fields. Not supported for Zoho CRM. The table with the target object columns lists only the columns of one target object. If you load the data to multiple target objects, you can select the object to display fields for in the Table Mapping drop-down list. Searching and Filtering Columns If the object being mapped has many columns, and it's not convenient to select a target column for mapping, you can use column search and filtering. To quickly find a column, start typing the column name in the Type to search box above the table on the left side of the Task Editor. Only the columns with names containing the typed text will be displayed. Additionally, you can filter columns by selecting the Filters checkboxes above the table on the left side of the Task Editor. The checkboxes are as follows: mapped — if selected, this checkbox displays target columns with already specified mapping. unmapped — if selected, this checkbox displays target columns with mapping not specified yet. not required — if selected, this checkbox displays target columns that are not required to be mapped before saving the task.
valid — if selected, this checkbox displays target columns that already have a valid mapping defined. Required Target Columns Columns that must be mapped in order to create a valid task are marked with the required label. Other columns may be left unmapped if you don't want to load any data into them. To filter out not required columns, select the not required checkbox. Mapping for Upsert, Update, and Delete Operations in Import Tasks For Update and Delete operations, you additionally need to map the ID (primary key) of the target object in order to identify the records for updating or deleting respectively. You can map it using Column, Expression, Constant, or Lookup mapping. For the Upsert operation, you must map the Id (or primary key fields) by default. If the mapping for the Id or primary key returns Null for an imported row, the row is inserted. If the mapping returns a non-null value, Skyvia tries to update a row with such an Id or primary key. See more details in Performing UPSERT Operation. When performing UPSERT to Salesforce, you can also use an External ID column to find the corresponding records. In this case, Salesforce uses this column to determine whether it should create a new record or update an existing one, and there is no need to map the Id column. You must map this External ID field using any of the available mapping kinds: Column, Expression, etc. Mapping in Synchronization Tasks The main distinction of mapping in synchronization tasks is that mapping is specified for both directions separately. You can switch the side to map by clicking Source to Target or Target to Source under the task editor header. Column mapping is automatically reflected when switching sides; however, other kinds of mapping must be defined separately. Constant mapping has an additional checkbox — Use this value as filter of target records — in synchronization tasks.
If you set constant mapping for one direction (for example, from source to target) and select this checkbox, only the data having the column values equal to the specified constants participate in synchronization when performing synchronization in the opposite direction. For example, when you synchronize the Product2 Salesforce object and map its IsActive field using constant mapping to true, only the objects having the IsActive field equal to true will participate in synchronization when this checkbox is selected. Product2 objects that have IsActive equal to false will be ignored." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/column-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features Mapping Column Mapping Column mapping simply maps the target column to a source column. If source and target columns have the same name, this mapping is applied automatically. For column mapping, select the required source column from the drop-down list. Note that when you import data from a database table or cloud object, you can join the columns from the related source objects to the imported data and select these columns in Column mapping, as well as columns from the imported object." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/constant-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Common Integration Features Mapping Constant Mapping For constant mapping, click the Column list and select Constant, then specify the constant value in the box. String constants should not be quoted (unlike when using expression mapping). Note that in synchronization integrations constant mapping has an additional checkbox — Use this value as filter of target records.
If you set constant mapping for one direction (for example, from source to target) and select this checkbox, only the records whose column values equal the specified constants participate in synchronization in the opposite direction. For example, when you synchronize the Product2 Salesforce object and map its IsActive field to the constant true, only the objects with IsActive equal to true participate in synchronization when this checkbox is selected. Product2 objects with IsActive equal to false are ignored." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/expression-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Expression Mapping. Expression mapping allows using simple and complex expressions and formulas to transform source data into target field values. To use expression mapping, first click the Column list and select Expression . Then enter the expression in the box. Expression mapping uses different syntax depending on whether the old or new data integration engine (runtime) is used. You can determine which engine your integration uses by the Use new runtime checkbox on the tab bar. The old integration runtime uses Microsoft SQL Server Integration Services (SSIS) expression syntax; you can find information about it in the [Microsoft documentation](https://learn.microsoft.com/en-us/sql/integration-services/expressions/integration-services-ssis-expressions) . The new integration runtime uses our own expression syntax, described in Expression Syntax . You can find more information about the main syntax differences in the Main Differences between Old and New Runtime Syntax topic." 
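The idea behind expression mapping can be sketched in plain Python. This is an illustrative stand-in only, not Skyvia code: Skyvia expressions are written in its own expression syntax (or SSIS syntax on the old runtime), and the row and column names below are hypothetical.

```python
# Stand-in for expression mapping: derive the target value from one or
# more source columns, converting types where needed. In Skyvia the
# "expression" is a string in the runtime's syntax; here it is a callable.

def apply_expression(row, expression):
    """Evaluate a mapping expression against a single source row."""
    return expression(row)

# Hypothetical source row.
row = {"FirstName": "Ada", "LastName": "Lovelace", "Amount": "1250.50"}

# Combine two source columns into one target value.
full_name = apply_expression(row, lambda r: r["FirstName"] + " " + r["LastName"])

# Type conversion: map a string source column to a numeric target column.
amount = apply_expression(row, lambda r: float(r["Amount"]))
```

The second call mirrors the type conversion case: when source and target column types differ, the expression performs the conversion.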
}, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/external-id-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "External ID Mapping. External ID mapping is used only for Salesforce objects, to map references to other objects in the target connection and to determine whether to create a new record or update an existing one for the UPSERT operation. An External ID in Salesforce is a custom field that has the \u201cExternal ID\u201d attribute and uniquely identifies records. To use External ID mapping, the source data must contain the External ID field values of the object that the target object refers to. For External ID mapping, click the Column list and select External ID . Then select the source key column in the Source Column drop-down list and the corresponding target External ID column in the External ID Column drop-down list." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/index.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Mapping. With Skyvia you can load data from cloud application objects, relational database tables (and views), and CSV files. In this topic, we refer to them as \u201csource objects\u201d. The source cloud application object fields, table columns, and CSV file columns are referred to as \u201csource columns\u201d. Data can be loaded to relational database tables and cloud application objects. In this topic, they are referred to as \u201ctarget objects\u201d, and their fields/columns are referred to as \u201ctarget columns\u201d.
On the mapping page of the task editor, you map the target columns to source columns. 
For synchronization, you configure mapping in both directions \u2014 source columns to target columns and target columns to source columns. To map a target column , click it in the table, click Column in the source column cell, select the mapping type to use, and then specify the mapping depending on the selected type. The result value of the mapping should have the same data type as the mapped column. If you use column mapping, the source column must have the same type in Skyvia. If you use expression mapping, its result should have the same type as the mapped column, etc. If you need to map a column to a column of a different data type, use expression mapping and specify a type conversion expression. Check [https://msdn.microsoft.com/en-us/library/ms141704.aspx](https://msdn.microsoft.com/en-us/library/ms141704.aspx) for more information on type conversion expressions. Mapping Types You can use the following types of mapping for target columns: Column \u2014 you simply map a target column to a source column. Expression \u2014 you map a target column to an expression that may include source columns, functions, and logical and mathematical operators. Source Lookup or Target Lookup or Lookup \u2014 you obtain a value for a target column from any target or source object. You specify the object and its column to get the value from, the lookup key column, and the corresponding source column or constant. Skyvia matches this constant or the value from the source column with the values from the lookup key column to find the corresponding record in the lookup object. Constant \u2014 you set a target column to a constant value. External ID \u2014 this mapping type is available for Salesforce only, for foreign key fields. It maps object references using the External ID field values of the referenced object. 
Relation \u2014 this mapping type is available only for foreign key fields when you load several related source objects. You specify the relation between the source data, and Skyvia automatically builds the corresponding relation between the target data. Zip File \u2014 this mapping type can be used when importing a CSV file together with a zip file containing binary data to import. It is available for fields of base64 types (Salesforce) or LOB fields (databases), for importing binary files from the uploaded zip archive into these fields. It is not supported for Zoho CRM. The table with the target object columns lists only the columns of one target object. If you load data to multiple target objects, you can select the object to display the fields for in the Table Mapping drop-down list. Searching and Filtering Columns If the object being mapped has many columns, and it\u2019s not convenient to select a target column for mapping, you can use column search and filtering. To quickly find a column, start typing the column name in the Type to search box above the table on the left side of the Task Editor. Only the columns with names containing the typed text are displayed. Additionally, you can filter columns by selecting the Filters checkboxes above the table on the left side of the Task Editor. The checkboxes are as follows: mapped \u2014 if selected, displays target columns with already specified mapping. unmapped \u2014 if selected, displays target columns with no mapping specified yet. not required \u2014 if selected, displays target columns that are not required to be mapped before saving the task. valid \u2014 if selected, displays target columns that already have a valid mapping defined. Required Target Columns Columns that must be mapped to create a valid task are marked with the required label. Other columns may be left unmapped if you don\u2019t want to load any data into them. 
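The mapping types listed above can be sketched as a simple dispatch over one source row. This is purely illustrative pseudo-structure, not Skyvia API \u2014 Skyvia configures all of this visually in the task editor, and the column names are hypothetical.

```python
# Sketch of how each mapping type yields a value for one target column.

def resolve_mapping(mapping, row, lookup_rows=None):
    kind = mapping["kind"]
    if kind == "column":                       # Column: copy a source column
        return row[mapping["source_column"]]
    if kind == "constant":                     # Constant: fixed value
        return mapping["value"]
    if kind == "expression":                   # Expression: computed value
        return mapping["fn"](row)
    if kind == "lookup":                       # Lookup: fetch from another object
        key = row[mapping["condition_column"]]
        for r in lookup_rows:
            if r[mapping["key_column"]] == key:
                return r[mapping["result_column"]]
        return None
    raise ValueError("unknown mapping kind: " + kind)

row = {"First": "Ada", "Last": "Lovelace", "Company": "Acme"}
accounts = [{"Name": "Acme", "Id": "001X"}]

name = resolve_mapping({"kind": "expression",
                        "fn": lambda r: r["First"] + " " + r["Last"]}, row)
account_id = resolve_mapping({"kind": "lookup", "condition_column": "Company",
                              "key_column": "Name", "result_column": "Id"},
                             row, accounts)
```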
To filter out the not required columns, click the not required button. Mapping for Upsert, Update, and Delete Operations in Import Tasks For Update and Delete operations, you additionally need to map the ID (primary key) of the target object in order to identify the records to update or delete. You can map it using Column , Expression , Constant , or Lookup mapping. For the Upsert operation, you must map the Id (or primary key fields) by default. If the mapping for the Id or primary key returns Null for an imported row, the row is inserted. If the mapping returns a non-null value, Skyvia tries to update the row with that Id or primary key. See more details in Performing UPSERT Operation . When performing UPSERT to Salesforce, you can also use an External ID column to find the corresponding records. In this case, Salesforce uses this column to determine whether to create a new record or update an existing one, and there is no need to map the Id column. You must map this External ID field using any of the available mapping kinds: Column , Expression , etc. Mapping in Synchronization Tasks The main distinction of mapping in synchronization tasks is that mapping is specified for each direction separately. You can switch the side to map by clicking Source to Target or Target to Source under the task editor header. Column mapping is automatically reflected when switching sides; however, other kinds of mapping must be defined separately. Constant mapping has an additional checkbox \u2014 Use this value as filter of target records \u2014 in synchronization tasks. If you set constant mapping for one direction (for example, from source to target) and select this checkbox, only the records whose column values equal the specified constants participate in synchronization in the opposite direction. 
For example, when you synchronize the Product2 Salesforce object and map its IsActive field to the constant true, only the objects with IsActive equal to true participate in synchronization when this checkbox is selected. Product2 objects with IsActive equal to false are ignored." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Lookup Mapping (Target Lookup and Source Lookup). Lookup mapping fetches a value for the mapped Target field from a record in any Source or Target object based on specific conditions. Source Lookup retrieves a value from a record in a Source object. Target Lookup retrieves a value from a record in a Target object. Lookup mapping is available in import and synchronization integrations. Lookup mapping is helpful in the following cases: When the values for mapping are absent in the Source object but available in another object in the Source or Target connection. For getting the unknown IDs required for UPDATE, DELETE , and UPSERT operations. For loading related data when the parent object records already exist in the Target. Terms Lookup Object \u2014 the Source or Target object in which the search is performed. Result Column \u2014 the Lookup object field that contains the desired value. Lookup Key Column \u2014 the Lookup object field based on which the desired value is retrieved. Lookup Condition \u2014 the Source value that is compared with the Lookup key. Skyvia allows using more than one lookup key or lookup condition. All used keys or conditions are combined with the AND logical operator. Lookup returns the rows that match all of the conditions. How to Configure Lookup Mapping To enable Lookup mapping, do the following. On the Mapping Definition tab, select Target Lookup or Source Lookup . 
Select the object in the Lookup Object drop-down list. Choose the required field in the Result Column drop-down list. Select the Lookup Key Column from the drop-down list. You can add more than one Lookup key. Specify the Lookup condition. Select the mapping type and map the Lookup key. Set Lookup options. Lookup Condition The Lookup condition defines a value to compare with the Lookup key.\nSelect the mapping type and provide the corresponding value to set the Lookup condition. Column Use this condition to compare the Lookup key field with a source field. Constant Use this condition to compare all Lookup key values with a single constant value. Source Lookup and Target Lookup Use this condition when there is no field in the Source to be compared with the Lookup key. It will retrieve a value for the Lookup key from another object in the Source or Target. Lookup Options You can configure Lookup behavior using Lookup options. Configure them separately for each Lookup mapping. Use Cache This option determines whether to use a Lookup cache. Set null when no match is found This option determines what to do if there is no match for the Lookup condition. If enabled, Skyvia assigns null to mapped fields. If disabled, the record fails. Enable this option for the UPSERT operation . When Lookup finds no match, Skyvia will insert a new record. If Lookup finds a match, Skyvia will update the existing record. Use first match when multiple results This option determines what to do when Lookup finds multiple rows matching the specified Lookup conditions. If enabled, Skyvia maps the first matching value to the target field. If disabled, the record fails. Case insensitive lookup This option enables case-insensitive comparison for string values. By default, Lookup performs case-sensitive comparisons. How It Works Skyvia\u2019s behavior differs depending on the Lookup type and use case. 
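The effect of the Lookup options described above can be sketched in Python. This is not Skyvia code \u2014 the function and option names simply mirror the documentation, and the account data is hypothetical.

```python
# Sketch of Lookup matching under the documented options: "Set null when
# no match is found", "Use first match when multiple results", and
# "Case insensitive lookup". A failed record is modeled as an exception.

def lookup(lookup_rows, key_column, result_column, condition_value,
           set_null_when_no_match=False, use_first_match=False,
           case_insensitive=False):
    def norm(v):
        return v.lower() if case_insensitive and isinstance(v, str) else v

    matches = [r for r in lookup_rows
               if norm(r[key_column]) == norm(condition_value)]
    if not matches:
        if set_null_when_no_match:
            return None                        # mapped field gets null
        raise ValueError("record fails: no match found")
    if len(matches) > 1 and not use_first_match:
        raise ValueError("record fails: multiple matches found")
    return matches[0][result_column]           # first (or only) match wins

accounts = [{"Name": "Acme", "Id": "001A"}, {"Name": "Globex", "Id": "001B"}]
acme_id = lookup(accounts, "Name", "Id", "ACME", case_insensitive=True)
missing = lookup(accounts, "Name", "Id", "Initech", set_null_when_no_match=True)
```

With both options off, the "Initech" call would fail the record instead of returning null, which is why the docs recommend enabling "Set null when no match is found" for UPSERT.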
Regular Lookup Skyvia queries the Lookup object for every single source record and checks whether the Lookup key matches the specified condition. If Skyvia finds a matching record, it takes the desired field value and maps it to the target field. For instance, if there are 100 source records, Skyvia performs 100 queries. Cached Lookup If you activate the Use Cache option, Skyvia reads the Lookup object into a cache. During the integration run, Skyvia queries this cache for every source record and checks whether the Lookup key matches the specified condition. If Skyvia finds a matching record, it takes the desired field value and maps it to the target field. When to Use Cached Lookup When the source API doesn\u2019t support filters for the fields you use in Lookup. When there are significantly more records in Source than in Target (for Target Lookup). When there are considerably more records in Target than in Source (for Source Lookup). For example, suppose you have 1 million records in Source and 1000 in Target. With a regular Target Lookup, Skyvia queries the Lookup object 1 million times, which takes much time and uses many API calls. In this case, Cached Lookup can save time and API calls: Skyvia reads the Lookup object and writes its records into the cache, then queries this cache for each of the million source records. Nested Objects Lookup Different connectors have objects with complex structures. They have fields storing complex structured data, such as Invoice.Lines in QuickBooks or Products.Variants in Shopify. Skyvia allows mapping nested object fields in import integrations.
You can use Lookup to import data to a complex structured object from an object with a different structure. In this case, Source Lookup compares the Lookup key with a field in another source object and returns all the matching records in array format. 
Then, you can map the fields of this array to the nested fields of the Target object.
For example, each invoice line refers to its invoice by invoice ID, and Lookup returns all the lines of the specific invoice. Example 1. Simple Lookup You import a list of contacts from a CSV file to Salesforce. You want to assign these contacts to existing Salesforce accounts. To do that, you need to specify the related account ID for each contact. You can\u2019t use Column mapping for the AccountId field, because the source file doesn\u2019t contain the Salesforce account IDs. But it contains company names corresponding to account names in Salesforce. Use Lookup to fetch the absent AccountId values by comparing account names in Salesforce with company names in the Source file. Before configuring your Lookup, prepare an import with CSV as Source and Salesforce as Target. Add an Import task and set the Source and Target objects. To configure the Lookup, do the following: On the Mapping Definition tab, select Target Lookup for the AccountId field. AccountId is a foreign key field which associates the Contact object with the Account object. For such fields, Skyvia sets the Lookup Object and Result Column values automatically. In other cases, select them manually. Select the Account.Name field as the Lookup Key Column . Choose Column mapping. Select the source file column with which the Lookup will compare the Lookup key. In our example, it is the Company field. Leave the Lookup options in the default state. Complete the mapping for the other fields and run the integration. After the run, you have new Salesforce contacts assigned to the existing accounts. Example 2. Lookup by Constant Let\u2019s say you want to insert the prices for products from a CSV file into the Salesforce Standard Price Book. When you import data to the Salesforce PricebookEntry object, you must map the price book ID. 
If you don\u2019t know the ID of the desired price book, you can use Lookup to retrieve it by its name. The Lookup compares the price book name in the Salesforce Pricebook2 object with the provided price book name and maps its ID to all imported records. Before you start configuring your Lookup, prepare an import with CSV as the Source and Salesforce as the Target. Add an Import task and set the Source and Target objects. On the Mapping Definition tab, select Target Lookup for the Pricebook2Id field. Pricebook2Id is a foreign key field associating the PricebookEntry object with the Pricebook2 object. For such fields, Skyvia sets the Lookup Object and Result Column values automatically. In other cases, select them manually. Select the Pricebook2.Name field as the Lookup Key Column . Choose Constant mapping. Specify the price book name with which Lookup will compare the Lookup key. Complete the mapping for the other fields and run the integration. After the run, you have new PricebookEntry records automatically assigned to the Salesforce Standard Price Book using Lookup. Example 3. Composite Lookup Key You want to update product prices that belong to the Salesforce Standard Price Book. You don\u2019t know the IDs of the records to update. You have the price book ID and the list of product names and prices. You can\u2019t identify the records to update by product name alone, because the same product may belong to multiple price books. But you can identify them by product name and price book ID. To perform such an update, you can use a Lookup with a composite key. This Lookup gets the PricebookEntry.Id values using two Lookup keys with separate conditions. Before configuring your Lookup, prepare an import with CSV as Source and Salesforce as Target. Add an Import task, and set the Source and Target objects. On the Mapping Definition tab, select Target Lookup for the Id field. Select PricebookEntry as the Lookup Object and Id as the Result Column . 
Select the PricebookEntry.Name field as the Lookup Key Column and choose Column mapping. Specify the column with which Lookup will compare the Lookup key. This column is Name in this example. Below, click + Add Lookup Key and select the Pricebook2Id field. Choose Constant mapping and specify the ID of the needed price book. After the run, you have the updated prices in the price book, based on the product name and price book ID. Example 4. Two-Level Lookup You import data from Zoho CRM to Salesforce. You have already imported accounts. Now you want to import contacts, preserving their relations with accounts as they were in Zoho CRM. The AccountId field in the Contact object stores the reference to the parent account in Salesforce. You can\u2019t use Column mapping for AccountId , because Salesforce accounts have different IDs than Zoho CRM accounts. You can use Lookup to fetch the account IDs by comparing account names in Salesforce and Zoho CRM. But you can\u2019t map the Lookup key by Column or Constant, because Zoho CRM Contacts doesn\u2019t store the account name; it stores the related account ID. Thus, you map the Lookup key by Lookup to retrieve the account name from the source accounts using their IDs. Such a Lookup compares the account IDs in the Contacts and Accounts objects in the Source and gets the account names. Then it compares the fetched names with account names in the Target and assigns their IDs to the AccountId field of the target contacts. Before configuring your Lookup, prepare an import with Zoho CRM as Source and Salesforce as Target. Add an Import task, and set the Source and Target objects. On the Source Definition tab, select the relation which will be used by Lookup. We use Contacts_Account for this example. On the Mapping Definition tab, select Target Lookup for the AccountId field. Select Account as the Lookup Object and Id as the Result Column . 
Choose Name as the Lookup Key Column and select Source Lookup mapping to get the corresponding account names from Zoho CRM Accounts . Select the Accounts object as the Lookup Object and Account Name as the Result Column . Select Id as the Lookup Key Column and choose Account in the bottommost drop-down list. After the run, you have new contacts assigned to the existing accounts. Example 5. Nested Objects Lookup You import a list of products to Shopify. You store the product list in the Products table and the product variants in the ProductVariants table. Each ProductVariants record refers to the product it belongs to by the ProductId column. In Shopify, products and variants are stored in a single Products object. The Variants field has a complex structure and stores a nested array of records.
To assign mapping to the fields of the nested object, you can use Lookup. Such a Lookup fetches the array of records for mapping to the nested objects of the target field. It compares the ProductVariants.ProductId field with Product.Id , retrieves all the matching records at once, and transforms the result into the required data structure. Before configuring your Lookup, prepare an import with a database as Source and Shopify as Target. Enable the Nested Objects checkbox in the integration. Add an Import task, and set dbo.Products as Source and Products as Target. On the Mapping Definition tab, select Source Lookup for the Variants field. Select ProductVariants as the Lookup Object . Choose ProductId as the Lookup Key Column and select Column mapping. Choose the Id field to compare with the Lookup key. Complete the nested fields mapping. After the run, you have a new record in Shopify Products with variants fetched from another table with a different structure." 
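The regular vs. cached Lookup trade-off described earlier can be sketched as follows. The query counter is a hypothetical stand-in for real API calls; none of this is Skyvia code, and the object and field names are illustrative.

```python
# A regular Lookup issues one query per source record; a cached Lookup
# reads the lookup object once and probes an in-memory cache instead.

class TargetObject:
    def __init__(self, rows):
        self.rows = rows
        self.queries = 0                    # hypothetical API-call counter

    def query(self, key_column, value):     # one API call per invocation
        self.queries += 1
        return [r for r in self.rows if r[key_column] == value]

    def query_all(self):                    # single bulk read
        self.queries += 1
        return list(self.rows)

def regular_lookup(source_values, target, key, result):
    return [next((r[result] for r in target.query(key, v)), None)
            for v in source_values]

def cached_lookup(source_values, target, key, result):
    cache = {r[key]: r[result] for r in target.query_all()}
    return [cache.get(v) for v in source_values]

target = TargetObject([{"Name": "Acme", "Id": "001A"}])
sources = ["Acme", "Acme", "Initech"]
regular = regular_lookup(sources, target, "Name", "Id")   # one query per record
queries_regular = target.queries
target.queries = 0
cached = cached_lookup(sources, target, "Name", "Id")     # a single bulk read
```

With 1 million source records, the regular path means 1 million queries while the cached path still needs only the initial read, which is exactly the saving the documentation describes.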
}, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/relation-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Relation Mapping. Relation mapping is used when you create an integration that loads several source objects (each in its own task) with relations between them. Please note that to use relation mapping, you need to import both related objects in the same integration (in separate tasks). Relations in Different Data Sources Relations associate objects with other objects. A relation is when one object references another object. In databases, relations are foreign keys between tables \u2014 when a field (or collection of fields) in one table uniquely identifies a row of another table. In cloud sources, relations are mostly similar to foreign keys. For example, in Salesforce, there is a relation between the Account and Contact objects. An Account can have zero, one, or multiple linked contacts. This relationship is actually determined by the AccountId field of the Contact object, which stores the ID value of the account the contact belongs to. We can call it a foreign key to the Account table by analogy with databases. When you import related database tables or cloud objects, you can specify the relation mapping simply by selecting the source relation to use \u2014 whether it is a database foreign key or a relationship between cloud objects. CSV files don\u2019t store any relationship definitions. However, when you import several CSV files in one integration, and they contain related data (to be exact, when a field of one imported CSV file uniquely identifies a row of another imported CSV file), you can use relation mapping and manually select the referenced CSV file and the corresponding fields, as described below. 
Skyvia fully supports self-referencing relations (when an object or table references itself), for example, the Parent relation in the Salesforce Account object. However, Skyvia does not support polymorphic relations, when the same foreign key can reference rows from different objects. For example, the Contact object in Dynamics 365 has the parentcustomerid field, which can reference either another contact object or an account object. Skyvia does not support such relations. When to Use Relation Mapping Relation mapping can be used when you import several source objects or files with related data to several related objects. It provides the easiest way to build relations between the imported target objects based on the relations between the source objects. This can be especially useful when you import data to a cloud application, and the detail object references the master object by its ID, since you cannot get this ID before actually inserting the master data. Relation mapping can be specified only for the foreign key fields of the target objects. You can specify relation mapping for a field of an imported target object that refers to another imported target object, when there is a relation between the corresponding source objects or files. For example, when you import data to Salesforce accounts and contacts from similar related objects of another CRM, you can use Relation mapping for the AccountId field of the Contact object and select the corresponding relation between the source objects or files. When performing the import, Skyvia imports the parent (Account) objects first and retrieves the ID values of the resulting accounts. When importing contacts, it automatically assigns the ID values of the imported target accounts corresponding to the source objects that are related to the contacts being imported. To see more examples of relation mapping, take a look at our tutorial Importing Related Customer, Order, and Product Data . 
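What Relation mapping automates for the Account/Contact case above can be sketched in Python: the master rows are inserted first, the IDs the target assigns to them are remembered, and each detail foreign key is rewritten to the new target ID. The names and ID format here are illustrative assumptions, not Skyvia API.

```python
import itertools

def import_accounts(source_accounts, target, counter):
    id_map = {}                                 # source name -> new target ID
    for acc in source_accounts:
        new_id = "001%04d" % next(counter)      # the target generates the real ID
        target.append({"Id": new_id, "Name": acc["Name"]})
        id_map[acc["Name"]] = new_id
    return id_map

def import_contacts(source_contacts, id_map, target):
    # Each contact's foreign key is resolved to the freshly assigned account ID.
    for c in source_contacts:
        target.append({"LastName": c["LastName"],
                       "AccountId": id_map[c["CompanyName"]]})

accounts_target, contacts_target = [], []
id_map = import_accounts([{"Name": "Acme"}], accounts_target, itertools.count(1))
import_contacts([{"LastName": "Lovelace", "CompanyName": "Acme"}],
                id_map, contacts_target)
```

The key point mirrored here is ordering: the detail task cannot know the master IDs until the master task has actually inserted its rows.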
How to Configure Relation Mapping First, you need to create a task that inserts data into the master target object. After this, when you create a task that inserts data into the detail target object, you can click the Column list and select Relation . After this, you need to do the following: For databases and cloud applications as source, simply select the required relation (foreign key) in the list. Check an example from our tutorial How to Import Tables from SQL Azure (mapping the AccountID column of the Opportunity object in Salesforce to a foreign key relation in the source SQL Azure database). In the case of databases and cloud applications as source, self-referencing relations can be selected in the same way as other relations. For CSV files as source, select the master CSV file in the Referenced Object drop-down list, then select the foreign key column in the Column drop-down list and the referenced key column of the master CSV file in the Referenced Column drop-down list. Here is an example of importing Salesforce Accounts and Contacts from CSV files. The files contain related data: the Contacts.CSV file contains the CompanyName column, which stores the name of the account the contact belongs to. The Accounts.CSV file stores account names in the Account Name column. We map the AccountID foreign key field of the target Contact object using this relation between our CSV files. We select our master CSV file \u2014 Accounts.CSV \u2014 in the Referenced Object drop-down list. Then, in the Column list, we select our foreign key column from the Contacts.CSV file \u2014 CompanyName . In the Referenced Column list, we select the corresponding column from the Accounts.CSV file \u2014 Account Name . You can use a self-referencing relation (one that references the same source file) when importing a CSV file by selecting the Reference itself checkbox. For example, we import a CSV file to the Salesforce Account object. Salesforce Account references itself via the ParentID field. 
Suppose the imported Account.CSV file has the ParentName column that contains the name of the parent account. In this case, we need to configure mapping for the target ParentID field in the following way: First, we select Relation mapping for the field. Then we select the Reference itself checkbox. In the Column drop-down list, we select the ParentName column. Finally, in the Referenced Column list, we select the Account Name column." }, { "url": "https://docs.skyvia.com/data-integration/common-package-features/mapping/zip-file-mapping.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Zip File Mapping. Zip File mapping is used to import binary data to a column of base64 type. This mapping is available only when you import data from a CSV file together with a zip file containing the binary files to import. For Zip File mapping, click the Column list and select Zip File . Then specify an expression returning the name of the file to take from the zip archive for the current data row. Thus, the simplest way to use Zip File mapping is to add a column with the corresponding file name to the imported CSV file and enter the column name in the box." }, { "url": "https://docs.skyvia.com/data-integration/control-flow/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Control Flow Overview. Control Flow is a perfect solution when you need to perform different data integration tasks in a specific order or depending on specific conditions. It allows you to perform pre- and post-integration actions and even set up automatic error processing logic within your integration. 
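The Zip File mapping described a little earlier can be sketched as: for each imported CSV row, an expression (here simply a CSV column) yields a file name, and that file's bytes are read from the uploaded zip archive and base64-encoded for the target field. This is illustrative only, not Skyvia code, and the file names are hypothetical.

```python
import base64
import io
import zipfile

def map_zip_file(zip_bytes, rows, file_name_column):
    """For each row, pull the named file from the archive and base64-encode it."""
    encoded = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for row in rows:
            data = archive.read(row[file_name_column])
            encoded.append(base64.b64encode(data).decode("ascii"))
    return encoded

# Hypothetical archive containing one attachment referenced by the CSV.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("logo.png", b"fake-image-bytes")

values = map_zip_file(buf.getvalue(), [{"FileName": "logo.png"}], "FileName")
```

This mirrors the documented simplest setup: a CSV column holds the file name, and the mapping resolves it to the file's contents.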
Convenient Designer Control Flow is configured in an easy-to-use, no-coding visual designer that makes configuring even complex scenarios simple and allows both professionals and business users to design their control flows. You design your control flow on a diagram which displays components and the branches that connect them. The execution goes from the Start component to the Stop component (from top to bottom), via the diagram branches and the components on them. When you create a diagram, it has two components \u2014 Start and Stop \u2014 and a branch connecting them. You can add more components to the diagram and place them on any branch between any two components. See more about setting up a control flow in the How to Design or Edit Control Flow topic. Control Flow Components Control Flow components can be divided into two groups: control blocks and tasks. Control blocks control the execution, creating branches and redirecting execution between them depending on conditions. The second group, tasks, do not affect the control flow execution directly, but they interact with data sources or set variables. They can run other integrations and data flows, perform other actions, etc. See more about specific Control Flow components and their settings in the Components topic. What You Can Achieve with Control Flow Running Integrations in a Specific Order One of the easiest scenarios that can be implemented with Control Flow is running several different integrations in a specific order, one after another. For example, you may need to load some data from SQL Server to Salesforce and then update some different data in SQL Server from Salesforce. Without Control Flow, you could only schedule the integrations to run one later than another, but this is inconvenient, because you cannot always know how much data must be loaded and how much time it will take. With Control Flow, you can easily run the next integration right after the current one finishes its work. 
Just add Execute Integration components to a branch one after another. Or you can add Data Flow components in a sequence and perform a sequence of Data Flows . Perform Pre- and Post-Integration Tasks Another scenario for Control Flow is when you need to do something before running the integration, after it, or both. For example, you load data to a database and need to create a database table for it in advance. Or you have loaded certain records from the source to the target and want to delete them in the source afterwards. In this case, you can use Action components to perform these pre- and post-integration tasks, and an Execute Integration or Data Flow component to perform the actual data loading. More Complex Scenarios Control Flow offers great flexibility and can be used for more complex cases than those mentioned. It allows you to add error processing logic for integration-level errors, direct execution into different branches depending on specified conditions, etc. You can set variables within the control flow directly or within its data flows, and then implement conditional logic and perform different actions depending on their values." }, { "url": "https://docs.skyvia.com/data-integration/control-flow/components.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Control Flow Components Components perform all the necessary actions in Control Flow - managing the flow itself, interacting with the data sources, modifying variables, etc. You can see the list of available components on the left of the Control Flow designer. Depending on their purpose, components are divided into two categories: Core components. These are control blocks - they manage the flow itself - split it into parallel branches, redirect it depending on conditions, stop it, etc. Task components perform the actual interaction with data sources and modify control flow variables.
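The conditional logic described above, where a variable is set during execution and later decides which branch runs, can be modeled in a few lines of Python. The variable and branch names here are illustrative, not Skyvia API.

```python
# Illustrative model of branching on a control flow variable, as an If
# control block does: evaluate a condition against the variables and
# run exactly one of the two branches.

def if_component(variables, condition, true_branch, false_branch):
    """Run true_branch if condition(variables) holds, else false_branch."""
    branch = true_branch if condition(variables) else false_branch
    return branch(variables)

# A data flow might have set ProcessedRows; the If block then decides
# whether post-processing is needed.
variables = {"ProcessedRows": 0}
result = if_component(
    variables,
    condition=lambda v: v["ProcessedRows"] > 0,
    true_branch=lambda v: "run post-processing",
    false_branch=lambda v: "skip post-processing",
)
```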
Start This component is the beginning of the control flow. There is always exactly one such component on a control flow diagram; you cannot delete it or add another one. It also does not have any settings except the name. If This component allows you to specify a condition in your control flow and perform different actions depending on whether this condition is true or not. For example, a data flow executed in the current control flow may assign the number of processed rows to some variable, and you may need to perform certain actions depending on whether any rows were successfully processed or not. You need to specify the Condition in the component settings. For this, you may click the button and use a convenient Expression editor . The condition should be specified using our expression syntax and must evaluate to a boolean value - True or False. You can use the control flow variables in the condition. The If component has two branches - True and False . It executes one of them depending on the result of the specified condition. Parallel This component allows you to perform different actions in parallel in your control flow. For example, you may run several integrations, data flows, or other actions in parallel. The Parallel component splits control flow execution into multiple parallel branches. By default, the component splits the current branch into two branches, but you can add more branches in its settings. Alternatively, you can add branches by dragging more components to a place directly under the Parallel component, from where its branches fork. Try Catch This component allows performing error processing in the control flow if a component fails or an exception is thrown. It has two branches: Try and Catch . The component tries to execute its Try branch. If an error happens or an exception is raised in its Try branch, the execution of this branch stops, and the Catch branch is executed instead.
In the settings of this component, you can specify a Variable , to which the error message is written if an error occurs in the Try branch. You can use this variable in the error processing logic that you set up in the Catch branch. Exception This component allows you to trigger an error in your Control Flow. If this component is triggered outside the Try branch of a Try Catch component, an exception stops control flow execution, and the control flow run is considered failed with the specified error message. If used inside a Try Catch component, it switches execution to the Catch branch. In the settings of this component, you can specify the Error Message . Stop This component stops control flow execution. There is one Stop component at the end of the Control Flow, added automatically, but you may add more Stop components to different branches. When this component is triggered, control flow execution stops. However, if this component is triggered inside a branch of a Parallel component, the other parallel branches are executed to the end. This component does not have any settings except the name. Action This component allows performing an arbitrary action against the specified connection. To configure this component, you need to select a Connection to perform an action against, then select an Action , configure it, and map its parameters if the action has them. Data Flow This component executes a data flow . To configure it, click its Open Data Flow link and design the data flow . After you finish configuring the data flow, click the Return to parent button in the title bar to return to the control flow. This component creates a data flow that you can only edit and run from within the current control flow. If you want to use a data flow that can be run separately or from other control flows, create a separate data flow and then use the Execute Integration component to run it from the control flow.
Settings such as parameters and variables , connections , and the result setting are shared between the control flow and all data flows created via its Data Flow components. You can configure these settings while editing the control flow diagram or any of its data flows. Set Variables This component is used to assign values to control flow variables . To configure this component, click the plus sign near Variables to add a variable. Then select a Variable to assign a value to. In the box below, specify the expression that calculates the value to assign. You can use constant values or other variables and parameters there. You may click the button and use the Expression editor to specify the expression. You may add and set as many variables as you need in this component. Execute Integration This component executes any other integration available in the workspace, including data flows and other control flows. To configure this component, you need to select the integration to execute and optionally map its parameters. Unmapped parameters will retain their values stored in this integration. You can open the selected integration for editing by clicking the button. It will be opened in a new tab. This component is considered failed only in case of integration-level errors, for example, when a connection cannot be opened. In this case, if used in the Try branch of a Try Catch component, it switches execution to the Catch branch. Otherwise, if not used in a Try Catch component, the entire control flow fails with an error." }, { "url": "https://docs.skyvia.com/data-integration/control-flow/how-to-design-control-flow.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Control Flow How to Design Control Flow In Skyvia, control flows are built on diagrams. A diagram is the place where you add your components to the control flow branches and rearrange them.
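The Try Catch behavior described for the components above (the error message is written to a variable, then the Catch branch runs) can be sketched as follows. This is a conceptual model only; the variable and function names are illustrative.

```python
# Illustrative model of Try Catch: attempt the Try branch; on failure,
# store the error message in the named variable and run the Catch branch.

def try_catch(variables, try_branch, catch_branch, error_variable):
    try:
        try_branch()
    except Exception as exc:
        variables[error_variable] = str(exc)  # message available to Catch logic
        catch_branch(variables)

def failing_integration():
    # Simulates an integration-level error, e.g. a connection cannot open.
    raise RuntimeError("connection cannot open")

variables = {}
notifications = []
try_catch(
    variables,
    try_branch=failing_integration,
    catch_branch=lambda v: notifications.append(f"alert: {v['ErrorMessage']}"),
    error_variable="ErrorMessage",
)
```

As in Skyvia, the error stops only the Try branch: the flow as a whole continues through the Catch logic instead of failing outright.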
We've developed the diagram canvas to be far larger than your computer screen and added helpful features so that you can navigate easily around your canvas and move components as you like. Working with the Control Flow Diagram To create a control flow, click +NEW in the top menu and select Control Flow under the Integration column. When the Control Flow editor opens, on the left side of the page you see a list of components, which you can use to build your control flow. On the diagram, you can see the first two components - Start and Stop, and a branch connecting them. When building a control flow, drag components one after another to the branches of the diagram and then configure the settings of each component in the details sidebar. The sidebar opens on the right when you click the component whose settings you want to configure. Adding Components to the Diagram To add a component, you need to drag it from the components list to a specific place on one of the diagram branches, before or after other components. When you start dragging a component, these places are displayed on the diagram as squares with a plus sign. You can drop the component onto one of such places. Core components are control blocks - they affect how execution goes, add more execution branches, redirect execution based on conditions, etc. Task components interact with data sources or set variables. You can add any of them to any suitable place on the branches. Component Name and Other Settings After a component is placed on a branch, you may configure its settings. They are configured on the Details sidebar on the right, which is displayed when you select the component on the diagram. Component settings are specific to each component, and you can read about these settings in the Components topic. The only setting they share in common is Name , which you can change for any component.
You can give any component a meaningful name, which will be displayed on the diagram, to make your control flow easier to understand. Moving Components on the Diagram If necessary, you can move any added and configured component to another place. Just start dragging it, and Skyvia will show all the allowed places where you can drop it. Note that if you move a control block component that adds branches, like If or Try Catch, it will be moved with its branches and all the components on these branches. Deleting Components To delete any component from the diagram, simply click it. You will then see the three-dotted button next to the component. Click it and select Delete from the menu. To select another component for deletion, perform the same steps. Note that if you delete a control block component that added branches to a control flow, you will also delete its branches and all other components on these branches. If you only want to temporarily disable the component for testing purposes, without deleting it and its settings, select Skip component instead of Delete. The component won't be executed but will stay on the diagram, and you will be able to re-enable it anytime. Working with Diagram Canvas As mentioned earlier, the diagram canvas is designed to be far larger than your computer screen, so we've added helpful features for you to be able to navigate easily around your canvas and move the diagram or components as you like. Navigating For some large control flows, you will need to move the diagram up and down or sideways to see all of its components. For this, you can use either panning or scrolling. To use panning, press and hold the left mouse button anywhere on the canvas and, holding it, move your mouse to move the canvas. The pan occurs in real time, which means you see how the work area moves as you drag it across the screen. The other way to navigate the diagram is to use the scroll wheel of a mouse.
Spinning the scroll wheel will move the diagram up and down on the canvas. Holding the Shift key and spinning the wheel will shift the canvas horizontally left and right. Zooming You can zoom in and out of the diagram canvas to adjust your view. Click the Zoom In icon to increase the zoom level or click the Zoom Out icon to decrease the zoom level. Note that the zoom uses the middle of the canvas as its point of reference, so combining panning and zooming is an effective way to move around the canvas. You can easily return to the original diagram size and location on the canvas by clicking the Default icon. Other Control Flow Settings Skyvia offers additional control flow settings, which you can manually configure in the toolbar on the left. Here you find such settings as Parameters , Variables , Connections , and Result . Adding Parameters and Variables to the Control Flow Parameters allow you to store values between control flow executions. Whenever necessary, you can view and edit their values or edit the parameter list. Variables are used to store values during control flow execution. Note that variables and parameters created in the Control Flow are also available in its data flows, created via the Data Flow components. You can set them within the data flows and then build logic depending on them, using control block components that depend on these variables. To manage parameters and variables, click Parameters or Variables respectively on the toolbar and create new parameters/variables as described in the Parameters and Variables topic. Creating Log Log is a table that you can fill with data in a data flow. To create a log, click Connections in the toolbar on the left. When the Connections sidebar opens on the right, click to add a log and adjust its settings. In the opened window, enter a name and schema, and save the changes.
Control flow logs are available in all its data flows created via the Data Flow component, so you can fill the logs within these data flows. Calculating Result The Result feature helps you specify how the numbers of success and error rows, counted by Row Count components, are calculated. To configure the result, click Result in the toolbar on the left. The Result sidebar opens on the right, and there you can specify how to calculate the number of rows. Usually, you can just sum up the necessary variables used in the Row Count components, but you may also specify more complex expressions if needed. For this, click the button and edit the expression in a convenient editor with code completion, validation, and result preview. You can read more about the Result feature here Saving and Naming the Control Flow After you have finished configuring your Control Flow, save it using the Save button. The first time you save a new Control Flow, you need to specify its name and the workspace folder where it is stored. The default control flow name is Untitled . Please note that if you omit this step, the control flow name will remain Untitled in the list of created integrations. You may either create a control flow and go to its Overview, or create it and stay in the edit mode. On the Overview page you can change the name of an existing control flow, edit its parameter values and schedule, run it, and start editing it again. Scheduling the Control Flow You can schedule your control flow for automatic execution. It might be very useful if you want a control flow to run periodically on certain days and at a particular time or if you want to delay control flow execution to a later time. To open the schedule editor for a control flow, click the **![ ](/images/icon-schedule-settings.png)** button near Schedule on the control flow Overview. See the Scheduling integrations section for more details."
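The typical Result expression just sums the variables that Row Count components filled in. A minimal sketch of that calculation, with hypothetical variable names:

```python
# Illustrative model of the Result calculation: sum up the counters
# written by Row Count components. Variable names are hypothetical.

def calculate_result(variables):
    """Sum success and error counters, as a simple Result expression might."""
    success = sum(v for k, v in variables.items() if k.endswith("Success"))
    errors = sum(v for k, v in variables.items() if k.endswith("Errors"))
    return success, errors

variables = {
    "ContactsSuccess": 120, "ContactsErrors": 3,
    "LeadsSuccess": 45, "LeadsErrors": 0,
}
success, errors = calculate_result(variables)
```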
}, { "url": "https://docs.skyvia.com/data-integration/data-flow/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Data Flow Overview Data flow allows you to integrate multiple data sources and enables advanced data transformations while loading data. It can be used in scenarios when data replication or import is not enough: when you need to load data into multiple sources at once; when you need complex, multistage transformations, like lookup by an expression; when you need to obtain data from one data source, enrich it with data from another one, and finally load it into a third one. In Skyvia, Data Flow processes records in batches. A batch of records is read from a source, then moved to the next component via a link and processed there while the source obtains the next batch, then passed to the next component, and so on. Batch size is not constant; data flow components may split a batch into several smaller batches sent to different outputs. Easy-to-Use Designer Despite being intended for complex integration scenarios, Data Flow is an easy-to-use no-coding tool that can be used both by business users and IT professionals. Skyvia allows you to build and edit data flows on diagrams. Basically, at the beginning, the diagram is a blank canvas, which you fill with components to arrange the entire process of data movement visually. You drag components onto the diagram, connect them with links , make data transformations in between, and pass changed or split data to the final destination. You can easily copy and paste, edit, or delete components on the diagram anytime you need. Types of Data Flow Components Data Flow components can be divided into three categories: sources , destinations , and transformation components . Sources . A source is a component that represents the action of extracting data from external data sources and bringing it into the flow.
Skyvia supports data extraction from a variety of source connectors, among them databases and cloud apps, including Salesforce, SQL Server, etc. Transformations . Transformation components modify the data flowing through them, split flows of records into multiple outputs, or modify data flow variables. Destinations . Destinations accept data and store it. Skyvia supports loading data to many target connectors, among which are various cloud apps and databases, as well as files, downloadable manually or placed to file storage services. You may also use outputs of a target component to process and send the successfully stored or failed records further to another destination, applying additional modifications if needed. Inputs, Outputs and Links Components have inputs and outputs , which are data entry and exit points. You can connect outputs of one component to inputs of other components, creating a link . Links show the direction of data movement between components and are displayed as arrows on the diagram. A component usually has one input (except sources - a source component has only an output) and one or more outputs. Outputs can be of two kinds - regular outputs and error outputs. Records that have been successfully processed go to regular outputs. A component may have one or more regular outputs. Records that have failed to be processed go to the error output with an error message added. A component can have either one error output or no error outputs, if the component cannot fail to process a record at all. Links going from regular outputs (regular links) are displayed as blue arrows, and links going from error outputs are displayed as red arrows. Actions Action is an operation performed by a component to obtain data from the data source or store it. The following components work with data sources: sources , targets , and lookups (a transformation component). For these components, you need to configure actions .
Each kind of data flow component has its own set of available actions, and the list of these actions also depends on the data source. For example, the Target Connector component usually has the Insert, Update, Delete, and Execute Command actions. If a Salesforce connection is selected, the Upsert action is also available. Integration Variables and Parameters Integration parameters allow storing certain values between data flow runs and using them in the next runs. For example, your data flow loads records from a data source with autogenerated numeric Ids, and new records are assigned Ids greater than any existing Ids. You have loaded all the records existing at the moment and want the next run to load only new records. In this case, you may store the maximum Id of the loaded records in a parameter with a Value component and next time load only records with greater Ids. For each integration parameter, a variable is automatically created. This variable is assigned the value from the parameter when integration execution starts. When integration execution finishes, the variable value is automatically saved to the parameter. You can also create additional variables, whose values are not stored in parameters between integration runs and are available only within the integration run. You can use them, for example, to count processed records and output the results to the data flow log. Data Flow Tutorials You can read the following tutorial, describing the creation of a simple data flow and explaining it in detail: Loading New Records from Autoincrement Table . This tutorial shows how to load new records from an autoincrement table. Another tutorial, Updating Existing Records and Inserting New Ones , describes how to check whether a record is already present in the target table and avoid creating duplicates when loading data in a data flow.
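The incremental-loading pattern described above (keep the maximum loaded Id in a parameter, fetch only newer records next run) can be sketched like this. The record source is a plain list here; in Skyvia, the parameter value would be persisted between runs automatically.

```python
# Illustrative model of incremental loading with a stored max Id.
# The parameter name "MaxLoadedId" is hypothetical.

def load_new_records(records, parameters):
    """Return records newer than the stored max Id, then update the parameter."""
    last_id = parameters.get("MaxLoadedId", 0)
    new_records = [r for r in records if r["Id"] > last_id]
    if new_records:
        # What a Value component does: keep the max Id for the next run.
        parameters["MaxLoadedId"] = max(r["Id"] for r in new_records)
    return new_records

parameters = {"MaxLoadedId": 2}
records = [{"Id": 1}, {"Id": 2}, {"Id": 3}, {"Id": 4}]
first_run = load_new_records(records, parameters)   # picks up Ids 3 and 4
records.append({"Id": 5})
second_run = load_new_records(records, parameters)  # picks up only Id 5
```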
}, { "url": "https://docs.skyvia.com/data-integration/data-flow/components.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Data Flow Components Components perform all the data querying, processing, and storing in Data Flow. You can see the list of available components on the left of the data flow designer. Depending on what the components do, they are divided into three categories: sources, destinations, and transformations. You can add components to your data flow by dragging them from the components list to the diagram. If the component has an input, connect the output of the necessary component to it. After this, you can click the component and configure its details in a sidebar on the right. Each kind of component has its own settings, described below. Sources Source Source is a component that extracts data from a data source and starts the data flow. Source is the very first component you should add to the diagram. After you have added Source, click it and configure its details - select a connection to get data from, select and configure the action to perform in order to obtain data, and map its parameters if needed. You can also give a Name to the component, displayed on the diagram. The Source component has one regular output and no input. Its output returns the data obtained by the selected action. CSV Source CSV Source is a component that extracts data from a CSV file. You can either manually upload the CSV file when creating or editing a package or load it from a file storage service when the data flow runs. CSV File If you want to upload CSV manually, just select the CSV Source component and drag the CSV file from an explorer window to your browser, onto the drop your file here panel under Source CSV, or click browse and open the necessary CSV file. Whenever necessary, you can edit the data flow and replace the uploaded CSV file with another one.
If you want to load your file from a file storage service, under CSV Source Type , select From storage service . Then select or create a file storage connection . Finally, in CSV File Path , select the CSV file to use. Other Options Optionally, you can click more options and specify CSV file options . Note that the Locale and Code page options are not available in the CSV Source component. You can also use the Trim Whitespace setting to trim leading and trailing whitespace from the data in CSV fields. Columns After you have specified the source file, under Columns , check the number of detected columns. If this number does not correspond to the number of columns in your file, make sure that the CSV options are specified correctly and that you have selected the correct CSV file. For example, if only one column is detected, it usually means that an incorrect CSV Separator is specified. You can also review column names and rename columns by clicking Edit names and editing the column names. The modified names will be used in the data flow scope. Custom names are assigned to CSV file columns in the same order as in the CSV file, i.e. the first column name in the list is assigned to the first CSV file column, and so on. Customizing column names can be useful if the CSV file does not include a header row with column names at all. Destinations Target Target is a component that stores input records in a data source. It can also write data to logs . To configure Target, select a connection to load data to, select and configure the action to perform in order to store the input data, and map its parameters if needed. You can also give a Name to the component, displayed on the diagram. If batch data loading is supported for the connector used in the selected connection, the Target component loads data into the data source in batches by default. You may change this behavior by setting the Single Row parameter to True . The Target component has one input.
It also has one regular output, and records successfully loaded to the data source are directed to this regular output. For some connectors, actions of the Target component may also obtain values of columns autogenerated in the data source, for example, autogenerated IDs, and add the corresponding columns to the scope of the regular output. The Target component also has an error output, and records that failed to be loaded to the data source are directed to this error output. The Target component adds a column with the error message to the scope of this output. CSV Target CSV Target is a component that writes data to a CSV file. CSV files can then be downloaded manually or automatically placed to a file storage service. To configure CSV Target, first select the CSV Target Type - whether you want to download result files manually or place them to a storage service. If you want to upload the file to a storage service, you also need to select or create a file storage Connection and select a Folder on the file storage service to place the result file into. Additionally, you need to configure the result file naming. Select the File Name Type - whether you want to specify the file name as a constant or as an expression. Enter the corresponding Name if you need a constant, or use the Expression Editor to create an expression that makes your file name unique each time. CSV Target can produce a CSV file with or without a header row with column names. This is determined by the Include header setting. You can either let CSV Target automatically determine the result file columns based on its input columns or define the columns yourself. If you want to define the columns yourself, under CSV Columns , select Explicit columns . Then click the Edit link. In the CSV Columns dialog box, enter the list of column names. Each column name should be on a new line. After this, you need to map the action parameters corresponding to the specified columns.
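What CSV Target produces, a header row (when Include header is on), explicit columns, and an expression-based file name, can be sketched with the standard library. The column and file names here are illustrative assumptions.

```python
# Illustrative model of a CSV Target: explicit columns, optional header,
# and an expression-style unique file name.
import csv
import io
from datetime import datetime

def write_csv(records, columns, include_header=True):
    """Write records to CSV text, keeping only the explicit columns."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=columns, extrasaction="ignore")
    if include_header:
        writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

# An expression-based file name, unique per run (e.g. "contacts_2024-02-07.csv").
file_name = f"contacts_{datetime.now():%Y-%m-%d}.csv"

output = write_csv(
    [{"Name": "Acme", "Phone": "555-0100", "Internal": "x"}],
    columns=["Name", "Phone"],  # explicit columns: extra input fields dropped
)
```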
JSON Target The JSON Target component is used to save output data to a JSON file. You always receive an array as a result. You can download the JSON file to your PC or upload it to file storage. To automatically upload JSON to file storage after running an integration, choose To Storage Service , and select your storage connection and the folder to store your JSON in. If you do not have an established connection with your storage service yet, visit the [Creating Connection](https://docs.skyvia.com/connections/index.html#creating-connections) page to learn how to create a new connection. The JSON file name can have a constant value or can be generated based on an expression. To do this, select Expression as your File Name Type and enter your expression into the File Name field. You can either let JSON Target automatically determine the JSON properties based on the input or define the properties yourself by choosing Explicit properties and modifying the Output Schema . Transformations Transformation components either transform the input data, i.e. add new columns to the input records, or modify variable values. They may also split input records into multiple outputs. Split Split is a component that passes all the input records to each of its outputs. It has one input and can have any number of regular outputs, which are exact copies of the same input. It does not need any configuration: you just drag it from the component list to the diagram and connect its input and outputs to the corresponding components. Conditional Split Conditional Split is a component that splits records into different outputs depending on the specified conditions. To configure Conditional Split, you need to create one or more conditional outputs . You can also give a Name to the component, displayed on the diagram. To create a conditional output, click next to the Conditional Outputs header and then specify the output name and condition.
For the condition, you need to enter an expression using our expression syntax . We recommend using the Expression Editor when you deal with complex condition logic. Note that the expression must evaluate to a boolean value. If it returns a value of another type, the record is sent to the error output. The conditions are evaluated for each record in the same order as the corresponding outputs are listed in the list of Conditional Outputs . A record is sent to the first output whose condition evaluates to True, and processing of this record stops. The record is not checked anymore for compliance with the conditions of the following conditional outputs, and it doesn't go to those outputs even if their conditions would be True for it as well. If a record does not match any of the conditions specified in the conditional outputs, it goes to the Default output, which is always available for a Conditional Split component. In addition to regular outputs, the Conditional Split component has an error output. It sends rows to this output in two cases: when expression evaluation fails with an error, and when the expression returns a result of a type other than boolean. When you connect a regular output of the Conditional Split component to the input of another component, you will be asked to select the output name in order to determine which output to connect. Bufferizer Bufferizer is a component that caches input records and outputs them in batches of the specified size. To configure this component, you only need to specify the Row Count parameter, which determines the size of the output batches. You can place this component before a Target component in order to send batches of a fixed size to it. This allows you to control the size of the batches actually sent by the Target component to the data source. By default, Source components query and output data in batches of 2000 records.
However, while processing data, Conditional Split and Lookup components may split incoming batches into smaller ones, and thus, the size of batches reaching Target components without a Bufferizer can vary. Controlling batch size is useful both when you need to increase the size of batches sent to the data source at once, in order to load more records per API call and improve API call efficiency, and when you need to decrease the batch size, if the target has some additional custom limitation or if smaller batches provide better performance. Note that each data source has its own API batch size limitations, and sometimes different objects have different limitations. Some of them don't support sending records in batches at all. Skyvia cannot exceed data source API limits. If the maximum batch size allowed by the target data source and table is less than the specified Batch Size, the buffered records are automatically split into multiple batches by the Target component. Row Count Row Count is a component that counts the input rows and writes the result to a data flow variable. It passes its input records to its output without any modification. To configure Row Count, you just need to select a variable to put the number of rows into. The variable must have a numeric type. You can also give a Name to the component, displayed on the diagram. Lookup Lookup is a component that matches input records with records from another data source and adds columns of the matched records to the scope. To configure Lookup, you need to specify Behavior , Scope , and Property , and select a Connection to get data from. Afterwards, select and configure the action to perform in order to obtain data and map its parameters if needed. You can also give a Name to the component, displayed on the diagram. Behavior defines what property types and values you get in the Lookup component's output.
There are five Behavior types:

- FirstOrDefault — if there are multiple matches, lookup takes the first value. If there are no matches, property values are set to Null.
- First — if there are multiple matches, lookup takes the first value. If there are no matches, input properties are sent to the error output.
- SingleOrDefault — if there is a single match, lookup takes it and sends it to the output. If there are multiple matches, input properties are sent to the error output. If there are no matches, property values are set to Null.
- Single — if there is a single match, lookup takes it and sends it to the output. If there are multiple matches or no matches, input properties are sent to the error output.
- Array — if there are matches, lookup takes all the values and adds them to the array type property in the output. If there are no matches, lookup returns an empty array.

If you want to create a new nested type property that will contain the lookup results, define its name in the Property field. Note that a Property value is required if the lookup Behavior is set to Array. If you do not want to create a new nested type property but want to use an existing one instead, enter its name in Scope, and it will be extended with the lookup results. Extend An Extend component adds calculated columns to input records. It allows you to add one or more columns whose values are calculated based on the input column values and data flow variables. To extend a specific nested type property, select it from the Scope dropdown. To configure Extend, you need to set up its Output Schema using the Mapping Editor. Create parameters, which correspond to the result columns, and configure their mapping. The Extend component has one input and one regular Default output for successfully processed rows with calculated columns added.
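In spirit, an Extend step evaluates an expression per record and appends the result as a new column, leaving the input columns untouched. A minimal Python sketch (the column names are hypothetical, not Skyvia API):

```python
from typing import Callable, Dict, List

Record = Dict[str, object]


def extend(records: List[Record],
           new_columns: Dict[str, Callable[[Record], object]]) -> List[Record]:
    """Add calculated columns to each input record; existing columns are kept."""
    out = []
    for rec in records:
        extended = dict(rec)                # copy, never mutate the input
        for name, expr in new_columns.items():
            extended[name] = expr(rec)      # evaluate the expression per record
        out.append(extended)
    return out


rows = [{"name": "pen", "price": 2.0}, {"name": "book", "price": 10.0}]
result = extend(rows, {"double_price": lambda r: r["price"] * 2})
print(result[0])  # {'name': 'pen', 'price': 2.0, 'double_price': 4.0}
```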
It also has an Error output, to which it passes records for which one of the expressions failed with an error, adding an Error column with the error message to them. Unwind The Unwind component is used to deconstruct a nested type property into separate properties for each item in the nested type (for each element if the nested type is an array, and for each property if the nested type is an object). It uses Scope to identify which nested type property to deconstruct. See an example of using the Unwind component. Value A Value component obtains a value calculated based on the values from the input records and assigns it to the specified variable. It can select the min, max, first, or last value from all the processed records. You can use the Value component, for example, to store the maximum value of record creation time in a variable and query only records with a creation time greater than this value during the next data flow execution. Thus, every time the data flow starts, it processes only new records. To configure the Value component, enter an expression using our expression syntax into the Value box. In the simplest case, the expression is just the name of an input column. To create complex expressions, use the Expression Editor. Then select the Value type — min, max, first, or last. Finally, specify a Variable to store the value in. You can also provide a Name for the component, which is displayed on the diagram. The Value component can update the variable either after processing each batch or when the data flow completes. The latter is useful if you want to use the variable in other places of the data flow before changing its value. If you want to change the variable value only when the data flow completes, select the Value Update On Data Flow Completion checkbox. The Value component has one input and one regular Default output for successfully processed rows. It sends successfully processed input rows to its output unchanged.
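The incremental-load pattern that the Value component enables can be sketched roughly in Python (the field name and the persisted variable are made up for illustration; in Skyvia, the value lives in a data flow variable between runs):

```python
from typing import Dict, List

# Stand-in for a data flow variable persisted between runs.
last_created_at = "2024-01-01T00:00:00Z"


def run_flow(source_rows: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Process only rows newer than the stored high-water mark, then update it."""
    global last_created_at
    new_rows = [r for r in source_rows if r["created_at"] > last_created_at]
    if new_rows:
        # A Value component with type "max": keep the maximum created_at seen.
        last_created_at = max(r["created_at"] for r in new_rows)
    return new_rows


rows = [
    {"id": "1", "created_at": "2024-03-01T09:00:00Z"},
    {"id": "2", "created_at": "2024-03-02T09:00:00Z"},
]
print(len(run_flow(rows)))  # 2  (both rows are new)
print(len(run_flow(rows)))  # 0  (high-water mark advanced, nothing new)
```

ISO-8601 timestamps compare correctly as strings, which keeps the sketch short; a real flow would compare typed date values.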
It also has an Error output, to which it passes records for which the expression failed with an error, adding an Error column with the error message to them. Actions Components that interact with data sources perform actions. An action is an operation performed by a component to obtain data from the data source or store data in it. The following components perform actions: Source, Lookup, and Target. Action parameters connect the component input and variables with the action. They are mapped to input columns and data flow variables using expressions in the Mapping Editor. Their values are calculated for each input row in a batch, and then the action is executed. Some actions are executed for each row separately, and some are executed in batches, for whole batches of input rows. For more information about actions, their configuration, and mapping their parameters, see Actions. There you can also find the list of actions common for most of the supported connectors. Components Components are the elements of Data Flow that enable you to query, process, and store data and apply custom logic. Arrange them according to your preference to build your custom ETL pipeline. Source Components Source components get data from the data source so you can use this data later in your data flow. There are two types of source components: Source and CSV Source. Choose one depending on where you are getting data from.

- Source — Gets data from a database or cloud application and adds it to the data flow scope.
- CSV Source — Gets data from a CSV file and adds it to the data flow scope. You can use a local CSV file or one from a file storage service.

Target Components Target components are used to modify data in a data source and create logs if needed.
There are three types of target components: Target, CSV Target, and JSON Target. To modify data in a data source, target components use actions. There are custom actions specific to certain connections, and four actions available for each connection by default: Insert, Update, Delete, and Execute Command.

- Target — Uses a connection to choose a data source and modifies its data based on the selected actions.
- CSV Target — Inserts data into a CSV file that can be stored locally or on a storage service.
- JSON Target — Inserts data into a JSON file that can be stored locally or on a storage service.

Transformation Components Transformation components are in charge of data transformation and custom data flow logic. With their help, you can create custom data pipelines and apply data transformations before uploading data to the target destination.

- Split — Divides the data flow into multiple branches and allows actions to be executed in parallel.
- Conditional Split — Enables you to route records based on conditions.
- Bufferizer — Sets a specific batch size.
- Row Count — Counts the number of input rows and writes the result to a variable.
- Lookup — Matches input records with records from another data source and adds columns of the matched records to the scope.
- Extend — Receives data from the output of the previous component and modifies it. You can use it to add calculated fields, change a field's type, modify nested objects, and more.
- Unwind — Deconstructs nested type properties.

CSV Source CSV Source is a component that extracts data from a CSV file. You can either manually upload the CSV file when creating or editing an integration, or load it from a file storage service when the data flow runs.
CSV File If you want to upload the CSV file manually, select the CSV Source component and drag the CSV file from an explorer window to the drop your file here panel under Source CSV, or click browse and open the necessary CSV file. Whenever necessary, you can edit the data flow and replace the uploaded CSV file with another one. If you want to load your file from a file storage service, under CSV Source Type, select From storage service. Choose or create a file storage connection and set the File Mode. File Mode File Mode determines how Skyvia gets the file from the storage service. Regular Use this mode to specify the path to the file manually. Select the file in the File Path drop-down list. Mask Use this mode to add a date/time string to a file name. It allows you to avoid manually selecting files and overwriting old files each time you run a data flow. When you use a mask, Skyvia replaces it with the current date in the selected time zone and searches for the file automatically. The following storages support this mode: Dropbox, Amazon S3, FTP, SFTP, and Azure File Storage. To use a mask, perform the following steps:

1. Select the Folder Path, which leads to the needed file.
2. Specify the File Name.
3. Add a date/time format string to the file name, enclosing it in braces {}. Use the .NET Framework [custom date and time format](https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings) syntax for the format string.
4. Set the Time Zone to transform your date/time string.
5. Click Check file and load columns.

For example, if you use the mask Contact_{MM_dd_yyyy}.csv on March 19, 2024, Skyvia looks for the file named Contact_03_19_2024.csv. Expression Use this mode to generate file names via custom expressions. It allows you to use the available operators, functions, integration variables, and parameters to build your file name template with the Expression Editor.
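As a rough illustration of the Mask substitution described above: Skyvia uses .NET format strings, and the Python sketch below maps only a few of those tokens to strftime directives to reproduce the documented example. It is an approximation, not Skyvia's implementation:

```python
import re
from datetime import datetime

# Minimal mapping from .NET custom date format tokens to strftime directives.
NET_TO_STRFTIME = {"yyyy": "%Y", "MM": "%m", "dd": "%d", "HH": "%H", "mm": "%M"}


def resolve_mask(mask: str, now: datetime) -> str:
    """Replace each {...} section in the file mask with the formatted date/time."""
    def expand(match: re.Match) -> str:
        fmt = match.group(1)
        for net, py in NET_TO_STRFTIME.items():
            fmt = fmt.replace(net, py)
        return now.strftime(fmt)

    return re.sub(r"\{([^}]+)\}", expand, mask)


print(resolve_mask("Contact_{MM_dd_yyyy}.csv", datetime(2024, 3, 19)))
# Contact_03_19_2024.csv
```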
The Expression mode is supported by the same storages: Dropbox, Amazon S3, FTP, SFTP, and Azure File Storage. To generate your file name template, do the following:

1. Select the Folder Path, which leads to the needed file.
2. Build your expression using our Expression Syntax.
3. Click Check file and load columns.

Other Options Optionally, you can click more options and specify CSV file options. Note that the Locale and Code page options are not available in the CSV Source component. You can also use the Trim Whitespace setting to trim leading and trailing whitespace from the data in CSV fields. Columns After you have specified the source file, check the number of detected columns under Columns. If this number does not correspond to the number of columns in your file, make sure that the CSV options are specified correctly and that you selected the correct CSV file. For example, if only one column is detected, it usually means that an incorrect CSV Separator is specified. You can also review and rename columns by clicking Edit names and editing the column names. The modified names are used in the data flow scope. Custom names are assigned to CSV file columns in the same order as in the CSV file, i.e. the first column name in the list is assigned to the first CSV file column, and so on. Customizing column names can be useful if the CSV file does not include a header row with column names at all. CSV Target CSV Target is a component that writes data to a CSV file. CSV files can then be downloaded manually or automatically placed to a file storage service. To configure CSV Target, you first need to select the CSV Target Type — whether you want to download the result files manually or place them to a storage service.
If you want to upload the file to a storage service, you also need to select or create a file storage Connection and select a Folder on the file storage service to place the result file into. Additionally, you need to configure the result file naming. Select the File Name Type: whether you want to specify the file name as a constant or as an expression. Enter the corresponding Name if you need a constant, or use the Expression Editor to create an expression, for example, to make the file name unique each time. CSV Target can produce a CSV file with or without a header row with column names. This is determined by the Include header setting. You can either let CSV Target automatically determine the result file columns based on its input columns or define the columns yourself. If you want to define columns yourself, under CSV Columns, select Explicit columns. Then click the Edit link. In the CSV Columns dialog box, enter the list of column names. Each column name should be on a new line. After this, you need to map the action parameters corresponding to the specified columns. JSON Target The JSON Target component is used to save output data to a JSON file. You always receive an array as a result. You can download the JSON file to your PC or upload it to a file storage. To automatically upload the JSON file to the file storage after running an integration, choose To Storage Service, and select your storage connection and the folder to store your JSON in. If you do not have an established connection to your storage service yet, visit the [Creating Connection](https://docs.skyvia.com/connections/index.html#creating-connections) page to learn how to create a new connection. The JSON file name can have a constant value or can be generated based on an expression. To do the latter, select Expression as your File Name Type and enter your expression into the File Name field. You can either let JSON Target automatically determine the JSON properties based on the input or define the properties yourself by choosing Explicit properties and modifying the Output Schema. Source Source is a component that extracts data from a data source and starts the data flow. Source is the very first component you should add to the diagram. After you have added Source, click it and configure its details — select a connection to get data from, select and configure the action to perform in order to obtain data, and map its parameters if needed. You can also give a Name to the component, displayed on the diagram. The Source component has one regular output and no input. Its output returns the data obtained by the selected action. Split Split is a component that passes all the input records to each of its outputs. It has one input and can have any number of regular outputs, which are exact copies of the same input. It does not need any configuration; you just drag it from the component list to the diagram and connect its input and outputs to the corresponding components. Target Target is a component that stores input records in a data source. It can also write data to logs. To configure Target, select a connection to load data to, select and configure the action to perform in order to store the input data, and map its parameters if needed. You can also give a Name to the component, displayed on the diagram. If batch data loading is supported for the connector used in the selected connection, the Target component loads data into the data source in batches by default. You may change this behavior by setting the Single Row parameter to True. The Target component has one input. It also has one regular output, and records successfully loaded to the data source are directed to this regular output. For some connectors, actions of the Target component may also obtain the values of columns autogenerated in the data source, for example, autogenerated IDs, and add the corresponding columns to the scope of the regular output. The Target component also has an error output, and records that failed to be loaded to the data source are directed to this error output. The Target component adds a column with the error message to the scope of this output.
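To illustrate how a Target's regular and error outputs partition the input, here is a schematic Python sketch. load_one is a stand-in for the real connector call and not a Skyvia API; the autogenerated Id and the Error column mirror the behavior described above:

```python
from typing import Callable, Dict, List, Tuple

Record = Dict[str, object]


def run_target(records: List[Record],
               load_one: Callable[[Record], object]) -> Tuple[List[Record], List[Record]]:
    """Try to store each record; route successes and failures to separate outputs."""
    regular_output: List[Record] = []
    error_output: List[Record] = []
    for rec in records:
        try:
            generated_id = load_one(rec)   # e.g. an autogenerated ID from the source
            regular_output.append({**rec, "Id": generated_id})
        except Exception as exc:
            # Failed records go to the error output with an Error column added.
            error_output.append({**rec, "Error": str(exc)})
    return regular_output, error_output


def fake_load(rec: Record) -> int:
    """Hypothetical connector call: rejects records without a Name."""
    if not rec.get("Name"):
        raise ValueError("Name is required")
    return 1


ok, failed = run_target([{"Name": "Acme"}, {"Name": ""}], fake_load)
print(len(ok), len(failed))  # 1 1
```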
How to Build or Edit Data Flow Skyvia data flow allows you to integrate multiple data sources and enables advanced data transformations on the way to the final destination. There are several types of data flow components — source, target, and transformation components. Each of these components can be considered a separate block with its own peculiarities and configuration that you need to arrange on the diagram. In Skyvia, flows are built on diagrams. A diagram is the place where you visually add and arrange your components the way you want, connect components with links, and configure the entire data flow scope. We've developed the diagram canvas to be far larger than your computer screen and added helpful features so that you can easily navigate around your canvas and move components as you like. Working with the Data Flow Diagram To build a data flow, click Create New in the top menu and select Data Flow in the Integration column. When the Data Flow editor opens, on the left side of the page you see a list of components, which you can use to build your flow. When building a flow, drag components one after another to the diagram and configure the settings of each component separately in the details sidebar, which appears on the right as soon as you click the component you want to configure. After you have finished configuring the first component, add the second component and connect the configured components with a link, which is displayed on the diagram as an arrow and shows the direction of data movement between components. That is a simple example of a data flow structure without data transformations. Additionally, you can use advanced transformations (Extend, Lookup, Conditional Split, etc.) to make changes to your data, for example, deleting columns. You can read more about data transformation types in the Components topic.
Please note that you can easily edit, replace, or delete one or several components on the diagram whenever necessary. Adding Components to the Diagram Adding Source Source is the very first component in the data flow. Each data flow starts with this component. It makes data from various external data sources available to other components within the flow. To work with Source, drag it from the components list to the diagram canvas, click it, and, after the Source details sidebar appears on the right, configure the source settings. First, you need to select a connection you want to use from the drop-down list. If you haven't created the source connection yet, click +New connection and specify the connection parameters on the opened Connection page. Check the Connections or Connectors sections for more details. Second, when the needed connection is selected, choose an action you want to perform in order to obtain data (Execute Command, Execute Query, etc.). As a third step, optionally specify additional action parameters by clicking next to Parameters and opening the Mapping Editor. You can read more about action parameters in the Actions topic. After you have configured a component, click the Schema tab on the right to see a schema with the columns added to the data flow. The schema is displayed as a list of added columns and their types (string, decimal, boolean, etc.). Adding Transformations Transformations are components which help you modify data halfway between source and target. They include Lookup, Extend, Split, Conditional Split, Row Count, etc. You can use one, two, or multiple transformations in one data flow; how many of them to use depends strictly on your business needs. In our example, first, we add the Extend component. We have a task to find out what sum of money we will receive for each item of goods if we double the price. So, we take our original table; it contains the fields — product.name and product.price.
Using Extend, we add a new field, where we multiply our price by 2 — NewColumn price*2. As a result, in the final table we receive three columns — the product name, its original price, and the column with the doubled price. Afterward, as a second transformation step, we add the Conditional Split component. With its help, we split the data received from the Extend component into two outputs — one output is the default one, the other is a conditional output with the condition price is null, i.e. all products with an empty price (those that are given away for free) are passed to this output. Components can be placed anywhere you want on your diagram. You simply drag them around and connect them to each other to pass data. Adding Target Target is a component of the data flow which writes data from the data flow to a final destination (cloud app/database). Please note that a data flow can contain multiple targets to load data to. As in the case of Source, to work with Target, drag it from the components list to the diagram canvas, click it, and, after the Target details sidebar appears on the right, configure the target settings. The first thing you need to know is that the Target component may support batch data loading. If the connector you selected as a target supports batch insertion, batch data loading is used in Skyvia by default. However, if you don't want to load data in batches for a certain target component, you can set its Single Row parameter to True. In this case, records are sent to the target one at a time instead of in batches. Second, select a connector you want to load data to. If you haven't created the target connection yet, click +New connection and specify the connection parameters on the opened Connection page. Read more about it in the Connections or Connectors sections. Third, select an action you want to be executed.
Depending on the target you want to write data to, you can perform different types of actions, including Insert, Upsert, Update, Delete, and Execute Command. The available actions depend entirely on the cloud app or database you have selected as a target. After you have selected an action (for example, Insert), choose the table you want to insert data into from the Table drop-down list. Optionally, you may also configure the Returning feature, which is available for the Insert action only. It returns the ids (or any other fields) of the records inserted into the target back to a field (or fields) of the corresponding source record. Finally, you may optionally specify additional action parameters by clicking next to Parameters and opening the Mapping Editor.

Connecting Components with Links

Links connect components. They are displayed as arrows on the diagram and show Skyvia how to pass data across components. To connect components, hover over the small circle on a component and, when the circle gets bigger, drag a link (arrow) from one component to the other. You can also disconnect components by pulling the arrow away from the receiving component, i.e. dropping it on an empty space of the diagram. Once components are connected, they remain connected as you move them around on the diagram. Please note that the blue circle is for regular outputs, and the red circle is for an error output. If a component has multiple regular outputs, you will be asked to select the output by name. When you click a link on the diagram, its details sidebar appears on the right. Here you can view the link details and its scope: all the record columns going through this link, their data types, and the components they originate from.
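To make the earlier Extend and Conditional Split example concrete, here is a rough Python sketch of the equivalent row logic. The field names follow the product.name / product.price example above; Skyvia itself configures all of this visually, so the code is only an illustration of what the components compute.

```python
# Illustrative sketch of the Extend + Conditional Split logic described above.
products = [
    {"name": "Chai", "price": 18.0},
    {"name": "Sample pack", "price": None},  # given away for free
]

# Extend: add a computed column NewColumn = price * 2
extended = [
    {**row, "NewColumn": None if row["price"] is None else row["price"] * 2}
    for row in products
]

# Conditional Split: rows matching "price is null" go to the conditional
# output, everything else goes to the default output.
conditional_output = [row for row in extended if row["price"] is None]
default_output = [row for row in extended if row["price"] is not None]
```

The two resulting lists correspond to the two output arrows that the Conditional Split component exposes on the diagram.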
Copying and Pasting Components

If you need to add multiple components with similar settings, you can configure one of them, then copy it and paste copies on the diagram using the CTRL+C and CTRL+V keys. The copy is configured exactly like the original, but it is not connected to other components and thus will probably be invalid at first. You need to connect it to other components and perhaps adjust its configuration. You can even select and copy multiple components; in this case, all the connections between the original components are copied too.

Deleting Components and Links

To delete a component from the diagram, simply click it. As soon as you do, the delete icon appears next to the component. Click the icon to delete the component with all its settings. To select another component for deletion, perform the same steps. Another way to delete a component is to click the More Options icon in the component details sidebar and select Delete from the drop-down list. To delete a link between components, click it and pull it away from the receiving component (to any empty space on the diagram), or click the More Options icon in the link details sidebar and select Delete from the drop-down list.

Navigating around the Diagram Canvas

As mentioned earlier, the diagram canvas is designed to be far larger than your computer screen, so we've added helpful features to let you navigate around the canvas and move the diagram or components as you like.

Panning

For some large data flows, you will need to move the diagram up and down or sideways to see all of its components. For this, you can use either the pan mode or scrolling. To use the pan mode, click the Pan mode icon in the lower right corner of the canvas to enable it, then click anywhere on the canvas and, holding the mouse button, move the mouse to move the canvas.
Panning occurs in real time, so you see the work area move as you drag it across the screen. The other way to pan is to scroll with the mouse wheel: spinning the wheel moves the diagram up and down on the canvas, and holding the Shift key while spinning the wheel shifts the canvas horizontally.

Zooming

You can zoom in and out of the diagram canvas to adjust your view. Click the Zoom In icon to increase the zoom level or the Zoom Out icon to decrease it. The zoom uses the middle of the canvas as its point of reference, so combining panning and zooming is an effective way to move around the canvas. Please note that you can easily restore the original diagram size and its location on the canvas by clicking the Default icon.

Other Data Flow Settings

Skyvia offers additional data flow settings, which you can configure manually in the toolbar on the left. Here you will find the Parameters, Variables, Connections, and Result settings.

Adding Parameters and Variables to the Data Flow

Parameters allow you to store values between data flow executions. Whenever necessary, you can view and edit their values or edit the parameter list. Variables are used to store values during the data flow execution. Components like Count or Value can assign values to them, and you can use their values in other components. To manage data flow parameters and variables, click Parameters or Variables respectively on the toolbar and create new parameters or variables as described in the Parameters and Variables topic.

Creating Log

A log is a table that you can fill with data in your data flow. To create a log, click Connections in the toolbar on the left. When the Connections sidebar opens on the right, click to add a log and adjust its settings. In the opened window, enter a name and schema and save the changes.
Calculating Result

The Result feature lets you specify how the numbers of success and error rows counted by the Row Count components are calculated. To calculate the result, click Result in the toolbar on the left. The Result sidebar opens on the right, where you can specify how to calculate the numbers of rows. Usually, you can just sum up the necessary variables used in the Row Count components, but you may also specify more complex expressions if needed. For this, click the button and edit the expression in a convenient editor with code completion, validation, and result preview. You can read more about the Result feature here.

Saving and Naming the Data Flow

After you have finished configuring your data flow, save it using the Save button. The first time you save a new data flow, you need to specify its name and the workspace folder where it is stored. The default data flow name is Untitled. Please note that if you omit this step, the data flow name remains Untitled in the list of created integrations. You may either create a data flow and go to its Overview, or create it and stay in the edit mode. On the Overview page, you can change the name of an existing data flow, edit its parameter values and schedule, run it, and start editing it again.

Scheduling the Data Flow

You can schedule your data flow for automatic execution. This is very useful if you want a data flow to run periodically on certain days and at a particular time, or if you want to delay a data flow execution to a later time. To open the schedule editor for a data flow, click the schedule settings button near Schedule on the data flow Overview. Check the Scheduling Integrations section for more details.

Source: https://docs.skyvia.com/data-integration/data-flow/parameters-and-variables.html (Data Integration documentation)
Parameters and Variables

Parameters

Parameters allow you to store values between data flow executions. You can create any number of parameters of any type in a data flow. Besides, for every parameter, a variable with the same name is automatically created in the data flow. When the data flow execution starts, Skyvia assigns the values of the data flow parameters to these variables, and when the run finishes, Skyvia stores the values of these variables back in the data flow parameters. You can use parameters whenever you need to pass values from previous data flow executions to the next ones. For example, you may query records once, store the max value of the record creation time in a parameter, and during the next data flow execution query only records created after that value. Note that the data flow parameters described here are not the same as action parameters, which exist only within an action of a component and are used for passing input column values and variable values into the action.

Creating Parameters

To add a parameter to the data flow, click Parameters in the toolbar on the left. When the parameter editor window opens, click to add a parameter and adjust its settings. Give the parameter a name, select its type from the drop-down list, enter a value, and click Save. Add further parameters in the same way if needed.

Using Parameters

You can use parameters whenever you need to pass values from previous data flow executions to the next ones. For example, you can use parameters to load only the new data each time the data flow runs. Whether a record is new can be determined in different ways: by storing the record creation or last modification time, as most cloud sources do, or by using autoincrement primary keys or version fields, as is often done in databases.
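The watermark pattern just described can be sketched in plain Python. The function and names below are invented for illustration; in Skyvia the equivalent is configured through a data flow parameter, a filtered Source query, and a Value component, with no code involved.

```python
# Illustrative sketch of incremental loading with a stored parameter.
def run_data_flow(records, parameters):
    """One 'run': load only records created after the stored watermark."""
    last_seen = parameters.get("MaxCreated", "")  # data flow parameter value
    # Source: query only records newer than the watermark
    new_records = [r for r in records if r["created"] > last_seen]
    if new_records:
        # Value component: store the new max back into the parameter,
        # so the next run picks up where this one left off
        parameters["MaxCreated"] = max(r["created"] for r in new_records)
    return new_records

params = {"MaxCreated": "2024-01-01"}
data = [
    {"id": 1, "created": "2023-12-30"},
    {"id": 2, "created": "2024-02-15"},
]
loaded = run_data_flow(data, params)
```

Because the parameter survives between executions, each run processes only records that appeared since the previous run.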
In this way, you can store the max value of such a field in a parameter using a Value component, and next time query only records with this field greater than the parameter value in a Source component.

Variables

Variables are used to store values during the data flow execution. Components, like Count or Value, can assign values to them, and you can use their values in other components.

Creating Variables

To add a variable to the data flow, click Variables in the toolbar on the left. This opens the Variables sidebar, which displays all your variables, including the variables created automatically for parameters. Click to add a variable and adjust its settings. In the opened window, name the variable, select its type from the drop-down list, and click Save.

Using Variables

You can use variables, for example, to count records from certain outputs using the Count component, and then define the numbers of success and failed records based on these variables. These numbers are sent to the data flow log as the numbers of successful and failed records.

Source: https://docs.skyvia.com/data-integration/data-flow/results.html (Data Integration documentation)

Results

A data flow can be complex and can load data into multiple destinations. Rows that failed in one component may be rerouted to other components and succeed there. That is why the data flow execution outcome cannot be determined automatically: you need to determine the execution results yourself within your data flow and define which rows are error rows and which are successful ones. Skyvia displays the results for data flow runs in the same way as for other integration runs. The difference is that you calculate the numbers of success and failed records and determine the data to write to the logs in the data flow itself.
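The variable-based counting just described reduces to simple sums. The sketch below uses invented variable names; in Skyvia you enter such sums as expressions in the Result sidebar rather than writing code.

```python
# Illustrative sketch: Row Count components fill variables during the run,
# and the Result expressions combine them into the final counts.
variables = {
    "InsertedOrders": 120,  # Row Count after the Orders Target, regular output
    "InsertedItems": 480,   # Row Count after the OrderItems Target
    "FailedOrders": 3,      # Row Count on the Orders Target error output
    "FailedItems": 7,       # Row Count on the OrderItems Target error output
}

# Result expressions (here as Python): sum the success and error variables
success = variables["InsertedOrders"] + variables["InsertedItems"]
errors = variables["FailedOrders"] + variables["FailedItems"]
```

More complex expressions are possible in the same way, e.g. counting a target as failed only past some threshold.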
Counting Success and Failed Records

To determine the numbers of success and error records, use the Row Count component in the necessary places of your data flow. These components count the processed records and place the result into a variable. Just create the necessary variables and specify them in the corresponding Row Count components. To count success records, you can add a Row Count component after each of your Target components that loads data to a data source, connecting it to the Target's regular output. To get the number of error records, you will probably need to add Row Count components to the Error outputs of the components where records may fail. Then click Result on the toolbar. The Result sidebar opens, where you can specify expressions for calculating the numbers of successful and error rows. Usually, you can just sum up the necessary variables used in the Row Count components, but you may specify more complex expressions if needed. You may also click the button and edit the expression in a convenient editor with code completion, validation, and result preview.

Log

Having only the numbers of successfully loaded and failed records is usually not enough. If your data flow fails to load records, you usually need more detailed error information in order to determine the reasons. That is why every data flow component that may fail to process a record has an error output, to which it sends all the failed records together with the Error column containing the error messages. To obtain these error messages, you may create logs and write data into them. A log is basically a table that you can fill with data within your data flow. After the data flow execution, all the created logs can be downloaded as CSV files. Logs are not limited to error messages; you can use them to capture any data you need.
For example, you can use them to obtain autogenerated field values of successfully inserted records, or even write records to a log in the middle of the data flow to determine which records actually go where and debug your data flows.

Creating and Editing Logs

You can manage the logs of a data flow in the Connections sidebar. There you can see the log tables you created, edit or delete them, or create new ones. Open the sidebar by clicking Connections on the toolbar. To create a new log, click the plus sign in the Log header; the object editor opens. In the object editor, specify the log table name, create the necessary log columns by clicking the plus sign in the Schema header and specifying each column's name and type, and then save the newly created log. Whenever necessary, you can add or remove log columns, change their names or types, or rename the log table. If you want to edit or delete a log, just point to it and click the corresponding button. Note, however, that if you edit or rename an existing log that is used in a data flow component, you also need to reconfigure that component: select the correct log name, map the correct log columns, etc.

Writing to a Log

To write data to a log, use the Target component. Add it to the diagram, connect the necessary component output to it, and select Log as its Connection. The only action available for the Log connection is Write, so select it in the Action list. For the Write action, select the log Table to write data to and map its columns to the input columns using the Mapping Editor.

Obtaining Logs

After the data flow execution, all the logs created on the Connections sidebar are available for download. You can download the logs by opening the run details on the data flow Monitor or Log tab, selecting the necessary run, and using the Download link near the corresponding log name in History Details.
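The error-output-to-log pattern described above can be sketched as follows. The insert function and field names are hypothetical stand-ins; in Skyvia the same effect is achieved by wiring a component's error output to a Target with a Log connection.

```python
# Illustrative sketch: failed records flow to an error output carrying an
# Error column, and get written to a log table (here, a plain list of rows).
log_rows = []  # stands in for a Skyvia log table

def try_insert(record):
    """Hypothetical insert that rejects records without an Id."""
    if record.get("Id") is None:
        raise ValueError("Id is required")

for record in [{"Id": 1, "Name": "Alfreds"}, {"Id": None, "Name": "Berglunds"}]:
    try:
        try_insert(record)  # success: record continues on the regular output
    except ValueError as exc:
        # error output: the failed record plus an Error column with the message
        log_rows.append({**record, "Error": str(exc)})
```

After the run, the accumulated log rows play the role of the downloadable CSV log: each row shows which record failed and why.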
Source: https://docs.skyvia.com/data-integration/data-flow/working-with-nested-types.html (Data Integration documentation)

Working With Nested Types

With the help of Data Flow components, you can create new, and modify existing, nested arrays and objects. You may find the most common use cases in the examples below. Every example uses Demo Connections. To create a Demo Connection, click +New at the top, choose Discover Platform, and click Create Demo Connections. Once that is done, you can start following the examples.

Example 1. Create a Nested Object Type

With the help of Data Flow components, you can transform input data to include nested objects. Let's take a closer look at the example. We have a Customers table as an input. We want to receive a JSON with an array of company objects as an output. Each object in the array should contain a nested BillingAddress object with Address, City, and Country properties. You may find the JSON example below:

[
  {
    "CompanyName": "Alfreds Futterkiste",
    "BillingAddress": {
      "Address": "Obere Str. 57",
      "City": "Berlin",
      "Country": "Germany"
    }
  },
  {
    "CompanyName": "Berglunds snabbkop",
    "BillingAddress": {
      "Address": "Berguvsvagen 8",
      "City": "Lulea",
      "Country": "Sweden"
    }
  }
]

To accomplish the task, we use 3 components: Source, to define the data we work with; Extend, to add a new BillingAddress (object) property; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Customers table.
In the Extend component, we build an Output Schema for the nested BillingAddress object with the help of Object Map and map its properties to the corresponding properties that we receive from Source. As a result, the BillingAddress object becomes accessible in Properties in the JSON Target component. By default, the JSON Target automatically generates the schema based on the input. To define the schema ourselves, we need to select Explicit under JSON Properties in the JSON Target component settings. We build an Output Schema in JSON Target to define the shape of our output object. It has two properties: CompanyName and BillingAddress. Note that BillingAddress in the properties list is an object, and when we map BillingAddress in the Output Schema to the BillingAddress property, we do not use Object Map anymore. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 2. Lookup Object

With the help of the Lookup component in Data Flow, you can transform input data to include a nested object that was looked up in a related table. Let's take a closer look at the example. We have an Orders table as an input and a lookup Customers table. We want to receive a JSON with an array of order objects as an output. Each object in the array should contain a nested Customer object with Address, City, and CompanyName properties that were looked up in the related Customers table. You may find the JSON example below:

[
  {
    "OrderID": 1,
    "Customer": {
      "CompanyName": "Alfreds Futterkiste",
      "Address": "Obere Str. 57",
      "City": "Berlin"
    }
  },
  {
    "OrderID": 2,
    "Customer": {
      "CompanyName": "Berglunds snabbkop",
      "Address": "Berguvsvagen 8",
      "City": "Lulea"
    }
  }
]

To accomplish the task, we use 3 components: Source, to define the data we work with; Lookup, to add a new Customer (object) property by performing a lookup based on CustomerId; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Orders table. In the Lookup component, we look up the CompanyName, Address, and City columns in the Customers table by CustomerId and assign them to the Customer object property. As a result, the Customer object becomes accessible in Properties in the JSON Target component. We build an Output Schema in JSON Target to define the shape of our output object. It has two properties: OrderId and Customer. We map them to the corresponding properties from the list on the right: OrderId from the Source component and the Customer object from the Lookup component. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 3. Extend a Nested Object

With the help of the Extend component, you can extend a nested object with additional properties. Let's take a closer look at the example. We have an Orders table as an input and a Customers lookup table. We want to receive a JSON with an array of order objects as an output. Each object in the array should contain a nested Customer object with Address, City, CompanyName, and CityWithAddress properties. You may find the JSON example below:

[
  {
    "OrderID": 1,
    "Customer": {
      "CompanyName": "Alfreds Futterkiste",
      "Address": "Obere Str. 57",
      "City": "Berlin",
      "CityWithAddress": "Berlin, Obere Str. 57"
    }
  },
  {
    "OrderID": 2,
    "Customer": {
      "CompanyName": "Berglunds snabbkop",
      "Address": "Berguvsvagen 8",
      "City": "Lulea",
      "CityWithAddress": "Lulea, Berguvsvagen 8"
    }
  }
]

To accomplish the task, we use 4 components: Source, to define the data we work with; Lookup, to add a new Customer (object) property by performing a lookup based on CustomerId; Extend, to extend the Customer object with a new property, CityWithAddress; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Orders table. In the Lookup component, we look up the CompanyName, Address, and City columns in the Customers table by CustomerId and assign them to the Customer object property. As a result, the Customer object becomes accessible in Properties in the Extend component. In the Extend component, we set Scope to Customer to extend the Customer object with the new CityWithAddress property and map it to the City + ', ' + Address expression. We build an Output Schema in JSON Target to define the shape of our output object. It has two properties: OrderId and Customer. We map them to the corresponding properties from the list on the right. Note that the Customer object in the list of properties already includes the CityWithAddress property that we added in the Extend component. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 4. Rebuild a Nested Object

With the help of the Extend component, you can rebuild an object by replacing its properties. Let's take a closer look at the example. We have a Customers table as an input. We want to receive a JSON with an array of company objects as an output.
Each object in the array should contain a nested BillingAddress object with Country and AddressWithCity properties. You may find the JSON example below:

[
  {
    "CompanyName": "Alfreds Futterkiste",
    "BillingAddress": {
      "Country": "Germany",
      "AddressWithCity": "Berlin, Obere Str. 57"
    }
  },
  {
    "CompanyName": "Berglunds snabbkop",
    "BillingAddress": {
      "Country": "Sweden",
      "AddressWithCity": "Lulea, Berguvsvagen 8"
    }
  }
]

In this example, we use 4 components: Source, to define the data we work with; the first Extend, to add a new BillingAddress (object) property with Address, City, and Country properties inside; the second Extend, to rebuild the structure of the BillingAddress object by replacing the Address and City properties with a single AddressWithCity property; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Customers table. In the first Extend component, we build an Output Schema for the BillingAddress object with Address, City, and Country properties by using Object Map. As a result, the BillingAddress object becomes accessible in the next Data Flow component. In the second Extend component, we create a new Output Schema for the same BillingAddress object to rebuild it and replace the Address and City properties with AddressWithCity. Note that we do not set Scope when we want to rebuild the BillingAddress object. We build an Output Schema in JSON Target to define the shape of our output object. It has two properties: CompanyName and BillingAddress. We map them to the corresponding properties from the list on the right. The BillingAddress object was rebuilt and now includes a new set of properties: Country and AddressWithCity. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration.
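As a rough Python equivalent of the rebuild step above (illustrative only; in Skyvia this is the second Extend component's Output Schema, not code), the Address and City properties of BillingAddress are replaced with a single combined property:

```python
# Illustrative sketch of rebuilding a nested object: replace Address and City
# with a single AddressWithCity property, keeping Country.
customers = [
    {
        "CompanyName": "Alfreds Futterkiste",
        "BillingAddress": {
            "Address": "Obere Str. 57",
            "City": "Berlin",
            "Country": "Germany",
        },
    },
]

def rebuild(record):
    addr = record["BillingAddress"]
    return {
        "CompanyName": record["CompanyName"],
        "BillingAddress": {
            "Country": addr["Country"],
            # City + ', ' + Address, as in the Extend expression
            "AddressWithCity": f'{addr["City"]}, {addr["Address"]}',
        },
    }

output = [rebuild(r) for r in customers]
```

Note how the rebuilt object no longer carries the original Address and City properties, mirroring the replacement behavior of the second Extend component.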
As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 5. Lookup Array

With the help of the Lookup component, you can transform input data to include a nested array that was looked up in a related table. Let's take a closer look at the example. We have a Categories table as an input and a Products lookup table. We want to receive a JSON with an array of category objects as an output. Each object in the array should contain CategoryId and a nested Products array. You may find the JSON example below:

[
  {
    "CategoryID": 1,
    "Products": [
      { "ProductName": "Chai", "CategoryID": 1, "UnitPrice": 18.0 },
      { "ProductName": "Aniseed Syrup", "CategoryID": 1, "UnitPrice": 10.0 }
    ]
  },
  {
    "CategoryID": 2,
    "Products": [
      { "ProductName": "Sasquatch Ale", "CategoryID": 2, "UnitPrice": 14.0 },
      { "ProductName": "Queso Cabrales", "CategoryID": 2, "UnitPrice": 21.0 }
    ]
  }
]

To accomplish the task, we use 3 components: Source, to define the data we work with; Lookup, to look up the ProductName, UnitPrice, and QuantityPerUnit properties by CategoryId and assign them to the objects inside the Products array; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Categories table. In the Lookup component, we set Property to Products and select Array as its Behaviour. We look up the ProductName, UnitPrice, and QuantityPerUnit columns in the Products table by CategoryId and assign them to the objects inside the Products array. As a result, the Products array becomes accessible in Properties in the JSON Target component. We build an Output Schema in JSON Target to define the structure of the output object. It has two properties: CategoryId and Products.
We map them to the corresponding properties from the list on the right: CategoryId, which we get from the Source component, and the Products array from the Lookup component. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 6. Extend a Nested Array

With the help of Data Flow components, you can transform input data to include an extended nested array. Let's take a closer look at the example. We have a Categories table as an input. We want to receive a JSON with an array of category objects as an output. Each object in the array should contain CategoryId and an extended nested Products array. You may find the JSON example below:

[
  {
    "CategoryID": 1,
    "Products": [
      {
        "ProductName": "Chai",
        "UnitPrice": 18.0,
        "QuantityPerUnit": "10 boxes x 20 bags",
        "ProductWithQuantityPerUnit": "Chai, 10 boxes x 20 bags"
      },
      {
        "ProductName": "Aniseed Syrup",
        "UnitPrice": 10.0,
        "QuantityPerUnit": "12 - 550 ml bottles",
        "ProductWithQuantityPerUnit": "Aniseed Syrup, 12 - 550 ml bottles"
      }
    ]
  },
  {
    "CategoryID": 2,
    "Products": [
      {
        "ProductName": "Sasquatch Ale",
        "UnitPrice": 14.0,
        "QuantityPerUnit": "24 - 12 oz bottles",
        "ProductWithQuantityPerUnit": "Sasquatch Ale, 24 - 12 oz bottles"
      },
      {
        "ProductName": "Queso Cabrales",
        "UnitPrice": 21.0,
        "QuantityPerUnit": "1 kg pkg.",
        "ProductWithQuantityPerUnit": "Queso Cabrales, 1 kg pkg."
      }
    ]
  }
]

To accomplish the task, we use 4 components: Source, to define the data we work with; Lookup, to look up the ProductName, UnitPrice, and QuantityPerUnit properties by CategoryId;
Extend, to add a new ProductWithQuantityPerUnit property to each object within the Products array; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Categories table. In the Lookup component, we set Property to Products and select Array as its Behaviour. We look up the ProductName, UnitPrice, and QuantityPerUnit columns in the Products table by CategoryId and assign them to the objects inside the Products array. As a result, we receive the Products array in the output of the Lookup component. In the Extend component, we set Scope to Products to extend the Products array, define our new ProductWithQuantityPerUnit property, and map it to the ProductName + ', ' + QuantityPerUnit expression. We build an Output Schema in JSON Target to define the structure of the output object. It has two properties: CategoryId and Products. We map them to the corresponding properties from the list on the right: CategoryId, which we get from the Source component, and the extended Products array from the Extend component. After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we will receive a JSON file with the same structure as shown in the JSON example above.

Example 7. Rebuild a Nested Array

With the help of the Extend component, you can rebuild a nested array by replacing its properties. Let's take a closer look at the example. We have a Categories table as an input. We want to receive a JSON with an array of category objects as an output. Each object in the array should contain CategoryId and a rebuilt nested Products array.
You may find the JSON example below:

[
  {
    "CategoryID": 1,
    "Products": [
      { "Name": "Chai, 10 boxes x 20 bags", "Price": 18.0 },
      { "Name": "Aniseed Syrup, 12 - 550 ml bottles", "Price": 10.0 }
    ]
  },
  {
    "CategoryID": 2,
    "Products": [
      { "Name": "Sasquatch Ale, 24 - 12 oz bottles", "Price": 14.0 },
      { "Name": "Queso Cabrales, 1 kg pkg.", "Price": 21.0 }
    ]
  }
]

To accomplish the task, we use 4 components: Source, to define the data we work with; Lookup, to look up the ProductName, UnitPrice, and QuantityPerUnit properties by CategoryId; Extend, to rebuild the nested Products array and replace the ProductName, UnitPrice, and QuantityPerUnit properties with Name and Price in each object within the Products array; JSON Target, to define the structure of the output data. In the Source component we use Execute Query to get properties from the Categories table. In the Lookup component, we set the property name to Products and select Array as its Behaviour. We look up the ProductName, UnitPrice, and QuantityPerUnit columns in the Products table by CategoryId and assign them to the objects inside the Products array. As a result, the Products array becomes accessible in Properties in the Extend component. In the Extend component, we create a new Output Schema for the same Products array to rebuild it and replace the ProductName, UnitPrice, and QuantityPerUnit properties with Name and Price in each object within the Products array. Note that we do not set Scope when we rebuild the Products array. We build an Output Schema in JSON Target to define the shape of our output object. It has two properties: CategoryId and Products. We map them to the corresponding properties from the list on the right. The Products array was rebuilt, and the objects inside it now have a different set of properties: Name and Price.
After the Output Schema for the JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we receive a JSON file with the same structure as shown in the JSON example above.

Example 8. Flatten a Nested Array

With the help of the Unwind component, you can flatten a nested array and get the list of its properties. Let's take a closer look at the example. We have a Categories table as an input and a Products lookup table. We want to receive two JSON files: an array of category objects and an array of product objects. You may find the JSON examples below:

Categories

    [
      { "CategoryID": 1, "CategoryName": "Beverages" },
      { "CategoryID": 2, "CategoryName": "Condiments" }
    ]

Products

    [
      { "ProductName": "Chai", "UnitPrice": 18.0 },
      { "ProductName": "Aniseed Syrup", "UnitPrice": 10.0 },
      { "ProductName": "Northwoods Cranberry Sauce", "UnitPrice": 40.0 }
    ]

To accomplish the task, we use 5 components:

- Source, to define the data we work with.
- Lookup, to look up the ProductName, UnitPrice, and QuantityPerUnit properties by CategoryId in the Products table.
- JSON Target, to define the structure of the first JSON file.
- Unwind, to flatten the Products array we received after the lookup.
- JSON Target, to define the structure of the second JSON file.

In the Source component we use Execute Query to get properties from the Categories table.

In the Lookup component we set the property name to Products and select Array as its Behaviour. We look up the ProductName, UnitPrice, and QuantityPerUnit columns in the Products table by CategoryId and assign them to objects inside the Products array. As a result, the Products array will be accessible in Properties for the first JSON Target and Unwind components.

In the first JSON Target we create an Output Schema to define the structure of the first JSON file.
It has two fields: CategoryId and CategoryName. Note that the Products array is an accessible property, but it shouldn't be part of the first file, so we skip it for now.

We set Scope in the Unwind component to Products to flatten Products and receive the list of its properties.

In the second JSON Target we create an Output Schema to define the structure of the second JSON file. The Output Schema has two properties: ProductName and UnitPrice. We map them to the corresponding properties from the list on the right. Note that these properties are not part of the Products array anymore.

After the Output Schema for each JSON Target is ready and the properties are mapped, we can save and run our integration. As a result, we receive two JSON files with the structure shown in the JSON examples above.

Product: Data Integration. Export Overview

Skyvia export operation is used to export cloud application or relational database data to CSV files, either downloaded locally or placed to cloud file storage services.

Multiple Export Tasks

Skyvia allows performing several export tasks in one integration and, thus, exporting several database tables or cloud objects together in a single export operation.

Exporting Related Object Data

Skyvia allows you to select fields to export from the main source object and its related objects. For example, when you export prices from the Salesforce PriceBookEntry object, you can easily include the product Name from the Product2 object and the pricebook name from the Pricebook2 object.

Data Filtering

Filter data by both exported and not exported data fields. Add multiple filter conditions and unite them in groups with logical operators to set filters of any complexity. If the exported object has related objects, they are displayed in the first drop-down.
Please note: when you filter data by a DateTime value, set it according to the UTC time zone, which can differ from your local time zone.

SQL and Query Builder Support

Skyvia also allows querying data with SQL statements in export integrations. You can enter SQL statements yourself and edit them in the code editor if needed, or compose SELECT statements with our convenient query builder. Skyvia supports the SQL and Query Builder functionality only for export integrations that use the new data integration runtime, and only when the advanced task editor mode is selected. Read more about it here.

Compression Support

Skyvia allows compressing the result files using either zip or gzip compression.

When exporting date/time data, Skyvia converts it to UTC. When you export data to a CSV file on your PC, the files are available for download for 7 days.

Export Tutorials

Skyvia provides the following tutorials on data export:

- How to Export Salesforce Attachments for Specific Object
- How to Set Up Daily Backup of Salesforce Contacts to Dropbox

Configuring Export

Export is a specific kind of integration, which loads data from cloud applications and relational databases to CSV files.

Creating an Export Integration

To create an export integration, click + Create New in the top menu and select Export in the Integration column. When the integration details page opens, perform the following steps:

1. Specify a source connection you export data from and a target connection you load data to.
2. Create export tasks.
3. Optionally, schedule the integration for automatic execution.
Source and Target Connection

Source Connection

Under Source, in the Connection list, select the source connection you want to use from the drop-down list, or create a new one if you have not created it yet. You can use the Type to filter box to quickly find the necessary source connection. To create a new connection, click +New connection in the drop-down list (learn how).

Target Connection

There are two scenarios of data export to target:

- By default, data is exported to CSV file(s) downloaded manually.
- If you want to save CSV file(s) to a file storage service or FTP, select CSV to storage service as the target type. After this, click Select target and select a target connection or create a new one if you have not created it yet. To create a new connection, click +New connection in the drop-down list. Then select the Folder where to place the result file(s).

Additional Options

If necessary, specify the Code Page of the exported file with data. Additionally, you may click the More options link and specify the CSV options shown below to detect columns correctly.

The Append timestamp to the file name checkbox determines whether to add a timestamp to the end of the result CSV file name. It is applied only for export tasks using simple mode, and only if you don't customize the file name in an export task. This checkbox is selected by default. This means that the result file name will consist of the name of the exported object and a timestamp of the integration run, for example, Account_20211201_1447.csv. If you clear this checkbox, the result file name will be the same as the name of the exported object, unless you customize the name in the task.

The Exclude header in target CSV checkbox determines whether to exclude the header row (with column names) from the result CSV file.
The Create empty files checkbox determines whether to create target files if there are no records to export (the exported object has no records, no records match the filter conditions, etc.).

Creating Export Tasks

To create an export task, click the Add new link and configure a new export task as described in the How to Create Export Task topic.

How to Download Exported Files

After you create an integration, the following three tabs are displayed in the middle of the toolbar of the details page: Model, Monitor, and Log. The Model tab shows source and target connections, tasks/schedule/parameters of the integration, etc. The Monitor tab shows the 5 most recent integration runs and whether the integration is currently running. The Log tab shows runs for the selected period.

When you export data to a CSV file, you can download the result file manually in the following way:

1. Click the Monitor tab of the integration details page.
2. In the open tab, click the integration run you want to download the results of. The History Details window opens with detailed information.
3. Under Result, click the number of records to download the result CSV file.

Files are available for download for 7 days after the integration run.

Note: if you have more than 5 integration runs, you cannot see all of them on the Monitor tab. Instead, go to the Log tab, which shows all integration runs for any selected period. To download the result file, perform the following steps:

1. Click the Log tab of the integration details page.
2. Select Last 7 Days.
3. Click the integration run you want to download the results of. The History Details window opens with detailed information.
4. Under Result, click the number of records to download the result CSV file.
How to Create Export Task

To create an export task, select a valid source connection and click Add new. This opens the Task Editor. Skyvia offers two task editor modes for export integrations: Simple and Advanced. The advanced task editor mode allows querying data using the Execute Command or Execute Query actions. You can enter the SQL statements manually or use the query builder tool to compose queries in a visual designer.

Simple Task Editor Mode

The simple task editor mode is selected by default in export integrations. The export task setup process consists of three steps: Source Definition, Target Definition, and Output Columns.

Source Definition

On the Source Definition page, select the object you want to export from the Object drop-down list. Select which of the object's fields to include. Skyvia selects all fields by default; use the checkboxes to make adjustments manually. To quickly find the necessary field, use the Type to filter box.

Additionally, Skyvia allows you to export fields of the related objects. For example, if the Account object contains an OwnerId foreign key, you can also export the Account.Owner fields.

Specify the filter conditions if necessary.

To specify the order of the rows, click Add and select the object, field, and either Asc (ascending) or Desc (descending) order. To undo the order settings, click Remove.

Click Next Step to proceed to the Target Definition.

Target Definition

On the Target Definition page you can set a custom name for your CSV file. If you click Next Step without setting a name template for your file, Skyvia names it automatically. By default, the result file name consists of the object name and a timestamp in the {yyyyMMdd}_{HHmm} format. For example, if you export the Account object, the name will look like this: Account_20240615_1324.
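The default {yyyyMMdd}_{HHmm} pattern corresponds directly to strftime-style formatting. A small Python sketch reproducing the example name (the object name and run time here are made-up values, not anything Skyvia exposes):

```python
from datetime import datetime

def default_export_name(object_name: str, run_time: datetime) -> str:
    # Default pattern: object name + {yyyyMMdd}_{HHmm} timestamp of the run.
    return f"{object_name}_{run_time.strftime('%Y%m%d_%H%M')}"

print(default_export_name("Account", datetime(2024, 6, 15, 13, 24)))
# Account_20240615_1324
```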
You can disable adding the timestamp in the export integration options by clearing the Append timestamp to the file name checkbox. In this case, the file name will be the same as the exported object name.

To configure custom file naming, click the Target File Name box. Here you can enter a fully custom name and use template parts to speed up the process. To use a prebuilt template, place your cursor at any place in the name box and click a template to include it in the file name. You can use multiple templates in a single file name if needed. You can find all the available naming options in the Cheat Sheet by clicking (i). Do not forget to set the proper time zone for the timestamps to be generated correctly.

To compress a file, select a Compression Type. Skyvia supports zip and gzip compression. By default, the type is set to None.

Click Next Step to proceed to Output Columns.

Output Columns

To change the name of a CSV column, enter a new name in the CSV Name box. By default, column names are taken from the data source metadata. However, if you include the related object fields, those will be prefixed with the relation name. For example, if we have a User object and a related Account object, the exported account fields will look like Account.Name.

To change the column order in the CSV file, drag columns by the move handle on the left side of the column. The default column order is determined by the order of columns selected on the Source Definition page. To restore the default column names or the default column order, click Actions and select the necessary option.

Advanced Task Editor Mode

Click Add new to open the task editor. Simple mode is selected by default in the task editor. To switch to the advanced mode, click Advanced under Editor Mode and select one of two possible actions. Currently, Skyvia offers the Execute Command and Execute Query actions. However, more actions are expected in the future.
Entering SQL Statements via Command Text

Please note that Skyvia supports only SQL SELECT statements in export integrations.

When you select an Execute Command action, you type and edit your SQL statement directly in the Command Text box. For example, to select and export Mailchimp lists with the number of subscribed and unsubscribed members, you enter the SELECT statement as on the screenshot below.

You can also check public queries via the General Gallery in your Skyvia account. Public queries are predefined templates, which contain the most common query samples. If the query you need is available in the Gallery, you can copy the predefined query, paste it directly into the Command Text box, and edit it the way you need (if necessary). To find predefined queries, click the +NEW button in the top menu and switch to Gallery. Read more about it here.

Additionally, you may click the Open in Editor link if you need to open the command in the Query Editor. The Query Editor has the Connection Object List on the left, in which you can browse connection objects and view connection metadata in order to enter table and column names without errors in your commands. To see the fields (columns) the query returns, click Preview in the bottom left corner of the query editor.

If you need to add additional parameters, click the icon in the Parameters section to open the Mapping Editor. In the Mapping Editor, you can add/delete parameters, rename them, and map them using Skyvia expressions. You can read more about the Mapping Editor and how to work with it in the Mapping Editor topic.

Before saving the task, you need to specify the target file name in the Target File Name box; otherwise, the task will not be saved. Then create an integration and run it.

Creating Queries with Visual Query Builder

You can compose SELECT queries visually with our convenient query builder, without typing any code. That is a good option if you are not very familiar with SQL.
When you select an Execute Query action, the Query Model box appears. Click Open Query Builder to open and work in the Query Editor.

The Query Editor allows viewing connection metadata. The Connection Object list to the left of the query editor displays the metadata of the available objects/tables from the source connection. By clicking a certain table, you can open and view the list of its fields.

You can drag a necessary table from the Connection Object List to the Result Fields pane to query its data (add it to the SELECT and FROM clauses of the SELECT statement). The Result Fields pane displays the fields (columns) the query returns.

To filter data by certain columns (i.e., to add them to the WHERE clause of the query), drag these columns from the Connection Object list to the Filters pane. The Filters pane displays query filters and filter groups. Please note that query filters are type-specific: the available filters depend on the data type of the column. For example, if you need to filter Salesforce accounts by the CreatedDate column within a certain period, you simply drag the column to the Filters pane and select the comparison to apply (like equal to or between) in the Filter list on the right.

Value filters simply compare the field value with the specified one using different comparison operators, or check whether the field value is NULL. Value filters are supported for all field types. Skyvia also offers list filters and relative filters. List filters are supported only for fields with string data. Relative filters are supported only for fields with datetime data.

You can read more about the Query Builder and how to configure queries in the Query section of our documentation.
Import Overview

Skyvia Import is a flexible tool for data migration between different platforms. It allows loading data from CSV files, cloud apps, or relational databases to other cloud apps or relational databases. With Skyvia you can easily configure data migration even between platforms with different data structures.

Multiple Import Tasks

Each import can contain one or more import tasks and, thus, import data from one or more source files, tables, or objects to other ones as a single import operation.

Source Data Filtering

When you load data from a database or a cloud app, you can filter the source data to load. See Filter Settings for more information.

Importing Query Results

If simple settings for source data are not enough, you can use the advanced task editor mode, which allows importing the results of an SQL SELECT statement or a visually built query.

Support for All DML Operations

Skyvia Import supports all DML operations (including UPSERT) and can be used to perform massive data updates. For more information, check also How to Perform UPDATE and DELETE.

Data Splitting

Skyvia also allows loading data from a single file, database table, or cloud object to several related cloud objects or database tables (one-to-many) using the Data Splitting feature. Skyvia builds the corresponding relations between the imported objects or records automatically (check the tutorial).

Returning

When you load data from a database or a cloud app, the Returning feature allows getting ids or any other fields from the imported records back to the source. For example, if you load records from a database to Salesforce and want the Salesforce IDs (or any other generated fields) loaded back to the database, you can use the Returning feature for it. Note that it is available only for INSERT and UPSERT operations.
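As a rough illustration of UPSERT semantics mentioned above (insert when there is no match, update when there is), here is a Python sketch. The ExternalId key field and the record values are assumptions for the example, not Skyvia's implementation:

```python
def upsert(target: dict, records: list, key: str) -> dict:
    """Insert records with no match on `key`; update those that match.
    Illustration of UPSERT semantics only."""
    stats = {"inserted": 0, "updated": 0}
    for rec in records:
        if rec[key] in target:
            target[rec[key]].update(rec)   # existing record -> UPDATE
            stats["updated"] += 1
        else:
            target[rec[key]] = dict(rec)   # no match -> INSERT
            stats["inserted"] += 1
    return stats

# Hypothetical target keyed by an external ID.
existing = {"A-1": {"ExternalId": "A-1", "Name": "Acme"}}
stats = upsert(existing, [
    {"ExternalId": "A-1", "Name": "Acme Corp"},   # matches -> update
    {"ExternalId": "A-2", "Name": "Globex"},      # new     -> insert
], key="ExternalId")
print(stats)  # {'inserted': 1, 'updated': 1}
```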
Import Tutorials

Skyvia provides the following tutorials on data import:

- Importing Contacts for Existing Accounts
- Importing Products with Prices from Dropbox
- Importing Related Customer, Order, and Product Data
- Importing Accounts with Binary Attachments
- Importing Deals and Contacts from Zoho CRM to HubSpot Preserving Relations
- Importing Data from Salesforce Opportunities to QuickBooks Invoices
- Importing QuickBooks Invoices to Salesforce Opportunities
- Easy Importing Invoices and Customers Between QuickBooks Online Accounts

Advanced Import Features

Import allows performing different ETL operations with your data. Discover the following Import features:

- Data Splitting: import data to several objects simultaneously.
- Returning: get the values of newly inserted records back to the source.
- Relation Mapping: import related objects and files, preserving the relations between their records in Target.

Data Splitting

What is Data Splitting

The Data Splitting feature enables importing data from a single source object to multiple related target objects simultaneously, keeping the relationship between records.

How to Enable the Data Splitting Feature

To enable Data Splitting, do the following:

1. On the Target Definition tab of your Import task, select the parent target object.
2. Click +Related and select the related target object.
3. Map the fields for both target objects.
Specifics

- The Data Splitting feature works only for related target objects.
- Data Splitting works for the INSERT operation only.

How to Use Data Splitting

Example 1. Importing Client Information to Account and Contact

There is a database table Client that stores client information: company name, company size, industry, contact information, etc. The task is to import data from the Client table to Salesforce. The problem is that Salesforce stores this information in two objects, Account and Contact. Thus, we need to perform an import from a single source object to two separate target objects. To accomplish this task, do the following:

1. Create the integration, set source and target connections, and add the import task.
2. Select the Client table as source.
3. On the Target Definition tab, select the Account object as target, click +Related, and select the Contact object.
4. Map the Account fields, and then click the drop-down list to switch the object for mapping to Contact.
5. Map the Contact fields. The related Account Id is mapped automatically by the target relation.
6. Save the import task and run it.

As a result, you have new records in the Account object and records in the Contact object related to the just inserted accounts.

Example 2. Importing Products and Prices

There is a database table Product that stores information about products in stock: name, quantity, price, units of measure, etc. The task is to create products in Salesforce and add this data to the price book. Salesforce stores products in the Product2 object and the product prices in the PricebookEntry object. Thus, you need to split data from the Product table between Product2 and PricebookEntry. To complete this task, create the integration, set source and target connections, and add the import task. Select the Product table as source. On the Target Definition tab, select the Product2 object as target, click +Related, and select the PricebookEntry object.
Map the Product2 fields and click the drop-down list to switch the object for mapping to PricebookEntry. Map the PricebookEntry fields. The related Product2 Id field is mapped automatically by the target relation. Save the import task and run it. As a result, you have new records in the Product2 object and records in the PricebookEntry object related to the just inserted products.

Relation Mapping

Relation mapping allows keeping relations between two or more related objects when inserting records from Source to Target. Relation mapping is available for one-to-one or one-to-many relations. Find general information about data relations, primary and foreign keys, relation types, and common use cases in the How to Import Related Data topic.

How it Works

When you run an integration with Relation Mapping enabled, Skyvia performs the following actions:

1. Skyvia imports the parent object.
2. It maps Source primary key values to the new Target primary key values and caches this mapping.
3. When Skyvia imports child objects, it looks for Source foreign key values in the cache. When Skyvia finds matching values among the Source primary keys, it takes the corresponding Target primary key from the cache and uses it as a foreign key for the child object in Target.

Use relation mapping for loading all related objects in the same integration and in the same run. Relation mapping is best suited for massive loads or data migrations, when you load all related data at once. If you need to import new or changed data, or implement other scenarios of loading related data, see the cases described in the How to Import Related Data topic.
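The primary-key cache described above can be sketched in a few lines of Python. All names here (the Id and AccountId fields, the fake generated Ids, the in-memory target) are illustrative stand-ins, not Skyvia internals:

```python
# Sketch of the relation-mapping cache: source parent PKs are mapped to the
# new target-generated PKs, then child foreign keys are resolved via that cache.
def import_parents(parents, insert):
    pk_cache = {}
    for row in parents:
        target_id = insert(row)          # target generates a new primary key
        pk_cache[row["Id"]] = target_id  # cache: source PK -> target PK
    return pk_cache

def import_children(children, pk_cache, insert):
    for row in children:
        row = dict(row)
        # Replace the source foreign key with the cached target primary key.
        row["AccountId"] = pk_cache[row["AccountId"]]
        insert(row)

# Hypothetical in-memory "target" standing in for a cloud app like Salesforce.
target = []
ids = iter(["001A", "001B", "003C"])
insert = lambda row: (target.append(row), next(ids))[1]

cache = import_parents(
    [{"Id": 10, "Name": "Acme"}, {"Id": 11, "Name": "Globex"}], insert)
import_children([{"AccountId": 10, "LastName": "Doe"}], cache, insert)
print(target[-1]["AccountId"])  # 001A
```

The key point the sketch shows is why relation mapping only works within a single run: the cache of source-to-target keys exists only while the integration executes.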
How to Enable Relation Mapping

Relation Mapping for Cloud Apps and Databases

To enable Relation mapping for a cloud or database source, do the following:

1. Create an import and add an import task for the parent object.
2. Add an import task for the child object.
3. Go to the child object task's Mapping Definition tab.
4. Select the Relation mapping type for the foreign key field.
5. Pick the needed relation from the drop-down list.

Relation Mapping for CSV Files

To enable Relation mapping for a CSV file, do the following:

1. Create an import and add an import task for the first file.
2. Add an import task for the second file.
3. Go to the second file task's Mapping Definition tab.
4. Select the Relation mapping type for the foreign key column.
5. Select the first file from the Referenced Object drop-down list.
6. Select the columns by which you build the relation from the Column and the Referenced Column drop-down lists.

Self-referencing Relations

Skyvia supports self-referencing relations if the records refer to the same Source file. For example, a Salesforce account belongs to another account whose ID is present in the same file. To refer to the same file, map the ParentId field using Relation mapping and select the Reference itself checkbox. Keeping self-referencing relations takes more API calls, because Skyvia queries the same records twice: it inserts records into Target and then updates the same records to add the foreign key values.

Specifics

- Relation mapping works within a single integration run.
- Each related object or file must have a separate task.
- Only the INSERT operation fully supports Relation mapping. You can use Relation mapping with the UPSERT operation, but it will affect the inserted records only, not the updated ones.

How to Use Relation Mapping

Example 1. Importing Zoho Desk Accounts and Contacts to Salesforce Accounts and Contacts

Zoho Desk Accounts is the parent object. Its Id field is the primary key.
The Zoho Desk Contacts object relates to the Accounts object by the AccountId foreign key field. To import Zoho Desk accounts and their related contacts to Salesforce, perform the following steps:

1. Create an import with Zoho Desk as a Source and Salesforce as a Target.
2. Add an import task for the parent Accounts object.
3. Add another import task for the child Contacts object to the same integration.
4. On the Mapping Definition tab, select Relation mapping for the foreign key AccountId field and pick the Contacts_Accounts relation.

After the integration run, you get a new Account record and a new related Contact record.

Example 2. Importing Accounts and Contacts from CSV Files

You import Salesforce Account and Contact objects from two separate CSV files. One file stores accounts, and another stores contacts. The contacts file has the CompanyName column, which refers to the Name column of the parent file. To import data from these files to Salesforce, do the following:

1. Create an import with a CSV file as a Source and Salesforce as a Target.
2. Add a task for importing accounts from the parent file.
3. Add another import task for the child file to the same integration.
4. On the Mapping Definition tab, use Relation mapping for the AccountId column. As the Referenced Object, select the file storing accounts. From the Column drop-down list, select CompanyName. From the Referenced Column drop-down list, select the Name column.

After the run, you get a new Account record and a new related Contact record in Salesforce.

Returning

What is Returning

Returning is an Import feature that immediately returns values of newly inserted records from target to source during the current integration run.
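The round trip just defined (insert into the target, then write the target-generated values back to the source) might be sketched like this. The SalesforceId column mirrors the example below; the fake Id generator and client data are assumptions for illustration:

```python
# Sketch of the Returning flow: insert source rows into the target, then
# update each source row with the value the target generated on insert.
source = [
    {"ClientId": 1, "Name": "Acme", "SalesforceId": None},
    {"ClientId": 2, "Name": "Globex", "SalesforceId": None},
]

def insert_into_target(row):
    # Stand-in for Salesforce: pretend it generates an Id on insert.
    return f"001{row['ClientId']:04d}"

for row in source:
    generated_id = insert_into_target(row)  # INSERT into the target
    row["SalesforceId"] = generated_id      # Returning: update the source

print(source[0]["SalesforceId"])  # 0010001
```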
When the Returning feature is enabled, Skyvia performs the import, reads the newly inserted records from the target, and then updates the source records with the new values from the target.

How to Enable the Returning Feature

To enable the Returning feature, do the following:

1. Select the Use Returning checkbox on the Mapping Definition tab of the Import Task Editor.
2. Map the fields you want to return to the source. Returning supports only the Column, Constant, and Expression mapping types.

Requirements

To use the Returning feature, your import must meet the following requirements:

- The selected source is a database or a cloud app.
- The source object supports the update operation.
- The source object must have a primary key. If there is no primary key in the source object (relevant for Google BigQuery, Amazon Redshift, and SQL Azure), you can manually assign the column that is considered the primary key when mapping fields for returning.
- The simple task editor mode is selected on the Source Definition tab.
- The INSERT or UPSERT operation is selected on the Target Definition tab.

Specifics

The UPSERT operation performs both Insert and Update actions. When you import data using the UPSERT operation, Skyvia returns data from the inserted records only, not from the updated ones.

Records updated in the source by the Returning feature are not counted toward your subscription limit. You pay only for the successfully inserted records in the target.

How to Use Returning

Example 1. Return the Id of the newly created record

Suppose a source database has a list of clients for import into the Salesforce Account object. There is a SalesforceId column in the database to track whether the record already exists in Salesforce. We create an import, which inserts clients into Salesforce and then returns the Ids of the imported accounts from Salesforce. To accomplish this task, create the integration, set source and target connections, and add the import task.
Select the source and target objects and set the INSERT operation. Map the fields for importing. Select the Use Returning checkbox and map the field which you want to return to the source. We map the SalesforceId field to the Salesforce account Id field. Save the integration and run it. As a result, we have the records imported to Salesforce, and their Ids are returned to the source.

Example 2. Return the calculated field value

Suppose a source database has a list of clients for import into the Salesforce Account object. We need to assign the client status in the database after importing it to Salesforce. There is the Client_Status__c formula field in Salesforce. It takes the value "Premium" when Type = "Technology Partner" and Rating = "Hot". We import the Type and Rating values and then return the calculated Client_Status__c field to the source. To accomplish this task, create the integration, set source and target connections, and add the import task. Select the source and target objects and set the INSERT operation. Map the fields for importing. Select the Use Returning checkbox and map the field which you want to return to the source. In our case, we map the Status field to the Salesforce account Client_Status__c field. Save the integration and run it. As a result, we insert the Type and Rating values, Salesforce generates the Client_Status__c value, and then Skyvia returns this value to the source.

Configuring Import

Import is a specific kind of integration, which loads data to a cloud CRM or a relational database. Data can be imported from a cloud CRM, relational database, or CSV files.
Creating an Import Integration

To create an import integration, click + Create New in the top menu and select Import in the Integration column. When the integration details page opens, perform the following steps:

- Specify a source connection you import data from and a target connection you import data to.
- Create import tasks for the integration.
- Optionally, schedule the integration for automatic execution.

Source and Target Connections

Source Connection

There are three scenarios of importing data from a source:

- By default, data is imported from manually uploaded CSV file(s).
- Data can be imported from CSV file(s) uploaded to a file storage service. If you want to load data from CSV file(s) stored in a file storage service or on an FTP server, click CSV from storage service under Source Type. Then select a connection to your storage service or FTP server from the Connection drop-down list, or create a new one if you have not created it yet. See the topics in the File Storages section to find out how to create a connection to the corresponding file storage service.
- Data can be imported from a database or cloud app. For this scenario, click Data Source database or cloud app under Source Type and select the corresponding database server or cloud application from the drop-down list. If you haven't created the source connection yet, click +New connection and specify the connection parameters on the opened Select Connector page. See the Connections section for more details.

Target Connection

In the Connection list, select the corresponding database server or cloud application from the drop-down list. You can use the Type to filter box to quickly find the necessary target connection. If you have not created a target connection yet, click +New connection and create one. To learn more, go to the Connections section.
Creating Import Tasks

To create an import task, click the Add new link and configure the new import task as described in the How to Create Import Task topic.

Import Settings

You can rename your integration by clicking and editing the integration name. The default integration name is Untitled. Note that if you omit this step, the integration will remain named Untitled in the list of created integrations.

You can also schedule your integration for automatic execution. Read more about it in the Scheduling Integrations topic.

Task Execution Order

By default, Skyvia analyzes data relations and lookups and executes tasks in an order based on these relations. If tasks are not related, Skyvia may run up to four tasks in parallel. If you want to run your tasks exactly in the order they are listed in the integration, select the Preserve task order checkbox in the target options.

Source Values in the Error Log

If something goes wrong during the import run, you can check for errors in the error log. It shows the errors together with the data that caused them. By default, the error log contains the already processed values that failed to load into the target. If you enable the Source Values in the Error Log option, the error log contains the initial records from the source instead. This option can be helpful when troubleshooting an import.

Batch Size

This option is available only for integrations that use the new data integration runtime. Make sure the Use new runtime checkbox is selected in the integration settings.

The Batch Size option gives you better control over the number of rows in the batches sent to the target data source. It specifies the number of rows to cache in a buffer before sending them to the target, which allows Skyvia to always send batches of a fixed size to the target.
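As a rough illustration (Skyvia's internal implementation is not public), buffering rows into fixed-size batches, with a stricter per-target API limit winning over the configured Batch Size, can be sketched like this:

```python
# Illustrative model only: not Skyvia's actual code.

def make_batches(rows, batch_size, api_max):
    """Yield batches of a fixed size. If the target API allows fewer rows
    per call than the configured Batch Size, the stricter limit wins and
    the buffered rows are split into smaller internal batches. A batch
    size of 0 models the Auto setting: only the API limit applies."""
    effective = min(batch_size, api_max) if batch_size else api_max
    for i in range(0, len(rows), effective):
        yield rows[i:i + effective]

rows = list(range(10))
# Batch Size = 4, but the target API allows only 3 rows per call.
print([len(b) for b in make_batches(rows, batch_size=4, api_max=3)])  # [3, 3, 3, 1]
```

The point of the sketch is the splitting rule: the effective batch size is the minimum of the user setting and the target's API limit, so the target never receives an oversized batch.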
This can be useful both for increasing the batch size to load more records per API call and improve API call efficiency, and for decreasing the batch size if the target has additional custom limitations or if smaller batches give better performance. To set a custom batch size, click Custom size and then click Edit. In the opened window, specify the batch size you prefer and save the changes. Afterwards, save the integration.

By default, Batch Size is set to Auto in import integrations, which equals 0. This means the old behavior is preserved: data is extracted from the source in batches of up to 2,000 rows. However, depending on how the data is processed, transformed, and filtered, the batch size can decrease, and the data can be sent to the target in smaller batches of non-fixed size. No caching is performed before sending records.

Note that each data source has its own API batch size limitations, and sometimes different objects have different limitations. Some of them don't support sending records in batches at all. Skyvia cannot exceed data source API limits, so if the maximum batch size allowed by the target data source and table is less than the specified Batch Size, the buffered records are automatically split into multiple internal batches.

Nested Objects

Objects in some cloud apps have fields storing nested objects or nested arrays of objects. By default, Skyvia treats them as text fields, storing nested objects in JSON format. If you want to unwind the content of such fields and use the fields of their nested objects/arrays in mapping, select the Nested Objects checkbox in the integration settings. Note that the checkbox is displayed only after you select Data Source in the Source Type. See here how to map these nested object fields.

Locale

The specified Locale determines locale settings, including the DateTime format, number format, string collation, currency format, etc.
This option is applied when non-textual data (like dates, numbers, etc.) is converted to a string. Such conversion can happen in expressions or when mapping a non-textual field to a string field. If this option is not set, InvariantCulture is used. The invariant culture is culture-insensitive; it is associated with the English language but not with any country or region.

After you have configured your integration, click the Create button.

Editing an Existing Integration

Editing an existing integration is performed via the same integration editor page, with the same interface elements as when creating a new integration. The integration editor page allows you to check or edit integration connections, add new tasks to the integration, edit or temporarily disable existing ones, or delete them completely if needed. In the integration editor, you can also enable or disable the new runtime mode and change the scheduled execution time of an integration.

To edit an integration, click OBJECTS in the top menu, navigate to the Integrations tab, and select the required integration. For a quicker search, use the integration filters. You can filter integrations by type, status, the connector used in the integration, etc.

Editing Tasks in the Integration

Besides editing entire tasks in the task editor, you can temporarily disable some of them in the integration itself and enable them later when needed. This feature is useful when you have several tasks in the integration and need to run only one or a few of them without deleting the others. To disable a task, click the More Options icon next to the required task and select Disable in the drop-down menu. Note that this feature is available only in import integrations that use the new runtime.

You can also make notes or leave comments under tasks, which helps you identify the required task among several available ones more quickly. See the screenshot below.
For integrations that import manually uploaded CSV files, there is another option available. You can replace an old CSV file with a new one by clicking Reload. In the opened window, browse to the necessary file and upload it. Note that this file must have the same columns as the previous one. Otherwise, you may need to edit the task and reconfigure the mapping.

How to Create Import Task

Skyvia integrations can include one or more tasks. An import task imports data from a file, cloud application object, or database table or view into one or more target tables or objects. To create an import task, click the Add new link on the right of the integration details page. Note that the integration must have a valid connection before you create or edit tasks.

You should also decide which task editor mode you want to use. Skyvia offers two task editor modes for import integrations: Simple and Advanced. The simple task editor mode can be used for integrations with both the old and the new data integration runtime. The advanced task editor mode is available only in import integrations with the new data integration runtime selected (i.e., the Use new runtime checkbox selected), and only if you import data from a cloud app or database; it is not available for CSV file import. The advanced task editor mode provides powerful data extraction, allowing you to get prepared and preformatted data for import using the Execute Command or Execute Query action. For example, you can perform aggregations, change capitalization, add various calculated columns, etc.

In general, when the task editor opens, you should perform the following steps to create the task:

- Specify the source settings.
  Select the source file, cloud app object, or database table/view to import data from, and, if necessary, configure data filtering. Alternatively, you can select the Execute Command or Execute Query action to query data (in the advanced mode).
- Specify the target settings. Select the target object (table) or multiple objects (tables) to import data to and the operation to apply when loading data.
- Configure the mapping of target columns to source columns, expressions, constants, lookups, etc.

The Task Editor consists of three pages: Source Definition, Target Definition, and Mapping Definition. Each of these pages provides settings for the corresponding actions listed above. You can switch between these pages using the corresponding icons in the editor header or by clicking the Next step or Previous step buttons.

Source Settings

Simple Task Editor Mode

Settings for a CSV File Source

If you are importing a CSV file from your computer, drag your file to the "drop here your file, or browse" area, or click the browse link and upload the CSV file to import.

If you are importing a CSV file from a file storage service, select this file in the CSV Path drop-down list. This drop-down list displays folders and files from the specified source connections. You can open and close folders by clicking them.

By default, data import is configured using single CSV files. You can also configure import using file masks for Dropbox, FTP, and SFTP storage. This allows you to specify a file mask with a date/time template. Each time the integration runs, the date/time template in the file mask is substituted with the current date and time to determine the file to load. To use file masks, click Use File Mask under CSV Mode. Select a folder to load files from, and specify the file mask and the timezone to use. See How to Import CSV Files via File Masks for more details.

When importing data to relational databases and to some of the cloud data sources, Skyvia supports importing binary data as a ZIP archive with binary files.
You can add this ZIP archive to the import together with your CSV file. For example, you can import Salesforce attachments and provide the attachment bodies as files in a ZIP archive. See How to Import Binary Data for more details. Importing binary data as a ZIP archive is supported only for target fields of certain data types. See the list of the supported data types and sources in the How to Import Binary Data topic.

Specify the CSV options if necessary. When setting CSV options, look at the Columns grid below, which displays the list of detected columns and allows you to explicitly specify column data types. If the columns are not detected correctly, this often means that the CSV Separator, Code Page, or Text Qualifier settings are wrong, and you need to set them to the correct values. If necessary, select the data types of the columns in the corresponding drop-down lists in the Columns grid. This may be necessary when you use a numeric or date/time column in Expression Mapping and its type is not determined automatically.

Click the Next step button at the bottom right of the dialog box to open the Target Definition page.

Settings for a Database or Cloud Source

Select the source table or object from the Source list. You may use the Type to search box to quickly find the necessary object. Optionally, configure filters for the data to import; see Filter Settings for more details.

If the source is a cloud application, like Salesforce, HubSpot, etc., you can optionally import only recently inserted and updated records from it using the State Filter parameter. For this, click Inserted or Updated in the State Filter group. The Inserted filter imports records created since the previous integration run (or since integration creation, if the integration has never been run). The Updated filter imports both records that were created and records that were modified since the previous integration run.
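The two state filters can be sketched as simple timestamp comparisons. This is an illustrative model only; the field names CreatedDate and ModifiedDate are assumptions, and real cloud objects name their timestamp fields differently:

```python
# Sketch of the Inserted / Updated state filters (field names invented).
from datetime import datetime

def state_filter(records, last_run, mode):
    """Keep records created (Inserted) or created-or-modified (Updated)
    since the previous integration run."""
    if mode == "Inserted":
        return [r for r in records if r["CreatedDate"] > last_run]
    if mode == "Updated":  # created OR modified since the previous run
        return [r for r in records
                if r["CreatedDate"] > last_run or r["ModifiedDate"] > last_run]
    return records

last_run = datetime(2024, 1, 1)
records = [
    {"Id": 1, "CreatedDate": datetime(2023, 6, 1), "ModifiedDate": datetime(2024, 2, 1)},
    {"Id": 2, "CreatedDate": datetime(2024, 3, 1), "ModifiedDate": datetime(2024, 3, 1)},
]
print([r["Id"] for r in state_filter(records, last_run, "Inserted")])  # [2]
print([r["Id"] for r in state_filter(records, last_run, "Updated")])   # [1, 2]
```

Note that Updated is a superset of Inserted: a record created since the last run always matches both filters.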
Note that the source object in a cloud app must have fields storing the record creation and last modification timestamps for the corresponding filters to be available. If the object lacks one or both of these fields, the corresponding filter (or both) will not be available. See more details in the How to Import Only Recently Added or Changed Data from Cloud Sources topic.

If the imported table or object has foreign keys, you can optionally join the fields of the referenced objects into the import. For this, select the checkboxes for the corresponding relations under the filter settings, after Related. After this, you can use the fields of the selected related objects in mapping.

Advanced Task Editor Mode

The advanced task editor mode is available when the Use new runtime checkbox is selected on the tab bar of your import and when the data is imported from a database or cloud app, which means you need to click Data Source database or cloud app under Source Type in the integration editor and select the corresponding database server or cloud app from the drop-down list. Then click the Add new link to open the task editor.

Note that Simple mode is selected by default in the task editor. To switch to the advanced mode, click Advanced under Editor Mode and select one of the two possible actions. Currently, Skyvia offers the Execute Command and Execute Query actions; more actions are expected in the future.

When you select the Execute Command action, you type and edit your SQL statement directly in the Command Text box. When you select the Execute Query action, you compose SELECT queries visually with a convenient query builder, without writing code. This is a good option if you are not very familiar with SQL. You can read more about the query builder and how to configure queries in the Configuring Queries with Query Builder section of the documentation.
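To give a feel for the kind of SELECT an Execute Command could run, here is a sketch that aggregates and reformats rows before import. SQLite stands in for the source database, and the table and column names are invented:

```python
# Illustrative only: SQLite models a source database; the orders table,
# its columns, and the values are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES ('acme', 100), ('acme', 50), ('globex', 70);
""")

# Aggregation plus a capitalization change, the kind of preprocessing
# the advanced mode is meant for. Import integrations accept only SELECTs.
rows = conn.execute("""
    SELECT UPPER(customer) AS Customer,
           SUM(amount)     AS Total,
           COUNT(*)        AS OrderCount
    FROM orders
    GROUP BY customer
    ORDER BY Customer
""").fetchall()
print(rows)  # [('ACME', 150.0, 2), ('GLOBEX', 70.0, 1)]
```

The aliased result columns (Customer, Total, OrderCount) are what would then be available for mapping to the target.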
Note that Skyvia supports only SQL SELECT statements in import integrations. When you import data using the advanced task editor mode, the Returning feature and its settings are not available on the Mapping Definition tab.

Target Settings

Select the object to import data to from the Target list. If you want to import data to multiple related CRM objects or database tables, select the main object first; related objects (that have a foreign key to the main one) are selected after it. You may use the Type to search box to quickly find the necessary object.

If you want to import data to multiple related objects, click the +Related button and select an object related to the main one in the new drop-down list box. Repeat this until you have added all the necessary related objects. Note that you can add not only objects related to the first one, but also objects related to any already added object, by clicking the +Related button near that object. Note that only the Insert operation can be performed into several related objects.

Select the type of operation for the task: Insert, Update, Delete, or Upsert, by clicking the corresponding button near Operation. You can find more details about the Upsert operation in the How to Perform UPSERT Operation topic. See also How to Perform UPDATE and DELETE.

Click the Next step button at the bottom right of the dialog box to open the Mapping Definition page.

Mapping Settings

On the Mapping Definition page, you configure the mapping of target columns to source columns, expressions, constants, lookups, etc. Columns with the same names in source and target are mapped automatically. You must map at least the target columns marked as Required for the task to be valid. When importing data to multiple related objects, you need to specify mapping for all the target objects (at least their Required columns) selected on the Target Definition page.
Configure the mapping for the main object first; then you can select another target object in the target table name list and map the columns of that object. Repeat this for all the objects selected on the Target Definition page.

The Mapping Definition page displays a grid with the target columns on the right and their mapping on the left. To map a target column, click its row, select the kind of mapping in the topmost drop-down list, and then specify the necessary parameters for the selected kind of mapping. For descriptions of the available mapping kinds and their parameters, go to the Mapping section. The simplest way to import data is to map the necessary target properties/columns directly to the source columns. For this, click a target property and then select the corresponding source file column in the corresponding drop-down list.

Mapping Nested Object Fields

There are three possible scenarios when working with nested object fields:

- You have similar fields with nested objects or arrays on both sides. Such a scenario can occur when you load data between two instances of the same cloud app or, for example, when you load orders between two accounting apps. In this case, use Column mapping for the fields and map them to each other. Then you can map the fields of the target nested object/array to the fields of the source nested object/array. You can see an example of such a case in the tutorial Easy Importing Invoices and Customers Between QuickBooks Online Accounts.
- You have a field with nested objects or arrays in the target only. You can load data of one object into other target object fields and load data of a related object into its field storing nested object data. This scenario can be implemented via Source Lookup mapping. For this lookup, you don't select the result column. The lookup returns whole records, and you can map their fields to the fields of the target nested object/array.
  You can see an example of such a case in the tutorial Importing Data from Salesforce Opportunities to QuickBooks Invoices.
- You have a field with nested objects or arrays in the source. In this case, you can either import the contents of this nested object into the target object directly or use data splitting to import the data of the source nested object or array into the child object on the target side. You can select the source scope in the drop-down list. This list includes the source object itself and all its fields storing nested objects and arrays. Then you can map target fields to the fields of the nested object. You can see an example in the tutorial Importing QuickBooks Invoices to Salesforce Opportunities.

Returning Settings

On the Mapping Definition page, you may also select the Use returning checkbox to enable the Returning settings. These settings allow returning the Ids (or any other fields) of the records inserted into the target back to a field (or fields) of the corresponding source records. After you select this checkbox, the Returning page is displayed; you can click the Next step button to open it.

For the Returning feature (i.e., the Use returning checkbox) to be available, the following requirements must be met:

- The integration must import data from a database or cloud app.
- The integration must use the new integration runtime (the Use new runtime checkbox is selected).
- The source object or table must allow the update operation.
- The Simple task editor mode must be used in the source settings.
- The task must have the INSERT or UPSERT operation selected on the Target Definition page. For the UPSERT operation, source records are updated only for records inserted into the target; they are not updated for existing records updated in the target.

On the Returning page, you configure the mapping that updates source records with values from the imported target records. You configure this mapping as usual.
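Conceptually, the Returning flow is: insert source records into the target, then write selected fields of the newly created target records (typically the generated Id) back onto the source records. Here is a minimal sketch with invented classes and data; the FakeSalesforce stand-in only mimics a target that generates an Id on insert:

```python
# Conceptual sketch of the Returning flow (hypothetical data and API,
# not Skyvia's implementation).

def import_with_returning(source_rows, target, returning_map):
    """Insert each source row into the target, then copy the mapped
    target fields (e.g. the generated Id) back onto the source row."""
    for row in source_rows:
        inserted = target.insert(row)            # target returns the new record
        for src_col, tgt_col in returning_map.items():
            row[src_col] = inserted[tgt_col]     # write returned value to source


class FakeSalesforce:
    """Stand-in target that generates an 18-character Id on insert."""
    def __init__(self):
        self.records, self._next_id = [], 1

    def insert(self, row):
        rec = dict(row, Id=f"001{self._next_id:015d}")
        self._next_id += 1
        self.records.append(rec)
        return rec


clients = [{"Name": "Acme", "SalesforceId": None}]
sf = FakeSalesforce()
import_with_returning(clients, sf, {"SalesforceId": "Id"})
print(clients[0]["SalesforceId"])  # the generated Salesforce-style Id
```

The returning_map here corresponds to the mapping you configure on the Returning page: source column on the left, returned target field on the right.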
Note that the following kinds of mapping are not available for returning: External ID and Lookup.

Records updated in the source by the Returning feature do not count toward your subscription limit. Only records successfully loaded into the target are counted.

If you are importing data from a database table or cloud object and would like to import related data as well, select the corresponding checkboxes next to the related objects on the Source Definition tab. When you do so, you can later select columns of the related data, as well as columns of the imported object or table, on the Mapping Definition tab.

When editing a task, on each of the steps you can reload metadata from the source and target connections by clicking the Refresh button in the bottom left corner of the Task Editor.

How to Guides

This section provides step-by-step guidelines on how to operate Import integrations effectively:

- How to Import Related Data
- How to Import Binary Data
- How to Import CSV Files via File Masks
- How to Perform UPSERT Operation
- How to Perform UPDATE and DELETE
- How to Import Only Recently Added or Changed Data from Cloud Sources

How to Import CSV Files via File Masks

When you import CSV files from Dropbox, Amazon S3, FTP, SFTP, or Azure File Storage, you can specify a file mask using a date/time template instead of selecting a file. When the integration runs, it searches for the file to import by substituting the current date into the mask.
With this functionality you can, for example, configure a regular upload of files to Dropbox and schedule the import for automatic execution, so that it imports a new CSV file every time, and you don't need to overwrite old files. All the files must have the same set of columns.

File Mask Format

The file mask uses the following format: a name fragment enclosed in braces "{}" is treated as a date/time format string. When the integration runs, the date/time of the run is converted to the specified timezone, the date/time format string is applied to it, and the resulting string replaces the brace-enclosed name fragment. The date/time format string is specified in the same format and applied in the same way as .NET Framework [custom date and time format strings](https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings).

For example, if the mask "export_{MM_dd_yyyy}.csv" is used and the integration is executed on September 19, 2019, it will try to load the file export_09_19_2019.csv. If no such file is found, the integration run fails.

Here is another example, including hours: "MyFile_{yyyy-MM-dd HH}.csv". An integration with such a mask that runs on September 19, 2019, at 18:15 will try to load the file "MyFile_2019-09-19 18.csv". The timezone used is specified in the task editor together with the mask.

It is not recommended to use seconds or fractions of seconds, or even minutes, in the mask, since it is not guaranteed that the integration will run exactly at the specified time. When Skyvia is under peak load, an integration run may be delayed for a few seconds or even minutes. Even if the difference between the integration run time and the date/time in the file name is minor, the integration run will fail.
This means that files exported with various tools that use full datetime masks, including minutes, seconds, and often even fractions of seconds, currently cannot be imported using this file mask feature. In such cases, try configuring these tools to produce file names without minutes and seconds.

How to Use a File Mask

To use the file mask, on the Source Definition page of the Task Editor, click Use File Mask under CSV Mode. Then select the folder to load the CSV file(s) from, and specify the file mask and the timezone to use. Each time the integration runs, the date/time template in the file mask is substituted with the current date and time to determine the file to load. To expand a folder in order to select its subfolder, click the icon of this folder in the list. After you have specified the folder, file mask, and timezone, click Apply.

Note that Skyvia requires a CSV file with a name corresponding to the file mask with the current date/time (at the moment you click Apply) to be present in the specified folder. This file is used to determine the columns that will be used in mapping. All the files loaded by the import task must have the same set of columns as the file used during task creation. Without clicking Apply and loading the file to determine the columns, you cannot proceed with further import task editing.

If you need to change the set of columns for an existing import task, you can edit the task and click Apply again. The list of source columns will be reloaded from the file with the name corresponding to the current date/time.

If you want to import a CSV file together with a ZIP archive with binary files, you need to specify the file mask, folder, and timezone for ZIP files separately. Click Add zip with binary data, then select the folder to load ZIP files from, and specify the file mask and the timezone to which the integration run date/time is converted before substituting it into the mask. Then click Apply.
After you have configured your file mask settings, proceed to setting up your target and mapping settings, and save the task.

How to Import Related Data

This topic covers the tools and features that simplify working with related data and allow you to automate creating relations in the target based on the relations in the source.

Explaining Data Relationships

In relational databases, data is stored in tables as rows. Skyvia presents data in cloud apps in the same way: as a set of objects storing records. For example, suppose there are objects with contact and company data. There is a relationship between company and contact records that lets the user know which contact belongs to which company. It allows you to get combined data from the two objects with consistent information about the links between contacts and companies. That is why it is important to maintain this relationship when loading data between different data sources.

One-to-Many, One-to-One, and Many-to-Many Relationships

There are several kinds of relationships: one-to-many, one-to-one, and many-to-many. In a typical case, multiple contacts can represent the same company, but the same contact cannot represent multiple companies. This is a one-to-many relationship, the most common kind. A contact, however, can be linked to more than one company. If a company can have multiple contacts and a contact can be linked to multiple companies, this is a many-to-many relationship. If there is always exactly one contact per company, this is a one-to-one relationship. Such relationships are rarer than the others.
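A minimal sketch of these relationship kinds, using invented company/contact data: one-to-many via a foreign key on the "many" side, and many-to-many via an intermediate table referencing both sides.

```python
# Invented data for illustration.
companies = [{"Id": 1, "Name": "Acme"}, {"Id": 2, "Name": "Globex"}]

# One-to-many: each contact carries a foreign key to exactly one company.
contacts = [
    {"Id": 10, "Name": "Jane", "CompanyId": 1},
    {"Id": 11, "Name": "Bob",  "CompanyId": 1},
    {"Id": 12, "Name": "Eve",  "CompanyId": 2},
]

# Many-to-many: an intermediate table holds foreign keys to both sides.
contact_companies = [
    {"ContactId": 10, "CompanyId": 1},
    {"ContactId": 10, "CompanyId": 2},   # Jane is also linked to Globex
    {"ContactId": 12, "CompanyId": 2},
]

def contacts_of(company_id):
    """One-to-many: all contacts whose foreign key references the company."""
    return [c["Name"] for c in contacts if c["CompanyId"] == company_id]

def companies_of(contact_id):
    """Many-to-many, resolved through the intermediate table."""
    ids = {r["CompanyId"] for r in contact_companies if r["ContactId"] == contact_id}
    return sorted(c["Name"] for c in companies if c["Id"] in ids)

print(contacts_of(1))    # ['Jane', 'Bob']
print(companies_of(10))  # ['Acme', 'Globex']
```

A one-to-one relationship would look like the one-to-many case with the additional constraint that each CompanyId appears in at most one contact.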
How Relationships Work

Let's see how different kinds of relationships are implemented. Almost always, a record in a database table or object has a field or a set of fields uniquely identifying this record in the table. In database terminology, such a field or set of fields is called a primary key. In most cloud apps this is a single field, named Id or something similar.

One-to-Many Relationship

A one-to-many relationship is usually implemented in the following way: the table or object on the "many" side (in our example, Contact) has a field or fields corresponding to the primary key of the "one"-side table. This field or set of fields is called a foreign key. In cloud apps, it can be called a reference. We call the object on the "one" side of the relationship the parent object, and the object on the "many" side the child object.

In cloud apps you don't see such foreign key fields in the interface. Instead, you see a link to the parent record or a list of child records. However, in most cases the relations are implemented in the same way internally: there are fields storing parent record IDs in child records, and you can access these fields in Skyvia. For example, in Salesforce there is a one-to-many relationship between the Account and Contact objects. The Contact object has the AccountID field, which references the IDs of the accounts.

One-to-One Relationship

A one-to-one relationship is usually implemented in the same way as one-to-many. In databases, if the one-to-one relationship needs to be enforced, the primary key of the child table is also the foreign key referencing the parent table.

Many-to-Many Relationship

In a many-to-many relationship, a record on each side of the relationship can reference multiple records. In databases, such relations are implemented via an intermediate table. The related tables do not reference each other; instead, the intermediate table has foreign keys to both of them.
Its records represent the relations between the records of the two related tables.

In cloud apps, many-to-many relationships can be implemented similarly to databases, via an intermediate object. However, the relation can also be implemented differently: a record may have a field with the list of related record IDs. Such a field can be on only one side or on both sides of the relationship.

Here is an example of a many-to-many relationship in HubSpot. The Engagement object has a number of many-to-many relationships with several objects. For example, it has the Associations_ContactIds field, which stores a list of related contact IDs. For users' convenience, Skyvia also provides intermediate objects for some of HubSpot's many-to-many relationships. In this case, this is the EngagementContacts object. Its records represent the relations between contacts and engagements. For example, there is a record with Engagement ID = 4760017877 and Contact ID = 201. This means that the contact with ID = 201 and the engagement with ID = 4760017877 are related.

Polymorphic Relationships

In cloud apps, there is another kind of relationship: a polymorphic relationship. Such a relationship means that a child record can reference a record in different objects with the same foreign key field. For example, a contact's parent customer record can be either an account or another contact. You can find examples of such relations in Dynamics 365 and some other cloud apps. See an example of such a relationship below.

How to Work with Related Data in Skyvia Import

Skyvia Import offers several features for working with related data:

- Joining related data: Skyvia allows you to join data of the parent objects when extracting data from an object.
- Lookup mapping: you can get values from matching records in source and target objects using lookups.
Relation mapping — create relations in the target based on the relations in the source automatically when you load data from both related objects.
External ID mapping — for some connectors, like Salesforce or Zoho CRM, you can build relations using special custom External ID fields.

Why You Should Not Use Column Mapping for Relation Fields

Mapping source foreign key fields to the target does not work, because IDs or primary keys in different systems have different values and, in most cases, even a different format. For example, suppose we load Accounts and Contacts from Zoho Desk to Salesforce. In both Salesforce and Zoho Desk, the Contact object stores references to Accounts in the AccountId field. However, this does not mean that these fields can be mapped to each other. If you use a mapping like this: The records won't be imported — you will get errors like this: “Account ID: id value of incorrect type: 114316000006594001” In Zoho Desk, IDs are large integer numbers, while in Salesforce, IDs are 15- or 18-character sequences of letters and digits. In most cases, you cannot use column mapping even when loading data between different instances of the same system, for example, between different Salesforce organizations, because the same accounts in different Salesforce organizations have different ID values. To preserve the source relations in the target, you need to use a different mapping. Let's see how to use Skyvia's features in different scenarios of importing related data.

Example 1. Loading Data From Two Related Objects

This example shows how to load data from both parent and child objects. Let's show how to load Accounts and Contacts from Zoho Desk to Salesforce once again — this time, how to do it properly.

Relation Mapping

The easiest way to preserve a relation in Skyvia when loading data from both related tables is to use relation mapping.
Just select the relation mapping from the list and select the respective relation, like this: And that's all. Skyvia will automatically build relations between accounts and contacts in Salesforce based on the relations in Zoho Desk. Relation mapping can also be used in more complex cases, when the schemas of the loaded data in the source and target are not as similar as in this example. Visit our tutorial on importing data from SQL Azure to Salesforce for a more complex example. Relation mapping always requires both related records to be loaded by the same integration, in the same integration run. If a parent or a child record is already present in the target, and you load only the opposite one, the relation won't be built, and the record will fail. Relation mapping thus suits when you import all the data from the related objects, but may fail if you load only new and changed data (if only a parent or only a child record is loaded). For example, suppose you load only new and changed accounts and contacts from Zoho Desk to Salesforce every day. An account is added to Zoho Desk, and Skyvia loads it to Salesforce. The next day, a related contact is added to this account in Zoho Desk. Skyvia loads it to Salesforce separately from its parent account. In this case, relation mapping won't work. For such cases, use one of the methods described below.

Lookup Mapping

You can use lookup mapping to find the necessary account in Salesforce to establish the relationship. To maintain the relation between accounts and contacts, you need to assign the ID of the account in Salesforce to the AccountID field of the imported contact. When the account is already inserted, we can use a lookup by some other identifying field. For example, if accounts have unique names, we can use a lookup by account name. You can use any identifying column, or multiple columns, with lookup mapping.
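The idea of a lookup by an identifying field can be sketched like this (all values are invented; in Skyvia this is configured in the mapping editor, not in code):

```python
# Sketch of a lookup by an identifying field: resolve the Salesforce
# AccountId for each imported contact from the account name, which is
# unique in both systems. IDs and names are invented.
sf_accounts = {  # target: account name -> Salesforce ID
    "Acme Corp": "0015g00000AbCdEfGH",
    "Globex": "0015g00000IjKlMnOP",
}
zoho_contacts = [  # source records carrying the identifying field
    {"LastName": "Smith", "AccountName": "Acme Corp"},
    {"LastName": "Jones", "AccountName": "Globex"},
]
imported = [
    {"LastName": c["LastName"], "AccountId": sf_accounts[c["AccountName"]]}
    for c in zoho_contacts
]
print(imported[0]["AccountId"])  # 0015g00000AbCdEfGH
```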
You can configure your import in two ways. The first way is to use a combination of source and target lookups: first, find the corresponding account in the source by the Zoho Desk AccountID value and get its name, and then use this name to get the Salesforce ID of the account. See Two-Level Lookup for more information. The mapping would look like the following: The second, easier way is to use the import's ability to query parent objects together with child objects and query the account data from Zoho together with the contacts. You need to select the corresponding relation in the Source Definition of the import task: This allows you to use parent object fields in mapping and thus greatly simplify your lookup: Lookup mapping for loading related data has the following requirement. Since we look for the IDs of parent records in the target when importing child records, the parent records must be imported to the target prior to the child ones. You can do the following to manage the order of loading data: Order your import tasks so that tasks loading parent object data are higher than tasks loading child object data. Select the Preserve task order checkbox in your import. Relation mapping does not have this requirement, because for relation mapping, Skyvia loads parent data first by default.

External ID Mapping

Some connectors support External ID mapping. External IDs are custom fields that uniquely identify records. They are used to store IDs from an external system, which is why they are called External IDs. Currently, the External ID feature is available in the Salesforce and Zoho CRM connectors. To use External ID mapping, we need to select the External ID field to use (an object can have multiple External ID fields) and map it to the AccountID field. As for our scenario of Zoho Desk data import to Salesforce, Skyvia has the following requirements for using External ID mapping: The Account object must have an External ID field.
Note that External ID fields are custom, so there is no such field in Salesforce by default. The field should have a suitable type to store Zoho Desk IDs (Number). When you import accounts, you must load Zoho Desk IDs to this field.

Example 2. Loading Child Object Data

This example demonstrates how to load child records to a target that already contains parent records. It also demonstrates related data import when some of the child records don't have a parent record. We load Zendesk tickets to Salesforce opportunities. The Zendesk Tickets object has a reference to the Organization object, and the Salesforce Opportunity object has a reference to Account. Suppose we already have the corresponding accounts in Salesforce and need to import only tickets. Besides, Zendesk tickets often don't reference an organization; the OrganizationID field is NULL in this case. The AccountID field in the Opportunity object can also be NULL. Let's see how such cases are processed in Skyvia. In this case, we cannot use relation mapping, because we load only child records. We can use lookups or External ID mapping. Relation mapping supports cases when some of the child records don't have parent records; however, it still requires parent records to be loaded in the same integration with child records.

Lookup Mapping

As in the previous example, lookup mapping can be used in two ways: a two-level lookup in the source and in the target, or a one-level lookup together with joining parent organizations when extracting data from the source. When the foreign keys can be NULL, we need an additional step when configuring the lookup: we need to select the Set null when no match found checkbox in the Lookup Options .
If you don't select this lookup option, you will receive the following error for records with a NULL foreign key field: “No record found for the specified lookup condition.” Here is the mapping for the first way: Note that in this case we must select this lookup option for both levels of the lookup. As in the first example, the second way requires joining Organization data when extracting data. And then configure the lookup like this:

External ID

For loading data to Salesforce, we can use External ID mapping. There are no additional requirements for cases when a foreign key field can be empty, so the mapping is similar to the previous example:

Example 3. Loading Data from Related Objects into a Single Target Object

This example shows the case when the target object contains information from both parent and child objects in the source. Let's consider a case when we load Salesforce contacts into a database. The target SQL Server Customers table has the CompanyName field, and we need to load the name of the contact's account to it. In this case, the easiest way is to join the account data from the source in your import: And then we can map the corresponding target field to the name of the account: Alternatively, you can use source lookup mapping and look for the account Name by its Id, but we recommend joining data by relation from the source, because it is simpler. When you need to import data in the opposite direction, from one source object into multiple related target objects, you can use Data Splitting . The above examples demonstrated how you can load data with one-to-many relationships in Skyvia. The same applies to one-to-one relationships as well.

Example 4. Polymorphic Relationship

This example shows how to work with polymorphic and self-referencing relations in Skyvia. Let's consider an example of migrating contacts and accounts between Dynamics 365 and Salesforce.
Dynamics 365 contacts have the parentcustomerid field, which can reference both account and contact records. We can consider cases for both directions: loading data from Salesforce to Dynamics 365 and vice versa. In Salesforce, there are two corresponding relationships: Contact records can reference Account records via the AccountId field. Contact records can reference other contacts via the ReportsToId field. We need to use two data loading tasks: one task loads contacts that reference accounts, and another one loads contacts that reference other contacts. Let's see how we can filter them.

Relation Type Field

Skyvia provides an additional virtual field for such a relation that determines the type of the parent record. Its name starts with the name of the corresponding foreign key field and has the ‘$type’ suffix. In this example, this is the parentcustomerid$type field, which can have the value ‘account’ or ‘contact’. When importing data from Dynamics 365, use this field to determine whether the source contact record references an account or a contact as a parent. When importing data to Dynamics 365, assign the ‘account’ or ‘contact’ value to it to determine what object the target record will reference.

Splitting Contact Data with Filters

When loading data from Salesforce to Dynamics 365, we can filter Salesforce contacts by checking whether the corresponding foreign key field is NULL. One task will load contacts with a non-NULL AccountID, and another one those with a NULL AccountID. Filter in the first task: Filter in the second task: When you load data in the opposite direction, from Dynamics 365 to Salesforce, filter contacts by the virtual parentcustomerid$type field. One task loads contacts with parentcustomerid$type equal to ‘account’, and another one with parentcustomerid$type equal to ‘contact’.
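The splitting logic above can be sketched as plain filters (records and IDs are invented; in Skyvia you express these as task filters, not code):

```python
# Sketch of splitting contacts into two load tasks.
# Salesforce -> Dynamics 365: filter by which foreign key field is NULL.
sf_contacts = [
    {"LastName": "Smith", "AccountId": "0015g00000AbCdEfGH", "ReportsToId": None},
    {"LastName": "Jones", "AccountId": None, "ReportsToId": "0035g00000IjKlMnOP"},
]
task1 = [c for c in sf_contacts if c["AccountId"] is not None]  # reference accounts
task2 = [c for c in sf_contacts if c["AccountId"] is None]      # reference contacts

# Dynamics 365 -> Salesforce: filter by the virtual parentcustomerid$type field.
d365_contacts = [
    {"lastname": "Smith", "parentcustomerid$type": "account"},
    {"lastname": "Jones", "parentcustomerid$type": "contact"},
]
to_accounts = [c for c in d365_contacts if c["parentcustomerid$type"] == "account"]
to_contacts = [c for c in d365_contacts if c["parentcustomerid$type"] == "contact"]

print([c["LastName"] for c in task1])        # ['Smith']
print([c["lastname"] for c in to_contacts])  # ['Jones']
```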
Relation Mapping

Whenever you load both parent and child records together, we recommend using relation mapping for preserving relations. Relation mapping is fully supported for such a polymorphic relationship, as well as for a self-referencing relationship (when an object references itself). You can use it both for loading data from Salesforce to Dynamics 365: and for loading data from Dynamics 365 to Salesforce:

What If Only Part of the Contacts Is Loaded

Let's also consider a case when you load, for example, only new contacts and accounts. In this case, some parent records may be already loaded. Relation mapping won't work in this case. Lookup (and External ID mapping in the case of loading to Salesforce) will work for contacts referencing accounts. However, it may not always work for a self-referencing relationship (contacts referencing contacts), because child contacts can be loaded prior to the parent contacts. In this case, we recommend the following solution. First, add an import task that loads contacts without mapping the foreign key fields at all. Then add extra tasks that update the loaded contacts and set the foreign key fields using lookups. Don't forget to select the Preserve task order checkbox. This ensures that parent contacts are already loaded when we create the relationship. Note that polymorphic relationships are not supported for all connectors.

Example 5. Many-to-Many Relationship

This example demonstrates the import of Zendesk tickets to Insightly CRM as opportunities. The Zendesk Tickets object references the Users object: In Insightly CRM, there is a many-to-many relationship between the Opportunities and Contacts objects. Both the Opportunities and Contacts objects have the Links field with the list of links to other objects. There are also the corresponding intermediate objects ContactLinks and OpportunityLinks . Note that it is enough to add a record into one of these objects to add a relationship.
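That note can be sketched simply: creating one record in an intermediate link object is all it takes to relate two records (the IDs and field names below are illustrative, not Insightly's actual schema):

```python
# Sketch: relating an opportunity and a contact by adding a single record
# to an intermediate link object. IDs and field names are invented.
opportunity_links = []  # stand-in for the OpportunityLinks object

def add_link(opportunity_id, contact_id):
    opportunity_links.append(
        {"OpportunityId": opportunity_id, "ContactId": contact_id}
    )

add_link(501, 201)  # one record is enough to create the relationship
print(opportunity_links)  # [{'OpportunityId': 501, 'ContactId': 201}]
```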
We suppose that Zendesk users are already synced with Insightly CRM contacts. When importing Zendesk tickets to Insightly CRM opportunities, we need to link the resulting opportunity to the contact corresponding to the ticket requester in Zendesk.

Integration Configuration

When we load tickets, we use Data Splitting and load data into the Opportunities and OpportunityLinks objects. We need to configure mapping for both target objects. In the mapping for the OpportunityLinks object, the foreign key to the Contacts object can be mapped using a lookup. The foreign key to the parent Opportunities object is filled automatically when using data splitting. As we can see, when a relationship is implemented via an intermediate object, you can preserve these relationships when loading data using Import. If such a relation is implemented differently, you may need to use more advanced tools, for example, Data Flow ." }, { "url": "https://docs.skyvia.com/data-integration/import/how-to-guides/importing-binary-data.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import How-to Guides How to Import Binary Data

Methods of Binary Data Import

Skyvia supports importing binary data to all supported databases and cloud data warehouses and to some of the cloud applications, for example, Salesforce, Marketing Cloud, Magento, Shopify, and Marketo. When importing CSV files, you can use two ways to import data to binary columns: using encoded values in the CSV file itself, or using a zip archive with the binary files.

Importing Encoded Values

The first way is to add encoded binary values to the CSV file. For databases, you need to encode them in the hexadecimal format, and for cloud data warehouses and cloud apps, you need to use base64 encoding. However, it may be inconvenient to place the data for such columns in a CSV file.
For example, if you want to import a number of pictures to Salesforce attachments, it definitely won't be convenient to convert them to the hexadecimal format and place the result into a CSV file.

Importing a Zip Archive with Binary Files

The second way is to import a set of binary files to a binary field by uploading them compressed into a zip archive. You just need to upload a zip archive with these files together with the CSV file with the other data. The CSV file must specify which attachment file to import for each data row. The easiest way to specify this information is to add a column with the attachment file names to the CSV file. However, you may use more complex ways, since Skyvia allows specifying the corresponding file names using Expressions .

Supported Field/Column Types

Here are examples of field/column types that are supported for binary data import/export. Note that the list of cloud sources supporting binary import is not complete.

Databases:
SQL Server: image, binary, varbinary
MySQL: binary, varbinary, blob
PostgreSQL: bytea
Oracle: raw, blob, long raw, bfile

Cloud Data Warehouses:
SQL Azure Data Warehouse: binary, varbinary
Google BigQuery: bytes
Redshift: bytea

Cloud Applications:
Salesforce: base64
Salesforce Marketing Cloud: ByteA
Marketo: Binary
Shopify: Binary
Magento 2.x: only the ImageContent field of the ProductMedias object
Zoho CRM: the Photo field of the Contacts, Leads, Accounts, Products, and Vendors objects and the Content field of the attachment objects: LeadAttachments, AccountAttachments, CampaignAttachments, CaseAttachments, ContactAttachments, DealAttachments, MeetingAttachments, InvoiceAttachments, PriceBookAttachments, ProductAttachments, PurchaseOrderAttachments, QuoteAttachments, SalesOrderAttachments, SolutionAttachments, TaskAttachments, VendorAttachments

How to Import Binary Data

To import binary files, upload the zip archive together with the CSV file on the Source Definition page of the Task Editor.
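The two encodings mentioned under “Importing Encoded Values” can be produced like this (a generic sketch with invented bytes; the actual column layout of an import file depends on the target):

```python
import base64

# Encoding binary data for a CSV cell: hexadecimal for databases,
# base64 for cloud data warehouses and cloud apps. Bytes are invented.
data = b"\x89PNG..."  # pretend image bytes

hex_value = data.hex()                               # for databases
b64_value = base64.b64encode(data).decode("ascii")   # for cloud apps/DWHs

print(hex_value)  # 89504e472e2e2e
print(b64_value)  # iVBORy4uLg==
```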
If you are importing local file(s) from your computer, drag your file to the “drop here your file, or browse” area or click the browse link and upload the CSV file to import. When the CSV file is uploaded, select the Add zip with binary data checkbox under Source CSV and drag your ZIP file to the “drop here your file, or browse” area or click the browse link. The maximum size of a ZIP file that can be uploaded is 400 MB. If you are importing files from FTP or a file storage service, select the file in the CSV Path drop-down list. Then select the Add zip with binary data checkbox and find the ZIP file you want to upload in the ZIP Path drop-down list. This drop-down list displays folders and files from the specified source connections. You can open and close folders by clicking them. Then, when mapping a binary field, use the special Zip File mapping and specify an expression returning the name of the attachment file to import for the current data row. For more detailed information about binary data import with a zip archive, see the Importing Accounts with Binary Attachments tutorial." }, { "url": "https://docs.skyvia.com/data-integration/import/how-to-guides/importing-only-recently-added-or-changed-data-from-cloud-sources.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import How-to Guides How to Import Only Recently Added or Changed Data from Cloud Sources

With Skyvia, you can easily propagate data changes, such as inserting new records to a cloud data source or updating existing ones, to other data sources via Import . When importing data from cloud sources, Skyvia allows creating integrations that import not all the cloud data from the source object but only the records created or modified since the integration creation or since the previous integration execution.
To create such a task in Import, you simply need to click the Inserted or Updated button on the Source Definition page of the Task Editor. All other settings are configured as usual. The Inserted button imports records created since the last integration run (or since the integration creation if the integration has never run). The Updated button imports both records that were created and records that were modified since the last integration run. For example, you can easily create an integration that adds all the new Salesforce leads as Mailchimp subscribers to a Mailchimp list, or creates QuickBooks customers for new Salesforce accounts, etc. After this, you can schedule it to execute every several minutes, and subscribers will be automatically added for all the new Salesforce leads. This functionality allows you to create trigger-action-like integrations or simply load all the new records and updates from one source to another automatically. However, it is not a full-featured replacement for data synchronization , because an import cannot perform actions for recently deleted records. Importing recently added or updated records is supported only when the source is a cloud application. It also requires the source object to have fields that store timestamps of the creation time and last modification time. For example, in Salesforce, these fields are CreatedDate and LastModifiedDate. However, there is a workaround for relational databases, described below. Please note that in some cloud sources, only some objects have these fields, and thus, for some objects this feature is not available. For example, it cannot be used when importing data from Salesforce Marketing Cloud data extensions. Some data sources, like Podio or SendPulse, do not have such fields at all.
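The semantics of the two buttons can be sketched as filters over the timestamp fields (invented records; Salesforce-style field names):

```python
from datetime import datetime, timezone

# Sketch of the Inserted vs. Updated semantics described above,
# using Salesforce-style CreatedDate / LastModifiedDate fields.
last_run = datetime(2024, 1, 2, tzinfo=timezone.utc)
leads = [
    {"Name": "Old",  # created before the last run, changed after it
     "CreatedDate": datetime(2024, 1, 1, tzinfo=timezone.utc),
     "LastModifiedDate": datetime(2024, 1, 3, tzinfo=timezone.utc)},
    {"Name": "New",  # created after the last run
     "CreatedDate": datetime(2024, 1, 3, tzinfo=timezone.utc),
     "LastModifiedDate": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
inserted = [r["Name"] for r in leads if r["CreatedDate"] > last_run]
updated = [r["Name"] for r in leads if r["LastModifiedDate"] > last_run]
print(inserted)  # ['New']
print(updated)   # ['Old', 'New']
```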
Workaround for Relational Databases

To import only recent data from relational databases, you can use the following workaround. Add columns for storing timestamps of the creation time and/or last modification time to the database table you want to import data from. The date and time in these columns must be in UTC. You also need to create triggers that assign the current timestamp to these columns whenever a row is inserted or modified. Then you can use source data filters and add filters on these columns that use the LAST_RUN relative constant. Thus, you can import only records with a creation or modification date and time greater than the date and time of the previous integration run. This relative constant is equal to the timestamp of the previous integration run or, if an integration has not been run yet, to the date and time of its creation.

Example: Adding New Salesforce Leads to Mailchimp List

Suppose we need to integrate Mailchimp subscribers with Salesforce leads. We already have subscribers for existing leads and want to create subscribers from new ones automatically. So, to create such an import, let's do the following.

Creating an Integration

Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in Source Type , click Data Source database or cloud app and select a Salesforce connection from the drop-down list. If you have not created the connection yet, see the Salesforce topic. Under Target , in the Connection list, click Select target and select a Mailchimp connection from the drop-down list. If you have not created the connection yet, see the Mailchimp topic. Click the Add new link.

Configuring Source

In the Source list, select Lead .
Click Inserted to import only recently inserted records. Click Next step .

Configuring Destination

In the Target list, select ListMembers . Click Next step .

Configuring Mapping

Click ListId . Click Column and then select Target Lookup from the drop-down list. In the Lookup Object list, select Lists . In the Result Column list, select Id . In the Lookup Key Column list, select the Name target column. Click Column and then select Constant from the drop-down list. In the bottom drop-down box, enter the name of the list to add new subscribers to. Click the Last Name column and map it to the Last Name source column using column mapping . Click the First Name column and map it to the First Name source column using column mapping . Click the EmailType column and map it to the constant “Html” (without quotes) using constant mapping . Click the Status column and map it to the constant “Subscribed” (without quotes) using constant mapping . Click Save .

Scheduling Integration

Now all you need is to schedule the integration for automatic execution so that it creates Mailchimp subscribers from new Salesforce leads automatically. We will configure it to run every 5 minutes on workdays from 8:00 to 18:00. Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select the checkboxes for all the workdays. Click Occur once at and then select Recur every in the list. Enter 5 in the corresponding box. Click Minutes . Click Set time restrictions . Enter 8:00 and 18:00 in the corresponding boxes. Click Save to schedule the integration execution." }, { "url": "https://docs.skyvia.com/data-integration/import/how-to-guides/performing-update-and-delete.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration.
Documentation Data Integration Import How-to Guides How to Perform UPDATE and DELETE

Skyvia supports not only the INSERT operation for data import, but also UPSERT , UPDATE, and DELETE. To create an import task that performs UPDATE or DELETE, configure the source settings and click Next step as usual. Then, in the target settings , select the target object on the Target Definition page, and click Delete or Update . After this, in the Mapping settings , map the target columns. Skyvia determines the records to update or delete by their ID/primary key values. This means that primary key columns must be mapped. They are marked as Required for the DELETE and UPDATE operations. In fact, for DELETE, you only need to map the ID (primary key) columns. For the UPDATE operation, you also need to provide values for the columns you want to change, and you can additionally map any target object columns. Skyvia requires the target table/object to have either a primary key or a unique column for the UPDATE or DELETE operations.

What if ID/Primary Key Values Are Unknown?

When ID/primary key values are known and present in the source, you can simply specify them using column mapping. However, Skyvia allows performing UPDATE or DELETE operations even without knowing the ID or primary key values in the target. When you don't have target IDs in the source, you can specify them via Lookup Mapping and get the IDs or primary key values from the target object itself by some other field or set of fields that uniquely identifies a record. Alternatively, you can use Export or Query to retrieve the necessary ID values.

Google BigQuery Specifics

Google BigQuery does not have unique or primary keys, but Skyvia still supports UPDATE and DELETE operations for it. You can manually specify the fields that will be considered key fields. If the target of an import is Google BigQuery, after you select the INSERT or UPDATE operation in the task editor, the Key Columns box is displayed.
Click this box to display the list of the target table columns. Select a key column from this list. If you need a key of more than one column, repeat the operation. To remove a selected column from the list of key columns, click its Cross button.

Example: Updating Product Prices in Salesforce

Suppose we need to update product prices in the Salesforce standard pricebook, and we only have a CSV file with product names and prices, without knowing the IDs of the corresponding PricebookEntries. Knowing the Pricebook2 ID value and the product name, we can uniquely identify the PricebookEntry to update. We use lookup mapping for the ID field, using these two fields as lookup key columns. The product names (which are stored in the PricebookEntry Name field) are specified as source column values. To retrieve the Pricebook2Id value of the standard pricebook, we will use a second-level lookup on the Pricebook2 object to retrieve the ID value by the pricebook name.

Creating an Integration

Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Target , in the Connection list, click Select target and select a Salesforce connection from the drop-down list. If you have not created the connection yet, see the Salesforce topic. Click the Add new link.

Configuring Source

Drag the source CSV file to the “drop here your file, or browse” area. Configure CSV Options if necessary. Click Next step .

Configuring Destination

In the Target list, select PricebookEntry . Click Update . Click Next step .

Configuring Mapping

First, we need to map the Id column of the PricebookEntry.
We can uniquely identify the necessary PricebookEntries by the product names, which are stored in the Name field of the PricebookEntry table, and by the Pricebook2Id field. So, we will use a lookup of these two columns to map the Id field. In case we do not know the Id of the standard pricebook, we can use a second-level lookup on the Pricebook2 object by the name “Standard Price Book”. Click Id . Click Column and then click Target Lookup . In the Lookup Object drop-down list, select a target object to get the value from (in our example, the same PricebookEntry table). In the Result Column drop-down list, select a column from the Lookup Object to get the result value for the mapped target column from (in our example, Id ). In the Lookup Key Column drop-down list, select the target lookup key column (in our example, Name ). In the bottommost drop-down list (under the Column drop-down list), select the column with product names (in our example, Product name ). Click the +Add Lookup Key link at the bottom of the mapping area. In the Lookup Key Column drop-down list, select the second lookup key column (in our example, Pricebook2Id ). In the Column drop-down list, select Target Lookup . In the new Lookup Object drop-down list, select a target object to get the value from (in our example, the Pricebook2 table). In the new Result Column drop-down list, select a column from the Lookup Object to get the result value for the mapped target column from (in our example, Id ). In the new Lookup Key Column drop-down list, select the lookup key column (in our example, Name ). In the new Column drop-down list, select Constant . In the box under it, enter “Standard Price Book” (without quotes). Click UnitPrice below. Select the Price column in the corresponding drop-down list. After this, the import task is ready. Click the Save button to save the task. Now you can run it and update prices for the products."
}, { "url": "https://docs.skyvia.com/data-integration/import/how-to-guides/performing-upsert-operation.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import How-to Guides How to Perform UPSERT Operation

Skyvia supports the UPSERT operation in data import for all the supported cloud applications and relational databases, except for Google BigQuery and Amazon Redshift. The UPSERT operation requires the primary key to be auto-generated.

What is UPSERT?

The UPSERT operation updates a record if it exists or inserts a new record. This allows you to avoid inserting duplicate data. You need to map the target ID/primary key columns to perform UPSERT. In Skyvia, UPSERT determines what action to perform in the following way: if a NULL value is specified for the ID or primary key, the UPSERT operation inserts the record, and if a non-NULL value is specified, the UPSERT operation tries to update the record with the specified ID or primary key. Skyvia does not actually check whether such a record exists, and if you provide invalid ID/PK values, this will result in failed records.

What if ID/Primary Key Values Are Unknown?

If you import CSV files, you can get the necessary values using Export or Query . However, this is often not an option, especially when you import data from a database or cloud application directly. For this, Skyvia provides a more convenient way to perform the UPSERT operation, even without knowing the ID values. You can use Lookup Mapping for ID/primary key columns and get the IDs or primary key values from the target object itself by some other field that uniquely identifies a record. When using lookup mapping for ID or primary key columns in UPSERT , do not forget to select the Set null when no match found checkbox in the Lookup Options . Otherwise, the lookup will produce errors if no such record is found, and there will be failed records instead of newly inserted ones.
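The decision logic just described can be sketched like this (a simplified model, not Skyvia's actual implementation; IDs are invented):

```python
# Simplified model of the UPSERT decision: NULL ID -> insert,
# non-NULL ID -> try to update, invalid ID -> failed record.
target = {"0035g00000AbCdEfGH": {"LastName": "Smith"}}

def upsert(record_id, fields):
    if record_id is None:
        new_id = f"new-{len(target) + 1}"  # stand-in for an auto-generated ID
        target[new_id] = dict(fields)
        return "inserted"
    if record_id not in target:
        return "failed"                    # existence is not checked upfront
    target[record_id].update(fields)
    return "updated"

r1 = upsert("0035g00000AbCdEfGH", {"LastName": "Smyth"})  # non-NULL, existing
r2 = upsert(None, {"LastName": "Jones"})                  # NULL -> insert
r3 = upsert("001INVALID", {"LastName": "Nope"})           # invalid -> failed
print(r1, r2, r3)  # updated inserted failed
```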
How to Configure UPSERT In order to create an UPSERT import task , specify source settings as you need. Then, in target settings , select the target object and click Upsert . After this, in Mapping settings , map the target columns. Primary key columns must be mapped; they are marked as Required for UPSERT. If you have the corresponding primary key values in your source, map PK columns to the corresponding source columns using Column Mapping , and make sure that for records you want to perform INSERT for, there are NULL values in these columns. Otherwise, you can use Lookup Mapping . UPSERT in Salesforce and Zoho CRM Unlike other data sources, Salesforce and Zoho CRM offer native support for UPSERT. They perform UPSERT by an External ID field. When you perform UPSERT to a Salesforce or Zoho CRM object with an External ID field in Skyvia, you can choose whether to use the ID field as described above, or External ID. If you select Use External ID , you need to select the External ID field to use since a Salesforce/Zoho CRM object can have more than one custom External ID field. When using External ID, you do not need to map the ID field. Instead, you need to map the selected External ID field, which is marked as Required on the Mapping Definition page. Example: Upserting Salesforce Contacts from Mailchimp Suppose we need to import subscribers from a mailing list in Mailchimp to Salesforce contacts, and some of the corresponding contacts are already present in Salesforce. We do not want to make duplicate records, so we perform UPSERT instead of a usual INSERT. So, to perform such an import, let\u2019s do the following: Creating an integration Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations.
Under Source , in Source Type , click Data Source database or cloud app and select your Mailchimp connection from the drop-down list. Under Target , in the Connection list, select your Salesforce connection from the drop-down list. Click the Add new link. Configuring Source When configuring the source data, we need not only to select the source table, but also to configure data filtering in order to import subscribers only from one of the lists. In our example, the name of this list is \u201cTest list\u201d. In the Source list, select ListMembers . Click +Condition . In the first (leftmost) list of the condition, select Lists . In the second list of the condition, select Name . Enter \u201cTest list\u201d in the rightmost box of the condition. Click Next step . Configuring Destination In the Target list, select Contact . Click Upsert . Click Next step . Configuring Mapping First, we need to map the Id column of Salesforce Contact. As we can see, it is automatically mapped to the Id column of the Mailchimp ListMembers table, because these columns have the same name. However, Salesforce and Mailchimp Ids have different formats and are completely different things, so we cannot map Salesforce Ids to Mailchimp Ids. We will use the lookup mapping to map the target Id field and determine the necessary Contact Ids by email. Click Id. Click Column and then select Target Lookup from the drop-down list. In the Lookup Object list, select Contact . In the Result Column list, select Id . In the Lookup Key Column list, select the Email target column. In the bottom drop-down list, select the Email source column. Click Options . Select the Set null when no match found checkbox. Click the LastName target column and map it to the Last Name source column using column mapping . In the same way, map the FirstName target column to the First Name source column. Click Save . Now our integration is ready, and you can execute it."
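Putting the pieces above together, the following Python sketch models how lookup by Email with Set null when no match found feeds the UPSERT decision (sample contacts and emails are hypothetical; this is an illustrative model, not Skyvia's code):

```python
# Illustrative model: resolve the target Contact Id by Email. A missed lookup
# with "Set null when no match found" enabled returns None, so UPSERT inserts
# a new Contact instead of producing a failed record.

existing_contacts = [
    {"Id": "003A", "Email": "ann@example.com"},  # hypothetical existing contact
]

def lookup_id_by_email(email, set_null_when_no_match=True):
    for c in existing_contacts:
        if c["Email"] == email:
            return c["Id"]
    if set_null_when_no_match:
        return None               # null -> UPSERT inserts a new record
    raise LookupError(email)      # otherwise the record would simply fail

subscribers = ["ann@example.com", "bob@example.com"]
actions = []
for email in subscribers:
    contact_id = lookup_id_by_email(email)
    actions.append("UPDATE" if contact_id else "INSERT")
print(actions)  # ['UPDATE', 'INSERT']
```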
}, { "url": "https://docs.skyvia.com/data-integration/import/index.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import Import Overview Skyvia Import is a flexible tool for data migration between different platforms. It allows loading data from CSV files, cloud apps or relational databases to other cloud apps or relational databases. With Skyvia you can easily configure data migration even between platforms with a different data structure. Multiple Import Tasks In the system, each import can contain one or more import tasks and, thus, import data from one or more source files, tables or objects to other ones as a single import operation. Source Data Filtering When you load data from a database or a cloud app, you can filter source data to load. See Filter Settings for more information. Importing Query Results If simple settings for source data are not enough, you can use the advanced task editor mode, which allows importing the results of an SQL SELECT statement or a visually built query. Support for All DML Operations Skyvia Import supports all DML operations (including UPSERT) and can be used to perform massive data updates. For more information, check also How to Perform UPDATE and DELETE . Data Splitting Skyvia also allows loading data from a single file, database table, or cloud object to several related cloud objects or database tables (one-to-many) using the Data Splitting feature. Skyvia builds the corresponding relations between the imported objects or records automatically (check tutorial ). Returning When you load data from a database or a cloud app, the Returning feature allows getting IDs or any other fields from the imported records back to the source. For example, if you load records from a database to Salesforce and want to have the Salesforce IDs (or any other generated fields) loaded back to the database, you can use the returning feature for it.
Note that it is available only for INSERT and UPSERT operations. Import Tutorials Skyvia provides the following tutorials on data import: Importing Contacts for Existing Accounts Importing Products with Prices from Dropbox Importing Related Customer, Order, and Product Data Importing Accounts with Binary Attachments Importing Deals and Contacts from Zoho CRM to HubSpot Preserving Relations Importing Data from Salesforce Opportunities to QuickBooks Invoices Importing QuickBooks Invoices to Salesforce Opportunities Easy Importing Invoices and Customers Between QuickBooks Online Accounts" }, { "url": "https://docs.skyvia.com/data-integration/import/tutorials/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import Import Tutorials In this section, you can get familiar with the following tutorials on data import operations: Importing Contacts for Existing Accounts This tutorial describes how to import a file with Contacts for Accounts already existing in the Salesforce database. It demonstrates different ways of specifying IDs of existing master Salesforce objects when importing child objects. Importing Products with Prices from Dropbox This tutorial describes how to import a file with product prices from Dropbox to the Salesforce Product2 and PricebookEntry objects. It demonstrates a one-to-many import operation (data splitting). Importing Related Customer, Order, and Product Data This tutorial describes how to import Product, Customer, Order, and Order Detail tables from the Microsoft standard Northwind database on SQL Azure. It demonstrates a complex many-to-many import operation (when you import data from several tables with foreign key relations between them to several related Salesforce objects), creating a connection to SQL Azure, and how to preserve data relations between the source tables. 
Importing Accounts with Binary Attachments This tutorial describes how to import accounts together with attachments that are imported as a set of binary files. It demonstrates import of binary files. Importing Deals and Contacts from Zoho CRM to HubSpot Preserving Relations This tutorial describes how to set the correct relation between Contacts and Deals to successfully import data from Zoho CRM to HubSpot. Importing Data from Salesforce Opportunities to QuickBooks Invoices This tutorial describes how to import data from Salesforce Opportunities to QuickBooks Invoices using the nested objects feature. Importing QuickBooks Invoices to Salesforce Opportunities This tutorial describes how to import data from QuickBooks Invoices to Salesforce Opportunities using the nested objects feature. Easy Importing Invoices and Customers Between QuickBooks Online Accounts This tutorial describes how to import Customers and Invoices from one QuickBooks account to another using the nested objects feature. Importing Products with Variants from the relational Database into Shopify This tutorial describes importing products together with product variants using the nested objects feature." }, { "url": "https://docs.skyvia.com/data-integration/import/tutorials/how-to-import-accounts-with-binary-attachments.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import Tutorials Importing Accounts with Binary Attachments In this tutorial, we will show how to import binary files together with a CSV file. Suppose we need to import a CSV file with Salesforce accounts and attachments for these accounts. Attachment data are stored as a field of the base64 type. It would be impractical to convert all the attachment files to base64 values and put them inside a CSV file. Instead we will import them as is \u2014 as a collection of binary files. 
To import a collection of binary files together with a CSV file, you need to zip them and add information on which file corresponds to which data row to the CSV file. The simplest way would be to add a column, containing a file name, to the CSV file. In our case, we have two choices. If we import a fixed number of attachments for each imported account, we may add all the necessary columns for attachments to the CSV file, containing the accounts. However, if the number of attachments for a single account can be different, you will need to create a separate CSV file with information about attachments. In the latter case, you will need to specify an account to import an attachment for. To avoid using Salesforce IDs in the CSV file, we assume that imported account names are unique and use account names to specify the attachment owners. We will need the following columns in the Attachments.csv file: AccountName \u2014 will be used to specify the account owner using Relation mapping ContentType \u2014 this column determines the type of the imported attachment file (can be omitted if all the imported attachments have the same type) FileName \u2014 name of the corresponding attachment file in the zip archive. When importing a single CSV file, containing information on accounts and attachments, you will need to add columns, specifying the names of the files to import as attachments. The number of these columns must be the same as the number of attachments per account. In case the attachment file types are different, you will also need the same number of columns, specifying the content type. This tutorial demonstrates both scenarios: with a single CSV file and with separate CSV files. Creating Connection Regardless of the scenario you choose, first you need to create a connection to the target Salesforce database, if you haven\u2019t created it before.
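The packaging described above (a zip archive of the binary files plus an Attachments.csv whose FileName column ties each file to a data row) could be prepared with a short Python script; the file and account names below are made up for illustration:

```python
# Sketch: build the zip of attachment files and the Attachments.csv with
# AccountName, ContentType and FileName columns, as this tutorial expects.
# Account names, file names, and byte contents are hypothetical samples.
import csv, io, zipfile

attachments = [
    ("Acme Corp", "image/png", "acme_logo.png", b"\x89PNG..."),
    ("Globex",    "image/png", "globex_logo.png", b"\x89PNG..."),
]

# Write the zip archive containing the binary files.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    for _, _, file_name, data in attachments:
        zf.writestr(file_name, data)

# Write the CSV that tells which file belongs to which account.
csv_buf = io.StringIO()
writer = csv.writer(csv_buf)
writer.writerow(["AccountName", "ContentType", "FileName"])
for account, content_type, file_name, _ in attachments:
    writer.writerow([account, content_type, file_name])

print(csv_buf.getvalue().splitlines()[0])  # AccountName,ContentType,FileName
```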
To create a connection to Salesforce, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to Salesforce1 . From the Environment drop-down list select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list select the authentication method for connecting to Salesforce. If you don\u2019t mind storing your Salesforce credentials on our Skyvia server, select User Name & Password . If you prefer not to store your credentials, select OAuth 2.0 . If you have selected User Name & Password , on the previous step, specify your Salesforce account e-mail, password, and security token. Otherwise, if you have selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The result OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click the Create Connection button to create the connection. Importing Accounts and Attachments stored in Single CSV File Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Target , in the Connection list, click Select target and select Salesforce1 in the drop-down list. Click the Add new link to open the Task Editor. 
Click browse and select the CSV file with Accounts data to import. Select the Add zip with binary data checkbox. Click browse and select the zip file with attachments to import. If necessary, set the CSV Options . Click the Next step button at the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition. Select Account in the Target list. Click +Related near Account and in the new drop-down list select Attachment . If you import several attachments per account, perform this action the number of times equal to the number of attachments per account. Note that you should click the same +Related near Account, not the buttons near added Attachments. Click the Next step button at the bottom of the dialog box to switch to the next editor page or click the Previous step button to return to the previous page. Map the Account fields to the corresponding source columns. To map Attachment object fields, click the target table name ( Account ) and select Account.Attachment in the drop-down list. Map the Name field to the FileName column. Click the Body field and then, in the Column drop-down list, select Zip File . Enter \u201cFileName\u201d in the box (without quotes). If you have all the attachments of the same type and haven\u2019t added the ContentType column to the CSV file, let\u2019s specify the attachment type as a constant. Click the ContentType field and then, in the Column drop-down list, select Constant . In our example, all the attachments are PNG images, so enter \u201cimage/png\u201d in the box (without quotes). If you have specified content type in a CSV file column, map the ContentType field to this column instead of using a constant.
If you are importing several attachments per account, repeat the steps 15 - 22, each time selecting the next Account.Attachment entry in the tables mapping drop-down list and specifying the corresponding column for the Body field until all the imported attachments are mapped. Click the Save button to save the task. Click the Create button to create the integration. Importing Accounts and Attachments stored in Separate CSV Files In this case we will create two import tasks for importing Accounts and Attachments respectively. Note that in this scenario you can import an arbitrary number of attachments per account. Creating an Integration Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Target , in the Connection list, click Select target and select Salesforce connection from the drop-down list. You can use the Type to filter box to quickly find the necessary connection. Importing Accounts Click the Add new link to open the Task Editor. Click browse and select the CSV file with Accounts data to import. If necessary, set the CSV Options . Click the Next step button in the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition. Select Account in the Target list and then click Next step to switch to the next editor page or click the Previous step button to return to the previous page. Map the Account fields to the corresponding source columns. Click the Save button to save the task. Importing Attachments Click the Add new link to open the Task Editor. Click browse and select the CSV file with Attachments data to import. 
Select the Add zip with binary data checkbox. Click browse and select the zip file with attachments to import. If necessary, set the CSV Options . Click the Next step button in the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition. In the Target list, select Attachment and then click Next step to switch to the next editor page or click the Previous step button to return to the previous page. Click the ParentID field and then, in the Column drop-down list, select Relation . Under the Referenced Object drop-down list, select the Reference itself checkbox. It automatically selects Accounts.csv . In the Column drop-down list, select Name . In the Referenced Column drop-down list, select Name . Repeat the steps 8-11 for the OwnerId field. Map the Name field to the FileName column. Click the Body field and then, in the Column drop-down list, select Zip File . Enter \u201cFileName\u201d to the box (without quotes). If you have all the attachments of the same type and haven\u2019t added the ContentType column to the CSV file, let\u2019s specify the attachment type as a constant. Click the ContentType field and then, in the Column drop-down list, select Constant . In our example all the attachments are PNG images, so enter \u201cimage/png\u201d to the box (without quotes). If you have specified content type in a CSV file column, map the ContentType field to this column instead of using the expression. Click the Save button to save the task. Click the Create button to create the integration." }, { "url": "https://docs.skyvia.com/data-integration/import/tutorials/how-to-import-contacts-for-existing-accounts.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. 
Documentation Data Integration Import Tutorials Importing Contacts for Existing Accounts In this tutorial, we will show how to import a file with Contacts data for existing Salesforce Accounts. When preparing such a file, you need to somehow specify accounts, for which you insert these contacts. It\u2019s not an easy task to specify accounts in this file by their Salesforce Account IDs \u2014 15- or 18-character strings. In such cases usually account names or some other numeric IDs are used. But when importing this file to Salesforce, you need to assign values to the foreign key AccountID field. Skyvia offers two ways to solve this problem \u2014 external IDs or lookup. Creating Connection Regardless of the way you want to specify parent accounts for contacts, first you need to create a connection to the target Salesforce if you haven\u2019t created it yet. To create a connection to Salesforce, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to Salesforce1 . From the Environment drop-down list, select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don\u2019t mind storing your Salesforce credentials on our Skyvia server, select User Name & Password . If you prefer not to store your credentials, select OAuth 2.0 . If you have selected User Name & Password , on the previous step, specify your Salesforce account e-mail, password, and security token. 
Otherwise, if you have selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The resulting OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click Create Connection . Importing Contacts Using External IDs External IDs can be used for importing if the Account Salesforce object has a custom external ID field. External ID fields are the fields that uniquely identify records. If your Account object has such an external ID custom field, and your CSV file with Contacts contains a column identifying the parent accounts, use the following steps to import the contacts file: Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Target , in the Connection list, click Select target and select your Salesforce connection from the drop-down list. You can use the Type to filter box to quickly find the necessary connection. Click the Add new link to open the Task Editor. Drag your file to the drop here your file area or click the browse link and upload the CSV file with Contacts data to import. Specify the CSV options if necessary. Click the Next step button at the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition. In the Target list, select Contact . Click the Next step button at the bottom of the dialog box to switch to the next editor page or click the Previous step button to return to the previous page. Click the AccountId field. Click Column and then click External ID in the drop-down list.
In the Source Column drop-down list, select the CSV file column containing external ID values that determine parent accounts. In the External ID Column drop-down list, select the Account external ID field name. If the names of columns in the CSV file are not the same as the Contact field names, map other unmapped fields to the source file columns. Click the Save button to save the task. Click the Create button to create the integration. Now your integration is ready, and you can run it to import your contacts. Importing Contacts Using Lookup By default, the Account object does not have external ID fields. If you don\u2019t want to modify the Account object and add a custom external ID field to it, you can use lookup and identify parent accounts by their names. To import a CSV file of Contacts with the column containing parent account names, perform the following steps: Click +NEW in the top menu. In the Integration column, click Import . The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Target , in the Connection list, click Select target and select Salesforce connection from the drop-down list. Click the Add new link to open the Task Editor. Drag your file to the drop here your file area or click the browse link and upload the CSV file with Contacts data to import. Specify the CSV options if necessary. Click the Next step button at the bottom of the dialog box. In the Target list, select Contact . Click the Next step button at the bottom of the dialog box. Click the AccountId field. Click Column and then click Target Lookup in the drop-down list. In the Lookup Object drop-down list, select Account . In the Result Column drop-down list, select Id . In the Lookup Key Column drop-down list, select the Name column of the Account table.
In the drop-down list under Column , select the Company name column from the CSV file. If the names of columns in the CSV file are not the same as the Contact field names, map other unmapped fields to the source file columns manually. Click the Save button to save the task. Click the Create button to create the integration. Now your integration is ready, and you can run it to import your contacts." }, { "url": "https://docs.skyvia.com/data-integration/import/tutorials/how-to-import-deals-and-contacts-from-zoho-crm-to-hubspot-preserving-relations.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Import Tutorials Importing Deals and Contacts from Zoho CRM to HubSpot Preserving Relations In this tutorial, we will show how to import Deals and Contacts from Zoho CRM to HubSpot preserving relations. The difficulty here is that in Zoho CRM, the Deal object refers to Contact (id), i.e. one deal can have only one contact, and a contact can have many deals. Meanwhile, in HubSpot, the relation between Deals and Contacts is many-to-many, which is implemented through a JOIN table. So we need to set the correct relation between Contacts and Deals to import data from Zoho CRM to HubSpot. We will do it through the DealContacts table. To find out how to do it, read our step-by-step instruction below. Creating Connection Regardless of what integration you configure, first you need to create connections that you will use as source and target. In our tutorial, we create connections to Zoho CRM and HubSpot cloud apps. If you have already created the necessary connections, you may skip these steps. Creating Zoho CRM Connection To create a connection to Zoho CRM, perform the following steps: Click +NEW in the top menu; Click the Connection button in the menu on the left; In the opened Select Connector page, select Zoho CRM .
To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Zoho CRM , select the CRM category); The default name of a new connection is Untitled . Click it to rename the connection, for example, to Zoho CRM1 ; In the Domain list, select the domain to connect to: crm.zoho.com, crm.zoho.eu, crm.zoho.com.cn, crm.zoho.in or crm.zoho.com.au ; In the Environment list, select an environment type \u2014 Production or Sandbox ; Click Sign In with Zoho ; In the opened window, enter email and password to Zoho CRM and click the Sign In button; In the opened window, choose the Org you want to access (if you have several ones) and click Submit ; In the next window, click Accept to allow Skyvia to access data in your Zoho account; The authentication token is generated; Optionally change the Metadata Cache parameter value. It determines how often to update cached metadata for the connection. By default, Skyvia caches metadata of available objects for cloud sources. You can configure how often the cache is refreshed automatically or reset it manually on the details page of the corresponding connection by clicking the Clear now link next to the Metadata Cache parameter; Click Create Connection . Creating HubSpot Connection To create a connection to HubSpot, repeat the following steps: Click +NEW in the top menu; Click the Connection button in the menu on the left; In the opened Select Connector page, select HubSpot . For quick search, use either the Type to filter box or filter connectors by categories using the All list (for HubSpot , select the CRM category); The default name of a new connection is Untitled . 
Click it to rename the connection, for example, to HubSpot1 ; Click Sign In with HubSpot ; In the opened window, enter your HubSpot credentials and click Log in ; Select your account to use; In the opened page, click Grant access to approve access request; After the access token is generated, click Create Connection button. Creating Import To create a new import, follow this step-by-step instruction: Click +NEW in the top menu; In the Integration column, click Import . The integration editor opens; Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations; Under Source , in Source Type click Data Source database or cloud app and select Zoho CRM from the Connection drop-down list; Under Target , select HubSpot from the Connection drop-down list; Click the Add new link to open the Task Editor. Creating Task for Contacts In the opened Task Editor window, select the Contacts object from the Source drop-down list; Next, add desired filters. For example, you can import only contacts that have non-empty email addresses (email is not null). For this, click +Condition to set a specific condition. You can read more about conditions and filters in the Filter Settings topic. In our case, we set the condition for the Contact Email to be not null ; Click the Next step button at the bottom of the window to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition; In the Target drop-down list, select Contacts ; Click the Next step button at the bottom of window to switch to the next editor page or click the Previous step button to return to the previous page; On the Mapping Definition tab, check whether all required columns are mapped. You can read more about different types of mapping here . 
The First Name, Last Name, Salutation, and Email fields have the same names in HubSpot and Zoho CRM, so they are mapped automatically. You can also map other fields, which do not have identical names, for example, the Phone field to Phone Number , Fax to Fax Number , etc., if needed; Click Save to save the first task; Click the Add new link again to create another task. Creating Task for Deals In the opened Task Editor window, select the Deals object from the Source drop-down list; Please note that you import only Deals that relate to the Contacts imported in the first task. So you need to add a filter for the Contacts not to be null. Moreover, if you added some extra filters to Contacts in the previous task, then in Deals you also need to add similar filters (in the first task, the filter was email is not null , so add it to Deals as well). To start setting a condition, click +Condition ; In the first drop-down list, select Deals , in the second drop-down list, select Contact , in the third drop-down list, select is not null ; Click +Condition again to set the second condition; In the first drop-down list, select Deals_Contact , in the second drop-down list, select Email , in the third drop-down list, select is not null ; Click the Next step button at the bottom of the window to switch to the next editor page; In the Target drop-down list, select Deals ; Click +Related and select DealContacts[Deals_DealContacts] ; Click Next step to proceed with mapping; On the Mapping Definition tab, map the Deal Name column to the Contact Name column by selecting the Contact Name value. You can also map any other fields if needed; To map fields of the related DealContacts object, click the target table name (Deals) and select Deals.DealContacts in the drop-down list; Note that the Deal ID column is mapped to the generated value of the corresponding Deal record automatically.
Here we only need to map the Contact ID column through relations to Deals_Contact. Click the Save button to save the task. Finally, when the two tasks are added, click the Create button to create the integration. When your integration is ready, run it to import data. The data will be imported into two tables, and the correct IDs generated in HubSpot will be inserted into the JOIN table. You can check whether the results are successful on the Monitor tab. For this, click the Run ID to open the History Details window.

Easy Importing Invoices and Customers Between QuickBooks Online Accounts

In this tutorial, we will show how to import Customers and Invoices from one QuickBooks account to another. While the Customer object is simple to import, the Invoice object is not so easy to map. The main problem with importing the Invoice object is that its Line column is not represented as separate fields; it has an array data structure. If, while transferring invoices from one QuickBooks company to another, you need to change, for example, the quantity of goods for each invoice, you would have to edit the JSON strings directly, which is quite inconvenient and error-prone. However, when you select the Nested Objects checkbox in your import, the nested Line object is divided into separate fields, and it becomes possible to use Expression or Constant mapping, map the necessary columns easily, and import the data with the changes made. To find out how to create an import to load customers and invoices into QuickBooks Online and configure the mapping settings of nested objects correctly, read our step-by-step instruction below.
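To picture why the raw Line column is awkward to edit and what splitting it into separate fields buys you, here is a simplified Python sketch. The invoice shape is illustrative only, not the exact QuickBooks schema.

```python
import json

# A simplified invoice with a nested Line array (illustrative shape).
invoice = {
    "DocNumber": "1001",
    "Line": [
        {"Description": "Widget", "Qty": 2, "UnitPrice": 10.0},
        {"Description": "Gadget", "Qty": 1, "UnitPrice": 25.0},
    ],
}

# Without nested-object support, Line is effectively one JSON-string
# column, so changing a quantity means editing this string by hand.
line_as_json = json.dumps(invoice["Line"])

# With nested objects, each Line item is exposed as separate fields
# that can be filtered, mapped, and changed individually.
def flatten_lines(inv):
    return [
        {"DocNumber": inv["DocNumber"], "LineNo": i + 1, **item}
        for i, item in enumerate(inv["Line"])
    ]

rows = flatten_lines(invoice)
```

Each flattened row can now be mapped column by column, which is what Expression and Constant mapping operate on.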
Please note that the Nested Objects functionality is available when the new data integration runtime is selected. Make sure you have selected the Use new runtime checkbox in your import before adding a task.

Creating Connections

First, you need to create connections in Skyvia, which you will use as source and target in your import. As we are going to import Invoices and Customers between two different QuickBooks accounts, we create two connections to two QuickBooks accounts. One connection will serve as the source, the other as the target. Read the QuickBooks topic to find out how to configure the settings of the necessary cloud app, and check the Connections topic to learn how to create a connection in several simple steps.

Creating Integration for Importing into QuickBooks

To create and run a new integration, you need to adjust general integration settings and configure task editor settings. Follow these simple steps to adjust the general integration settings:

1. Click +NEW in the top menu.
2. In the Integration column, click Import.
3. In the opened integration editor, rename your integration by clicking and editing the integration name. The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.
4. Select the Use new runtime checkbox to activate the Nested Objects option.
5. Click Data Source database or cloud app and select one QuickBooks connection as the source from the drop-down list.
6. Select another QuickBooks connection as the target from the next drop-down list.
7. Select the Preserve task order checkbox to run your tasks in exactly the same order you create them in the import. This feature is optional.
8. Select the Nested Objects checkbox to be able to map the columns of the nested QuickBooks Line objects.
9. Click the Add new link to open the Task Editor.
Configuring Task Editor Settings

Each import can contain one or more import tasks, which are configured in the convenient task editor. For our scenario, we need to create and configure settings for two import tasks. The first task will contain the settings and mapping for importing QuickBooks Customers, the second one for importing Invoices into QuickBooks.

Creating Task to Import Customers

Source Definition Tab

On this task editor tab, you select a source object to load data from and, if needed, specify filter settings to filter the data being imported. To configure the settings, perform the following quick steps:

1. When the task editor opens, select the task editor mode you want to work with: Simple or Advanced. You can read more about task editor modes here.
2. Select the Customer object from the Source drop-down list.
3. Click the Next step button at the bottom of the window to switch to the next editor tab.

Target Definition Tab

On the Target Definition tab, select Customer from the Target drop-down list. Select the operation type you want to use (for our scenario, we select Insert). Click the Next step button to continue.

Mapping Definition Tab

Since we import Customers from one QuickBooks account to another, most columns are identical and are mapped by Skyvia automatically. The DisplayName column is required to run the integration successfully. Make sure it is mapped via Column mapping, or use another type of mapping if needed, for example, Expression mapping as shown below. After saving the first task, proceed with creating the second one.

Creating Task to Import Invoices into QuickBooks

Source Definition Tab

On this tab, repeat the same steps as for the Customer object. Select the source Invoice object to import and, if needed, specify filter settings to filter the data being imported. Follow the steps as shown below. When the task editor opens, select the task editor mode you want to work with: Simple or Advanced.
Next, select the Invoice object from the Source drop-down list. Specify one or more filter conditions to select records from the Invoice object by clicking the +Condition button. Let us suppose we need to import all Invoices created from September 1, 2021 (inclusive) from one QuickBooks account to the other; we set the filter condition accordingly. Click the Next step button at the bottom of the window to switch to the next editor tab.

Target Definition Tab

On the Target Definition tab, select Invoice from the Target drop-down list. Select the operation type you want to use (for our scenario, we select Insert). Click the Next step button to continue.

Mapping Definition Tab

Since we import Invoices from one QuickBooks account to another, most columns are identical and are mapped by Skyvia automatically. The mapping of the nested Line column and its collections should look correct as well. Make sure that everything fits your needs and change the mapping for the listed fields if necessary. Please also note that there is a relation between the Customer and Invoice objects. This relationship is determined by the CustomerRefId field of the Invoice object. To import these source objects with related data, you need to set the relation. For this, click the CustomerRefId column, select Relation mapping, and map by Customer.

Saving Import Task

When everything is ready, click Save to save the second task. You will see the created tasks on your integration details page. If you selected the Preserve task order checkbox in your integration settings, the tasks are executed in the same order they have been created in the integration.

Running Import

Click Create to create the integration and run the import process to import Invoices and Customers between the QuickBooks accounts. You can check whether the results are successful on the Monitor tab of your integration. On the Monitor tab, click the Run ID to open the History Details window.
You will see the Result table with two objects (Customers and Invoices) and the number of successful rows in each of them. The imported rows are displayed as links. You can click a link to download a CSV file with the result values of the mapped target columns.

Importing Products with Prices from Dropbox

In this tutorial, we will show how to import a file uploaded to Dropbox and containing data on products, including their prices. The main problem with such an import operation is that, while prices are often stored together with the product data, Salesforce uses a more flexible approach and offers the possibility to create several pricebooks with different prices for the same products. Therefore, in Salesforce, price data is stored separately from the product data, in the PricebookEntry object. Skyvia supports importing data from a single file to several Salesforce objects and builds the relations between the corresponding objects automatically.

Creating Connections

To import data from a file uploaded to Dropbox to Salesforce, first we need to create connections to Salesforce and Dropbox. If you have already created the necessary connections, you may skip these steps. To create a connection to Salesforce, perform the following steps:

1. Click +NEW in the top menu.
2. Click the Connection button in the menu on the left.
3. On the opened Select Connector page, select Salesforce. To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce, select the CRM category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Salesforce1.
From the Environment drop-down list, select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don't mind storing your Salesforce credentials on the Skyvia server, select User Name & Password. If you prefer not to store your credentials, select OAuth 2.0. If you have selected User Name & Password, specify your Salesforce account e-mail, password, and security token. Otherwise, if you have selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The resulting OAuth token will be stored in the connection data; your Salesforce credentials will not be stored on our website. Click Create Connection.

To create a connection to Dropbox, perform the following steps:

1. Click +NEW in the top menu.
2. Click the Connection button in the menu on the left.
3. On the opened Select Connector page, select Dropbox. To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Dropbox, select the Storage category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Dropbox1.
5. Click Sign In with Dropbox. In the window that opens, enter your Dropbox credentials and click Sign In.
6. In the window that opens, click the Allow button.
7. Click Create Connection.

Now we have created the necessary connections. Let's create an integration that performs the necessary data import operation.

Creating Integration

Click +NEW in the top menu. In the Integration column, click Import. The import details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled.
Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations. To load data from Dropbox, click CSV from storage service. Under Source, in the Connection list, click Select source and select the Dropbox connection from the drop-down list. You can use the Type to filter box to find the connection quicker. Under Target, in the Connection list, click Select target and select the Salesforce connection from the drop-down list.

Importing Products with Prices

The next task will import the Products table data to the Product2 and PricebookEntry Salesforce objects. Perform the following steps:

1. Click the Add new link to open the Task Editor.
2. Select the file to import from the CSV Path drop-down list. This drop-down list displays folders and files from the specified source connection. You can open and close folders by clicking them.
3. Click the Next step button at the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition.
4. Select Product2 in the Target list.
5. Click the +Related button. A new drop-down list will appear. In this new drop-down list, select PricebookEntry.
6. Click the Next step button at the bottom of the dialog box to switch to the next editor page, or click the Previous step button to return to the previous page.
7. Map the Product2 fields to the corresponding source columns.
8. To map the PricebookEntry object fields, click the target table name (Product2) and select Product2.PricebookEntry in the drop-down list.
9. For PricebookEntry, we will map the IsActive column to be always true. Click the IsActive field and then, in the Column drop-down list, select Constant. Select True in the drop-down list below.
10. Map the UnitPrice field to the source column containing the prices.

We also need to map the Pricebook2Id field, which stores the ID of the Pricebook the PricebookEntry belongs to.
We will map it to the ID of the standard pricebook, which is automatically created for any Salesforce database. It has the name "Standard Price Book" by default. We will use target lookup by constant to map the Pricebook2Id field. For this, perform the following steps:

1. Click Column and then, in the drop-down list, click Target Lookup.
2. In the Lookup Object list, select Pricebook2.
3. In the Result Column list, select Id.
4. In the Lookup Key Column list, select Name.
5. Under Lookup Key Column, click Column and then, in the drop-down list, click Constant. Enter "Standard Price Book" (without quotes) in the box below.
6. Click the Save button to save the task.
7. Click the Create button to create the integration.

Importing Products with Variants from the Relational Database to Shopify

In this tutorial, we will show you how to set up an integration between MySQL and Shopify, importing Shopify products with variants from a MySQL database. Let's imagine you need to import a list of products stored in one database table, with their variants stored in another database table, to Shopify. In Shopify, data about product variants is an array nested in the Variants field of the Products table, so to perform such an import you need to map the nested items of the Variants field. For such integrations, Skyvia has the Nested Objects feature that allows mapping the nested object properties and nested array items directly, without the need to construct complex JSON values via expressions.
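Conceptually, this kind of import nests variant rows from a separate table under their parent product, joined by a lookup key. The Python sketch below illustrates the idea; the table and field names follow this tutorial, but the record shapes are simplified assumptions.

```python
# Products come from one MySQL table, variants from another (assumed shapes).
products = [{"Id": 1, "Title": "T-Shirt", "ProductType": "Apparel"}]
product_variants = [
    {"ProductId": 1, "Option": "S", "Price": 15.0},
    {"ProductId": 1, "Option": "M", "Price": 15.0},
]

def build_shopify_products(products, variants):
    """Nest variant rows under their product using ProductId as the lookup key."""
    out = []
    for p in products:
        nested = [v for v in variants if v["ProductId"] == p["Id"]]
        out.append({"Title": p["Title"],
                    "ProductType": p["ProductType"],
                    "Variants": nested})  # nested array, no hand-built JSON
    return out
```

This is the effect that Source Lookup mapping achieves in the task editor: the lookup key joins the flat variant rows into the target's nested array.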
Prerequisites

You need to have both the source and target connections created to be able to set up the import and implement this use case. You can do it in advance or right in the integration editor when selecting the Source and Target connections. For details on how to establish a connection to [MySQL](https://docs.skyvia.com/connectors/databases/mysql_connections.html) and [Shopify](https://docs.skyvia.com/connectors/cloud-sources/shopify_connections.html), refer to the detailed manuals.

Creating Import

1. Create a new Import integration.
2. Edit the integration name.
3. Select the MySQL connection as the source and the Shopify connection as the target.
4. Select the Nested Objects checkbox.
5. Click Add new on the right to add the import task.

Configuring Task Editor Settings

Source Definition Tab

Select the task editor mode in the opened window. Here you can select the Simple or Advanced [Task Editor](https://docs.skyvia.com/data-integration/import/how-to-create-import-task.html) mode; select Simple mode for our case. Select the table to import data from in the Source list; in our case, it is the Products table. You can also use filters to limit the number of imported records, if needed.

Target Definition Tab

Select the target object; in this case, it is the Products table. Select the action you want to execute; we select the Insert operation for our case.

Mapping Definition Tab

Map the needed Products table fields, such as Title, ProductType, and others. To map the nested Variants field lines, you should select the right mapping type:

- Use Column mapping if the source field is also an array. In this case, the source nested fields can be mapped to the target nested fields directly.
- Use Source Lookup mapping if you are importing product variant items from a regular table and you want to import its fields into the nested array. Click the Shopify Variants column and select Source Lookup in the source Column drop-down list on the left.
With the Nested Objects checkbox selected, the Variants field provides access to the fields of its nested object, which you can easily map. Select the table whose fields you want to map to the nested target fields. In our case, the lookup object is the Productvariants table, and the lookup key is the ProductId field. Now the nested fields become available for mapping. You can map other fields inside the Variants array. When everything is ready, save the task, and you get the integration ready to run. Click Create and run it.

Integration Results

You can check the run result on the Monitor tab. Click the run, and you will see the History details on the right. If you click the number of successfully processed rows, you can see the integration log. As a result, we have created a new Product record with the product variants as a nested array.

Importing Related Customer, Order, and Product Data

In this tutorial, we will show how to import data from the Products, Customers, Orders, and Order Details tables of the Microsoft standard Northwind database on SQL Azure to Salesforce. The schema of the imported data differs from the target Salesforce schema. The data from the Customers table should be imported to Salesforce Accounts and Contacts. Data from the Orders table goes to the Opportunity table. Information from the Products table should be divided between the Product2 and PricebookEntry Salesforce objects, and information from the Order Details table goes to the OpportunityLineItem Salesforce object. The challenge of such an operation is to preserve the relations of the source data when importing them to Salesforce.
For example, when we import the Customers table to Accounts and Contacts, the relation should be created between the corresponding Account and Contact in the Salesforce database. This is easy with Skyvia: when inserting data from one table or CSV file to multiple Salesforce objects, it builds such relations automatically.

Creating Connections

To import data from SQL Azure to Salesforce, first we need to create connections to the Salesforce and SQL Azure databases. If you have already created the necessary connections, you may skip these steps. To create a connection to Salesforce, perform the following steps:

1. Click +NEW in the top menu.
2. Click the Connection button in the menu on the left.
3. On the opened Select Connector page, select Salesforce. To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce, select the CRM category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Salesforce1.
5. From the Environment drop-down list, select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended.
6. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don't mind storing your Salesforce credentials on the Skyvia server, select User Name & Password. If you prefer not to store your credentials, select OAuth 2.0.
7. If you have selected User Name & Password, specify your Salesforce account e-mail, password, and security token. Otherwise, if you have selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The resulting OAuth token will be stored in the connection data; your Salesforce credentials will not be stored on our website.
8. Click Create Connection.
To create a connection to SQL Azure, perform the following steps:

1. Click +NEW in the top menu.
2. Click the Connection button in the menu on the left.
3. On the opened Select Connector page, select SQL Server. To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for SQL Server, select the Database category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to SQL Azure.
5. In the Server box, enter "TCP:<server name>". Replace "<server name>" with your actual server name.
6. Specify the User Id, Password, and Database to connect to.
7. Click Create Connection.

Now we have the necessary connections created. Let's create an integration that performs the necessary data import operation.

Creating an Integration

To create an integration, perform the following steps:

1. Click +NEW in the top menu.
2. In the Integration column, click Import. The import integration details page will open.
3. Rename your integration by clicking and editing the integration name. The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.
4. Under Source, in Source Type, click Data Source database or cloud app and select SQL Azure from the Connection drop-down list.
5. Under Target, in the Connection list, click Select target and select the Salesforce connection from the drop-down list. You can use the Type to filter box to quickly find the necessary connection.

Now we have an empty integration ready for adding tasks. A task is a unit of the data extraction, transformation, and loading process. When creating an import integration, we need to add a task for each source table or CSV file.

Importing Customers Table

The first task we create is a task for importing the Customers table.
When creating an integration that imports several data files or tables with complex relations between them, it's better to start with "master" tables or files that do not depend on other tables, then add tasks for tables that depend only on these master tables, and then move on to the next dependency level. To create the task, perform the following steps:

1. Click the Add new link to open the Task Editor.
2. Select Customers in the Source list and click the Next step button at the bottom of the dialog box to switch to the next editor page. You can also switch between the editor pages by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition.
3. Select Account in the Target list.
4. Click the +Related button. A new drop-down list will appear. In this new drop-down list, select Contact.
5. Click the Next step button at the bottom of the dialog box to switch to the next editor page, or click the Previous step button to return to the previous page.

At this step, we need to map the target Salesforce object fields to the source table columns. At first, the fields of the Account Salesforce object are displayed. As you can see, some columns, such as Phone and Fax, were mapped automatically. We use simple column mapping for the Account fields: each field is mapped to a source table column. To map a field, simply click it and select the corresponding column from the drop-down list. We need to map the following fields:

- Name to the CompanyName column;
- BillingStreet to the Address column;
- BillingCity to the City column;
- BillingState to the Region column;
- BillingPostalCode to the PostalCode column;
- BillingCountry to the Country column.

To map the Contact object fields, click the target table name (Account) and select Account.Contact in the drop-down list. Note that the AccountId column is mapped to the generated value of the corresponding Account record automatically.
Here we only need to map the LastName field to the ContactName column. Click the Save button to save the task.

Importing Products Table

The next task will import the Products table data to the Product2 and PricebookEntry Salesforce objects. Perform the following steps:

1. Click the Add new link to open the Task Editor.
2. Select Products in the Source list and click the Next step button at the bottom of the dialog box.
3. Select Product2 in the Target list.
4. Click the +Related button. A new drop-down list will appear. In this new drop-down list, select PricebookEntry.
5. Click the Next step button at the bottom of the dialog box.

At this step, we will map the Product2 fields to the source columns. First, map the Name field to the ProductName column: click Name and select ProductName from the drop-down list. The Products table has the Discontinued column, which determines whether the product is discontinued. Product2 has the IsActive field, which means the opposite, so we need to insert the opposite of the Discontinued column values into the IsActive field. We will use Expression mapping for this case. Click the IsActive field and then, in the Column drop-down list, select Expression. Enter "! Discontinued" (without quotes) in the Expression box. The exclamation mark is the boolean NOT operator, which returns the opposite of its argument. To map the PricebookEntry object fields, click the target table name (Product2) and select Product2.PricebookEntry in the drop-down list. Note that the Product2Id column is mapped to the generated value of the corresponding Product2 record automatically. For PricebookEntry, we will map the IsActive column to be always true. Click the IsActive field and then, in the Column drop-down list, select Constant. Select True in the drop-down list below. We also need to map the Pricebook2Id field, which stores the ID of the Pricebook the PricebookEntry belongs to.
We will map it to the ID of the standard pricebook, which is automatically created for any Salesforce database. Retrieve this ID and copy it to the clipboard. For example, you may retrieve this ID in the following way:

1. Open a new browser tab or window (do not close the current page).
2. Log in to the salesforce.com website.
3. Open the Products tab.
4. In the Maintenance section at the bottom of the Products tab, click Manage Price Books.
5. Click the link to your standard pricebook. The part of the URL of the pricebook page after "https://***.salesforce.com/" is the pricebook ID. Copy it.

After you have copied the ID of the standard pricebook, switch back to the task editor. Click Pricebook2Id and then, in the Column drop-down list, select Constant. In the Constant box, paste the copied ID value. Click the Save button to save the task.

Importing Orders Table

After this, we need to import the Orders table. This case is a bit more complex, since the Orders table has a foreign key to the Customers table, and we want to preserve this relationship in the Salesforce database as the corresponding reference to the Account object. Skyvia offers two ways to map this relationship: external IDs and source relations. External IDs are used when the referenced data are already imported and the referenced Salesforce object has an External ID field, which can be used to identify the record to get the ID value from. Since by default the Account object does not have an External ID field, we will use relation mapping.

1. Click the Add new link to open the Task Editor.
2. Select Orders in the Source list and click the Next step button at the bottom of the dialog box.
3. Select Opportunity in the Target list.
4. Click the Next step button at the bottom of the dialog box.

Now we map the AccountId foreign key field using relation mapping. Click AccountId and then, in the Column drop-down list, select Relation. Select dbo.Customers [FK_Orders_Customers] in the drop-down list below.
This list contains the foreign key relations (note that the name of the item to select may differ if you use a schema other than dbo). Now map the Name and CloseDate fields to the ShipName and OrderDate columns, respectively. We will use constant mapping to map the StageName field to a constant: click the StageName field and then, in the Column drop-down list, select Constant. Select Closed Won in the drop-down list below. Click the Save button to save the task.

Importing Order Details Table

When importing the Order Details table, we will also use relation mapping to preserve the foreign key between Orders and Order Details as the relation between Opportunity and OpportunityLineItem. We also need to store the foreign key between Products and Order Details as the relation between OpportunityLineItem and PricebookEntry.

1. Click the Add new link to open the Task Editor.
2. Select Order Details in the Source list and click the Next step button at the bottom of the dialog box.
3. Select OpportunityLineItem in the Target list.
4. Click the Next step button at the bottom of the dialog box.
5. Map the OpportunityId foreign key field using relation mapping: click OpportunityId and then, in the Column drop-down list, select Relation. Select dbo.Orders [FK_Order_Details_Orders] in the drop-down list below (note that the name of the item to select may differ if you use a schema other than dbo).
6. Map the PricebookEntryId foreign key field using relation mapping: click PricebookEntryId and then, in the Column drop-down list, select Relation. Select dbo.Products [FK_Order_Details_Products] in the drop-down list below.
7. Click the Save button to save the task.

Now your integration is ready. The default integration name is Untitled. You can rename your integration by clicking and editing it. In our case, we rename it to SQL Azure Tutorial. Click the Create button to create it.
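What relation mapping achieves can be pictured in a few lines of Python: each Order's foreign key value is replaced with the Salesforce ID that was generated when the corresponding Account was inserted. The record shapes and IDs below are illustrative assumptions.

```python
# Source CustomerID -> generated Salesforce Account Id (illustrative IDs),
# collected when the Customers task inserted the Accounts.
inserted_accounts = {
    "ALFKI": "001xx0000001",
    "ANATR": "001xx0000002",
}

orders = [
    {"OrderID": 10248, "CustomerID": "ALFKI", "ShipName": "Alfreds"},
    {"OrderID": 10249, "CustomerID": "ANATR", "ShipName": "Ana Trujillo"},
]

def map_orders_to_opportunities(orders, account_ids):
    """Build Opportunity records, resolving the foreign key to Account Ids."""
    return [{"Name": o["ShipName"],
             "AccountId": account_ids[o["CustomerID"]],  # relation mapping
             "StageName": "Closed Won"}                  # constant mapping
            for o in orders]
```

The same substitution happens for OpportunityId and PricebookEntryId in the Order Details task, which is how the source foreign keys survive as Salesforce references.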
After this, you can run the integration by clicking the Run button. After you run it, you will see the results of the integration execution on the Monitor or Log tabs. Click a certain integration run to open the History Details window with detailed information.

Importing Data from Salesforce Opportunities to QuickBooks Invoices

In this tutorial, we will show how to configure Salesforce - QuickBooks integration: importing data from Salesforce Opportunities to QuickBooks Online Invoices. The main problem with such an import operation is that there are no fields in Salesforce that correspond to the Invoice Line field in QuickBooks, and, to successfully load data, you would otherwise need to map the required Invoice Line field to a constant in the form of an array. The Invoice Line field is not represented as separate fields; it has an array data structure. However, when you select the Nested Objects checkbox in your import, the nested Line object is divided into separate fields, and it becomes possible to map its columns to the corresponding columns from the Salesforce OpportunityLineItem. When you map the fields of a nested object, you do not need to define how many lines there should be for each Invoice; Skyvia detects it automatically. For example, if an Opportunity has three line items, Skyvia generates three lines for the Invoice in QuickBooks Online. To find out how to create an import and configure the mapping settings of nested objects correctly, read our step-by-step instruction below. Please note that the Nested Objects functionality is available when the new data integration runtime is selected.
Make sure you have selected the Use new runtime checkbox in your import before adding a task.

Creating Connections

First, you need to create connections in Skyvia using the Salesforce and QuickBooks connectors. You will use them as source and target in your import. For our scenario, we create connections to Salesforce (source) and QuickBooks (target) respectively. Read the Salesforce and QuickBooks topics to find out how to configure the settings of the necessary cloud apps, and see the Connections topic to learn how to create a connection in several simple steps.

Creating Import

To create and run a new integration successfully, you need to adjust general integration settings and configure task editor settings. Follow these steps to adjust the general integration settings:

1. Click Create New in the top menu.
2. In the Integration column, click Import.
3. In the integration editor that opens, rename your integration by clicking and editing its name. The default integration name is Untitled. If you omit this step, the integration name remains Untitled in the list of created integrations.
4. Select the Use new runtime checkbox to activate the Nested Objects option.
5. Since you import data from a cloud app, click Data Source database or cloud app and select Salesforce as the source in the Connection drop-down list.
6. In the next drop-down list, select QuickBooks as the target.
7. Select the Nested Objects checkbox to be able to map the columns of the nested QuickBooks Line object to the corresponding columns of the Salesforce OpportunityLineItem.
8. Click the Add new link to open the Task Editor.

Configuring Task Editor Settings

Each import can contain one or more import tasks, which are configured in the task editor. For our scenario, we need to create and configure settings for one import task.
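The nested-object behavior this tutorial relies on (one invoice Line generated per Salesforce OpportunityLineItem, with the count detected automatically) can be pictured with a small, hypothetical sketch. The field names and data below are simplified stand-ins, not Skyvia's actual internal model:

```python
# Illustrative sketch: expanding a parent record with N child line items
# into an invoice whose nested Line array has N entries.
# Field names are simplified placeholders, not Skyvia's real schema.

def build_invoice(opportunity, line_items):
    """Build one invoice dict with a nested Line array."""
    return {
        "DocNumber": opportunity["Name"],
        # One nested Line entry per OpportunityLineItem: the count comes
        # from the source data, not from manual configuration.
        "Line": [
            {
                "Amount": item["TotalPrice"],
                "DetailType": "SalesItemLineDetail",
                "SalesItemLineDetail_UnitPrice": item["UnitPrice"],
            }
            for item in line_items
        ],
    }

opportunity = {"Name": "Acme Renewal"}
items = [
    {"TotalPrice": 100.0, "UnitPrice": 50.0},
    {"TotalPrice": 30.0, "UnitPrice": 30.0},
    {"TotalPrice": 45.0, "UnitPrice": 15.0},
]
invoice = build_invoice(opportunity, items)
print(len(invoice["Line"]))  # three line items -> three invoice lines
```

Three source line items produce three nested Line entries, which mirrors the example in the tutorial intro.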
Source Definition Tab

On this task editor tab, you select a source object to load data from and specify filter settings to filter the source data the way you need. To configure the settings, perform the following steps:

1. When the task editor opens, select the task editor mode you want to work with, Simple or Advanced. You can read more about task editor modes here.
2. Select the Opportunity object in the Source drop-down list.
3. Optionally, add a filter below by clicking +Condition and setting a specific condition. For example, if you need to insert only specific Opportunities, you can filter them by Id. Please note that here you need to select only Opportunities that have OpportunityLineItem records; this is important for mapping the Invoice Line field correctly. If some Opportunities have no OpportunityLineItem, their records will fail during the import. You can read more about configuring filters and conditions in the Filter Settings topic.
4. Optionally, if you want this integration to sync only new invoices from Salesforce to QuickBooks, select Inserted to load only new opportunities every time the integration runs.
5. Click the Next step button at the bottom of the window to switch to the next editor tab. You can also switch between the editor tabs by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition.

Target Definition Tab

On the Target Definition tab, select Invoice in the Target drop-down list. Select the operation type you want to use (for our scenario, we select Insert). Click the Next step button to continue.

Mapping Definition Tab

On the Mapping Definition tab, you need to use lookup mapping to map the columns of the nested Line object to the corresponding columns of the Salesforce OpportunityLineItem. Since you selected the Nested Objects checkbox for the integration, the Invoice Line field provides access to the fields of its nested object, which you can now map easily.
To apply Lookup mapping to the Line column, follow the steps below:

1. Click the QuickBooks Invoice Line column and, in the Salesforce Column drop-down list on the left, select Source Lookup.
2. In the Lookup Object drop-down list, select OpportunityLineItem.
3. Under Lookup Key Column, select OpportunityId.
4. Next, select Column mapping and map by Id.

Proceed with the other fields beneath:

- Map the QuickBooks Amount field to the Salesforce TotalPrice field using Column mapping.
- Map the QuickBooks DetailType field using Constant mapping and select the SalesItemLineDetail value. This field must be mapped for the integration to run successfully.
- Map the QuickBooks SalesItemLineDetail_UnitPrice field to the Salesforce UnitPrice field.

You can read more about different types of mapping here.

Then map the required CustomerRefId column. Please note that the CustomerRefId column is one of the main columns of the Invoice object; it must be mapped correctly for the integration to run successfully. You can map this column using either a constant or a lookup. Constant mapping makes the data import less flexible, because the constant limits you to one customer; however, if you need to import Opportunities of one customer only, a constant is a good option. When you use Lookup mapping, remember that Salesforce Accounts should correspond to the QuickBooks Customers, because the Opportunity object has an AccountId column. Since the CustomerRefId column is required in QuickBooks, you need to specify the AccountId column for the Opportunity object to perform a successful import operation; otherwise, an error appears stating that the value is not found by lookup. You also need to take into account that the name of the QuickBooks customer should correspond to the Salesforce account. Map the CustomerRefId column using Lookup settings as shown below.

Saving Import Task

When everything is ready, click Save to save the task.
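The three mapping types used in this task (Column, Constant, and Lookup) can be modeled with a short, hypothetical sketch. All field names and data here are invented for illustration; this is not Skyvia's actual mapping engine:

```python
# Illustrative model of the three mapping types used in this tutorial.
# All names and data are invented; this is not Skyvia's actual engine.

def apply_mappings(source_row, mappings, lookup_tables):
    """Produce one target row from a source row and a mapping spec."""
    target = {}
    for column, spec in mappings.items():
        kind = spec["type"]
        if kind == "column":        # copy a source field as-is
            target[column] = source_row[spec["source"]]
        elif kind == "constant":    # same fixed value for every row
            target[column] = spec["value"]
        elif kind == "lookup":      # resolve a value through another table
            table = lookup_tables[spec["object"]]
            target[column] = table[source_row[spec["key"]]]
    return target

mappings = {
    "Amount": {"type": "column", "source": "TotalPrice"},
    "DetailType": {"type": "constant", "value": "SalesItemLineDetail"},
    "CustomerRefId": {"type": "lookup", "object": "Customer", "key": "AccountId"},
}
# Hypothetical AccountId -> QuickBooks customer id lookup table.
lookup_tables = {"Customer": {"001A": "QB-17"}}
row = {"TotalPrice": 120.0, "AccountId": "001A"}

result = apply_mappings(row, mappings, lookup_tables)
print(result)
# {'Amount': 120.0, 'DetailType': 'SalesItemLineDetail', 'CustomerRefId': 'QB-17'}
```

The lookup branch also shows why an unmapped or missing AccountId leads to a "value is not found by lookup" class of failure: there is simply no key to resolve.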
You will see the created task in your integration settings.

Running Import

Click Create to create the integration, and run it to import data from Salesforce to QuickBooks. You can check whether the results are successful on the Monitor tab. For this, click the Run ID to open the History Details window. In our example, the Opportunity object had three line items; Skyvia detected this and generated three Line records for the Invoice in QuickBooks Online.

Importing QuickBooks Invoices to Salesforce Opportunities (https://docs.skyvia.com/data-integration/import/tutorials/quickbooks-and-salesforce-integration-tutorial.html)

In this tutorial, we will show how to configure a Salesforce - QuickBooks integration: importing data from QuickBooks Invoices to Salesforce Opportunities. The main difficulty of such an import operation is that the Invoice object in QuickBooks contains Invoice Line, which is not represented as ordinary fields of a complex type; it has an array data structure. However, when you select the Nested Objects checkbox in your import, it becomes possible to map the fields of QuickBooks Online invoice line items to Salesforce OpportunityLineItem fields. To find out how to create an import and configure mapping settings of nested objects correctly, read the step-by-step instructions below. Please note that the Nested Objects functionality is available only when the new data integration runtime is selected. Make sure you have selected the Use new runtime checkbox in your import before adding a task.

Creating Connections

First, you need to create connections in Skyvia, which you will use as source and target in your import. For our scenario, we create connections to QuickBooks (source) and Salesforce (target) respectively.
Read the QuickBooks and Salesforce topics to find out how to configure the settings of the necessary cloud apps, and see the Connections topic to learn how to create a connection in several simple steps.

Creating Import

To create and run a new integration successfully, you need to adjust general integration settings and configure task editor settings. Follow these steps to adjust the general integration settings:

1. Click +NEW in the top menu.
2. In the Integration column, click Import.
3. In the integration editor that opens, rename your integration by clicking and editing its name. The default integration name is Untitled. If you omit this step, the integration name remains Untitled in the list of created integrations.
4. Select the Use new runtime checkbox to activate the Nested Objects option.
5. Since you import data from a cloud app, click Data Source database or cloud app and select QuickBooks as the source in the Connection drop-down list.
6. In the next drop-down list, select Salesforce as the target.
7. Select the Nested Objects checkbox to be able to map the columns of the Opportunity object and the related OpportunityLineItem in Salesforce to the corresponding columns of Invoice and Invoice Line in QuickBooks.
8. Click the Add new link to open the Task Editor.

Configuring Task Editor Settings

Each import can contain one or more import tasks, which are configured in the task editor. For our scenario, we need to create and configure settings for one import task, which will load data from the QuickBooks Invoice object to the Salesforce Opportunity object and its related OpportunityLineItem.

Source Definition Tab

On this tab, you need to configure all the necessary QuickBooks settings. Select a source object to load data from and, if needed, specify filter settings to filter the data being imported. To configure the settings, follow these steps: when the task editor opens, select the task editor mode you want to work with (Simple or Advanced).
You can read more about task editor modes here. Next, select the Invoice object in the Source drop-down list. Optionally, add a filter below by clicking +Condition and setting a certain condition. For example, if you need to insert one specific Invoice, you can filter it by Id: Invoice -> Id -> equals -> 9. In our scenario, we import all Invoices from QuickBooks, so we do not need to apply filters. More information about filters and conditions can be found in the Filter Settings topic. Click the Next step button at the bottom of the window to switch to the next editor tab. You can also switch between the editor tabs by clicking the corresponding icons: Source Definition, Target Definition, and Mapping Definition.

Target Definition Tab

On this task editor tab, you need to configure all the necessary target settings, i.e., the Salesforce settings. To do so, follow the steps below:

1. On the Target Definition tab, select Opportunity in the Target drop-down list.
2. Click + Related and select an object related to the main one in the new drop-down list (in our scenario, OpportunityLineItem). You need to select the related OpportunityLineItem object in order to match the Line Item object in the QuickBooks Invoice.
3. Select the operation type you want to use (Insert).
4. Click the Next step button to proceed with mapping.

Mapping Definition Tab

On this tab, you first need to map the columns of the Salesforce Opportunity object to the columns of the QuickBooks Invoice and, second, configure mapping for the Salesforce OpportunityLineItem and QuickBooks Invoice Line.

Mapping Salesforce Opportunity to QuickBooks Invoice

First of all, pay attention to the required columns of the Salesforce Opportunity object. Among the required columns are Name, StageName, and CloseDate. They must be mapped to save and successfully run the integration. For the Name and CloseDate columns, use Column mapping.
For the StageName column, use Constant mapping and select a proper value from the drop-down list (for example, the Closed Won status).

Mapping Related Salesforce OpportunityLineItem to QuickBooks Invoice Line

After mapping all the necessary fields of the Opportunity object, let's proceed to the OpportunityLineItem object. Select this object in the list. Also select the nested Invoice.Line object in the list of source objects on the left. As a first step, you need to map PricebookEntryId using Lookup mapping. Both PricebookEntryId and OpportunityId must be mapped, because OpportunityLineItem is a junction object between Opportunity and PricebookEntry. However, the OpportunityId mapping is generated automatically by Skyvia from the parent Opportunity object, so you only need to configure mapping for PricebookEntryId, following the steps below:

1. Click the PricebookEntryId column and, in the QuickBooks Column drop-down list on the left, select Target Lookup.
2. In the Lookup Object drop-down list, PricebookEntry is selected automatically. In the Result Column list, Id is mapped.
3. Under Lookup Key Column, select Name. Next, select Constant mapping and type in the required name.
4. Click + Add Lookup Key to add another lookup key column. Select Pricebook2Id in the list below. Select Constant mapping and paste the Id value.

As a second step, you need to map two other columns, one of which is required for successful integration execution:

1. Click the Quantity column in Salesforce and map it to the SalesItemLineDetail_Qty column in QuickBooks. It is a required field: if values are missing in the SalesItemLineDetail_Qty column in QuickBooks, records won't be inserted into Salesforce, and the integration will fail with errors.
2. Next, map the Salesforce UnitPrice field to the QuickBooks SalesItemLineDetail_UnitPrice field using Column mapping.

You can read more about different types of mapping here.
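The Target Lookup configured above resolves a target record Id from one or more key columns. A simplified, hypothetical model of a two-key lookup (Name plus Pricebook2Id resolving a PricebookEntry Id), with invented data:

```python
# Illustrative model of a target lookup with two key columns
# (Name + Pricebook2Id -> PricebookEntry Id). Data is invented.

def target_lookup(records, keys, result_column="Id"):
    """Return the result column of the single record matching all keys."""
    matches = [r for r in records if all(r[k] == v for k, v in keys.items())]
    if len(matches) != 1:
        # Mirrors the "value is not found by lookup" class of errors:
        # the record fails instead of the engine guessing a match.
        raise LookupError(f"expected exactly 1 match, got {len(matches)}")
    return matches[0][result_column]

pricebook_entries = [
    {"Id": "PBE-1", "Name": "Widget", "Pricebook2Id": "PB-STD"},
    {"Id": "PBE-2", "Name": "Widget", "Pricebook2Id": "PB-EU"},
]

# The second lookup key disambiguates entries that share the same Name.
entry_id = target_lookup(pricebook_entries, {"Name": "Widget", "Pricebook2Id": "PB-EU"})
print(entry_id)  # PBE-2
```

This is why the tutorial adds Pricebook2Id as a second lookup key: the product name alone may match entries in more than one price book.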
Saving Import Task

When everything is ready, click Save to save the task. You will see the created task in your integration settings.

Running Import

We have successfully configured the QuickBooks - Salesforce integration. Click Create to create the integration, and run it to import data from QuickBooks to Salesforce. The data will be imported into two tables. You can check whether the results are successful on the Monitor tab. For this, click the Run ID to open the History Details window. The numbers of failed and successfully imported rows are displayed as links for each table. You can click a link to download a CSV file with the result values of the mapped target columns. When there are failed rows, you can always obtain a detailed per-record error log.

Integration Run History (https://docs.skyvia.com/data-integration/package-run-history.html)

Skyvia logs all the integration runs and allows you to view their results. By viewing a certain run, you can check whether there were errors and, if so, what caused them. With this information, you can also contact our technical support.

Viewing Current Integration Status and Most Recent Runs

You can view the current integration status and the five most recent runs on the Monitor tab of the Integration Details page. The integration is considered Succeeded if the most recent integration run has loaded all the records successfully. It is marked as Failed either when at least one record has not been loaded successfully or when the integration has not been executed completely (for example, when its connection became invalid).

Run History Columns

Run ID represents the ID of your integration run. You may specify it when contacting our support regarding errors in your integration run. State represents the status of the integration run.
It can be one of the following:

- Succeeded: the integration has successfully loaded data to the target objects.
- Failed: the integration has failed to execute.
- Queued: the integration is queued for running.
- Running: the integration is being executed.
- Canceled: the integration execution started but was canceled.
- Canceling: the canceling of the integration execution is in progress.

Success Rows shows the number of successfully processed rows. Error Rows shows the number of failed rows. Date represents when the integration run occurred. Duration represents the execution time of the integration. When you click a specific integration run, the History Details window opens. From this window, you can download the result CSV file to your computer. Here you can also see the number of succeeded and failed rows, the date of the run execution, and other details.

Viewing Integration Runs for a Specified Period

You can view older runs, or runs for a specified period, on the Log tab of the Integration Details page. The Succeeded and Failed statuses have the same meaning here as on the Monitor tab. On the Log tab, you can display all runs at once, only successful ones, or only failed ones, using the All, Failed, and Succeeded tabs. Click Calendar and select any period you want: you can either select certain dates from the calendar or change the time period manually by entering the required dates in the box. You are free to change any parameter in the box (year, month, day, even minutes or seconds). Alternatively, click the Today list and select a period from the available options: you can display runs for today or yesterday only, or select a longer period, such as the whole week or month, or the last 30 days.
Clearing Integration Run History

Run history may contain error logs, which may include parts of the loaded data. In some cases (for import, or for export to downloadable CSV files), it also includes the logs of successfully loaded data. These per-record logs with data are not stored forever; they are cleaned up from time to time. Exported CSV files are stored for 7 days and then deleted. Error and success logs may be stored for a longer time, but eventually they are deleted too. However, in the case of sensitive data, you may want to delete them immediately. For this, you can use the clear history feature to delete all the records about integration runs for a specified period, including per-record logs. To clean up your integration run history, switch to the Log tab and click Clear history on the tab bar (in the top right corner of the integration details page). Then select whether you want to delete all the integration runs from history except the latest, runs prior to a specified date, or runs for a specified period. Select the corresponding dates if necessary and click Apply.

Import

For import integrations, the Success Rows and Error Rows columns display the number of successfully and unsuccessfully processed rows on a per-table basis for a selected integration run. When you click a certain integration run, the History Details window pops up. In this window, if some rows failed, the number of failed rows for a table is displayed as a link. You can click this link to download a CSV file with a detailed per-record error log. This CSV file contains the values of the fields from the failed rows that were used in mapping and the error description for each failed row. The number of successfully imported rows for a table is also displayed as a link. You can click such a link to download a CSV file with the result values of the mapped target columns and the primary key of the result record.
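Because the per-record error log is plain CSV, it is easy to triage failures with a short script. A hypothetical sketch: the file layout below (the mapped field columns plus an error-description column named Error) is an assumption, so inspect a real downloaded file for the actual column names before adapting it:

```python
# Illustrative triage of a per-record error log CSV.
# The layout (mapped fields plus an "Error" column) is an assumption;
# check a real downloaded file for the actual column names.
import csv
import io
from collections import Counter

sample = """Name,CloseDate,Error
Acme Renewal,2024-01-31,value is not found by lookup
Globex Deal,2024-02-15,value is not found by lookup
Initech Deal,,Required field missing: CloseDate
"""

def summarize_errors(csv_text, error_column="Error"):
    """Count failed rows per error description."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[error_column] for row in rows)

counts = summarize_errors(sample)
for message, count in counts.most_common():
    print(f"{count} x {message}")
```

Grouping failures by error description quickly shows whether one misconfigured mapping (for example, a broken lookup) is responsible for most of the failed rows.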
Export

For export integrations, the Success Rows and Error Rows columns display the number of successfully and unsuccessfully exported rows on a per-table basis for a selected integration run. In export integrations that export data to locally downloadable CSV files, Run History plays a special role: to download the result CSV files, click a certain integration run in the Run History table and then, in the History Details window that pops up, click the number of exported rows for a table. Please note that the result files in this case are stored for only 7 days after the integration run; after that, they are no longer available, and you will need to perform the export again. The number of exported rows from a table is a link only for export integrations that export data to locally downloadable CSV files. It is not a link for integrations that export data to a cloud file storage service or FTP.

Replication

For replication integrations, the Success Rows column displays the total number of successfully inserted, updated, and deleted rows for a selected integration run. The Error Rows column displays the total number of failed rows. If some rows failed, the number of failed rows is displayed as a link to per-record error logs for each table. When you click a certain integration run, the History Details window pops up. In this window, you will find more detailed information on the successfully changed rows (inserted, updated, deleted) and on the failed rows with the reason for their failure.

Synchronization

For synchronization integrations, the Success Rows column displays the total number of successfully inserted, updated, and deleted rows for a selected integration run. The Error Rows column displays the total number of failed rows. If some rows failed, the number of failed rows is displayed as a link to per-record error logs.
When you click a certain integration run, the History Details window pops up. In this window, the numbers above correspond to the number of row changes applied to the source (in our example, account), and the numbers below correspond to the number of row changes applied to the target (accounts). Please note that you can download the result CSV files on the Log tab as well. For this, select the period you want to view your integration runs for, click a certain integration run, and download the CSV files in the History Details window.

Data Flow

For data flows, the number of success and error records is determined within the data flow itself according to the Result settings. You also create and write logs yourself in the data flow. When you click a certain integration run, the History Details window pops up. If the data flow generates logs, you can download them from this window.

Replication Overview (https://docs.skyvia.com/data-integration/replication)

Replication is used to create a copy of cloud application data in a relational database and keep it up-to-date. When performing replication, you define the application objects and fields to copy and configure replication options. Unlike import, replication does not support loading data of different structure in source and target, or using any custom mapping; however, it is much simpler to configure. Replication also does not require a database with a prepared schema: it can create database tables for cloud data automatically. Replication can be used when you need to back up cloud application data to a relational database, or to archive historical data that is no longer subject to change and is not needed in the cloud application.
It can also be useful in various data integration scenarios, or simply for performing data analysis and reporting with the powerful tools developed for relational databases.

Replication Source and Target

Skyvia's replication allows using any supported cloud app, as well as SQL Server, MySQL, and Oracle databases, as a source. As a target, it supports all the supported databases and cloud data warehouses, as well as the Airtable and Elasticsearch cloud apps. Note that if you want to use an on-premise database server, it must be accessible from the Internet. You can find advice on configuring your database server so that Skyvia can access it in the How to Configure Local Database Server to Access It from Skyvia topic. For cloud data warehouses, Skyvia uses warehouse-specific optimized data uploading: it uploads data to warehouse-specific storage as CSV files and then commands the warehouse to import them. Thus, in order to perform replication to a cloud data warehouse, you need to specify not only the mandatory connection parameters but also the optional storage service-related parameters.

Automatic Schema Creation

Skyvia does not require already created database tables for data replication. It can automatically create the necessary database tables with the same structure as the replicated cloud objects. You can configure table name generation to replicate multiple instances of the same cloud app to the same database, but to different tables. Table creation is optional: Skyvia can use existing tables, but in this case their structure (columns, data types) must be the same as the structure of the cloud objects. Skyvia can also create foreign keys in the database, corresponding to the relations between cloud objects. This can be convenient, but for some data sources it is not recommended.
Foreign key constraints are enforced in databases, but they may not be enforced in some cloud sources, where child records may reference an already deleted, non-existing parent record. If you receive foreign key violation errors during replication, try repeating a full replication with re-creating tables, but without creating foreign keys. Foreign keys are not created for polymorphic relations between cloud objects (relations where the same field of a child object may reference different parent objects).

Target Table Naming

By default, Skyvia uses cloud object names as table names. However, different target databases may have stricter limitations on table names than the replicated cloud source. Skyvia takes care of such cases and sanitizes table names: it replaces characters not allowed in table names in the target database with underscores. If the cloud object name is too long and the target database does not allow such long names, Skyvia truncates the name and adds a hash to the end to ensure that the truncated name is still unique. Additionally, Skyvia allows you to customize table names. You can both change names based on the names of cloud objects (changing case, adding a prefix, etc.) and specify completely custom table names. Customizing table names can be useful if you have a pre-existing schema with custom table names, or when some other software expects specific table names that do not coincide with the cloud object names. To generate simpler, more consistent, and more readable table and column names, Skyvia provides an additional name sanitization option: it removes all non-alphanumeric characters except underscores for any target database, regardless of what the database itself supports. Replication also allows specifying the target SQL Server schema. For other databases, the schema is specified at the connection level.

Keeping the Database Up-to-date

Skyvia can keep the database up-to-date with the cloud application.
For this, you can schedule your replication for automatic execution. There are two ways to keep the database up-to-date:

- Performing a full replication each time, dropping and re-creating the tables.
- Performing a full replication only the first time, to load legacy data, and then performing incremental replication.

Partial Replication

You can replicate all the data or only part of it with Skyvia. You select the objects to replicate data from when configuring replication. For each of the selected objects, you may exclude any fields from replication. For example, if you don't want to replicate personal information, such as emails and names, to the database, you can simply exclude these fields. However, please note that you need to have at least the following fields selected in order to use incremental replication:

- object key fields
- fields storing record creation and modification time
- fields determining whether a record was deleted (for data sources and objects that have such fields)

In addition to excluding fields, you may also apply filters to each replicated object separately and replicate only the records matching certain conditions. This can be combined with both full and incremental replication; in the latter case, incremental replication loads only the records matching the filter AND created or modified since the previous replication run.

Data Hashing

You can hash data in string or binary fields during replication. This allows you to anonymize sensitive data. See Data Hashing for more details.

Metadata Changes

Note that replication doesn't detect metadata changes in the source. If metadata is changed in your cloud data source, replication will continue to run knowing nothing about these changes. If a new field is added to a replicated object, the replication will continue working, but the new field won't be replicated.
If a replicated field or a replicated object is deleted or renamed in the data source, the replication will start failing until you make the necessary changes to it. However, Skyvia supports applying schema changes to the target database without re-creating tables and reloading already replicated data. See the Incremental Replication and Schema Updates topic for more information. You can find instructions on what to do when source metadata changes in its Metadata Change Detection paragraph.

Replication Tutorials

Skyvia provides the following tutorials on data replication:

- How to Set Up Replication of Salesforce Accounts and Contacts to SQL Azure
- How to Set Up Replication of Dynamics 365 Data to SQL Azure

Advanced Replication Features (https://docs.skyvia.com/data-integration/replication/advanced-features/)

Replication integration allows performing different ETL operations with your data. Discover the following replication features:

- Data Hashing: anonymize sensitive data and personally identifiable information.
- History Mode: store all the versions of source records in the database.

Data Hashing (https://docs.skyvia.com/data-integration/replication/advanced-features/data-hashing.html)

Skyvia allows you to hash field values to protect sensitive data when transferring it to databases and data warehouses. Instead of loading the original data, you can load anonymized data that preserves its analytical value. Hashing ensures that the same input always produces the same output, while different inputs produce different outputs. This way, you can use hashed values as unique identifiers.
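A minimal sketch of this kind of deterministic field hashing, assuming SHA-256 with an account-level salt (the algorithm and salting are detailed next on this page). The salt value, the way it is combined with the input, and the field data are all invented here, not Skyvia's actual scheme:

```python
# Illustrative sketch of deterministic, salted SHA-256 field hashing.
# ACCOUNT_SALT stands in for an account-specific value; the exact way
# the salt is combined with the input is an assumption.
import hashlib

ACCOUNT_SALT = b"example-account-salt"  # hypothetical, per-account value

def hash_field(value: str) -> str:
    """Hash a string field value with an account-specific salt."""
    return hashlib.sha256(ACCOUNT_SALT + value.encode("utf-8")).hexdigest()

# Same input -> same output, so hashed emails still work as join keys
# across integrations that share the same salt.
a = hash_field("mary.smith@example.com")
b = hash_field("mary.smith@example.com")
c = hash_field("steve.jones@example.com")
print(a == b, a == c)  # True False
```

Because the hash is deterministic per account, the same email replicated by two different integrations lands as the same anonymized value, which is what makes cross-source joins on hashed columns possible.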
Skyvia uses the [SHA-256 algorithm](https://en.wikipedia.org/wiki/SHA-2) to hash data. It also adds an account-specific salt to the source values before hashing. Because the salt is account-specific, all the replication integrations running in the same account produce the same hashed values for the same input. This allows you to have consistent data in your database or data warehouse even when replicating data to it from different sources with different integrations. There are the following limitations on the fields to be hashed:

- You can hash only binary or string fields.
- You cannot hash ID/primary key values.

How to Enable Data Hashing

Data hashing is supported only in the new replication runtime. Currently, the new replication runtime is not enabled by default; select the Use new replication runtime checkbox to use it. To enable hashing for a field, point at the field, click Unhashed, and select Hashed. Changing hashing settings in an existing integration does not change the state of already loaded data. If you want to hash or unhash already loaded data, you need to perform a full resync for the corresponding object.

History Mode (https://docs.skyvia.com/data-integration/replication/advanced-features/history-mode.html)

Use History Mode to track the history of the entries in your database. Enable it for the chosen objects in Source to store every change made to the objects' rows in Target, using the [type 2 slowly changing dimension](https://docs.oracle.com/cd/E41507_01/epm91pbr3/eng/epm/phcw/concept_UnderstandingSlowlyChangingDimensions-405719.html) concept.
If you enable History Mode, Skyvia replaces the Update operation with Insert and adds three more columns to the table in Target: _history_start, _history_end, and _active_value. Use these columns to perform data analysis for a specific date or period of time.

| Column | Type | Details |
|---|---|---|
| _active_value | BOOLEAN | Is true for the most recent record, otherwise false. Only one record among the records with the same Id can have _active_value = true. |
| _history_start | DATETIME | Stores the CreatedDate value for unmodified records or the LastModifiedDate value for modified ones. |
| _history_end | DATETIME | Stores the date of the last time the record was active, minus a millisecond. Is Null for records with _active_value = true. |

Example

Let's check what happens when you enable History Mode. You have the following table synced between Source and Target:

| Id | FullName | CreatedDate | LastModifiedDate |
|---|---|---|---|
| 1 | Mary Johns | 2023-01-01T23:59:59.999Z | 2023-01-01T23:59:59.999Z |
| 2 | Steve Smith | 2023-01-02T23:59:59.999Z | 2023-01-02T23:59:59.999Z |

Mary's last name changed to Smith on 2023-03-04T00:00:00.000Z. When you sync these changes from Source to Target with History Mode enabled, the table in Target will look like this:

| Id | FullName | CreatedDate | LastModifiedDate | _active_value | _history_start | _history_end |
|---|---|---|---|---|---|---|
| 1 | Mary Johns | 2023-01-01T23:59:59.999Z | 2023-01-01T23:59:59.999Z | false | 2023-01-01T23:59:59.999Z | 2023-03-03T23:59:59.999Z |
| 2 | Steve Smith | 2023-01-02T23:59:59.999Z | 2023-01-02T23:59:59.999Z | true | 2023-01-02T23:59:59.999Z | Null |
| 1 | Mary Smith | 2023-01-01T23:59:59.999Z | 2023-03-04T00:00:00.000Z | true | 2023-03-04T00:00:00.000Z | Null |

If you work with a Source that supports soft delete, then during the sync of a soft-deleted record Skyvia finds the corresponding record by Id with _active_value = true, changes it to false, and replaces the _history_end date with the date of the replication run.
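As an aside, these three history columns reduce a point-in-time snapshot to a single filter on the history interval. A minimal sqlite3 sketch over the example table above (the SQL here is ours for illustration; Skyvia does not generate these queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE contact (
    Id INTEGER, FullName TEXT,
    _active_value BOOLEAN, _history_start TEXT, _history_end TEXT)""")
conn.executemany(
    "INSERT INTO contact VALUES (?, ?, ?, ?, ?)",
    [(1, "Mary Johns", 0, "2023-01-01T23:59:59.999Z", "2023-03-03T23:59:59.999Z"),
     (2, "Steve Smith", 1, "2023-01-02T23:59:59.999Z", None),
     (1, "Mary Smith", 1, "2023-03-04T00:00:00.000Z", None)])

def snapshot(as_of: str):
    """Rows that were active at the given instant: the interval
    [_history_start, _history_end] must cover as_of (open-ended
    when _history_end is Null)."""
    return conn.execute(
        """SELECT Id, FullName FROM contact
           WHERE _history_start <= ?
             AND (_history_end IS NULL OR _history_end >= ?)
           ORDER BY Id""", (as_of, as_of)).fetchall()

# As of February 2023, Mary still has her old last name.
print(snapshot("2023-02-01T00:00:00.000Z"))  # [(1, 'Mary Johns'), (2, 'Steve Smith')]
```

Because all timestamps share one ISO 8601 format, plain string comparison sorts chronologically, so no date parsing is needed.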
For example, if you soft delete Mary's record in Source and run the replication on 2023-05-04T00:00:00.000Z, the table in Target will look like this:

| Id | FullName | CreatedDate | LastModifiedDate | _active_value | _history_start | _history_end |
|---|---|---|---|---|---|---|
| 1 | Mary Johns | 2023-01-01T23:59:59.999Z | 2023-01-01T23:59:59.999Z | false | 2023-01-01T23:59:59.999Z | 2023-03-03T23:59:59.999Z |
| 2 | Steve Smith | 2023-01-02T23:59:59.999Z | 2023-01-02T23:59:59.999Z | true | 2023-01-02T23:59:59.999Z | Null |
| 1 | Mary Smith | 2023-01-01T23:59:59.999Z | 2023-03-04T00:00:00.000Z | false | 2023-03-04T00:00:00.000Z | 2023-05-04T00:00:00.000Z |

Limitations

History Mode is not supported for tables that lack timestamps, as Skyvia relies on them to track changes.

How to Enable History Mode

To enable History Mode:

1. Select an object to apply History Mode to, and click Edit.
2. Select History from the mode dropdown.

Configuring Replication

Replication is a specific kind of integration that helps you simply and easily replicate cloud data to relational databases or cloud data warehouses on an ongoing basis with the minimum necessary configuration. Replication offers dynamic, continuous, incremental delivery of high volumes of data with very low latency.

Creating a Replication Integration

To create a replication integration, click + Create New in the top menu and select Replication in the Integration column. When the replication details page opens, perform the following steps:

1. Specify a source connection you replicate data from and a target connection you replicate data to.
2. Specify the necessary Options: Incremental Updates, Update Schema, Create Tables, Drop Tables, Create Foreign Keys.
3. Select objects from the source list.
4. Optionally schedule the replication for automatic execution.
Creating Source and Target Connections

If you have already created source and target connections, simply select them from the drop-down list under Connection. If you haven't created the source or target connection yet, click +New connection at the bottom of the drop-down list, select the connector in the opened Connection window, and specify the connection parameters. Different connectors require different sets of parameters. See the Connectors section for more details.

Selecting Replication Options

Incremental Updates. If you select this checkbox, Skyvia does not perform a full replication (copying all the data) each time the replication is executed. Instead, it performs a full replication only the first time the integration runs. During subsequent runs, Skyvia detects the data changed in your cloud app since the last execution and applies these changes to your database: it deletes records that were deleted in the cloud app, updates records that were updated, and inserts newly created records. Such replication is known as incremental replication. Incremental updates are not supported for some objects in certain cloud apps. To learn more, go to Incremental Replication and Schema Updates.

Update Schema. If you select this checkbox, Skyvia checks the target schema and compares it with the schema it would build for the objects and fields selected in the replication. If there are differences, Skyvia applies some of them to the target database without re-creating all the tables. Note that not all differences are applied. See details in Incremental Replication and Schema Updates.

Create Tables. If you select this checkbox, Skyvia will try to create source tables in the target database. If the Incremental Updates checkbox is selected, Skyvia will try to create them only for the first replication execution (full replication).
You can find more information below. If a table with the same name already exists in the database, Skyvia checks the structure of that table. If it has the same structure, Skyvia upserts data from the replicated object to that table. If the table has a different structure, you will receive an error. However, if you select the Drop Tables checkbox, that table will be deleted, and a new table with the same name and the corresponding structure will be created.

Drop Tables. If you select this checkbox, Skyvia will try to drop source tables in the target database before creating them. If the Incremental Updates checkbox is selected, Skyvia will try to drop them only for the first execution (full replication). This checkbox is enabled only if the Create Tables checkbox is selected. You can find more information below.

Create Foreign Keys. If you select this checkbox, Skyvia will create foreign keys in the target database according to the relations between the replicated objects in the source. This checkbox is enabled only if the Create Tables checkbox is selected. Note that Skyvia does not create foreign keys for many-to-many relations. This option is only available if you clear the Use new replication runtime checkbox.

When you create a replication, the Incremental Updates, Create Tables, and Create Foreign Keys checkboxes are selected by default. Clear them if needed.

Incremental Updates in More Details

When the Incremental Updates checkbox is not selected, the Create Tables and Drop Tables options are applied every time the replication runs (if the corresponding checkboxes are selected). When the Incremental Updates checkbox is selected, the Create Tables and Drop Tables options are applied only when a full replication is performed:

- for the first run after the integration creation;
- for the first run after resetting the LastSyncTime integration parameter.
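The run behavior described above can be sketched as follows (a conceptual illustration, not Skyvia's actual implementation; the function and option names are ours):

```python
def plan_run(last_sync_time, create_tables=True, drop_tables=False,
             incremental_updates=True):
    """Decide what a replication run does, per the rules above:
    without Incremental Updates, Create/Drop Tables apply on every run;
    with it, they apply only on a full run (no LastSyncTime set yet)."""
    full = not incremental_updates or last_sync_time is None
    return {
        "full_replication": full,
        "drop_tables": drop_tables and full,
        "create_tables": create_tables and full,
        # Incremental runs only load changes made after LastSyncTime.
        "load_changes_since": None if full else last_sync_time,
    }

first = plan_run(last_sync_time=None)                 # first run: full
later = plan_run(last_sync_time="2024-01-01T00:00:00Z")  # incremental
```

Resetting the parameter (setting `last_sync_time` back to `None` here) therefore forces the next run to behave like the very first one.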
The LastSyncTime parameter stores the time of the last replication run. When the replication is executed for the first time, Skyvia sets this parameter to the current time, and the next execution loads the changes made since the time specified in it. If you want to reset the parameter manually, click Parameters in the toolbar on the left. When the parameter editor window opens, click Reset value to reload all the data.

Additional Options

Additional replication options control how target tables and columns are named. They are useful in a number of cases, for example, when you need to replicate multiple instances of a cloud app to the same database but to different tables, or when you need to replicate data to preexisting tables with custom names.

Direct Id Check. This checkbox determines how Skyvia checks whether a replicated record is already present in the target database. By default (if this checkbox is not selected), Skyvia queries all the records from the target table into its cache and performs the check against the cache. However, this may be inefficient when a replication run replicates only a few records while the target table already contains many. In this case, select this checkbox, and Skyvia will perform the check directly against the target database. This option is only available if you clear the Use new replication runtime checkbox.

Schema. This setting is available only if the target is SQL Server. It specifies the SQL Server schema to load data to. By default, data are loaded to the default schema of the user specified in the target SQL Server connection, usually the dbo schema.

Case. Determines how case is changed when converting a cloud object name to the target database table name. Note that some databases may not support uppercase characters in table names or may be configured to always use lowercase table names.
It can take the following values:

- Unchanged: database table names have the same case as the original cloud object names.
- Lower: all letters of a cloud object name are converted to lower case.
- Upper: all letters of a cloud object name are converted to upper case.
- Capitalized: the first letter of a cloud object name and the first letter after each underscore or space are converted to upper case; other letters are converted to lower case. For example, for the FAQ__ka object, a Faq__Ka table is created.
- FirstLetterUppercase: the resulting table name starts with a capital letter; other letters are converted to lower case. For example, for the FAQ__ka object, a Faq__ka table is created.

Prefix. Adds the specified prefix to the resulting table name.

Sanitize Table/Column Names. Enables name sanitization: all non-alphanumeric characters in the source object name, including spaces, are replaced with underscores.

Remove Underscores. This checkbox determines whether to remove underscores from the target table name.

Track Target Naming Changes. This checkbox determines whether to track changes of the target naming settings and apply them to an existing database the next time the integration runs. If the Incremental Updates checkbox is selected, the naming changes are applied using ALTER statements. Otherwise, if the Create Tables and Drop Tables checkboxes are selected and the naming settings change, Skyvia attempts to drop the tables with the old names and create tables with the new names. If the Track Target Naming Changes checkbox is not selected and you change the naming settings for an existing replication with the database already created, you need to manually apply the naming changes to the database before the next replication run; otherwise, the next run will fail because database tables with the changed names will not be found.
If this checkbox is selected, Skyvia tries to apply the changes automatically. Note that in this case Skyvia creates an additional table, __skyvia_objects, in the target database to store information about naming settings and their changes, so it knows which naming changes were already applied and which it still needs to apply during the next replication.

Target Naming settings are ignored for replication tasks in which you have specified a custom target table name manually. The Schema setting (for replication to SQL Server) is still applied in this case.

Selecting Data to Replicate

After you have chosen both connections on the left, you can select source objects, arranged in alphabetical order, in the table on the right. You can select some or all of the objects to replicate them to the target. The checkbox in the header under the Select Objects title selects all the displayed objects.

Filtering and Selecting Objects

For better visual perception and easier search, Skyvia offers additional filtering settings for objects:

- Filtering using the Show selected only link. When you apply this filter, only the objects you have selected are displayed. To display all objects again, click the Show all link.
- Filtering by object name. When you enter a part of an object name (for example, "account") in the field above the table, all objects whose names contain "account" are displayed automatically. If you want to hide objects with this name instead, select Hide from the drop-down list.

Depending on your needs, you may apply multiple filters at the same time and save the applied filters. To do it, enter a part of an object name in the objects if name contains field, switch from Show to Hide if necessary, and click Save filter. The filter is displayed, and after this you may add more filters in the same way.

Selecting Fields

After you have selected objects, you can optionally exclude fields that you do not want to replicate.
For this, click the corresponding Edit icon next to the object. When the task editor window opens, clear the fields that you do not want to replicate and leave only the fields intended for replication selected. Note that if you want to use Incremental Updates, the object key fields and additional fields such as CreatedDate, UpdatedDate, DateCreated, DateModified, LastModifiedDate, Modified Time, Created Time, etc. must be included. Here is the list of such fields for different cloud sources. In our screen, we show the fields of the Account object, both selected and cleared.

When replicating data to Amazon Redshift, the replication task editor provides more settings, allowing you to configure the target Redshift table and column parameters. See more details in the Editing Replication Task for Amazon Redshift topic.

Filtering Object Data

Additionally, you can filter source data for replication by applying conditions to object fields in the task editor window. Click +Condition and set the conditions you want to be met during replication. In our example, we want to replicate only accounts of the Customer-Direct type, and we specify it in the filtering area highlighted in blue. To learn more about filters, go to the Filter Settings topic.

Setting Custom Table Names

You can also specify a custom table name in the task editor window. Just enter it into the Target name box. In this case, the Target Naming settings are not applied for this task. If you want to return to the generated names, click the button in the Target name box. You may use only letters, digits, and underscores in custom table names.

Replicating Data from a Database

You can choose your database as Source in Replication. This adds one extra step to the replication configuration: configuring the Ingestion mode. The Ingestion mode defines how to track changes in Source during incremental replication. There are three available Ingestion modes: New, New and modified, and Log-Based.
Skyvia currently supports replication from SQL Server, Oracle, and MySQL databases. We are working on supporting more databases, and they will be available in the near future.

Ingestion Modes

You select the Ingestion mode for the whole replication on the main page of your replication. You can also change the Ingestion mode separately for each replication task in the task settings.

Ingestion Mode: Default

Tasks with the Default Ingestion mode use the ingestion mode selected for the whole replication.

Ingestion Mode: New

The New mode tracks only new records in Source and incrementally replicates them to Target. To configure the New mode, select a DATETIME column that can be used to track the creation date, or an auto-increment number column such as Id. Skyvia automatically offers a selection of columns with suitable types. Note that the New mode does not track deleted records.

Ingestion Mode: New and Modified

The New and modified mode tracks new and modified records in Source and incrementally replicates them to Target. To configure the New and modified mode, select a DATETIME or auto-increment number column that can be used to track record modifications from the Modified column dropdown. You can use the same column in the Created column dropdown to track new records, unless this column is nullable and is not set at the time of row creation. In that case, select a separate NOT NULL DATETIME or auto-increment number column from the Created column dropdown. Skyvia automatically provides the list of columns that can be used as Created and Modified columns. Note that the New and modified mode does not track deleted records.

Ingestion Mode: Log-Based

This mode uses database change-tracking and logging tools to detect new, modified, and deleted records between replication runs. The techniques for tracking changes can differ from database to database.
To learn how to enable change tracking for your database, visit the corresponding connector topic. The Log-Based mode is currently available for SQL Server only.

Editing Replication Task for Amazon Redshift

The replication task editor for data replication to Amazon Redshift differs from the replication task editor for other targets: it allows you to specify additional parameters specific to Amazon Redshift. These parameters affect the Redshift table creation. The editor consists of three tabs.

Table

This tab allows specifying settings for the whole table.

Distribution Style. Determines how Amazon Redshift will distribute the rows loaded to the table between the node slices. It can have the following values:

- Auto: the distribution style is not specified when creating the table, and the default value is used.
- Even: the rows are distributed evenly between the node slices in a round-robin fashion, regardless of the row data values.
- Key: the rows are distributed between the node slices depending on the values in one of the columns.
- ALL: every node has its own copy of all the table rows.

You can find more information about distribution styles in the [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/c_choosing_dist_sort.html).

Distribution Key. Determines the column based on whose values the rows are distributed between the node slices. If Auto is selected, this parameter is omitted when creating the table.

Distribution Sort Keys. Specifies the list of columns used for sorting table data when performing the initial data load to the table.

Columns

This tab allows you to configure settings for the Redshift table columns.
It also allows you to exclude some of the columns from replication: clear the checkboxes for the columns you want to exclude. You can also edit the column length for textual columns and the precision and scale for numeric columns. Additionally, you can select a Compression Encoding for each column. Auto means that the compression encoding is not specified in the CREATE TABLE statement. See the [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) for more information about compression encodings.

Filters

This tab allows you to configure filtering settings.

Incremental Replication and Schema Updates

Skyvia allows loading only the records that were changed (created, edited, or deleted) in the cloud source since the previous replication run and applying these changes to the target database. When using incremental updates, Skyvia can also track source metadata changes and apply some of them to the target database without re-creating tables and reloading all the data.

Incremental Replication Requirements

Requirements for the Source Cloud App

In some sources, incremental updates are not supported for some of the objects. For incremental updates, an object must have fields storing record creation and modification timestamps. See the details for each source in the Connectors section. If the object allows updating data, at least one of the fields with the creation or modification time must be present. If you select objects without both such fields in the replication (or if you excluded both fields from replication), the result depends on whether you replicate data to a database (like SQL Server, SQL Azure, MySQL, etc.)
or a cloud data warehouse (Amazon Redshift, Azure Synapse Analytics, or Google BigQuery). An integration replicating data to a database will not run; when you run such an integration, you will get an error that the source object does not have the corresponding fields, or that they are not selected. An integration replicating data to a data warehouse will run successfully and load data. However, if an object has neither of the fields, Skyvia displays a warning for such an object in the replication editor, and only the first (full) replication run loads data from this object to the data warehouse; subsequent replication runs won't load any data from it.

Please note that if an object has only one of these fields (or you excluded one of them from replication), Skyvia won't apply all the source data changes to the target:

- If an object does not have a field storing the record creation time, but has a field storing the record modification time, subsequent replications will update the records modified in the cloud app, but won't add new records created in it.
- If an object does not have a field storing the record modification time, but has a field storing the record creation time, subsequent replications will load new records created in the cloud app, but won't apply modifications made to existing records.

Besides, an object must have a primary key to replicate it with incremental updates. However, most cloud objects have primary keys/ids. If the source is a database, you can specify the corresponding fields for each table.

Requirements for the Target Database

Replication requires privileges for creating and dropping tables and for loading data. Note that incremental replication creates temporary tables each time it runs and drops them afterwards. Replication does not create temporary objects if you replicate data to cloud apps, like Airtable or Elasticsearch.
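The source-field requirements above can be summarized in a small decision helper (an illustrative restatement of the documented rules, not Skyvia code; all names are ours):

```python
def incremental_behavior(has_created: bool, has_modified: bool,
                         target_is_warehouse: bool) -> str:
    """Restates the documented rules for objects lacking creation/
    modification timestamp fields in incremental replication."""
    if not has_created and not has_modified:
        # Database targets fail; warehouse targets load data once only.
        return ("first full run only, then no data"
                if target_is_warehouse else "error: integration will not run")
    if has_created and has_modified:
        return "full incremental updates"
    if has_modified:  # no creation-time field
        return "updates applied, but new records are not added"
    return "new records loaded, but modifications are not applied"

print(incremental_behavior(True, True, False))  # full incremental updates
```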
What to Do When There Are No Required Fields in an Object

If some objects in your replication do not have fields storing the record creation and modification time, it is probably best to create two separate replication integrations:

- An integration with objects that support incremental updates. It will use incremental replication and may be scheduled for more frequent runs.
- An integration with objects that do not support incremental updates. It will re-create tables and reload all the data every time it runs. To reduce the number of processed rows, it's better to schedule it to run seldom.

This approach, however, has a drawback: foreign keys between the tables created by different integrations are not created (and it's better not to create them manually, to avoid errors).

Processing Deleted Records

For some data sources, incremental replication not only loads new and updated records to the database, but also deletes the records that were deleted in the cloud source. This is supported for data sources and objects that store information about deleted records and allow querying it. For example, Salesforce stores soft-deleted records for 30 days, and Skyvia applies soft deletes from Salesforce to the database during incremental replication. Hard deletes leave no information about deleted records in Salesforce and thus are not applied to the database. This feature is useful if you want an exact copy of cloud data in the database, but it might be inconvenient if you replicate data for archiving purposes and want to keep records in the database after they were deleted in the cloud app.

Metadata Changes and Updating Target Schema

Replication with incremental updates supports automatic updates of the target database schema without the need to re-create tables and reload all the data. This feature can be turned on with the Update Schema checkbox. It does not mean that Skyvia automatically detects changes in the source.
Rather, it means that Skyvia can reflect the changes you make to the replication in the database without losing the data already replicated.

How Updating Schema Works

When the replication runs, it checks the target database schema and compares it to the schema it would build based on the selected source objects and fields. If there are differences, Skyvia automatically applies the changes that add tables and columns to the database; it ignores any extra tables or columns in the database. Such differences may appear when you either manually modify the target database or edit the replication. However, if source metadata changes, you still need to edit your replication to apply the changes to the database.

Metadata Change Detection

Skyvia does not support automatic detection of metadata changes. If metadata change in your cloud data source, the replication will continue to run knowing nothing about these changes. If a new field is added to a replicated object, the replication will continue working, but the new field will not be replicated. If a replicated field or object is deleted or renamed in the data source, the replication will start failing until you make the necessary changes to it. Besides, if the source connector supports metadata cache, no changes are detected until the cache is cleared.

To detect and apply source metadata changes in a replication, do the following:

- If the source connector supports metadata cache, clear the metadata cache of the source connection.
- Open the replication for editing. When you edit it, Skyvia automatically excludes objects deleted in the source from the replication. If all the object fields are replicated and some of them were deleted, they are also excluded automatically. However, if some of the object fields were manually excluded and some of the replicated fields were deleted in the source, you need to manually edit the task and exclude the deleted fields.
- If fields were added to a replicated object, edit the corresponding replication task and select the checkboxes for these fields to add them to the replication.
- If new objects were created in the source, select them manually to add them to the replication.

How Changes Are Applied

Skyvia does not automatically apply all the metadata changes that it detects. If Skyvia finds any extra tables and columns in the database that do not participate in the replication, it ignores them.

| Source Change | How It Is Applied to the Database |
|---|---|
| An object is added to replication | The corresponding table is created in the database. However, only the records created or updated since the previous replication run are replicated. If you need to load legacy data from that object, you can do it manually with a separate integration. |
| An object is excluded from replication or deleted | Skyvia ignores it. The corresponding table with all the data stays in the database. |
| An object is renamed | Skyvia treats this as deleting the object and creating a new one. It keeps the table with the old name in the database and creates a new table with the new name. |
| A column is added to replication | Skyvia creates a new column in the corresponding table, filled with NULL values. |
| A column is excluded from replication or deleted | Skyvia ignores this deletion and keeps the database column and all the data. For future rows, the column will have NULL values. |
| A column is renamed | Skyvia treats this as deleting the column and creating a new one. It keeps the column with the old name in the database and adds a new one with the new name. Data for new records is loaded to the new column. |
| A column data type is changed | Skyvia applies changes that widen the column data type.* |

\* Note that different databases support changes to existing tables differently. Not every change can be applied in all supported databases. For example, data type changes may be treated differently in different databases.
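A rough sketch of the additive comparison described above: missing tables and columns generate DDL, while anything extra in the target is ignored (a simplified illustration; Skyvia's actual DDL generation is more involved, and all names here are ours):

```python
def additive_schema_changes(desired: dict, existing: dict) -> list:
    """Compare the schema the replication would build (desired) with
    the target schema (existing) and emit only additive DDL, ignoring
    any extra tables/columns present in the target."""
    ddl = []
    for table, cols in desired.items():
        if table not in existing:
            ddl.append(f"CREATE TABLE {table} "
                       f"({', '.join(f'{c} {t}' for c, t in cols.items())})")
            continue
        for col, ctype in cols.items():
            if col not in existing[table]:
                # New columns are added and filled with NULL values.
                ddl.append(f"ALTER TABLE {table} ADD COLUMN {col} {ctype}")
    return ddl

desired = {"account": {"Id": "TEXT", "Name": "TEXT", "Phone": "TEXT"}}
existing = {"account": {"Id": "TEXT", "Name": "TEXT", "Legacy": "TEXT"}}
# Only the missing Phone column is added; the extra Legacy column is ignored.
print(additive_schema_changes(desired, existing))
```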
Required Fields for Incremental Replication

Replication with Incremental Updates selected requires entity keys and the fields storing record creation/last modification times to be selected in all its tasks. All fields are selected by default, but you can manually exclude fields from the selection. Here is the list of these required additional fields for all the data sources:

- ActiveCampaign: CreatedDate, UpdatedDate
- Asana: CreatedDate, UpdatedDate
- BigCommerce: DateCreated, DateModified
- Dynamics 365: modifiedon, createdon
- FreshBooks: DateCreated, DateModified
- Freshdesk: Created, Updated
- G Suite (Google Apps): Updated
- HubSpot: CreatedAt, UpdatedAt
- Jira: CreatedDate, UpdatedDate
- Magento: created_at, updated_at
- Mailchimp: CreatedDate, LastUpdate
- Marketo: Created At, Updated At
- NetSuite: dateCreated, lastModifiedDate
- Pipedrive: CreatedDate, UpdatedDate
- QuickBooks: MetaData_CreateTime, MetaData_LastUpdatedTime
- Salesforce: LastModifiedDate, CreatedDate
- Salesforce Marketing Cloud: CreatedDate, ModifiedDate
- ShipStation: CreatedDate, UpdatedDate
- Shopify: CreatedAt, UpdatedAt
- Streak: CreatedDate, LastModifiedDate
- Stripe: CreatedDate, UpdatedDate
- SugarCRM: date_modified, date_entered
- WordPress: CreatedDate, UpdatedDate
- Zendesk: Created, Updated
- Zoho Books: CreatedDate, UpdatedDate
- Zoho CRM: Modified Time, Created Time
- Zoho Inventory: CreatedDate, UpdatedDate
- Zoho Invoice: CreatedDate, UpdatedDate
- Zoho People: CreatedDate, UpdatedDate

Note that it is enough to have just one of the fields storing record creation and modification timestamps selected, or even present, in the object.
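A small helper can validate a task's field selection against this rule (a sketch; the mapping below copies only a few entries from the list above, and the function name is ours):

```python
# A few entries from the per-source field list above (extend as needed).
TIMESTAMP_FIELDS = {
    "Salesforce": ("CreatedDate", "LastModifiedDate"),
    "HubSpot": ("CreatedAt", "UpdatedAt"),
    "Zoho CRM": ("Created Time", "Modified Time"),
}

def selection_ok(source: str, selected_fields: set) -> bool:
    """It is enough to keep just one of the creation/modification
    timestamp fields selected for incremental replication."""
    required = TIMESTAMP_FIELDS[source]
    return any(field in selected_fields for field in required)

print(selection_ok("Salesforce", {"Id", "Name", "CreatedDate"}))  # True
print(selection_ok("HubSpot", {"Id", "Name"}))                    # False
```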
Replication Tutorials

In this section, you can get familiar with the following tutorials on data replication operations:

- How to Set Up Replication of Salesforce Accounts and Contacts to SQL Azure. This tutorial describes how to create a replication that replicates data from Salesforce Accounts and Contacts to SQL Azure and how to schedule the integration for automatic data updating in the SQL Azure database.
- How to Set Up Replication of Dynamics 365 Data to SQL Azure. This tutorial describes how to create a replication that copies data from Dynamics 365 to SQL Azure and how to schedule the integration for automatic data updating in the SQL Azure database.

How to Set Up Replication of Dynamics 365 Data to SQL Azure

In this tutorial, we will show how to configure replication of your Dynamics 365 data to a relational database so that the replicated data is kept up to date with Dynamics 365 automatically. We will demonstrate it with Dynamics 365 accounts, contacts, leads, lists, and listmembers; however, the same can be applied to any Dynamics 365 object. We will also show how to switch the replication mode so that incremental replication is used for most objects, while a different method is used for some of them.

Creating Connections

In order to replicate data from Dynamics 365 to SQL Azure, we first need to create connections to Dynamics 365 and SQL Azure.
If you have already created the necessary connections, you may skip these steps. To create a connection to Dynamics 365, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. In the opened Select Connector page, select Dynamics 365 . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Dynamics 365 , select the CRM category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to Dynamics 3651 . Enter the following connection parameters: URL \u2014 the URL that you use to connect to Dynamics 365. Usually it looks like: https://companyname.crm.dynamics.com Username \u2014 your Windows Live ID. Password \u2014 password for your Windows Live ID account. Click Create Connection . To create a connection to SQL Azure, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select SQL Server . To quickly find it, you can either use the Type to filter box or filter connectors by categories, using the All list (for SQL Server , select the Database category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to SQL Azure . In the Server box, enter \u201cTCP:<server name>\u201d. Replace \u201c<server name>\u201d with your actual server name. Specify your User Id , Password , and Database to connect to. Click Create Connection . Now we have the necessary connections created. Let\u2019s create an integration that performs the necessary data replication operation. Creating Integration To create a replication, perform the following actions: Click +Create New in the top menu. In the Integration column, click Replication . Rename your integration by clicking and editing the integration name. The default integration name is Untitled . 
Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Dynamics 365 connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. In the grid under Select Objects , select checkboxes for the objects you want to replicate. In our tutorial, we will replicate account , contact , lead , and marketing lists data - the list and listmember objects. Let\u2019s suppose marketing lists are rarely modified. We do not want to check the list object for changes every time, and want to replicate its data only when we know something has changed. So we can switch replication mode for it to Resync on demand . For this, open the replication task by clicking the Edit button next to the list object and, in the Mode list, select Resync on demand . Skyvia will replicate this object the first time, and then only when you reset the corresponding LastSyncTime parameter. Let\u2019s also assume that we don\u2019t want to replicate the contacts\u2019 emails. However, we still want to check for duplicate emails later. Here Skyvia\u2019s Hashing feature comes in handy. To enable hashing for the email fields, open the replication task for the contact object, point to the Email field, click Unhashed , and then select Hashed . Click the Create button. Run the integration by clicking Run . Now you have your data replicated to a relational database. On the Monitor tab, you can check the Run History of the integration. Scheduling Integration Execution After we have replicated our data, we want to keep it up-to-date. For this, we will configure the integration to run every hour during workdays automatically. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. 
Click Occur once at and then select Recur every in the list. Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 9:00 and 18:00 to the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from. Click Save to schedule integration execution. After this our integration will run automatically every hour between 09:00 and 18:00 of every workday. You may check its execution on the Monitor tab in the Run History . You can also visit Scheduling Integrations to get more detailed information on setting an integration schedule." }, { "url": "https://docs.skyvia.com/data-integration/replication/tutorials/how-to-set-up-replication-of-salesforce-accounts-and-contacts-to-sql-azure.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Replication Tutorials How to Set Up Replication of Salesforce Accounts and Contacts to SQL Azure In this tutorial, we will show how to configure replication of your Salesforce data to a relational database so that the replicated data are kept up-to-date with Salesforce automatically. In our tutorial, we will demonstrate it with Salesforce Accounts and Contacts, however, the same can be applied to any Salesforce object. By default, Skyvia creates and maintains a copy of the data. It also replicates record deletions from Salesforce, deleting the corresponding records in the target. In this tutorial, we don\u2019t want to replicate deletions, and want to store all the history of replicated data, so we will use the History mode instead. Creating Connections In order to replicate data from Salesforce to SQL Azure, first we need to create connections to Salesforce and SQL Azure databases. 
If you have already created the necessary connections, you may skip these steps. To create a connection to Salesforce, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled. Click it to rename the connection, for example, to Salesforce1 . From the Environment drop-down list, select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you have selected User Name & Password on the previous step, specify your Salesforce account e-mail, password, and security token. These parameters are stored on Skyvia. Otherwise, if you have selected OAuth 2.0 authentication, click Sign In with Salesforce and sign in. The resulting OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click Create Connection . To create a connection to SQL Azure, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select SQL Server . To quickly find it, you can either use the Type to filter box or filter connectors by categories, using the All list (for SQL Server , select the Database category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to SQL Azure . In the Server box, enter \u201c TCP:<server name> \u201d. Replace \u201c<server name>\u201d with your actual server name. Specify your User Id , Password , and Database to connect to. Click Create Connection . Now we have the necessary connections created. 
Let\u2019s create an integration that performs the necessary data replication operation. Creating Integration To create a replication, perform the following steps: Click +Create New in the top menu. In the Integration column, click Replication . Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Salesforce connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. In the grid under Select Objects , select checkboxes for the objects you want to replicate. In our tutorial, we will replicate Account and Contact . In the replication tasks, enable the History mode. For this, open the replication task by clicking the Edit button next to the Account object and, in the Mode list, select History . Skyvia will store all the history of replicated changes. Click Save task and repeat these actions for the Contact object. Let\u2019s also assume that we don\u2019t want to replicate the contacts\u2019 emails. However, we still want to check for duplicate emails later. Here Skyvia\u2019s Hashing feature comes in handy. To enable hashing for the email fields, open the replication task for the Contact object, point to the corresponding email field, click Unhashed , and then select Hashed . Click the Create button. Run the integration by clicking Run . Now you have your data replicated to a relational database. On the Monitor tab, you can check the Run History of the integration. Scheduling Integration Execution After we have replicated our data, we want to keep it up-to-date. For this, we will configure the integration to run every hour during workdays automatically. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . 
Under Days of week , select checkboxes with all the workdays. Click Occur once at and then select Recur every in the list. Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 9:00 and 18:00 to the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from. Click Save to schedule integration execution. After this, our integration will run automatically every hour between 9:00 and 18:00 of every workday. You may check its execution on the Monitor tab in the Run History . Click an integration run to open the History Details window and see the detailed information on this run. You can also visit Scheduling Integrations to get more information on setting schedule." }, { "url": "https://docs.skyvia.com/data-integration/scheduling-packages-for-automatic-execution.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Scheduling Integrations for Automatic Execution Skyvia allows you to set a schedule for an integration to execute it automatically. This might be useful if you want to configure data loading operations to run periodically or if you want to delay an operation to a later time. Based on your needs, you can schedule the created integration to execute either once (one-time execution) or at recurring intervals (repeated execution). Recurring integration runs can be executed on a daily, weekly and monthly basis \u2014 once a day at a specific time or multiple times with intervals. You can select certain weekdays you want your integration to be executed as well as impose time period restrictions on the integration execution for a day. 
Setting a Schedule To specify a schedule for an integration, first create an integration you need or open the already created integration for editing. In the toolbar on the left, click Schedule to open the schedule editor window. By entering all the required parameters and saving them, you activate the schedule at once or delay an operation to a later time. To make the schedule inactive without deleting it, click Disabled . Repeated integration Execution (Recurring Runs) Repeated integration execution is intended for those clients who need to run an integration on a regular basis. You can schedule an integration to run based on the days selected and time specified, once or many times per day. If you choose to run your integration many times per day, you need to specify intervals your integration runs at. Such advanced schedule settings are developed for your comfort and to better meet your business needs. Running an Integration on a Daily, Weekly and Monthly Basis In order to specify days of your integration run, use Run every parameter. By default, this parameter is set to Day . If you want to run your integration each day or once in several hours or minutes, leave Day selected. However, you can change it to Month or Week . You select Week if you want to run your integration on certain weekdays (for example, from Monday to Friday). You select Month if you want to run your integration once per month (for example, on the 29th of each month). When you click Day , you schedule integration execution for at least once a day. When you click Week , you can schedule integration execution on certain weekdays by selecting corresponding checkboxes. When you click Month , you can schedule integration execution on a specific day of the month (1st, 3rd, 15th, etc.) or on a certain weekday of a specific week (second Wednesday of the month or last Friday of the month). 
Running an Integration Once or Multiple Times within Selected Days If you want to run your integration once a day within selected days, click Occur once at and specify the time. If you want to run your integration many times within selected days, click Recur every and specify time intervals the integration runs at. You can schedule your integration to run every n minutes (1, 2, 5, etc.) or every n hours (1, 2, 5, etc.). We do not limit you in setting the right interval for your integration runs. Please note that these time intervals are counted for every selected day independently, and each day the schedule starts at 00:00 afresh. If, for example, you schedule an integration to run every 7 hours starting at 00:00, the last run will be at 21:00. For the next day, the schedule will be refreshed, and the integration will start running again at 00:00. To set the frequency of your integration runs, use the Time parameter. When you select the Recur every option for your integration, you may click Set time restrictions to limit the run period of your integration. The Set time restrictions parameter is very important if you want your integration to be executed within an exact period, for example, from 9:30 to 18:30 each workday. Please note that if you do not specify the exact period, the integration will start running at 00:00 the next day automatically. One-Time Integration Execution (One-Time Run) One-time integration execution is intended for those clients who need to run an integration only once. You can launch your integration at once or delay the integration run by selecting the At a specific time option under Starting . The Starting on parameter determines when the schedule itself is considered \u201cactive\u201d. However, that doesn\u2019t necessarily determine the first integration run. For the first integration run, it is more suitable to use the Set time restrictions option under the Time parameter. At the end, select Time Zone and click Save to save the schedule. 
Time Zone is a required parameter that is used to define the time zone your schedule is based on. Usage Examples Example 1 You would like to schedule an integration to run each workday every hour not at 09:00, 10:00, 11:00, etc., but at 09:30, 10:30, 11:30 and so on. Setting Starting on parameter to 09:30 will not help in this case. Instead, click Set time restrictions and set the from parameter to 09:30. Leave the default value in the to parameter or change it to any end time you want. Your integration will run every hour at minutes specified in the from parameter. Example 2 You would like to run an integration twice per day \u2014 at 8:00 and 14:00. Hence, you set the integration to recur every 6 hours. The from parameter should be set to 8:00. The to parameter should be more than the time of the second integration run but less than the unwanted third integration run. Set it to 14:30. If you want your first integration run to start in the afternoon on a certain date, use the Starting on parameter. For example, set it to 09.01.2019 at 13:00. This means that the first integration run will start on 09.01.2019 at 14:00 because this is when the time of scheduled integration run (depending on frequency and time restrictions) is equal or greater than the Starting on parameter. Example 3 You would like to run an integration on the last day of every month. As some months have 30 days, some months \u2014 31 days and February has 28 days, you cannot select the On day option and run your integration, for example, on the 31st of each month as not all months have 31 days. Instead you should select the On the option. Afterwards, in the first drop-down list, select Last and, in the second drop-down list, select Day . If you want your integration to run one time each month on the last day, under the Time parameter, select the Occur once at option and specify the required execution time. When everything is configured, save the schedule." 
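The interplay of Recur every and Set time restrictions in the examples above can be modeled with a short Python sketch. This is a simplified illustration, assuming each day's runs start at the from time (as described in Example 1); `daily_run_times` is a hypothetical helper name, not a Skyvia API.

```python
from datetime import datetime, date, time, timedelta

def daily_run_times(recur_every_hours, from_time=time(0, 0), to_time=time(23, 59)):
    """Compute the run times within one day for a 'Recur every' schedule.

    Runs start at from_time and repeat every recur_every_hours, stopping
    once a run would fall outside the from/to restriction window. Each day
    the schedule starts afresh, so only one day is modeled.
    """
    runs = []
    current = datetime.combine(date.today(), from_time)
    end = datetime.combine(date.today(), to_time)
    while current <= end:
        runs.append(current.time())
        current += timedelta(hours=recur_every_hours)
    return runs

# Example 2 above: recur every 6 hours, restricted from 8:00 to 14:30.
daily_run_times(6, time(8, 0), time(14, 30))   # runs at 08:00 and 14:00

# The 7-hour example above: no restrictions, so runs start at 00:00.
daily_run_times(7)                             # 00:00, 07:00, 14:00, 21:00
```

Setting the to parameter to 14:30 cuts off the unwanted 20:00 run, which is exactly why Example 2 recommends a to value between the second and third run times.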
}, { "url": "https://docs.skyvia.com/data-integration/synchronization", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Synchronization Overview Synchronization integration performs bi-directional synchronization between two data sources: cloud applications or relational databases. Skyvia allows you to synchronize data with different structures while preserving all data relationships. It provides advanced mapping settings to configure the synchronization process and supports data splitting, allowing you to map a single table to multiple related tables or objects and vice versa (one-to-many). Skyvia goes beyond basic column-to-column mapping by offering powerful mathematical and string expressions for advanced data transformation during synchronization. How Synchronization Works Synchronization keeps data between two data sources consistent by applying changes in both directions. It works best when records in the data sources don\u2019t overlap or when one of them is empty before the first synchronization. If you only need to sync data in one direction, use Import instead. First Synchronization When you run Synchronization for the first time, Skyvia copies data from Source to Target and vice versa, without checking if the data is identical. After copying, Skyvia creates an ID map and links the original records to their copies, and will be able to detect changes between both data sources in the next runs. To avoid creating duplicates, we recommend leaving one of the data sources empty for the first Synchronization. Next Synchronizations If a record is added, modified, or deleted in one of the data sources, Skyvia will sync these changes to other data sources in the next run. You don\u2019t need to add custom external ID fields or use textual IDs for Synchronization to work: ID map remembers how data corresponds between sources. 
When performing synchronization repeatedly, Skyvia synchronizes only the data that has changed since the previous synchronization. It uses timestamp fields in cloud applications to track when records are created or modified and creates special tracking tables and triggers in relational databases to monitor data changes. However, certain objects in some cloud applications do not have these timestamp fields, so synchronization is not supported for those objects. Synchronization Mapping Specifics In Synchronization integrations, you must configure mapping for both directions: Source to Target, and Target to Source. Columns with the same names and data types in Source and Target are mapped automatically. You can assign a constant value to a field for one-way synchronization and use the same constant to filter data in the opposite direction. You can see more information in the Constant Mapping topic. Requirements and Recommendations To synchronize database data, the primary key columns in the database tables must be autogenerated. Cloud objects must support the INSERT and UPDATE operations and have the UpdatedDate or CreatedDate fields. If you want to synchronize cloud CRM data (Salesforce, Dynamics 365, SugarCRM, or Zoho CRM), we recommend creating a dedicated CRM account exclusively for synchronization, one that will not be used for any other actions. Check out the Dedicated Account section here to learn more. Change Conflicts When a record is modified both in Source and Target, Skyvia will apply Source changes to Target, and all Target changes will be lost. They are lost even when different fields were changed in Source and Target. Skyvia does not track changes on a per-field level. For example, we have a Contact record with the following values: Field Value First Name John Last Name Doe Email john.doe@example.com Phone 123-456-7890 Source Change : The Email field is updated to john.new@example.com . Target Change : The Phone field is updated to 987-654-3210 . 
During the next synchronization run: Source changes to the Email field are synced to the target. Target changes to the Phone field are completely discarded. After synchronization, the record will be as follows in both systems: Field Value First Name John Last Name Doe Email john.new@example.com Phone 123-456-7890 Synchronization Tutorials See our tutorials to learn more about data synchronization: How to Synchronize Product Data How to Synchronize Zoho CRM Contacts with Mailchimp Subscribers How to Configure Cloud Data Synchronization with Empty Database" }, { "url": "https://docs.skyvia.com/data-integration/synchronization/how-to-create-synchronization-task.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Synchronization How to Create Synchronization Task After you have opened an integration details page either for existing integration or for a new one, click the Add new link or Edit button to open the Task Editor. Note that the integration should have valid source and target connection before creating or editing tasks. After the task editor is opened, perform the following steps: If you want to synchronize several source cloud objects (or database tables) with one target object (or table) or vice versa, click the corresponding many-to-one or one-to-many buttons. If you synchronize one object or table from both sides, omit this step. On a side with single object select the synchronized object from the list. If you perform one-to-one synchronization, do this for both sides. If you perform one-to-many or many-to-one synchronization, on the multiple side perform the following: Select the main object or table to synchronize data from the drop-down list box. Click the +Related button and select an object related to the main one in the new drop-down list box. Repeat the previous step until you add all the necessary related objects. 
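The record-level, source-wins conflict behavior described above can be sketched in a few lines of Python. The `resolve_conflict` helper is purely illustrative (not Skyvia code); it shows why the target's Phone edit is discarded even though Source never touched that field.

```python
def resolve_conflict(source_record, target_record, source_changed, target_changed):
    """Record-level 'source wins' conflict resolution.

    Changes are not tracked per field: if the source side changed the
    record, the WHOLE source version replaces the target version, and
    any target-side edits are discarded, even edits to other fields.
    """
    if source_changed:          # source wins, even if target also changed
        return dict(source_record)
    if target_changed:          # only the target changed: keep its version
        return dict(target_record)
    return dict(source_record)  # no changes on either side

base = {"First Name": "John", "Last Name": "Doe",
        "Email": "john.doe@example.com", "Phone": "123-456-7890"}
source = dict(base, Email="john.new@example.com")   # Source change
target = dict(base, Phone="987-654-3210")           # Target change

merged = resolve_conflict(source, target, source_changed=True, target_changed=True)
# merged["Email"] == "john.new@example.com"  (source change applied)
# merged["Phone"] == "123-456-7890"          (target change discarded)
```

A per-field merge would have kept both edits; the whole-record replacement is what makes the Phone value revert.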
Note that you can add not only objects related to the first one, but objects related to any of the already added objects by clicking the +Related button near this object. Click Next step . Map target columns to the source data columns or expressions or lookup fields. The simplest way to import data is to map necessary target properties/columns to the source columns directly. For this, click a target property and then select the corresponding source file column in the drop-down list in the Source column. For more complex cases, when synchronized data has a different structure, see Mapping . Note that you should map at least the fields marked as Required . This mapping is applied when synchronizing target data with source data. Click Target to Source and configure mapping in the opposite direction. This mapping is applied when synchronizing source data with target data. \n Mapping can be different for different directions. For example, when the source contains one column \u2018FullName\u2019, and the target has two columns - \u2018FirstName\u2019 and \u2018LastName\u2019, you need to use different expressions for synchronizing source to target and target to source. When synchronizing source to target, you need to split the \u2018FullName\u2019 column values using the SUBSTRING function, and when synchronizing target to source, you need to concatenate the values of the two columns and add a space character between them. After you have mapped all the properties/columns you need, click Save to add the created task to the package. For more information, you can check the tutorials How to Synchronize Product Data and How to Synchronize Zoho CRM Contacts with Mailchimp Subscribers . When creating a one-to-many or a many-to-one synchronization task, it is possible to map a field on the One side first using the main table fields from the Many side, and after this select a related table from the Many side, and map the same field on the One side using the related table fields from the Many side. 
This will result in the following behavior: when a record is created on the One side, first, the value based on the main table fields from the Many side is assigned to the field, and after this the record is updated, and this value is overwritten with the value based on the related table fields." }, { "url": "https://docs.skyvia.com/data-integration/synchronization/synchronization-package.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Synchronization Configuring Synchronization Synchronization is a specific kind of integration, which performs bi-directional data synchronization between cloud applications and relational databases. Creating a Synchronization Integration To create a synchronization integration, click + Create New in the top menu and select Synchronization in the Integration column. When the integration details page opens, perform the following steps: Specify the data sources you import data from and to and connections (one-to-one; one-to-many; many-to-one) to them. Create synchronization tasks for the integration. Optionally schedule the integration for automatic execution. Synchronization Requirements and Supported Sources Skyvia supports synchronization of cloud applications and relational databases. It does not support synchronization of cloud data warehouse services, like Amazon Redshift, Google BigQuery, or Azure Synapse Analytics. For synchronizing database data, the primary key columns of the database tables must be autogenerated. For example, you may use autoincrement/identity columns for a primary key or create triggers to generate and assign primary key values. Synchronized cloud objects must have fields storing information about when an object was created and last modified. Both of the fields must be present in an object. In some cloud applications some of the objects may have only one of such fields or don\u2019t have any. 
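The direction-specific mapping in the FullName example above can be sketched in Python. In Skyvia the splitting is done with a SUBSTRING expression; here a plain Python equivalent is shown for clarity, and both helper names are illustrative. Mapping toward the side with two columns splits the single \u2018FullName\u2019 value; mapping toward the side with one column concatenates the two values with a space between them.

```python
def split_full_name(full_name):
    """Mapping toward the two-column side: split 'FullName'.

    Naive split on the first space (the docs use a SUBSTRING expression
    for this); middle names end up in LastName in this simple sketch.
    """
    first, _, last = full_name.partition(" ")
    return {"FirstName": first, "LastName": last}

def join_full_name(first_name, last_name):
    """Mapping toward the one-column side: concatenate with a space."""
    return {"FullName": f"{first_name} {last_name}"}
```

The two helpers are deliberately asymmetric, which is exactly why the two directions of a synchronization task need different mapping expressions.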
Such objects cannot be added to a synchronization. If you try to select such an object for synchronization, the corresponding error message is displayed. Note that some cloud applications, like Podio or SendPulse, do not have objects with both such fields at all. These data sources cannot be synchronized. Source and Target Connection When creating a synchronization, first you need to select source and target connections. Synchronization is bi-directional. It means that the only difference between the Source and Target is that in case of conflicts source data changes have priority over the target data changes. If the same records were modified in both source and target between synchronizations, the source values are applied to the corresponding records in the target. You can specify integration connections in the following way: Under Source , in the Connection list, select the source connection from the drop-down list. If you have not created the source connection yet, click the +New connection button and create a new connection ( learn how ). Under Target , in the Connection drop-down list, select the target connection to synchronize data with. Creating Tasks To create a synchronization task, click the Add new link and configure a new synchronization task in the Task Editor as described in the How to Create Synchronization Task topic. Integration Settings You can rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. You can schedule your integration for automatic execution. See Scheduling Integrations for more details. 
Dedicated Account If the source or target is Salesforce, Dynamics 365, SugarCRM, or Zoho CRM, and synchronization uses a CRM account that is not used for any other data update operations, select the Dedicated [CRM] account for synchronization checkbox (where [CRM] can be Salesforce, Dynamics 365, SugarCRM, or Zoho CRM). This checkbox means that this account is used only for the synchronization, and changes made by this account are ignored by the synchronization. If this checkbox is selected: The first synchronization synchronizes all the data. If users, other than the user specified in the synchronization connection, make changes to application data during the synchronization operation or after it, these changes are synchronized in the next synchronization operation. Any changes, performed by the user specified in the synchronization connection, are ignored. Skyvia assumes that this application user account is dedicated to the synchronization operation and does not perform any other data changes. If this checkbox is not selected: The first synchronization synchronizes all the data. Any changes to CRM data performed during the synchronization operation are ignored. If the other side is a relational database, all the changes to it performed during the synchronization operation are also ignored. It\u2019s highly recommended not to edit the data being synchronized during the synchronization operation. Any changes, performed by the user specified in the synchronization connection or any other user between the synchronization operations, are synchronized. We recommend creating a dedicated cloud application account for the synchronization operation and selecting the Dedicated [CRM] account for synchronization checkbox. Locale The specified Locale determines locale settings including DateTime format, number format, string collation, currency format, etc. This option is applied when non-textual data (like dates, numbers, etc.) is converted to a string. 
Such conversion can happen in expressions or when mapping a non-textual field to a string field. If this option is not set, InvariantCulture is used. The invariant culture is culture-insensitive; it is associated with the English language but not with any country/region. After you have configured your integration, click the Create button. Synchronization Parameters In Skyvia, synchronization integrations have the following two parameters. Parameter Description LastSyncTime The time of the last synchronization. When synchronization is executed, Skyvia sets this parameter to the current time, and the next integration execution will synchronize the changes made since the time specified in this parameter. You can reset this parameter in order to reload all the data. InitTrackingObjects A Boolean value, determining whether to create tracking tables and triggers in the synchronized database. It is true before the first integration run, when they are not created yet. Integration execution sets it to false . You can reset it when you need to re-create the tracking tables and triggers in the database. Editing Existing Integration Editing an existing integration is performed via the same integration editor page with the same interface elements as when creating a new integration. To edit an integration, click Edit on its details page. In the integration editor, you can change connections, add, edit, or delete tasks, enable, configure, or disable the integration schedule , etc. Editing or Deleting Tasks When editing an existing synchronization, which has already run, you can freely delete tasks and unmap fields. However, when mapping new fields or changing mapping for already mapped fields, you should note that the next synchronization run won\u2019t resync all the records with the new mapping automatically. The next synchronization run will only sync records that were modified since the previous integration run, so the new mapping will be applied only for these records. 
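The lifecycle of the two synchronization parameters described above can be modeled with a small Python sketch. The parameter names mirror the documentation, but the `SyncParameters` class and its methods are illustrative assumptions, not Skyvia's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SyncParameters:
    """Illustrative model of a synchronization integration's parameters."""
    last_sync_time: Optional[datetime] = None  # LastSyncTime: unset before first run
    init_tracking_objects: bool = True         # InitTrackingObjects: true before first run

    def after_run(self, run_started_at: datetime) -> None:
        # A run records the current time and marks tracking objects as created.
        self.last_sync_time = run_started_at
        self.init_tracking_objects = False

    def reset(self) -> None:
        # Resetting both parameters makes the next run behave like the first:
        # all data is reloaded and tracking tables/triggers are re-created.
        self.last_sync_time = None
        self.init_tracking_objects = True
```

Resetting `last_sync_time` alone corresponds to the "reload all the data" workaround; resetting both is what the documentation prescribes when a database side needs its tracking tables and triggers re-created.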
If you want to resync all the records with the updated mapping, you can choose one of the following workarounds:

- Clear the LastSyncTime parameter by clicking Parameters and then Reset value to perform synchronization as if for the first time. Since your data was previously synced, both data sources contain the same records. To avoid creating duplicate records during the next synchronization, delete them in one of the data sources; Skyvia will then load all the records from the other side to the side with the deleted records. Do not forget to clear the LastSyncTime parameter, or you may lose data.
- Perform a mass update of your records in one of the sources, assigning the same values that are already stored in this data source. For example, you may use our Query tool and perform an UPDATE statement. Here is an example of such a statement for Salesforce PricebookEntries:

UPDATE PricebookEntry SET IsActive = IsActive

This statement does not actually modify data, because it assigns each record a value it already has, but the records will be considered modified, and the synchronization will resync them to the other source.

When using any of these workarounds, please be careful and act only if you know what you are doing. Incorrect actions may cause data loss.

Adding Tasks

Adding a task to an existing integration that has already run is a more complex case. If one or both of the sources are databases, you will definitely need to reset both synchronization parameters and run the integration as if for the first time. If you have already synchronized data, you may need to truncate previously synchronized tables on one of the sides to avoid creating duplicates. If both synchronized sources are cloud applications, it is possible to use the second workaround for the added object. In any case, we recommend planning your synchronization integrations in advance and adding all the necessary tasks when creating them.
In case you need to add tables or objects to an existing synchronization, it may be better to create a completely new integration.

Data Integration Tutorials

In this section, we provide step-by-step information on how to create export, import, replication, and synchronization integrations, how to add tasks to them, how to use mapping in integrations, how to set a schedule, and more. This section consists of the following tutorials: Import Tutorials, Export Tutorials, Replication Tutorials, Synchronization Tutorials, and Data Flow Tutorials.

Data Flow Tutorials

In this section, you can find tutorials for Data Flow:

- Loading New Records from Autoincrement Table: describes how to create a simple data flow loading new records from an autoincrement database table to Salesforce. It helps you quickly get started with data flows and explains creating a data flow in detail.
- Updating Existing Records and Inserting New Ones: describes how to create a simple data flow that checks whether a record is already present in the target table, avoiding duplicates when loading data.

Loading New Records from Autoincrement Table

In this tutorial, we will show a simple Data Flow use case: loading new records from a database to Salesforce.
The source database table does not have any columns that store record creation time, but it has an autoincrement primary key column. Skyvia import integrations allow loading only new records from cloud apps easily. You can also use source data filtering in import for databases if the source table has a column storing record creation time. However, when we need to use an autoincrement column to detect new records, such a scenario requires a more complex tool like Data Flow.

Source Table

This tutorial uses the following MySQL table as an example:

CREATE TABLE demobase.customers (
  CustomerID int(11) NOT NULL AUTO_INCREMENT,
  CompanyName varchar(40) NOT NULL,
  ContactName varchar(30),
  ContactTitle varchar(30),
  Address varchar(60),
  City varchar(15),
  Region varchar(15),
  PostalCode varchar(10),
  Country varchar(15),
  Phone varchar(24),
  Fax varchar(24),
  Email varchar(50),
  PRIMARY KEY (CustomerID)
);

Its primary key column, CustomerID, is AUTO_INCREMENT: for every new record, CustomerID is increased by one. This allows us to determine the new records by remembering the maximum CustomerID value processed before.

We are going to import data from the customers table to the Account object in Salesforce. To find out how to create a data flow and configure the mapping settings of nested objects correctly, read our step-by-step instructions below.

Creating Connections to Data Sources

First, you need to create connections in Skyvia. In this tutorial we will use a MySQL connection in a Source component and a Salesforce connection in a Target component. Read the MySQL and Salesforce topics to find out how to configure the settings of the specific database and cloud app, and see the Connections topic to learn how to create a connection in several simple steps.
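The incremental-load idea behind this data flow can be sketched outside Skyvia as well. Below is a minimal Python sketch of the same technique: remember the maximum autoincrement key seen so far and query only rows above it. It uses sqlite3 as a stand-in for MySQL, and the variable last_max_id plays the role of the LastMaxId data flow parameter; all table contents here are hypothetical.

```python
import sqlite3

# In-memory stand-in for the MySQL demobase.customers table
# (sqlite3 replaces MySQL purely for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        CustomerID INTEGER PRIMARY KEY AUTOINCREMENT,
        CompanyName TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO customers (CompanyName) VALUES (?)",
    [("Acme",), ("Globex",)],
)

last_max_id = 0  # plays the role of the LastMaxId parameter


def load_new_records(conn, last_max_id):
    """Return records newer than last_max_id, plus the new max id."""
    rows = conn.execute(
        "SELECT CustomerID, CompanyName FROM customers WHERE CustomerID > ?",
        (last_max_id,),
    ).fetchall()
    if rows:
        last_max_id = max(r[0] for r in rows)
    return rows, last_max_id


# First run: all records are new.
rows, last_max_id = load_new_records(conn, last_max_id)
print(len(rows), last_max_id)  # 2 2

# Second run after one new record: only it is returned.
conn.execute("INSERT INTO customers (CompanyName) VALUES ('Initech')")
rows, last_max_id = load_new_records(conn, last_max_id)
print(len(rows), last_max_id)  # 1 3
```

Persisting last_max_id between runs is exactly what the data flow parameter does for you; with Value type Max and the Update Variable on Data Flow Completion checkbox, Skyvia updates it only after a successful run.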
Creating Data Flow

We start by creating a data flow using the following steps:

1. Click +NEW in the top menu.
2. In the Integration column, click Data Flow.
3. In the opened integration editor, rename your integration by clicking and editing the integration name. The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.

Creating Parameter

Data flow parameters can store values between data flow runs. In this tutorial we use a parameter to store the maximum primary key value in the table. The next time the data flow runs, it will query only records with a larger primary key value, so only new records created since the previous run will be processed.

To create a parameter, click Parameters on the Data Flow editor toolbar, then click + Parameter. Specify the parameter Name, Type, and Value. For the Name, we will use LastMaxId, but you can use any name. The parameter must have a numeric type; Int64 is the most suitable. As for the Value, you can either specify 0 (which means that the first run will load all the records) or the current maximum primary key value in your table (if you don't want to load any old records).

After creating the parameter, let's start adding components to the diagram.

Adding Source

Every data flow starts with the Source component. Drag the Source component from the component list to the diagram. After this, the Source details sidebar opens on the right. In this sidebar, first select the MySQL connection. Then select the Execute command action and specify the Command text for it. Here is the SELECT statement that selects new records from the customers table:

SELECT * FROM customers WHERE CustomerID > :MaxId

This statement uses the MaxId SQL parameter in the WHERE condition to filter out old records. Now we need to map this SQL parameter to our LastMaxId integration parameter.
For this, click the MaxId parameter under the Command Text box. This opens the Mapping Editor. To quickly map it to the LastMaxId integration parameter, click Variables in the top right corner of the Mapping Editor and then click @LastMaxId. This adds the variable name to the Value box. Finally, click Apply. Our Source component is configured.

Adding Value

Now we need to add a Value component to record the maximum CustomerID value from the processed records. Drag the Value component from the component list to the diagram. The Value details sidebar opens on the right.

Before configuring the Value component, we need to connect its input to the output of the Source component. Drag the circle at the bottom of the Source component to the Value component.

After this, let's configure the Value component. In the Value box, enter CustomerID. In the Value type list, select Max. Finally, in the Variable box, select @LastMaxId and select the Update Variable on Data Flow Completion checkbox. The Value component is configured, and we can proceed to adding a Target component.

Adding Target

Drag the Target component from the component list to the diagram. The Target details sidebar opens on the right. Connect the Value component output to the Target component, as we did previously for the Value component. After this, we can configure the Target component. First, we need to select our Salesforce connection. After this, in the Actions list, select Insert. For the Insert action, we must select the Table to insert to and map the action parameters, which correspond to the target table columns. Let's select Account as the Table and then select one of the parameters in the Parameters list to open the Mapping Editor. In the Mapping Editor, first click the Auto Mapping button. It automatically maps the action parameters to input properties with the same name. In our case, it maps the Phone and Fax parameters.
Then we need to map the other parameters manually, because their names differ from the names of the input properties. We must map the Name parameter, because this field cannot be null in Salesforce. The other parameters are not mandatory. To map the Name parameter, select it in the list under Output schema, then click Properties in the top right corner of the Mapping Editor and click CompanyName. In a similar way, let's map the other parameters:

- BillingStreet to Address
- BillingCity to City
- BillingState to Region
- BillingPostalCode to PostalCode
- BillingCountry to Country

The mapping is configured; click Apply. After this, we have a fully working data flow, which we can run immediately or schedule for automatic execution. However, if we run the data flow in this state, its run details will show 0 success rows and 0 failed rows, despite actually processing rows and doing its job. This is because Data Flow requires additional configuration steps to determine its results.

Configuring Results Settings

To record the numbers of success and failed records, we need to count the success and error records in the data flow itself. We will use Row Count components to count the rows, store the numbers in variables, and use these variables in the Results Settings.

Variables

Let's start with adding variables. Click Variables on the toolbar. In the Variables sidebar, click the plus button. Enter the variable name, for example, SuccessRows, select Int64 in the Type list, and then click Save. Repeat these actions to add the FailedRows variable.

Row Count Components

Now drag two Row Count components to the diagram. Connect one of them to the regular output of the Target component (blue circle), and the other to the error output (red circle). Select the Row Count component connected to the regular output and, in its details sidebar, select the SuccessRows variable. For the other Row Count component, select FailedRows in its Variable list.
Result Settings

After we have configured the Row Count components, we need to set up the result settings. Click Result on the Data Flow editor toolbar to open the corresponding sidebar. Here we specify which number to take as the number of success records, and which as the number of failed records. For the purposes of this simple tutorial, we can just type @SuccessRows in the Success Rows box and @FailedRows in the Failed Rows box. In more complex cases with multiple target components, we may need more Row Count components and more variables, and may enter expressions into these boxes that calculate the necessary numbers from multiple variables.

Configuring Log

Now the data flow will display correct results. But what if some rows fail to load into Salesforce? We will know the number of failed rows, but we won't know anything else. To fix such errors more easily, it's important to know which rows failed and why. To achieve this, we can write a log with error information in the data flow.

Creating Log

We need to create a log object first in order to write into it. To create a log object, click Connections on the Data Flow editor toolbar. The Connections sidebar opens. This sidebar lists all the connections used in the data flow and allows managing logs. To create a log object, click the plus button near Log connection. Enter a name for the log object, then add columns to it by clicking the plus button and specifying their names and types. In this tutorial, let's create three columns: to store the ID and name of failed records and the corresponding error message returned by Salesforce.

Writing Log

To write data to a log, you need to add another Target component. Add it to the diagram and connect it to the Row Count that counts the error rows of the first Target component. Then proceed to configuring the newly added Target. In the Connection list, select Log.
This Target component will be used to write data to our log object, so select the created log object in the Table list. After this, we need to map the action parameters, which correspond to the log object columns. Select one of the parameters in the Parameters list to open the Mapping Editor. In the Mapping Editor, let's map the parameters to the following input properties:

- ID to CustomerID
- Name to CompanyName
- Error to Error

Error is a new column automatically added by the first Target component to its error output. It stores the error messages returned by Salesforce for failed rows, and now we store them in our log.

Data Flow is Ready

That's all. Now our data flow not only loads data but also displays correct results and writes the necessary information to the log in case of any errors. Let's save it. Click Save on the toolbar and specify the Data Flow name and the folder to store it in. Click Create and go to overview. On the Overview page of the data flow, you can run it or schedule it for automatic execution. For more information about scheduling integrations, read Scheduling Integrations for Automatic Execution.

Updating Existing Records and Inserting New Ones

In this tutorial we will show how to avoid creating duplicates when loading data. We will create a data flow that checks whether a record from the source is already present in the target. If the record is present, the data flow will update it with data from the source. If there is no such record in the target, we will insert it.

Suppose we have the following scenario: a Mailchimp subscriber list and a database table with the list of contacts and their Mailchimp statistics.
New subscribers are added to the list, and statistics for existing ones are also updated. We want to add the new subscribers to the database table and also keep the existing records in the table up to date.

Target Table

This tutorial uses the following MySQL table as a target:

CREATE TABLE demobase2.listmembers (
  Email varchar(100) NOT NULL,
  ListID varchar(50) NOT NULL,
  Status varchar(20) DEFAULT NULL,
  EmailClient varchar(100) DEFAULT NULL,
  UnsubscribeReason varchar(255) DEFAULT NULL,
  AverageOpenRate double DEFAULT NULL,
  AverageClickRate double DEFAULT NULL,
  FNAME varchar(255) DEFAULT NULL,
  LNAME varchar(255) DEFAULT NULL,
  ADDRESS text DEFAULT NULL,
  PHONE varchar(255) DEFAULT NULL,
  LastUpdate datetime DEFAULT NULL,
  CreatedDate datetime DEFAULT NULL,
  VIP tinyint(1) DEFAULT NULL,
  SubscriberID bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (SubscriberID)
) ENGINE=INNODB;

This table is similar to the ListMembers Mailchimp object, to make mapping simpler.

Creating Connections to Data Sources

First, you need to create connections in Skyvia. In this tutorial we will use a Mailchimp connection in a Source component and a MySQL connection in a Target component. Read the Mailchimp and MySQL topics to find out how to configure the settings of the specific cloud app and database, and see the Connections topic to learn how to create a connection in several simple steps.

Creating Data Flow

We start by creating a data flow using the following steps:

1. Click Create New in the top menu.
2. In the Integration column, click Data Flow.
3. In the opened integration editor, rename your integration by clicking and editing the integration name. The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.
Adding Source

Let's add the Source component that will query data from Mailchimp. Drag the Source component from the component list to the diagram. After this, the Source details sidebar opens on the right. In this sidebar, first select the Mailchimp connection, then select the Execute query action. After this, we need to configure the query in the visual Query Builder. First, drag the ListMembers object from the connection object list to the Result Fields pane. Then click this object in the connection object list to expand the list of its fields, and drag the ListID field to the Filters pane. Then enter the ID of the list to load subscribers from into the box on the Details pane on the right. If you don't know a Mailchimp list ID, you can either get it in Mailchimp itself or obtain it with the Skyvia Query tool.

Adding Lookup

After adding the source, we need to check whether the records are present in the target. For this, we will use the Lookup component. Let's add it to the diagram and connect the source output to the lookup input. Select the MySQL connection and the Lookup action. Then we need to select the target table; in our example it's listmembers. We will use Email as a lookup key, because this column uniquely identifies a subscriber. As for the result column, let's select the SubscriberID column. Then we need to map the Email parameter in the Parameters list. Click next to Email and, in the opened Mapping Editor window, click the Properties tab in the top right corner and select Email in the Properties list.

Lookup with the FirstOrDefault behavior, which is selected by default, puts input records to its regular output regardless of whether it finds a match. For matched rows, the result columns are filled with the corresponding values from the lookup table; if no match is found, they are filled with null values.
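The lookup-then-split logic that the following components implement can be sketched in plain code. In this minimal Python sketch (hypothetical data; a dict stands in for the listmembers lookup table), the lookup fills SubscriberID with None when no match is found, mirroring the FirstOrDefault behavior, and records are then routed to insert or update exactly as the Conditional Split does:

```python
# Hypothetical stand-in for the listmembers table, keyed by Email
# (the lookup key), with SubscriberID as the result column.
existing = {
    "ann@example.com": 101,
    "bob@example.com": 102,
}

# Records coming from the Mailchimp source (illustrative values).
source_records = [
    {"Email": "ann@example.com", "Status": "subscribed"},
    {"Email": "new@example.com", "Status": "subscribed"},
]


def lookup(record):
    """FirstOrDefault-style lookup: fill SubscriberID, or None if no match."""
    record = dict(record)
    record["SubscriberID"] = existing.get(record["Email"])
    return record


# Conditional Split: isnull(SubscriberID) -> Insert output, else Default.
to_insert, to_update = [], []
for rec in map(lookup, source_records):
    (to_insert if rec["SubscriberID"] is None else to_update).append(rec)

print([r["Email"] for r in to_insert])  # ['new@example.com']
print([r["Email"] for r in to_update])  # ['ann@example.com']
```

Every record passes through unchanged except for the added SubscriberID column, which is why the split condition can be evaluated downstream without a second database query.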
Adding Conditional Split

Now we need to split the records that have a match in the target table from the records with no match found. We will perform UPDATE for the records with a match and INSERT for the records without a match. For this purpose, we can use the Conditional Split component. Drag it to the diagram and connect it to the lookup output.

Records for which no match is found have a null value in their SubscriberID column. So let's create a conditional output named Insert, for which we specify the Condition: isnull(SubscriberID). Records not matching this condition are sent to the Default output, which is generated automatically.

Adding Target for Insert

Drag the Target component from the component list to the diagram. Connect the Conditional Split component output to the Target component. When Skyvia asks which output to select, select the Insert output. First, we need to select our MySQL connection. After this, select Insert in the Actions list. For the Insert action, we must select the table to insert to and map the action parameters, which correspond to the target table columns. Let's select our listmembers table in the Table list and then select one of the parameters in the Parameters list to open the Mapping Editor. In the Mapping Editor, first click the Auto Mapping button. It automatically maps the action parameters to input properties with the same name. Since the corresponding columns have the same names in source and target, it maps all the parameters automatically. Let's also name this Target component Insert to avoid confusion in the future.

Adding Target for Update

After this, we need to add the Target component for updating records and connect it to the Conditional Split component output. Since only the Default regular output remains unconnected, Skyvia won't display the Input Output Selection dialog box this time. Let's select the same MySQL connection and the Update action.
Then we select the same listmembers table in the Table list. As for the lookup keys, we can use either Email again or the SubscriberID column. Updating records by their SubscriberID is faster, so let's select SubscriberID. After this, we only need to map the Update action parameters. For the purposes of our tutorial, we can just use auto mapping again: select one of the parameters in the Parameters list to open the Mapping Editor and then click Auto Mapping. Let's also name this Target component Update to avoid confusion in the future.

Configuring Results Settings

To record the numbers of success and failed records, we need to count the success and error records in the data flow itself. We will use Row Count components to count the rows, store the numbers in variables, and use these variables in the Results Settings.

Variables

Let's start with adding variables. Click Variables on the toolbar. In the Variables sidebar, click the plus button. Enter the variable name, for example, SuccessInsert, select Int64 in the Type list, and then click Save. Repeat these actions to add the FailedInsert, SuccessUpdate, and FailedUpdate variables.

Row Count Components

Now we need to add four Row Count components to the diagram. For each of the two Target components, connect one Row Count to the regular output (blue circle) and another to the error output (red circle). In each Row Count component connected to a regular output, select the corresponding success variable; in each Row Count component connected to an error output, select the corresponding error variable.

Result Settings

After we have configured the Row Count components, we need to set up the result settings. Click Result on the Data Flow editor toolbar to open the corresponding sidebar. Here we specify which number to take as the number of success records and which as the number of failed records.
For the purposes of our tutorial, we can sum the numbers of successfully inserted and successfully updated records in the Success Rows box, and sum the numbers of failed inserts and failed updates in the Failed Rows box. After this, the data flow is ready. If necessary, you can also add log objects and write error information to logs.

Other Integration Tools

A similar task of inserting and updating records can also be solved with a replication or an import with the UPSERT operation. However, data flow offers more flexibility. A replication can be used only to load cloud data to a database, and only if the table has the same structure as the cloud object. An import can load data to a cloud app just as a data flow can, but a data flow is useful when you need different mapping for insert and update, for example, when you don't want to update certain columns that should be filled only when creating a record.

Export Tutorials

In this section, you can get familiar with the following tutorials on data export operations:

- How to Export Salesforce Attachments for Specific Object: describes how to create an export that extracts attachments for a specific Salesforce object. It covers cases when you need to export attachments for all accounts or for one specific account.
- How to Set Up Daily Backup of Salesforce Contacts to Dropbox: describes how to configure a daily backup of Salesforce Contacts to Dropbox.
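The design point above, different mappings for insert and update, can be illustrated with a small sketch. In this hypothetical Python example (an in-memory dict stands in for the target table; all names and values are illustrative), CreatedDate is written only by the insert path and never touched on update, which is the behavior a single shared UPSERT mapping cannot express:

```python
import datetime

# Hypothetical in-memory target keyed by Email; values are column dicts.
target = {
    "ann@example.com": {"Status": "subscribed", "CreatedDate": "2023-01-01"},
}


def upsert_with_split_mapping(record):
    """Insert mapping sets CreatedDate; update mapping leaves it alone."""
    email = record["Email"]
    if email in target:
        # Update mapping: only Status is written.
        target[email]["Status"] = record["Status"]
    else:
        # Insert mapping: Status plus a creation timestamp.
        target[email] = {
            "Status": record["Status"],
            "CreatedDate": datetime.date.today().isoformat(),
        }


upsert_with_split_mapping({"Email": "ann@example.com", "Status": "unsubscribed"})
upsert_with_split_mapping({"Email": "new@example.com", "Status": "subscribed"})

print(target["ann@example.com"]["CreatedDate"])  # unchanged: 2023-01-01
print(target["ann@example.com"]["Status"])       # unsubscribed
```

In the data flow above, the two Target components play the roles of the two branches, and the Conditional Split decides which mapping each record receives.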
How to Export Salesforce Attachments for Specific Object

In this tutorial, we will show how to export Salesforce attachments for a specific object. In our example, we will export attachments for the Account object.

Creating Connection

If you haven't created a Salesforce connection yet, you need to create one first. To create a connection to Salesforce, perform the following steps:

1. Click +NEW in the top menu.
2. Click Connection in the menu on the left.
3. On the opened Select Connector page, select Salesforce. To quickly find it, you can either use the Type to filter box or filter connectors by category using the All list (for Salesforce, select the CRM category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Salesforce1.
5. From the Environment drop-down list, select the environment type of the Salesforce organization to export data from.
6. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don't mind storing your Salesforce credentials on our Skyvia server, select User Name & Password. If you prefer not to store your credentials, select OAuth 2.0.
7. If you selected User Name & Password in the previous step, specify your Salesforce account e-mail, password, and security token. If you selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The resulting OAuth token will be stored in the connection data; your Salesforce credentials will not be stored on our website.
8. Click Create Connection.

Creating Integration

Now let's create the corresponding export. To create an integration, perform the following actions:

Click +NEW in the top menu. In the Integration column, click Export. The export details page will open. Rename your integration by clicking and editing the integration name.
The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.

Under Source, in the Connection list, click Select source and select the Salesforce connection from the drop-down list. Then select the Target Type you want to export data to. By default, data is exported to CSV file(s) downloaded manually to a local computer. We use this option in our tutorial. See the screenshot below.

Click the Add new link to open the Task Editor. From the Object drop-down list, select Attachment. All the Attachment fields are selected automatically. Clear the checkboxes next to the fields you don't want to export.

After this, let's filter the export results by the attachment parent objects. In our case, we want to export attachments with Account parents. For this, click +Condition under Filter. In the first box of the added condition, select Parent. In the second box, select Type. In the last box of the condition, select Account. After this, our integration will export only attachments with Account parents. If this is enough, you can save the task and the integration and run it. However, let's demonstrate a case when you need to export attachments for one specific account.

Click +Condition under Filter once again. In the first box of the added condition, select Parent. In the second box, select Name. Enter the name of the account into the rightmost box of the condition; in our case it's 'Edge Communications'. Click Save to save the task. Click Create to create the integration.

Run the integration. Then click the Monitor tab of the integration details page to see the export Run History. Click the integration run you want to download the results of. After you do, the History Details window will appear with detailed information. You can export the attachments to your computer by clicking the number of records.
Note that when you export attachments, the result file will be a zip archive containing the CSV file together with the attachments (binary files). When you export data to a locally downloadable CSV file, the file is available for download for 7 days.

How to Set Up Daily Backup of Salesforce Contacts to Dropbox

In this tutorial, we will show how to configure a daily backup of Salesforce contacts to Dropbox. The same approach can actually be applied to backing up any Salesforce objects.

Creating Connections

In order to export data from Salesforce to a file uploaded to Dropbox, we first need to create connections to Salesforce and Dropbox. If you have already created the necessary connections, you may skip these steps.

To create a connection to Salesforce, perform the following steps:

Click +NEW in the top menu. Click the Connection button in the menu on the left. On the opened Select Connector page, select Salesforce. To quickly find it, you can either use the Type to filter box or filter connectors by category using the All list (for Salesforce, select the CRM category). The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Salesforce1. From the Environment drop-down list, select the environment type of the Salesforce organization to export data from. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don't mind storing your Salesforce credentials on our Skyvia server, select User Name & Password. If you prefer not to store your credentials, select OAuth 2.0.
If you selected User Name & Password in the previous step, specify your Salesforce account e-mail, password, and security token. If you selected OAuth 2.0 authentication, click the Sign In with Salesforce button and log in via the Salesforce website on the opened page. The resulting OAuth token will be stored in the connection data; your Salesforce credentials will not be stored on our website. Click Create Connection.

To create a connection to Dropbox, perform the following steps:

1. Click +NEW in the top menu.
2. Click the Connection button in the menu on the left.
3. On the opened Select Connector page, select Dropbox. To quickly find it, you can either use the Type to filter box or filter connectors by category using the All list (for Dropbox, select the Storage category).
4. The default name of a new connection is Untitled. Just click it to rename the connection, for example, to Dropbox1.
5. Click Sign In with Dropbox. Enter your Dropbox credentials and click Sign In. Click the Allow button.
6. Click Create Connection.

Now we have the necessary connections created. Let's create an integration that performs the necessary operation.

Creating Integration

Now let's create the corresponding export.

Click +NEW in the top menu. In the Integration column, click Export. The export details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled. Please note that if you omit this step, the integration name will remain Untitled in the list of created integrations.

Under Source, in the Connection list, click Select source and select the Salesforce connection in the drop-down list. You can use the Type to filter box to find the connection quicker. Then select the Target Type you want to export data to. To save CSV file(s) to a file storage service or FTP, click CSV to storage service. After this, click Select target and select the Dropbox connection.
Next, specify the Folder where to place the result file. Click the Add new link to open the Task Editor. From the Object drop-down list, select Contact . All the Contact fields are selected automatically. Clear the checkboxes next to the fields you don’t want to export. Click the Save button at the bottom of the Task Editor. Then click the Create button to create an integration. By default, Skyvia creates a CSV file with the name of the exported object (in our case “Contact”) and appended timestamp of the export operation, separated by the underscore character. Thus, the new files will not overwrite the older ones. However, you can disable adding a timestamp if you want the files to be overwritten in the integration Naming options . Or you can set a custom name for the result file in the Target File Name box in Task Editor. In this case, the timestamp will also not be added to the file. If you need to back up several Salesforce objects at once, you may simply add more export tasks with the necessary object in the same way — just repeat steps 6-9, selecting the necessary objects instead of Contact . Scheduling Integration Execution If you want the export to run periodically in order for you to always have a fresh backup, schedule export execution when creating an integration. For a fresh backup, configure the integration to run every day at a specified time. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. By default, State , Recurrence , and Run every parameters are already configured to run the integration every day, so let’s keep these settings. Then, set the execution time of the export operation next to Occur once at . Optionally, select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from. Optionally, specify your Time Zone . Click Save to schedule integration execution.
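The default file naming described above (the exported object's name plus the export timestamp, joined by an underscore) can be sketched as follows. This is an illustration only: the exact timestamp format Skyvia appends is not specified in this tutorial, so the format below is an assumption.

```python
from datetime import datetime

def export_file_name(object_name, timestamp):
    # "YYYYMMDDHHMMSS" is an assumed timestamp format for illustration;
    # Skyvia's actual format may differ. The underscore separator and
    # object-name prefix match the behavior described in the tutorial.
    return f"{object_name}_{timestamp.strftime('%Y%m%d%H%M%S')}.csv"

print(export_file_name("Contact", datetime(2024, 2, 7, 12, 0, 0)))
# Contact_20240207120000.csv
```

Because each run produces a differently named file, older backups in Dropbox are never overwritten unless you disable the timestamp in the Naming options.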
After this, our integration will run automatically every day at the specified time. You can also visit Scheduling Integrations to get more detailed information on setting an integration schedule. That is all: the integration is created. It will back up Contacts to CSV files in Dropbox. To restore Contacts from a CSV file, you can use Import ." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/replication-tutorials/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Replication Tutorials In this section, you can get familiar with the following tutorials on data replication operations: How to Set Up Replication of Salesforce Accounts and Contacts to SQL Azure This tutorial describes how to create a replication that replicates data from Salesforce Accounts and Contacts to SQL Azure and how to schedule the integration for automatic data updating in the SQL Azure database. How to Set Up Replication of Dynamics 365 Data to SQL Azure This tutorial describes how to create a replication that copies data from Dynamics 365 to SQL Azure and how to schedule the integration for automatic data updating in the SQL Azure database." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/replication-tutorials/how-to-set-up-replication-of-dynamics-crm-data-to-sql-azure.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Replication Tutorials How to Set Up Replication of Dynamics 365 Data to SQL Azure In this tutorial, we will show how to configure replication of your Dynamics 365 data to a relational database so that the replicated data are kept up-to-date with Dynamics 365 automatically. In our tutorial, we will demonstrate it with Dynamics 365 accounts, contacts, leads, lists, and listmembers; however, the same can be applied to any Dynamics 365 object.
We will also show how to switch the replication mode so that incremental replication is used for most objects, while a different method is used for some objects. Creating Connections In order to replicate data from Dynamics 365 to SQL Azure, first we need to create connections to Dynamics 365 and SQL Azure databases. If you have already created the necessary connections, you may skip these steps. To create a connection to Dynamics 365, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. In the opened Select Connector page, select Dynamics 365 . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Dynamics 365 , select the CRM category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to Dynamics 3651 . Enter the following connection parameters: URL — the URL that you use to connect to Dynamics 365. Usually it looks like: https://companyname.crm.dynamics.com Username — your Windows Live ID. Password — password for your Windows Live ID account. Click Create Connection . To create a connection to SQL Azure, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select SQL Server . To quickly find it, you can either use the Type to filter box or filter connectors by categories, using the All list (for SQL Server , select the Database category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to SQL Azure . In the Server box, enter “TCP:<server name>”. Replace “<server name>” with your actual server name. Specify your User Id , Password , and Database to connect to. Click Create Connection . Now we have the necessary connections created. Let’s create an integration that performs the necessary data replication operation.
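For readers who usually work with connection strings rather than UI forms, the SQL Azure parameters above (the “TCP:<server name>” value, User Id, Password, and Database) correspond to the pieces of an ADO.NET-style connection string. The sketch below merely assembles one for illustration; Skyvia itself only asks for the individual fields, and the server name used here is a placeholder.

```python
def build_sql_azure_conn_string(server, database, user_id, password):
    # The "TCP:" prefix mirrors the value the tutorial tells you to put
    # in the Server box; the rest are the same fields the form asks for.
    parts = {
        "Server": f"TCP:{server}",
        "Database": database,
        "User Id": user_id,
        "Password": password,
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())

# "myserver.database.windows.net" is a placeholder server name
print(build_sql_azure_conn_string(
    "myserver.database.windows.net", "mydb", "admin_user", "secret"))
```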
Creating Integration To create a replication, perform the following actions: Click +Create New in the top menu. In the Integration column, click Replication . Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Dynamics 365 connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. In the grid under Select Objects , select checkboxes for the objects you want to replicate. In our tutorial we will replicate account , contact , lead , and marketing lists data - list and listmember objects. Let’s suppose marketing lists are rarely modified. We do not want to check the list object for changes every time, and want to replicate its data only when we know something has changed. So we can switch replication mode for it to Resync on demand . For this, open the replication task by clicking the Edit button next to the list object and, in the Mode list, select Resync on demand . Skyvia will replicate this object the first time, and then only when you reset the corresponding LastSyncTime parameter. Let’s also assume that we don’t want to replicate the contacts’ emails. However, we still want to check for duplicate emails later. Here Skyvia’s Hashing feature comes in handy. To enable hashing for the email fields, open the replication task for the contact object, point to the Email field, click Unhashed , and then select Hashed . Click the Create button. Run the integration by clicking Run . Now you have your data replicated to a relational database. On the Monitor tab, you can check the Run History of the integration.
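The Hashing feature mentioned above is useful because hashing preserves equality: identical inputs always produce identical hashes, so duplicate emails can still be detected without the plain-text addresses ever reaching the target database. A minimal sketch of the idea (Skyvia's actual hash algorithm is not documented here, so SHA-256 and the normalization step are assumptions for illustration):

```python
import hashlib

def hash_email(email):
    # SHA-256 and the normalization below are assumptions for
    # illustration; the key property is that equal inputs yield
    # equal digests, so duplicates remain detectable.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

a = hash_email("jane@example.com")
b = hash_email("JANE@example.com")   # same address, different casing
c = hash_email("john@example.com")   # a different address

print(a == b)  # True  -> flagged as a duplicate
print(a == c)  # False -> distinct addresses
```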
For this, we will configure the integration to run every hour during workdays automatically. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. Click Occur once at and then select Recur every in the list. Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 9:00 and 18:00 to the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from. Click Save to schedule integration execution. After this our integration will run automatically every hour between 09:00 and 18:00 of every workday. You may check its execution on the Monitor tab in the Run History . You can also visit Scheduling Integrations to get more detailed information on setting an integration schedule." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/replication-tutorials/how-to-set-up-replication-of-salesforce-accounts-and-contacts-to-sql-azure.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Replication Tutorials How to Set Up Replication of Salesforce Accounts and Contacts to SQL Azure In this tutorial, we will show how to configure replication of your Salesforce data to a relational database so that the replicated data are kept up-to-date with Salesforce automatically. In our tutorial, we will demonstrate it with Salesforce Accounts and Contacts, however, the same can be applied to any Salesforce object. By default, Skyvia creates and maintains a copy of the data. It also replicates record deletions from Salesforce, deleting the corresponding records in the target. 
In this tutorial, we don’t want to replicate deletions, and want to store all the history of replicated data, so we will use the History mode instead. Creating Connections In order to replicate data from Salesforce to SQL Azure, first we need to create connections to Salesforce and SQL Azure databases. If you have already created the necessary connections, you may skip these steps. To create a connection to Salesforce, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled. Click it to rename the connection, for example, to Salesforce1 . From the Environment drop-down list, select the Salesforce environment type to replicate data from. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you have selected User Name & Password on the previous step, specify your Salesforce account e-mail, password, and security token. These parameters are stored on Skyvia. Otherwise, if you have selected OAuth 2.0 authentication, click Sign In with Salesforce and sign in. The result OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click Create Connection . To create a connection to SQL Azure, perform the following steps: Click +Create New in the top menu. Click Connection in the menu on the left. Select SQL Server . To quickly find it, you can either use the Type to filter box or filter connectors by categories, using the All list (for SQL Server , select the Database category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to SQL Azure .
In the Server box, enter “ TCP:<server name> ”. Replace “<server name>” with your actual server name. Specify your User Id , Password , and Database to connect to. Click Create Connection . Now we have the necessary connections created. Let’s create an integration that performs the necessary data replication operation. Creating Integration To create a replication, perform the following steps: Click +Create New in the top menu. In the Integration column, click Replication . Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Salesforce connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. In the grid under Select Objects , select checkboxes for the objects you want to replicate. In our tutorial we will replicate Account and Contact . In the replication tasks, enable the History mode. For this, open the replication task by clicking the Edit button next to the Account object and, in the Mode list, select History . Skyvia will store all the history of replicated changes. Click Save task and repeat these actions for the Contact object. Let’s also assume that we don’t want to replicate the contacts’ emails. However, we still want to check for duplicate emails later. Here Skyvia’s Hashing feature comes in handy. To enable hashing for the email fields, open the replication task for the Contact object, point to the corresponding email field, click Unhashed , and then select Hashed . Click the Create button. Run the integration by clicking Run . Now you have your data replicated to a relational database. On the Monitor tab, you can check the Run History of the integration.
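Conceptually, the History mode enabled above appends every replicated change instead of updating or deleting existing rows, so earlier versions of a record stay queryable. The toy sketch below illustrates that append-only behavior; it is not Skyvia's implementation, and the field names are made up for the example.

```python
history = []  # each entry is one replicated version of a record

def record_change(record_id, data, synced_at):
    # History mode conceptually: always append a new row,
    # never update or delete previously stored versions.
    history.append({"Id": record_id, "data": data, "synced_at": synced_at})

record_change("001", {"Name": "Acme"}, "2024-01-01")
record_change("001", {"Name": "Acme Corp"}, "2024-02-01")  # rename arrives later

versions = [row for row in history if row["Id"] == "001"]
print(len(versions))  # 2 -- both versions of the account are retained
```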
Scheduling Integration Execution After we have replicated our data, we want to keep them up-to-date. For this, we will configure the integration to run every hour during workdays automatically. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. Click Occur once at and then select Recur every in the list. Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 9:00 and 18:00 to the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from. Click Save to schedule integration execution. After this, our integration will run automatically every hour between 9:00 and 18:00 of every workday. You may check its execution on the Monitor tab in the Run History . Click an integration run to open the History Details window and see the detailed information on this run. You can also visit Scheduling Integrations to get more information on setting schedule." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/synchronization-tutorials/", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Synchronization Tutorials In this section, you can get familiar with the following tutorials on data synchronization operations: How to Synchronize Product Data This tutorial describes how to create a synchronization that syncs a modified Products table from the Microsoft standard Northwind database on SQL Azure with Salesforce Product2 and PricebookEntry objects. It also demonstrates creating an export. 
How to Synchronize Zoho CRM Contacts with Mailchimp Subscribers This tutorial describes how to create a synchronization that will keep Mailchimp subscribers in a list and Zoho CRM contacts in sync. How to Configure Cloud Data Synchronization with Empty Database This tutorial describes how to create a copy of your cloud data in a database and keep it in sync with the cloud source automatically using Salesforce as an example." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/synchronization-tutorials/how-to-configure-cloud-data-synchronization-with-empty-database.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Synchronization Tutorials How to Configure Cloud Data Synchronization with Empty Database This tutorial shows how to make a copy of cloud data in an empty database and configure a bi-directional synchronization between this copy and the original cloud application. For this tutorial, we select the Salesforce cloud source as an example and its Account and Contact objects. In Skyvia, Replication is used to create a copy of cloud data in a database. It can create tables for data automatically and then keep the database up-to-date by performing one-way synchronization periodically from the cloud source. If such synchronization in one direction is enough for you, you can take a look at our Replication Tutorials . Synchronization integrations , on the other hand, perform bi-directional data synchronization. However, they are not intended for creating an exact data copy and cannot create tables for the data automatically. They are intended for synchronization of existing tables, and they support synchronization of data having different structures. So, if you have just an empty database and want to create a copy of cloud data in it and then sync changes between this copy and the original cloud source in both directions, you have two options.
One is to manually create tables in the database, which is not always convenient. Another way is to use a replication for creating the necessary database tables, then manually modify these tables so that they meet Skyvia’s synchronization requirements, and finally, configure a synchronization. Synchronization Requirements to Consider Note that Skyvia has certain requirements to consider for synchronization. Skyvia does not support synchronization for every object in every cloud source. It requires that these cloud objects have fields storing the record creation and modification time. You can find the information about these limitations in the corresponding Cloud Sources topics. Some cloud sources, like Salesforce, support synchronization for most objects, and some — only for a few objects. As for databases, Skyvia requires synchronized tables to have auto-generated primary keys. This is why we need to manually modify the database tables after creating them via replication: replication creates primary keys that are not auto-generated. Connections For our integration we need to create connections to our cloud source and database. You can find the information about connecting Skyvia to various cloud sources in the Cloud Sources section. In this tutorial, we create a Salesforce connection. To create a connection to Salesforce, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to Salesforce1 . From the Environment drop-down list, select the Salesforce environment type to connect to. Since this is just a sample walkthrough, the Sandbox environment is recommended.
From the Authentication drop-down list, select the authentication method for connecting to Salesforce. If you don’t mind storing your Salesforce credentials on the Skyvia server, select User Name & Password . If you prefer not to store your credentials, select OAuth 2.0 . If you have selected User Name & Password on the previous step, specify your Salesforce account e-mail, password, and security token. Otherwise, if you have selected OAuth 2.0 authentication, click the Log In with Salesforce button and log in via the Salesforce website on the opened page. The result OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click the Save button to save the connection. As for connecting to databases, see the information about database connections in the Databases section. Please note that Skyvia does not support synchronization for cloud data warehouse services — Azure Synapse Analytics, Amazon Redshift, and Google BigQuery. Replication To create a replication, perform the following actions: Click +NEW in the top menu. In the Integration column, click Replication . The replication details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Salesforce connection. Under Target , in the Connection drop-down list, select the corresponding connection. Important! Clear the Create Foreign Keys checkbox. If you need a bi-directional synchronization, you should not have foreign keys created by replication. If you don’t clear this checkbox, you will need to drop the foreign keys manually later. In the grid under Select Objects , select checkboxes for objects you want to synchronize. In our example, these are Account and Contact objects.
The first run of the synchronization will re-load all the records from Salesforce to the target database anyway. So it’s not necessary that replication actually loads records. We need it only to create the tables. If you don’t want to spend time on additional configuration, you may omit these steps and allow replication to load data. In this case, it may take quite some time for replication to load all the data, and you will need to delete data from the database tables before running the synchronization anyway. Besides, these extra records are added to your subscription counter. But if you don’t want to unnecessarily load data, you can avoid this by configuring filters for replication: Click the Edit icon next to the selected object. The task editor window will open. In the replication task editor window, click +Condition . In the added boxes, specify a filter condition that is never true for any record. In the first box, select a field; in the second, a comparison operator; and in the third, specify some value for comparison. For example, for Salesforce you can use the condition Id equals 0. It cannot be true in Salesforce for any record. Thus, all the records will be filtered out by this filter, and the replication will create a table, but won’t load any records. Repeat the steps a-c for all the objects you want to synchronize. Click the Create button to create the integration. Run the integration by clicking Run . Editing Database Tables After replication creates the tables, we need to edit them. We need to delete their primary key constraint and add a new autogenerated one. You can either use some visual database tools to make these changes or run SQL scripts against your database, for example, with Skyvia Query. If you want to use a database tool, you need to perform the following actions: If you didn’t add filters to prevent data from being replicated, truncate the database tables that were created by the replication.
Delete the tables’ primary key constraints. You may optionally delete their primary key columns as well. Their values are automatically generated on the cloud side and cannot be synchronized when loading data from the database to the cloud source. They will only be synced when loading data from the cloud source to the database. Add a new autogenerated column for the primary key. For example, you may call it sync_id: In PostgreSQL you may use the serial data type to make a column autogenerated. In MySQL you may use a BIGINT data type with the AUTO_INCREMENT option. For SQL Server, you may use a BIGINT data type with the IDENTITY option. For Oracle 12c and higher, you may use a NUMBER data type with the GENERATED ALWAYS AS IDENTITY option. For lower Oracle versions, you will need to create a sequence and a trigger to assign values from this sequence to the primary key column when inserting records. Alternatively, you may do the same using DDL scripts. Click +NEW in the top menu. In the Query column, click SQL . Click Select connection on the left and select the connection to your database, where you replicate data, in the list. Enter your script into the query editor box. Here we provide scripts with Account and Contact objects as an example.
You can create your own scripts based on this example by replacing Account and Contact with your object names: For SQL Server :

ALTER TABLE Contact DROP CONSTRAINT PK_Contact;
ALTER TABLE Contact ADD Sync_id BIGINT IDENTITY PRIMARY KEY;
ALTER TABLE Account DROP CONSTRAINT PK_Account;
ALTER TABLE Account ADD Sync_id BIGINT IDENTITY PRIMARY KEY;

For MySQL :

ALTER TABLE contact DROP PRIMARY KEY;
ALTER TABLE contact ADD sync_id BIGINT AUTO_INCREMENT PRIMARY KEY;
ALTER TABLE account DROP PRIMARY KEY;
ALTER TABLE account ADD sync_id BIGINT AUTO_INCREMENT PRIMARY KEY;

For PostgreSQL :

ALTER TABLE \"Contact\" DROP CONSTRAINT \"PK_Contact\";
ALTER TABLE \"Contact\" ADD Sync_id SERIAL PRIMARY KEY;
ALTER TABLE \"Account\" DROP CONSTRAINT \"PK_Account\";
ALTER TABLE \"Account\" ADD Sync_id SERIAL PRIMARY KEY;

For Oracle (12c or higher) :

ALTER TABLE \"Contact\" DROP CONSTRAINT \"PK_Contact\";
ALTER TABLE \"Contact\" ADD Sync_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY;
ALTER TABLE \"Account\" DROP CONSTRAINT \"PK_Account\";
ALTER TABLE \"Account\" ADD Sync_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY;

Click Execute or press F9. Synchronization To create a synchronization, perform the following steps: Click +NEW in the top menu. In the Integration column, click Synchronization . The sync integration details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Now it’s necessary to specify source and target connections.
Note that synchronization is bi-directional, and the only difference between the source and target is that source changes have priority when resolving change conflicts (when a record was changed both in source and target between synchronizations). Let Salesforce be the source in our example. Under Source , in the Connection drop-down list, select the Salesforce connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. Click the Add new link. In the Source list, select Account . In the Target list, select the Account table and click Next step . Since all the columns in source and target have the same names and types, they are mapped automatically. Click Target to Source in order for the columns to also be mapped in the opposite direction. Click Save . Repeat steps 7-11 for all other objects that you want to sync. Scheduling Integration Execution After we have created the synchronization, we want to keep the data in sync automatically. For this, we will configure the integration to run every hour during workdays. Skyvia uses the LastModifiedDate and CreatedDate fields to track changes in Salesforce, and it creates its own tracking tables and triggers to track changes in the database. Note that since we used the Salesforce connection as the source, its changes have priority, and if a record was changed both in Salesforce and in the database, Salesforce changes are applied. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. Click Occur once at and select Recur every . Enter “1” (without quotes) into the Recur every box and click the Set time restrictions link. Enter 09:00 and 18:00 into the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the necessary date and time you want the schedule to be enabled from.
Click Save to schedule integration execution. After this our integration will run automatically every hour between 09:00 and 18:00 of every workday. You can also visit Scheduling Integrations to get more detailed information on setting an integration schedule." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/synchronization-tutorials/how-to-synchronize-product-data.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Synchronization Tutorials How to Synchronize Product Data In this tutorial, we will show how to create a synchronization that syncs Salesforce Product2 objects with a slightly modified Products table from the Microsoft sample database Northwind on SQL Azure. This tutorial also demonstrates how to create an export. The difference between the Products table used in this tutorial and the Products table from the Northwind database is that our Products table has two columns that store two different prices for each product: UnitPrice1 and UnitPrice2. Using Constant Mapping While in the SQL Azure database all the information about the products with two prices is stored in a single table, in Salesforce product prices are stored in a separate PricebookEntry object. Thus, one row from the SQL Azure Products table corresponds to one Product2 Salesforce object and two PricebookEntry objects. The first price from the UnitPrice1 column goes to the \u2018Standard\u2019 Pricebook, and the second price goes, for example, to the \u2018Discount\u2019 Pricebook. If you don\u2019t have such a pricebook, you may use any other pricebook or create a new one for this tutorial and name it \u2018Discount\u2019. To implement this scenario, we go to Columns Definition tab of the Task Editor. In the Column drop-down list, we select Constant mapping. In synchronization integrations it has an additional checkbox \u2014 Use this value as filter of target records . 
If you set constant mapping for one direction (for example, from source to target) and select this checkbox, only the data having the column values equal to the specified constants participate when synchronization runs in the opposite direction. In our tutorial, we map the fields of the Product2 object and two PricebookEntry objects to the columns of the Products table. We will simply map the Pricebook2Id fields of the PricebookEntry objects to the constant values equal to the IDs of the Standard and Discount Pricebook2 objects. The UnitPrice field of these PricebookEntry objects will be mapped to the UnitPrice1 and UnitPrice2 columns respectively. After this, when synchronizing data from SQL Azure to Salesforce, the PricebookEntry objects will refer to the necessary Pricebooks since their IDs are assigned to them. When synchronizing data from Salesforce to SQL Azure and getting values for UnitPrice1 and UnitPrice2 columns, only the PricebookEntry objects with the corresponding Pricebook2Id values specified in the constant mapping will be retrieved from Salesforce for synchronization, and thus the correct values are applied. In our tutorial, we will get the ID values of Salesforce Pricebook2 with an export and then create the synchronization itself. Creating Connections In order to synchronize data from SQL Azure to Salesforce, first we need to create connections to Salesforce and SQL Azure. If you have already created the necessary connections, you may skip these steps. To create a connection to Salesforce, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Salesforce . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Salesforce , select the CRM category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to Salesforce1 .
From the Environment drop-down list, select the Salesforce environment type to import data to. Since this is just a sample walkthrough, the Sandbox environment is recommended. From the Authentication drop-down list select the authentication method for connecting to Salesforce. If you don\u2019t mind storing your Salesforce credentials on Skyvia server, select User Name & Password . If you prefer not to store your credentials, select OAuth 2.0 . If you have selected User Name & Password , on the previous step, specify your Salesforce account e-mail, password, and security token. Otherwise, if you have selected OAuth 2.0 authentication, click the Log In with Salesforce button and log in via the Salesforce website on the opened page. The result OAuth token will be stored in the connection data. Your Salesforce credentials will not be stored on our website. Click Create Connection . To create a connection to SQL Azure, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select SQL Server . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for SQL Server , select the Database category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to SQL Azure . In the Server box, enter \u201c TCP:<server name> \u201d. Replace \u201c<server name>\u201d with your actual server name. Specify your User Id , Password , and Database to connect to. Click Create Connection . Now we have the necessary connections created. Let\u2019s create an export that gets the necessary Pricebook2 ID values. Creating Export for Getting Pricebook2 IDs Click +NEW in the top menu. In the Integration column, click Export . The export details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . 
Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Salesforce connection. By default, data will be exported to a manually downloaded CSV file. Click the Add new link to open the Task Editor. From the Object drop-down list, select Pricebook2 . All the Pricebook2 fields are selected automatically. Clear the checkbox next to Pricebook2 to deselect all its fields. Then select the Id and Name checkboxes and click the Save button at the bottom of the Task Editor. Click the Create button to create the integration. Run the integration. Now we have a CSV file with Pricebook names and Ids. Click the corresponding link in the Run History . The History Details window will open. Click the number of successful records in the window. The result file will be automatically downloaded to your computer. Creating Synchronization To create a synchronization, perform the following steps: Click +NEW in the top menu. In the Integration column, click Synchronization . The synchronization details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Salesforce connection. Under Target , in the Connection drop-down list, select the SQL Azure connection. Click the button. In the drop-down list, under the Source , select Product2 . Click the +Related button twice. In both new drop-down lists, select PricebookEntries . In the Target list, select the Products table and click Next step . Source to Target Mapping Next, you need to configure mapping settings. You can read detailed information about mapping here or follow the steps below: On the Source to Target tab, map the Name column to the ProductName field. 
Click the Discontinued column, then, in the Column drop-down list, select Expression and enter \u201c! IsActive\u201d (without quotes) to the box below. In the drop-down list that is at the top of the Task Editor, select the first PricebookEntry object. Click the UnitPrice1 field and, in the Column drop-down list, select UnitPrice . Next select the second PricebookEntry object in the top drop-down list of the Task Editor. This time click the UnitPrice2 field and, in the Column drop-down list, select UnitPrice . Target to Source Mapping Since the data is synchronized in both directions, you need to configure mapping settings to load data in a reverse direction. For this, perform the following steps: Click the Target to Source tab and map the ProductName field to the Name column. Click the IsActive field, then, in the Column drop-down list, select Expression and enter \u201c! Discontinued\u201d (without quotes) to the box below. In the drop-down list that is at the top of the Task Editor, select the first PricebookEntry object. Click the Pricebook2Id field. For this field we will use the constant mapping as described above. Open the CSV file with exported pricebook data and copy the Id value of the standard pricebook. Then switch back to the browser with the integration editor, click the Pricebook2Id field, select Constant in the Column drop-down list, and paste the Id value (without quotes) to the box below. Select the Use this value as filter of target records checkbox. This means that when synchronizing data in the reverse direction (from Salesforce to SQL Server) only records with Pricebook2ID equal to this value are queried, other PricebookEntry records are ignored. Map the UnitPrice field to the UnitPrice1 column. For PricebookEntry we will map the IsActive column to be always true. You can use either Constant or Expression mapping in this case and specify the true constant as an expression. 
Click the IsActive field and then, in the Column list, select Expression . Enter \u201ctrue\u201d (without quotes) to the Expression box. Repeat steps 3-5 for the second PricebookEntry object, using the Id value of the Discount pricebook (for example), and repeat step 6 for the UnitPrice2 column. Click Save to save the task. Scheduling Integration Execution After we have created the synchronization, we want to keep the data in sync automatically. For this, we will configure the integration to run every hour during workdays. For this we will use the Incremental Update feature. When using Incremental Updates, Skyvia does not synchronize all the data each time the integration is executed. Instead, it detects the data changed since the last integration execution and synchronizes only these changes. This allows reducing Salesforce calls and thus the cost of the replication operation. Skyvia uses the LastModifiedDate and CreatedDate fields to track changes in Salesforce, and it creates its own tracking tables and triggers to track changes in the SQL Server database. Note that since we used the Salesforce connection as the source, its changes have priority, and if a record was changed both in Salesforce and in the SQL Server database, the Salesforce changes are applied. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. Click Occur once at and select Recur every . Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 09:00 and 18:00 in the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the date and time you want the schedule to be enabled from. Click Save to schedule integration execution. 
After this our integration will run automatically every hour between 09:00 and 18:00 of every workday. You can also visit Scheduling Integrations to get more detailed information on setting an integration schedule." }, { "url": "https://docs.skyvia.com/data-integration/tutorials/synchronization-tutorials/how-to-synchronize-zoho-crm-contacts-with-mailchimp-subscribers.html", "product_name": "Data Integration", "content_type": "Documentation", "content": "Product: Data Integration. Documentation Data Integration Tutorials Synchronization Tutorials How to Synchronize Zoho CRM Contacts with Mailchimp Subscribers In this tutorial, we will show how to create a synchronization that syncs Zoho CRM contacts with Mailchimp subscribers. In this tutorial, we suppose that there are no corresponding records present both in Zoho CRM and Mailchimp already. This is because when a synchronization is performed the first time, it copies all the source data to target and all the target data to source without any check if the corresponding records are already present in the opposite source. If some of the records were present in both sources, Skyvia will create duplicate data. That\u2019s why it\u2019s better to perform the first synchronization when one of the sources is empty or at least does not contain records present in the other source. When performing next synchronizations, Skyvia already knows how the records in the source and target correspond to each other and loads only the changed data between sources. Creating Connections In order to synchronize Zoho CRM contacts with Mailchimp subscribers, first we need to create connections to Zoho CRM and Mailchimp. If you have already created the necessary connections, you may skip these steps. To create a connection to Zoho CRM, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Zoho CRM . 
To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Zoho CRM , select the CRM category). The default name of a new connection is Untitled . Click it to rename the connection, for example, to Zoho CRM1 . Click Sign In with Zoho . In the opened window, enter your Zoho CRM email and click the Next button. In the opened window, enter your Zoho CRM password and click the Sign In button. In the opened window, choose the Org you want to access (if you have several) and click Submit . In the next window, click Accept to allow Skyvia to access data in your Zoho account. The authentication token is generated. Optionally change the Metadata Cache parameter value. It determines how often to update cached metadata for the connection. By default, Skyvia caches metadata of available objects for cloud sources. You can configure how often the cache is refreshed automatically or reset it manually on the details page of the corresponding connection by clicking the Clear now link next to the Metadata Cache parameter. Click Create Connection . To create a connection to Mailchimp, perform the following steps: Click +NEW in the top menu. Click the Connection button in the menu on the left. In the opened Select Connector page, select Mailchimp . To quickly find it, you can either use the Type to filter box or filter connectors by categories using the All list (for Mailchimp, select the Email Marketing category). The default name of a new connection is Untitled . Just click it to rename the connection, for example, to Mailchimp1 . Click Sign In with Mailchimp . In the opened window, enter your Mailchimp credentials and click the Log In button. Click the Allow button. Optionally select values for the Merge Fields Behavior and Metadata Cache parameters. Click Create Connection . 
Creating Synchronization When loading subscribers to Mailchimp, you need to specify the Mailchimp list to load subscribers to by mapping the ListId field of the Mailchimp ListMembers table. You can either find out the id of the list (with an Export or with Query or [directly via Mailchimp interface](https://mailchimp.com/help/find-audience-id/) ) and specify it via the Constant Mapping or use the Lookup Mapping to get the list Id, for example, by its name, like we will do in this tutorial. We will synchronize Zoho CRM contacts with a list, having the name \u2018Sync Test List\u2019 in this tutorial. Click +NEW in the top menu. In the Integration column, click Synchronization . The sync integration details page will open. Rename your integration by clicking and editing the integration name. The default integration name is Untitled . Please note, if you omit this step, the integration name will remain Untitled in the list of created integrations. Under Source , in the Connection drop-down list, select the Zoho CRM connection. Under Target , in the Connection list, click Select target and select Mailchimp connection from the drop-down list. You can use the Type to filter box to quickly find the necessary connection. Click the Add new link. In the Source list, select Contacts . In the Target list, select ListMembers . Click Next step . Now we need to map target fields to source fields. As we can see, some of the columns that have the same names in Mailchimp ListMembers table and Zoho CRM Contacts table, are already mapped automatically. We need to map only the ListId column. It is marked as Required, which means, it must be mapped in order for the integration to be valid. If you know the Id of the necessary list, use Constant Mapping and specify this Id as a constant (See [Mailchimp documentation on how to find the list ID](https://mailchimp.com/help/find-audience-id/) ). 
To configure such a mapping, perform the following steps: Click ListId in the list of target fields. Click Column and then select Constant from the drop-down list. Paste the Id value (without quotes) to the box below. Select the Use this value as filter of target records checkbox. This means that when synchronizing data in the reverse direction (from Mailchimp to Zoho CRM), only records with the corresponding ListId value are synchronized; subscribers from other lists are ignored. Now let\u2019s configure mapping in the opposite direction. In synchronization tasks, you need to specify mapping for both directions - from source to target and from target to source. Click Target to Source under the task editor header. Here, the First Name, Last Name, and Email Zoho CRM fields are already mapped. Let\u2019s also map the Email Opt Out field so that when a Mailchimp subscriber unsubscribes, this field is set automatically. We will use Expression Mapping for this. Click Email Opt Out in the list of the source fields. Click Column and then select Expression in the drop-down list. Enter the following expression to the box below: Status == \"Unsubscribed\" ? true : false Please note that this expression is valid only for the old runtime . The expression checks the status of the subscriber in Mailchimp, and if the status is \u201cUnsubscribed\u201d, it returns the true boolean value, which is assigned to the Email Opt Out field in Zoho CRM. Otherwise, false is assigned. If you want to use an expression for the new runtime (in case the Use new runtime checkbox is selected in the integration), please enter it like this: Status == 'Unsubscribed' ? true : false You can read more about the main syntax differences between old and new runtimes here . Click Save . Scheduling Automatic Execution After we have created the synchronization, we want to keep the data in sync automatically. For this, we will configure the integration to run every hour during workdays. 
Note that since we used the Zoho CRM connection as the source, its changes have priority, and if a record was changed both in Zoho CRM and in Mailchimp, the Zoho CRM changes are applied. Perform the following actions to set the schedule: Click Schedule on the left side of the toolbar. Under Run every , select Week . Under Days of week , select checkboxes with all the workdays. Click Occur once at and select Recur every . Enter \u201c1\u201d (without quotes) into the Recur every box and click the Set time restrictions link. Enter 09:00 and 18:00 in the corresponding boxes. Click Now to put the schedule into action immediately or select At a specific time in the Starting list and specify the date and time you want the schedule to be enabled from. Click Save to schedule integration execution. After this, our integration will run automatically every hour between 09:00 and 18:00 of every workday. You can also visit Scheduling Integrations to get more detailed information on setting a schedule." }, { "url": "https://docs.skyvia.com/expression-syntax/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Expression Syntax This section describes the Skyvia expression syntax, which is used in expression mapping of integrations using the new runtime as well as data flow integrations . Integrations using the old runtime use [SSIS expression syntax](https://docs.microsoft.com/en-us/sql/integration-services/expressions/integration-services-ssis-expressions) . You can find more information about the main syntax differences in the Main Differences between Old and New Runtime Syntax topic. Identifiers Identifiers in our expression syntax are source column names. If an identifier does not coincide with any keywords, data types, or functions, consists only of English letters, numbers, or underscores, and starts with an English letter or an underscore, you can use it without additional qualifiers. 
Otherwise, an identifier must be delimited by double quotation marks. Identifiers are case-sensitive. Skyvia allows loading data from multiple related tables/objects in the same task. In import , you can select relations of the main source objects in order to add data of related tables/objects to import. Synchronization also allows many-to-one and one-to-many synchronization. In this case, if you want to use columns of a related table/object in an expression, you need to reference them as the relation name , followed by the column name, separated by a dot character. If the relation name or column name needs to be delimited by quotes, delimit them separately. For example, \"My foreign key\".\"My column\" . Data Types Skyvia expressions support the following data types: Boolean type: bool Integer types: int1 , int2 , int4 , int8 Floating point types: float4 , float8 Fixed precision type: decimal Date/time types: datetime , datetimeoffset , time String type: string Other types: bytes , guid For information about data type conversion, see the Data Types and Type Conversion topic. Literals Skyvia expressions can include numeric, string, and boolean literals. Read the Literals topic for information about how to enter literals and constant values of different types in your expressions. Operators Skyvia supports various mathematical, logical, and comparison operators, and the conditional operator ? : . You can find the complete list of operators as well as their precedence in the Operators topic. Functions You can use a wide range of mathematical, string, datetime, and other functions. See the complete list of supported functions in the Functions topic. 
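The identifier-quoting rule above can be captured as a small predicate. The following Python sketch is illustrative only (not part of Skyvia): the reserved-word set below is an assumed subset containing just the documented data type names, and the full keyword/function list is omitted.

```python
import re

# Assumed subset of reserved words: only the data type names listed above.
# The real reserved set also includes keywords and function names.
RESERVED = {"bool", "bytes", "datetime", "datetimeoffset", "decimal",
            "float4", "float8", "guid", "int1", "int2", "int4", "int8",
            "string", "time"}

def needs_quoting(identifier: str) -> bool:
    """True if the identifier must be delimited by double quotation marks:
    it collides with a reserved word, or it is not made only of English
    letters, digits, and underscores starting with a letter or underscore."""
    if identifier in RESERVED:
        return True
    return not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", identifier)

def quote(identifier: str) -> str:
    """Wrap the identifier in double quotes only when required."""
    return f'"{identifier}"' if needs_quoting(identifier) else identifier

print(quote("UnitPrice1"))       # UnitPrice1
print(quote("First Name"))       # "First Name"
print(quote("My foreign key"))   # "My foreign key"
```

Note that, per the rule for related objects, `quote` would be applied to the relation name and the column name separately, yielding `"My foreign key"."My column"`.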
}, { "url": "https://docs.skyvia.com/expression-syntax/data-types-and-type-conversion.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Expression Syntax Data Types and Type Conversion Data Types The following table lists the data types used in new data integration runtime and its expression engine. Data type Description bool A boolean value. Can be true or false . bytes An array of bytes, containing binary data. Can have variable length. datetime A value, representing a moment in time, that consists of year, month, day, hour, minute, seconds, fractional seconds, and timezone. Datetime values have fixed scale for fractional seconds of 7 digits. datetimeoffset A value, representing a moment in time. In addition to year, month, day, hour, minute, seconds, and fractional seconds, datetimeoffset also includes a time zone offset - the number of hours and minutes that the time is offset from the Coordinated Universal Time (UTC). decimal An exact numeric value that has a fixed precision of 29 and a scale from 0 to 28. float4 Single precision (4 bytes) floating point values. float8 Double precision (8 bytes) floating point values. guid Guid (globally unique identifier) values. int1 Signed 1-byte integer value. Can have values from -128 to 127. int2 Signed 2-byte integer value. Can have values from -32768 to 32767. int4 Signed 4-byte integer value. Can have values from -2,147,483,648 to 2,147,483,647. int8 Signed 8-byte integer value. Can have values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. string A null-terminated Unicode character string value. Can have variable length. Maximal length is time A time value, consisting of hours, minutes, and seconds. Data Type Conversion For data type conversion, use data type conversion functions with one argument - the value to convert. 
Datetime conversion to string and vice versa is performed according to the culture settings of the locale specified for the imported CSV files. If the import loads data from a database or cloud app, the invariant culture is used." }, { "url": "https://docs.skyvia.com/expression-syntax/examples.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Expression Syntax Examples Here are a couple of useful examples that may help you create your own expressions. Getting Full Name from First and Last Names and Vice Versa Suppose you load data between two data sources, one of which has the ContactName column, storing full names, while the other has the separate First Name and Last Name columns. In this case, you can use expression mapping to create the full name from the first and last names and vice versa. Concatenating first and last names into a full name is easy; just don\u2019t forget to add a space between them. The expression looks like the following: \"First Name\" + ' ' + \"Last Name\" Note that column names containing spaces, like here, should be delimited with double quotation marks. The reverse transformation is a bit more complex. We assume that the full name consists of the first name and the last name, separated with a space. In this case, we can find the position of the space with the find_string function and take the left and right parts of the full name. Here is an example for the first name: left(ContactName,find_string(ContactName,' ',1)-1) And for the last name: right(ContactName,len(ContactName)-find_string(ContactName,' ',1)) However, if the ContactName column may also include a middle name or more names, this expression for the last name would return all the names except the first one, because we are looking for the first occurrence of the space character. 
In this case, since we don\u2019t know how many spaces the ContactName contains, we may reverse the string, so that the last part becomes the first, take this part, and reverse it again: reverse(left(reverse(ContactName),find_string(reverse(ContactName),' ',1)-1)) These expressions with the left function, however, have one drawback. They cannot process cases when the ContactName column does not contain spaces at all. In this case, the find_string function returns 0, and the left function gets -1 as the second argument. This causes an error, and records with such ContactName values will fail. If it\u2019s OK for such records to fail, you may use these expressions. Otherwise, we may add an additional check for spaces in the source string using the conditional operator ?: . If there are no spaces, we may either use the whole source string or return a NULL value. Let\u2019s return a NULL value for the first name: find_string(ContactName,' ',1) > 0 ? left(ContactName,find_string(ContactName,' ',1)-1) : null() For the last name, let\u2019s return the full ContactName value, because in many cases the last name is required and cannot be null: find_string(ContactName,' ',1) > 0 ? reverse(left(reverse(ContactName),find_string(reverse(ContactName),' ',1)-1)) : ContactName Using Conditional Operator The conditional operator is useful, for example, when you need to map a target column that has a fixed set of possible values, but the source values are not exactly the same as the target values. For example, let\u2019s consider loading Zendesk tickets to Jira issues. In Zendesk, tickets can have the following Priority values: Low, Normal, High, or Urgent. Tickets can also have no priority assigned. In Jira, available issue priorities are stored in their own table, and issues have references to the corresponding priorities in the PriorityId column. For example, in our Jira there are the following priorities: Highest, High, Medium, Low, and Lowest. They have ids from 1 to 5 respectively. 
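The name-splitting expressions in the previous section translate almost mechanically into other languages, which can be a handy way to cross-check their edge cases. This plain-Python sketch mirrors the same logic (it is not Skyvia syntax; note that Skyvia's find_string is 1-based while Python's find is 0-based):

```python
def split_full_name(contact_name):
    """Mirror of the expressions above: the first name is everything before
    the first space (None, i.e. NULL, when there is no space), and the last
    name is everything after the last space (the whole string on no space)."""
    pos = contact_name.find(" ")            # 0-based; -1 when there is no space
    first = contact_name[:pos] if pos >= 0 else None
    last = contact_name.rsplit(" ", 1)[-1]  # same result as the reverse() trick
    return first, last

print(split_full_name("John Smith"))        # ('John', 'Smith')
print(split_full_name("Anna Maria Jones"))  # ('Anna', 'Jones')
print(split_full_name("Cher"))              # (None, 'Cher')
```

The third case shows the no-space fallback the conditional operator adds: NULL for the first name, the full value for the last name.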
If we want to load Zendesk tickets to Jira issues, we can use expression mapping for the PriorityId column and select the corresponding value using the conditional operator. We can also use the replace_null function to use the Normal priority for Zendesk tickets without a priority set. The expression will look like the following: replace_null(Priority,'Normal')=='Normal' ? 3 : (Priority=='Urgent' ? 1 : (Priority=='High' ? 2 : 4)) The outer operator assigns the Id of Jira\u2019s Medium priority (3) to Zendesk tickets with a Normal priority or without the priority assigned. Otherwise, we check if the ticket has the Urgent priority in Zendesk, and assign the Highest priority (id is 1) in this case. Then the innermost operator checks if the ticket has the High priority, and assigns the corresponding id 2; if this check fails, it assigns the Low priority (id is 4). Replacing NULL Values In some cases, a column in a source can be null, but the corresponding column in target cannot. If you simply map the columns to each other, you will get errors for every source row where the corresponding column has a NULL value. An easy way to avoid this is to use the replace_null function. This function checks whether the first argument is NULL. If not, it returns the first argument. Otherwise, it returns the second argument. So, for example, if the source has a boolean column Unsubscribed, which can have NULL values, and the target column does not accept NULL (is required), you can use the following expression in your mapping to replace NULL values with the false constant: replace_null(Unsubscribed,false) If the target field is not only required, but should also be unique, you cannot use a constant value to replace null values. In this case, you need to use another column that is guaranteed to be non-null and unique as a replacement. For example, suppose you have the Name column in target, and want to map the source column Full Name to it. 
However, the source may contain records without the Full Name specified but with the Display Name column always filled. In this case, you can use the following expression: replace_null(\"Full Name\",\"Display Name\") In the worst case, when there is no suitable column for replacement, you may use the source record id. Working with JSON Values Some data sources, like G Suite (Google Contacts) or SendPulse, have columns storing emails and phones in the JSON format. There are also other data sources with columns storing data in JSON. When loading data between such data sources and data sources where values are stored as plain values, without JSON, you can use expressions to construct the necessary JSON strings or get scalar values from JSON. For example, Google Contact Emails values look like the following: [{\"Address\":\"example@example.info\",\"Type\":\"Work\",\"IsPrimary\":true}] This means that if our source stores emails as is in an E-mail column, we can use the following expression to import them to the Google Contacts Emails column: '[{\"Address\":\"'+\"E-mail\"+'\",\"Type\":\"Work\",\"IsPrimary\":true}]' A similar expression can be used for the Phones column: '[{\"PhoneNumber\":\"'+ Phone+ '\",\"Type\":\"Mobile\"}]' As for extracting values from such JSON, usually it is not necessary, because Skyvia provides virtual objects, from which you can get these values as simple column values. For example, you can get emails and phone numbers of your Google Contacts via the ContactEmails and ContactPhoneNumbers objects. However, in this case we can also use expressions to extract values. For example, we can take a substring between the first occurrences of the ':\"' and '\",' character sequences. 
This would correspond to the first email and first phone in the Emails and PhoneNumbers columns of the Contacts object: substring(Emails,find_string(Emails,':\"',1)+2,find_string(Emails,'\",',1)-find_string(Emails,':\"',1)-2) And for the PhoneNumbers: substring(PhoneNumbers,find_string(PhoneNumbers,':\"',1)+2,find_string(PhoneNumbers,'\",',1)-find_string(PhoneNumbers,':\"',1)-2) In this way, you may design your own expressions for working with JSON strings. To create a JSON string, you may obtain an example value from your data source with Skyvia query or export. Then, you may use this JSON string as a template and substitute source columns in place of the corresponding values, concatenating strings using the + operator. To extract a value from the source JSON string, you may use the substring function, and use the find_string function to determine the position and length of the value to extract. Convert_tz Examples The convert_tz function is used to convert values from one timezone to another. For example, the expression convert_tz(datetime('2021/7/7 19:00'),'Eastern Standard Time','Aleutian Standard Time') returns 7/7/2021 2:00:00 PM . For example, let\u2019s consider a case when we need to synchronize two databases in different timezones, having datetime columns without time zone information. Suppose one uses Western Europe Standard Time, and the other US Eastern Standard Time. The first one is used as a source, and the second as a target. Let\u2019s consider mapping for an example column LastContactedDate . For source to target mapping it will be: convert_tz(LastContactedDate,'W. Europe Standard Time','Eastern Standard Time') and for target to source mapping: convert_tz(LastContactedDate,'Eastern Standard Time','W. Europe Standard Time') Anonymizing Data via Hashing Hashing field values is useful to conceal sensitive information when loading data to other systems. It enables you to load anonymized values that can still be used for analytics purposes. 
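The convert_tz example above can be cross-checked with Python's standard zoneinfo module. One caveat: Skyvia's examples use Windows time zone names, while zoneinfo takes IANA names, so the equivalents below ('Eastern Standard Time' ~ 'America/New_York', 'Aleutian Standard Time' ~ 'America/Adak') are assumptions; the sketch only illustrates the conversion semantics, not Skyvia's implementation.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def convert_tz(value: datetime, source_tz: str, target_tz: str) -> datetime:
    """Treat a naive datetime as local time in source_tz and return the
    equivalent naive local time in target_tz."""
    localized = value.replace(tzinfo=ZoneInfo(source_tz))
    return localized.astimezone(ZoneInfo(target_tz)).replace(tzinfo=None)

# Mirrors convert_tz(datetime('2021/7/7 19:00'), 'Eastern Standard Time',
# 'Aleutian Standard Time'), which the docs say returns 7/7/2021 2:00:00 PM.
print(convert_tz(datetime(2021, 7, 7, 19, 0), "America/New_York", "America/Adak"))
# 2021-07-07 14:00:00
```

In July, New York observes UTC-4 and Adak UTC-9, so 19:00 Eastern becomes 14:00 Aleutian, matching the documented result.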
Skyvia uses the sha256_encrypt and sha512_encrypt functions to hash data. These functions hash binary or string values using [SHA-256 and SHA-512 algorithms](https://en.wikipedia.org/wiki/SHA-2) respectively, adding a string or binary salt to the value as a suffix. The key difference between SHA-256 and SHA-512 is their output size, with SHA-256 producing a 256-bit hash value and SHA-512 generating a 512-bit hash value. Note that if you specify the salt as a string value, you need to use a base64-encoded value. For example, if you want to conceal names, you can use the following expression: sha256_encrypt(Name,encode_base64('the_secret_salt')) Replace the_secret_salt with your own secret string." }, { "url": "https://docs.skyvia.com/expression-syntax/functions.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Expression Syntax Functions This topic lists functions available in Skyvia. General Functions General functions allow you to create NULL values, check if values are NULL or present, and replace NULL values with alternative ones. Function Parameters Description isnull value - any type Checks whether the specified value is a NULL value. ispresent argument - any type Checks whether the argument is not NULL. If the argument references a property, additionally checks whether it is a known property. null no arguments or value argument of any type Called with no arguments, returns a NULL value. Called with an argument, returns a NULL value of the argument type. replace_null arg1 - any type, arg2 - any type If the first argument is not NULL, returns the first argument; otherwise, returns the second argument. Data Type Conversion Functions Data type conversion functions allow converting values from one data type to another. They have overloads without parameters and with a parameter of one of the types from which conversion is supported. The overloads without parameters return zero or empty values of the specified type. 
Thus, bool() returns false, int() or any other numeric type function returns 0, and string() returns an empty string.

| Function | Parameters | Description |
| --- | --- | --- |
| bool | Accepts boolean, string, or numeric values. | If an argument is false or 0, returns false. If an argument is true or a non-zero number, returns true. It also accepts the strings 'true' or 'false' (case-insensitive) or strings that can be converted to a number. Other strings cause an error. |
| currency | Accepts boolean, string, or numeric values. | Converts values to decimal type. |
| datetime | Accepts datetime, string, or datetimeoffset values. | Converts values to datetime type. |
| datetimeoffset | Accepts datetime, string, datetimeoffset, or int8 values. | Converts values to datetimeoffset type. |
| decimal | Accepts boolean, string, or numeric values. | Converts values to decimal type. |
| float4 | Accepts boolean, string, or numeric values. | Converts values to float4 type. |
| float8 | Accepts boolean, string, or numeric values. | Converts values to float8 type. |
| guid | Accepts guid or string values. | Converts values to guid type. |
| int1 | Accepts boolean, string, or numeric values. | Converts values to int1 type. |
| int2 | Accepts boolean, string, or numeric values. | Converts values to int2 type. |
| int4 | Accepts boolean, string, or numeric values. | Converts values to int4 type. |
| int8 | Accepts boolean, string, or numeric values. | Converts values to int8 type. |
| string | Accepts any values. | Converts values to strings. |
| time | Accepts time, numeric, or string values. | Converts values to time type. |

String Functions

String and binary functions perform actions over string values and also encode binary values into strings and vice versa.

| Function | Parameters | Description |
| --- | --- | --- |
| codepoint | input - string | Returns the Unicode code point of the leftmost character of the argument. |
| concat | arg1 - string, arg2 - string | Concatenates two strings. |
| decode_base64 | value - string | Converts the specified string, which encodes binary data as base-64 digits, to an equivalent 8-bit unsigned integer array. |
| decode_hex | value - string | Converts a hex string to a byte array. |
| encode_base64 | input - bytes | Encodes a byte array to a base64-encoded string. |
| encode_hex | input - bytes | Encodes a byte array to a hexadecimal string. |
| extract | input - string, pattern - regular expression string | Searches for the first occurrence of the specified regular expression pattern in the specified input string. Returns the extracted string. Case-sensitive. |
| find_string | expression - string, searchString - string, occurrence - int8 | Returns the 1-based location of the specified occurrence of the search string in the specified string expression. |
| hex | expression - int8 | Converts an integer value to its equivalent string representation in hexadecimal format. |
| is_null_or_empty | value - string | Checks whether the specified string value is a NULL value or an empty string. |
| left | expression - string, number - int8 | Returns the specified number of characters from the leftmost part of the specified string. |
| len | expression - string | Returns the number of characters in the specified string. |
| length | expression - bytes | Returns the total number of elements of the byte array. |
| lower | expression - string | Returns a copy of the specified string converted to lowercase. |
| ltrim | expression - string | Removes all leading white-space characters from the specified string. |
| ismatch | input - string, pattern - regular expression string | Checks if the input string matches the specified regular expression pattern. If it does, returns true; otherwise, returns false. |
| replace | expression - string, searchString - string, replacementString - string | Replaces all the occurrences of the search string in the expression string with the replacement string. |
| replace_pattern | input - string, pattern - regular expression string, replacement - string | Replaces all strings that match the specified regular expression pattern with the specified replacement string in the specified input string. Returns a new string with the replaced characters. |
| replicate | expression - string, times - int8 | Returns the expression string replicated the specified number of times. |
| reverse | expression - string | Returns the specified string in reverse order. |
| right | expression - string, number - int8 | Returns the specified number of characters from the rightmost part of the specified string. |
| rtrim | expression - string | Removes all trailing white-space characters from the specified string. |
| sha256_encrypt | data - string or bytes, salt - string (base64 encoded) or bytes | Encrypts the specified data with the salt added as a suffix using the [SHA-256 algorithm](https://en.wikipedia.org/wiki/SHA-2). |
| sha512_encrypt | data - string or bytes, salt - string (base64 encoded) or bytes | Encrypts the specified data with the salt added as a suffix using the [SHA-512 algorithm](https://en.wikipedia.org/wiki/SHA-2). |
| substring | expression - string, position - int8, number - int8 | Returns the specified number of characters of the expression argument, starting from the specified position. |
| token | expression - string, delimiter - string, occurrence - int8 | Returns the specified token in the expression string, divided into tokens by the characters specified in the delimiter string. |
| token_count | expression - string, delimiter - string | Returns the number of tokens in the expression string, divided into tokens by the characters specified in the delimiter string. |
| trim | expression - string | Removes all leading and trailing white-space characters from the specified string. |
| upper | expression - string | Returns a copy of the specified string converted to uppercase. |

You can find some examples of using the string functions in the Examples topic.
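As a rough illustration of the token family's semantics described above (1-based occurrence, and every character of the delimiter string acting as a separator), here is a Python sketch. It is an approximation based on the descriptions in this table, not Skyvia's exact implementation; in particular, how Skyvia treats empty tokens is not specified here, and this sketch skips them:

```python
import re

def _tokens(expression: str, delimiter: str) -> list[str]:
    # Any single character of the delimiter string acts as a separator.
    pattern = "[" + re.escape(delimiter) + "]"
    return [t for t in re.split(pattern, expression) if t != ""]

def token(expression: str, delimiter: str, occurrence: int) -> str:
    """Returns the occurrence-th (1-based) token of the expression, or '' if absent."""
    parts = _tokens(expression, delimiter)
    return parts[occurrence - 1] if 0 < occurrence <= len(parts) else ""

def token_count(expression: str, delimiter: str) -> int:
    """Returns the number of tokens in the expression."""
    return len(_tokens(expression, delimiter))

print(token("john.smith@example.com", ".@", 2))    # smith
print(token_count("john.smith@example.com", ".@"))  # 4
```

Splitting an email address by the two delimiter characters "." and "@" like this is a handy way to pull out the mailbox name or domain parts in a single expression.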
Mathematical Functions

Mathematical functions perform various mathematical operations over numeric values. Some of them are analogs of arithmetic operators, like +, -, *, /, %.

| Function | Parameters | Description |
| --- | --- | --- |
| abs | value - any numeric type | Returns the absolute value of the argument. The value has the same type as the argument. |
| ceiling | value - decimal or float8 | Returns the smallest integer that is greater than or equal to the argument. |
| exp | value - decimal or float8 | Returns the exponent of the argument. |
| floor | value - decimal or float8 | Returns the largest integer that is less than or equal to the argument. |
| ln | value - decimal or float8 | Returns the natural logarithm of the argument. |
| log | value - decimal or float8 | Returns the base-10 logarithm of the argument. |
| power | value - decimal or float8, power - decimal or float8 | Raises the first argument to the power of the second argument. |
| round | value - decimal or float8, precision - int4 | Rounds the value to the specified precision. |
| sign | value - any numeric type | If the argument is less than zero, returns -1. If the argument is greater than zero, returns 1. If the argument is zero, returns zero. |
| sqrt | value - decimal or float8 | Returns the square root of the argument. |
| square | value - decimal or float8 | Returns the square of the argument. |
| sub | arg1 - any numeric type, arg2 - any numeric type | Subtracts the second argument from the first one. |

Datetime Functions

Datetime functions perform actions over datetime values.

| Function | Parameters | Description |
| --- | --- | --- |
| convert_tz | date - datetime, sourceTimezone - string, targetTimezone - string | Converts the specified date from one timezone to another. See the list of timezones [here](https://app.skyvia.com/timezones) and examples of using timezones here. Note that you need to pass values from the Timezone ID column of the above list to this function. |
| date_add | datepart - string*, number - int8, date - datetime | Adds the specified number to the specified part of the date. |
| date_diff | datepart - string*, startDate - datetime, endDate - datetime | Returns the number of the datepart interval boundaries crossed between the specified start and end dates. |
| date_format | date - datetime, format - string | Converts the specified datetime value to its equivalent string representation using the specified format string. You can use [standard](https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings) or [custom](https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings) datetime format strings. |
| date_part | datepart - string*, date - datetime | Returns an integer representing the specified datepart of the date. |
| day | date - datetime | Returns an integer representing the day datepart of the date. |
| get_date | no parameters | Returns the current date. |
| get_utc_date | no parameters | Returns the current date in UTC. |
| month | date - datetime | Returns an integer representing the month datepart of the date. |
| year | date - datetime | Returns an integer representing the year datepart of the date. |

* The datepart argument is a string specifying the datetime interval. The following table lists the time intervals and corresponding datepart values. Datepart values are case-insensitive.

| Interval | Acceptable datepart values |
| --- | --- |
| Year | yy, yyyy |
| Quarter | qq, q |
| Month | mm, m |
| Dayofyear | dy, y |
| Day | dd, d |
| Week | wk, ww |
| Weekday | dw |
| Hour | hh |
| Minute | mi, n |
| Second | ss, s |
| Millisecond | ms |

Expression Syntax: Literals

Skyvia expressions can include numeric, string, and boolean literals.

Numeric Literals

Integer literals are written simply as a sequence of digits, optionally prefixed with a sign. By default, they are considered int4 values if they fit int4 limits. You can optionally add the l suffix to a literal to explicitly specify that it should be treated as an int8 value.
Floating point numbers can be entered as a sequence of digits with a decimal point, optionally prefixed with a sign. You can also use scientific notation by adding the e suffix followed by an exponent. You can additionally add the f or l suffix to the end to explicitly specify that the value should be treated as a float4 or float8 value respectively. The mentioned "e", "f", and "l" suffixes can be in any case.

String Literals

String literals must be enclosed in single quotation marks. If you need to use a single quotation mark in a string, double it, like this:

```
'Single quotation mark: ''.'
```

This results in the following string: Single quotation mark: '.

Boolean Literals

Use the boolean literals true and false, in lower case, without any quoting.

Main Differences between Old and New Runtime Syntax

Please consider these expression syntax differences when switching an existing integration that uses expression mapping between the old and new runtime.

Quoting

One of the main differences is the quoting of identifiers (column names) and string constants. In SSIS syntax, column names that require quoting are quoted with square brackets. In Skyvia's new expression syntax, identifiers are quoted with double quotation marks. String constants (literals), on the other hand, are quoted with double quotation marks in SSIS. In Skyvia's new expression syntax, they are quoted with single quotation marks.
So, for example, an expression uniting the First Name and Last Name fields with a space between them looks like the following in SSIS syntax, used by the old runtime:

```
[First Name]+" "+[Last Name]
```

Here is how to write it in the new syntax:

```
"First Name" + ' ' + "Last Name"
```

In order to use a single quotation mark in a string literal, use two of them.

Data Types and Type Conversion

The old and new Skyvia runtimes use completely different types. The new data types are listed in the Data Types and Type Conversion topic. In SSIS syntax, you need to specify the required type name and additional parameters, separated with commas and enclosed in brackets, before the value to convert. In the new expression syntax, you simply use type conversion functions and pass the value to convert to the function without any additional parameters. So, an expression converting, for example, a numeric value to a string looked like the following in SSIS syntax:

```
(DT_WSTR,38)accountid
```

In the new expression syntax, it looks simply like this:

```
string(accountid)
```

Functions

The new expression engine provides all the functions available in SSIS, and some additional functions. The function names are also very similar to the SSIS functions; however, there are the following differences:

- Function names are lowercase, and they are case-sensitive. You cannot just take expressions with uppercase functions from SSIS and use them as is; you need to convert them to lowercase.
- When a function name consists of two words, Skyvia's new expression syntax adds an underscore character between these words. In SSIS, no character is added. For example, the SSIS REPLACENULL function corresponds to the replace_null function in the new expression syntax.

Corresponding functions mostly have the same argument lists in both SSIS and the new Skyvia runtime.

Typed Null Values

If you want to set a column to a null value using expression mapping, you need to provide a null value of the corresponding type.
In SSIS expressions, you use a NULL function with the data type specified, and with some additional arguments for some data types:

```
NULL(DT_STR,10,1252)
NULL(DT_DATE)
```

In our new expression syntax, you use the null function with the corresponding type conversion function as an argument, and the latter is called without parameters:

```
null(string())
null(datetime())
```

Expression Syntax: Operators

List of Operators

Here is the list of all the operators you can use in expressions:

| Operator | Name | Operands | Description |
| --- | --- | --- | --- |
| () | Parentheses | An expression | Affects the evaluation order of expressions. |
| + | Addition or concatenation | Two numeric arguments for addition or two string arguments for concatenation | Adds two numeric expressions or concatenates two string expressions. |
| - | Subtraction or negation | One or two numeric arguments | The unary negation operator negates the argument. The binary subtraction operator subtracts the second argument from the first one. |
| * | Multiplication | Two numeric arguments | Multiplies two numeric expressions. |
| / | Division | Two numeric arguments | Divides the first numeric expression by the second one. |
| % | Modulo | Two integer arguments | Returns the remainder after division of the first expression by the second one. |
| && | Logical AND | Two boolean arguments | Returns true if both arguments are true. Otherwise, returns false. |
| \|\| | Logical OR | Two boolean arguments | Returns true if at least one argument is true. Otherwise, returns false. |
| ! | Logical negation | One boolean argument | Returns true if the argument is false. Otherwise, returns false. |
| == | Equality | Two expressions | Checks whether the expressions are equal. |
| != | Inequality | Two expressions | Checks whether the expressions are not equal. |
| < | Less than | Two numeric expressions | Checks whether the first argument is less than the second one. |
| > | Greater than | Two numeric expressions | Checks whether the first argument is greater than the second one. |
| <= | Less than or equal to | Two numeric expressions | Checks whether the first argument is less than or equal to the second one. |
| >= | Greater than or equal to | Two numeric expressions | Checks whether the first argument is greater than or equal to the second one. |
| ? : | Conditional operator | A boolean expression and two expressions of any type | If the boolean condition argument (before the question mark) is true, returns the second argument (after the question mark); otherwise, returns the third argument (after the colon). If the condition is null, the operator returns null. |

Order of Evaluation

Operators are evaluated in the following order:

1. Parentheses
2. Unary minus, logical negation
3. *, /
4. +, -
5. <, >, <=, >=
6. ==, !=
7. &&
8. ||
9. ? :

Gallery

Skyvia offers a number of predefined integrations and public queries that can either be used for popular use cases as is or serve as examples for users designing their own integrations and queries. These integrations and queries can be accessed via the Gallery. The predefined integrations help you automate the most common cloud application integration tasks, such as creating Mailchimp subscribers from Salesforce contacts or leads and vice versa, creating Mailchimp subscribers from Shopify customers, adding new Marketo leads as Salesforce contacts, etc. Predefined queries also demonstrate popular use cases and SQL structures or Query Builder features, like updating Salesforce data or querying the sizes of SQL Server tables. To use a query or integration from the Gallery, simply click the Gallery link in the New menu. Here you can use the Integrations or Queries tab to view only integrations or queries respectively, use the Type to search box to search by name, or use the Filters list to filter objects by connector.
You can point to any integration or query and click View Info to read its description. Click Use to use the corresponding integration or query template. Then simply select connection(s) for it, and you can run it immediately or edit it to make it more suitable for your use case. Please note that by default, the Gallery includes predefined integrations that use the old runtime, i.e. the Use new runtime checkbox is not selected in the integration settings. Integrations using the old runtime use [SSIS expression syntax](https://docs.microsoft.com/en-us/sql/integration-services/expressions/integration-services-ssis-expressions). Please consider the expression syntax differences when switching a predefined integration that uses expression mapping from the old to the new runtime. You can find more information about the main syntax differences between the old and new runtimes here.

Overview

Skyvia is a versatile no-code cloud data integration platform for ETL, ELT, Reverse ETL, data migration, one-way and bi-directional data sync, workflow automation, real-time connectivity, and other data-related tasks. Skyvia supports 190+ data sources, including cloud applications, databases, data warehouses, and file storage services. A vast connector library enables seamless integration among them. Skyvia's no-coding, wizard-based tools are easy to use for IT professionals and for business users with no technical skills.

How it Works

Skyvia connects to the data sources via their APIs. When Skyvia connects to cloud apps or databases, it reads their metadata and data and represents them as tables and relations between them. You can connect Skyvia to any of the supported data sources and use the same connections in different Skyvia products.
Products

Skyvia offers several products tailored for different data-related needs: Data Integration, Automation, Backup, Query, and Connect. Each product has distinct pricing plans, allowing users to pay only for what they use.

Data Integration

The Data Integration product automates ETL, ELT, and Reverse ETL processes between various cloud applications and databases. In Skyvia, "integration" refers to a collection of tasks or operations grouped together to automate data-related processes. Each integration can be scheduled to run automatically, helping users automate repetitive tasks without manual intervention. Skyvia offers various types of integrations for different data integration scenarios.

Import

Import provides one-way data loading from a data source to a cloud app or database, applying data transformations using mapping capabilities. Data sources can include another cloud app, database, or a CSV file on a local computer or in a file storage. Use Import to integrate data sources of different structures.

Export

Export enables data extraction from a database or cloud app to a CSV file on a local computer or file storage. Advanced export features allow exporting the original or transformed source data.

Replication

Replication copies cloud data to a database or cloud data warehouse and automatically keeps it up to date. It creates tables in the database or data warehouse from cloud app objects, using available transformation options.

Synchronization

Synchronization enables two-way data transfer between data sources with different structures by using mapping capabilities. It copies data between both sources and maps the original records to their counterparts in the other source.

Data Flow

Data Flow enables the implementation of advanced integration scenarios involving more than two data sources.
It helps you perform complex, multistage transformations, such as getting data from one data source, enriching it with data from another one, and finally loading it into a third one.

Control Flow

Control Flow enables running data flows or other integrations in a specific order or depending on particular conditions, performing pre- and post-integration actions, and setting up error processing logic within a single integration.

Automation

[Automation](https://docs.skyvia.com/automation/) helps optimize business workflows by connecting apps and services and automating repetitive tasks. With Automation, you can create complex workflows to handle various conditions and data operations efficiently. Use Triggers, Actions, and Conditional components to build an automation flow with multi-step conditional logic and error processing, and run your automation manually, on a schedule, or based on an event. Automation is especially useful for streamlining recurring processes. For example, you might automatically add a task when a new support ticket is received, create a new order after a sale, or schedule cross-platform data transfers.

Backup

Skyvia Backup is a robust backup and restore tool for cloud application data. It enables performing manual and automatic backups. After you back up your data, you can access it in the web browser or export it to CSV. The Restore feature helps you restore data source objects, separate records, or even fields. Also, you can use your backed-up data as a data source for other integrations.

Query

Skyvia Query is a cloud SQL client that enables executing SQL statements against cloud applications and relational databases. It supports SELECT and DML statements. Visual Query Builder helps build queries without SQL knowledge. You can export query results to a CSV or PDF file. To save your queries in one place and access them at any time, use the Query Gallery. Query Gallery is a collection of predefined queries for common use cases.
Skyvia Query supports querying data directly in Google Sheets, using the Skyvia Query Google Sheets Add-on, and in MS Excel, using the [Skyvia Query Excel Add-in](https://docs.skyvia.com/skyvia-query-excel-add-in/).

Connect

Skyvia Connect is a connectivity-as-a-service solution that helps publish any data as OData endpoints. It makes your cloud and on-premises data available to various OData consumer applications, such as BI tools, office suites, Salesforce Connect, etc., with no coding. Skyvia connects to data sources via their custom interfaces and provides a unified web API for their data. Skyvia supports OData endpoints and SQL endpoints. You can configure endpoints in convenient GUI editors without coding. Skyvia logs all the requests to created endpoints and provides detailed information on activities against them.

How to Use This Documentation

This documentation is designed to help you get acquainted with Skyvia and find the needed information quickly. Start with the Concepts and User Interface Basics topics to familiarize yourself with the basic terms and UI elements and controls. Look for your apps in the Connectors section, which helps you establish a connection to your application or database and provides additional advanced connector-related information. Do not hesitate to use the search bar to look for any Skyvia-related information. Below is the list of sections dedicated to the separate Skyvia products:

- Data Integration
- Automation
- Backup
- Query
- Connect
- Google Sheets Add-on
- Excel Add-in

See the latest Skyvia updates and news in the [Recent Releases](https://docs.skyvia.com/recent-releases/) section.

Profile Management

A profile represents a single user with their email and password.
A user profile contains the email used to log in, the password, and some optional basic information about the user, such as user name, country, company, phone number, and job title, as well as email notification and Skyvia newsletter subscription settings. You fill in this information when signing up to Skyvia and can change it later on your [Profile](https://app.skyvia.com/#/profile/info) page. You can also open this page by clicking the User icon in the top right corner of the Skyvia page and then clicking Profile. The Profile page allows you to see and change your profile information, including the email used to log in and the password, configure email notifications, and check some basic statistics: the numbers of created integrations and connections.

Modifying User's Personal Information

On the Personal Information tab, you can update such information about yourself as your first name, last name, email, phone, job title, and the company you work for. Here you can also unsubscribe from Skyvia newsletters by clearing the Subscribe to newsletters checkbox at the bottom of the page.

Changing Password

You can change your password on the Change Password tab of your Profile page. For this, you need to enter your current password, then enter a new password and confirm it in the Confirm Password box. The password should be at least 8 characters long. It should not be weak or easy to guess.

Two-Factor Authentication

You can add an extra layer of protection to your Skyvia account with two-factor authentication. Read more on how to enable/disable it here.

Email Notifications

Skyvia provides email notification functionality. Email notifications are enabled in the account settings, but the user can override the account settings in their profile settings. Read more about email notifications in the Email Notification topic.
Deleting Profile

On the Delete Profile tab, you can delete your profile if needed. Before deleting your profile, you first need to check the accounts and objects in which you are an admin. Click Delete next to all the accounts in the list to remove them. Otherwise, you will not be able to delete your profile. Please note that when you delete your profile, you are removed from the Skyvia platform. If you ever want to use Skyvia services again, you will need to sign up again.

Two-Factor Authentication

Two-factor authentication (2FA) provides a higher level of security for your Skyvia account. It requires two separate methods of identity verification: login credentials and an OTP (one-time password) generated by an authenticator app on your phone. This second layer of security ensures that third parties can't access your account if your login and password are compromised. With higher security comes higher responsibility. If you enable two-factor authentication and lose access to your phone and recovery codes, you won't be able to access your Skyvia account. Keep your recovery codes in a safe place. If you lose access to your 2FA app and recovery codes, you will be locked out of your Skyvia account. For security reasons, the support team does not accept requests to restore access to accounts with two-factor authentication turned on.

Choosing and Downloading an Authenticator App

Skyvia allows authentication through several authenticator apps, including the following:

- Google Authenticator
- Microsoft Authenticator
- Twilio Authy

Before proceeding with the two-factor authentication setup, download and install one of the authenticator apps listed above. You will need it later to scan a QR code and complete the process.
Please make sure that you've downloaded an authenticator app to your mobile phone or any other personal device before starting the 2FA procedure in your Skyvia account.

Enabling Two-Factor Authentication

To enable 2FA, stay logged in to your Skyvia account and prepare your authenticator app. Follow the steps below:

1. Click the User icon and select Profile in the drop-down menu.
2. Go to the Two-Factor Authentication tab.
3. Click Enable two-factor authentication. You will be asked to verify your Skyvia password to proceed further. Find out why here.
4. Open the authenticator app downloaded earlier to your mobile phone or other personal device.
5. In the Two-Factor Authentication Setup window in Skyvia, scan the QR code with the opened authenticator app or register with the code provided below it. Click Next.
6. Enter the 6-digit code from your authenticator app in the Skyvia setup window. Click Verify to confirm the connection between the authenticator app and your Skyvia account.
7. In the next window, copy or download the recovery codes. Store them in a secure location. Read important information about recovery codes below.
8. Click OK to complete the Skyvia two-factor authentication setup.

Disabling Two-Factor Authentication

Please make sure that this step is required. Disabling two-factor authentication significantly reduces the security of your account. It also removes all trusted devices that were logged in with the authenticator app. To disable 2FA, sign in to your Skyvia account and prepare your authenticator app. Follow the steps below:

1. Click the User icon and select Profile in the drop-down menu.
2. Go to the Two-Factor Authentication tab.
3. Click Disable two-factor authentication.
4. Verify your password to disable 2FA.

Two-factor authentication is now disabled for your Skyvia account.

Recovery Codes

Skyvia generates 12 unique recovery codes during your two-factor authentication setup. Recovery codes are numeric strings that are specifically tied to your Skyvia account.
You can use a recovery code to verify your identity when you have no access to your two-factor authentication method; use it instead of the 6-digit verification code generated by your authenticator app. Every recovery code can be used only once.

Viewing Recovery Codes

If you cannot find where you saved your recovery codes and need to view and save them again, go to your Profile and click View recovery codes on the Two-Factor Authentication tab. You will be asked to verify your password to proceed further. Find out why here. After that, you will see a list of existing recovery codes, which you can copy or download.

Using Recovery Codes

If you have changed your app or lost the mobile device used to authenticate to your Skyvia account, you need to use one of the 12 recovery codes generated during your two-factor authentication setup. To regain access to your account, perform the following steps:

1. Log in to Skyvia.
2. Click Use a recovery code to use it instead of the access code from the authenticator app.
3. In the next window, enter one of your unused recovery codes and click Verify. This will allow you to log in to your Skyvia account.

Upon logging in, it is advised to reset your two-factor authentication settings. By changing the two-factor authentication settings:

- you prevent unwanted access to your account through a stolen device (if stolen);
- you reconfigure 2FA to your new device to avoid being locked out of your account in the future.

For this, you need to disable two-factor authentication and enable it back with your new device. See how to do it above.

Generating New Recovery Codes

If you think that you have used many of your recovery codes and their total number has significantly decreased, generate a new set of recovery codes to protect yourself from being locked out of your Skyvia account. You can also generate new recovery codes if you feel that your recovery codes have been shared or seen publicly by a third party.
To generate new recovery codes, go to your Profile and click View recovery codes on the Two-Factor Authentication tab. You will be asked to verify your password for safety reasons. In the opened window, click Regenerate codes to receive new codes and click Download to save them to a safe place.

Trusted Devices

You can temporarily disable 2FA for trusted devices, like a secured work or personal laptop. To do this, add a device as trusted by selecting the corresponding checkbox in the Two-Factor Authentication window when signing in to Skyvia. This allows the device to log in without two-factor authentication for 30 days. You can always view the list of trusted devices by switching to the Two-Factor Authentication tab in your Profile. If there are devices that you do not recognize or that are no longer in use, revoke their access by clicking Revoke.

Changing Two-Factor Authentication Device or App

To use a new device or app for two-factor authentication, disable two-factor authentication first, following the instructions above, and enable it back with your new app or device as described here. The next time you log in to Skyvia, you will be asked to enter a verification code from your mobile app.

Password Verification

According to profile security settings, Skyvia requires entering a password when making changes to your two-factor authentication configuration, i.e. when enabling or disabling it and when viewing recovery codes.

Query Overview

Skyvia Query is an online SQL client and query builder tool that allows querying and managing cloud and relational data with SQL statements. You can enter SQL statements via the code editor or compose SELECT statements with the visual query builder.
After you have executed the created query, Skyvia allows you to view the queried data in the browser and export it as PDF or CSV files. Skyvia Query automatically loads returned data in portions of 20 rows. If a query returns more than 20 rows, you can load the next twenty rows by clicking the load more link or pressing F8. Skyvia also offers another way to export data: using the Data Integration products. For more details, you can read the Export topic.

Each query has the following main elements: three tabs with different query views (Builder view, SQL view, and Data view), toolbar buttons (Save, Clone, Execute, etc.) for managing queries, the SQL editor, the result field, the connection list, the object list, etc. We have tried to make the structure of a query and its navigation as simple and comfortable as possible.

Connections

Skyvia Query allows getting data from various cloud applications, databases, cloud data warehouses, backups, etc. To query data, first create a connection to the corresponding data source (in case you have not created it yet) or select an existing one in the connection list on the left of the query page. If you haven't created the necessary connection yet, create it by clicking + New connection at the bottom of the drop-down list and specifying the connection parameters in the opened Connection window. Different connections require different sets of parameters. You can find more information on creating the corresponding connections in the Connections or Connectors section of our documentation.

Viewing Connection Metadata

Skyvia Query allows viewing connection metadata. The Connection Object List to the left of the query editor displays information on the metadata of available objects/tables from the target connection. After you select a connection, it displays all the available objects/tables by default.
By clicking a certain table, you can see the list of its fields. When the Connection Object List displays the list of table fields, it displays Relation icons to the right of foreign key fields. You can click this icon to navigate to the corresponding parent table. If you navigate to this table via a foreign key (FK), the name of the used foreign key column is displayed. For more details about browsing tables by relations in the Connection Object List, check How to Create Joins. When the Connection Object List displays the list of fields, it uses special icons to show the field type: fields with string data, fields with numeric data, boolean fields, datetime fields, fields with binary data, and virtual fields displayed by Skyvia, such as the Records count pseudo-field.

Query Editor

The Query Editor contains different query views: Builder view, SQL view, and Data view. Whenever necessary, you can easily switch between query views by clicking the corresponding buttons in the right part of the query toolbar.

Builder View allows composing a SELECT statement visually with Query Builder, without typing code. It displays the Query Builder and Results pane by default. When you design a query with Query Builder, a SELECT statement is generated automatically. You can access and edit this statement by switching to the SQL view.

SQL View allows typing and executing SQL statements directly. This view displays the SQL Editor and the Results pane. However, if you have entered or edited an SQL statement on the SQL view and then switch to the Builder view, these changes are not preserved. All the changes made on the SQL view are usually lost when switching to another view.

Data View provides more space for better viewing the query results. When you have created and executed a query using the Builder or SQL view, you can switch to the Data view to take a better look at the returned data.
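As noted earlier, Skyvia Query loads returned data in portions of 20 rows (the load more link or F8). Under the hood, this is ordinary result pagination. A minimal sketch of the idea using Python's built-in sqlite3; the table and data are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO accounts (name) VALUES (?)",
                 [(f"Account {i}",) for i in range(1, 51)])  # 50 demo rows

PAGE = 20  # Skyvia Query loads results in portions of this size

def load_portion(page):
    """Fetch one 20-row portion, like clicking 'load more' / pressing F8."""
    return conn.execute(
        "SELECT id, name FROM accounts ORDER BY id LIMIT ? OFFSET ?",
        (PAGE, page * PAGE),
    ).fetchall()

print(len(load_portion(0)))  # first portion: 20 rows
print(len(load_portion(2)))  # last portion: the remaining 10 rows
```

Each click for the next portion simply advances the offset; a stable ORDER BY keeps the portions consistent between fetches.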
Supported Query Features

With Skyvia, you can apply different aggregation and expression functions to the fields/columns added to queries. The Details pane on the right displays only the functions applicable to the type of the selected column/field or object. Besides aggregation and expression functions, you can also configure filters within your queries. After you have selected objects, apply filters to query data matching certain criteria or conditions. Depending on your needs, Skyvia Query offers different filters. For example, you can apply filters to query records for a certain month only, or to get or exclude only rows with certain values. You can also add multiple filters by uniting them in groups. Each group may consist of several filters and/or subgroups united with a logical operator (AND or OR). For more information, please visit the Configuring Queries with Query Builder topic.

Supported SQL Statements

Skyvia Query supports SQL SELECT, INSERT, UPDATE, and DELETE statements for cloud sources. For relational databases, Skyvia can execute all the statements supported by the database, including DDL statements. For Backup Connections, Skyvia supports only SELECT statements. Skyvia Query supports all the Cloud Sources and Relational Databases supported by other Skyvia tools. Please note that when you query a database, you should use database-specific SQL syntax. To learn more about SQL syntax for cloud sources, visit the Supported SQL for Cloud Sources page.

Using Parameters

A query parameter is a placeholder for varying values that you can use instead of constant values in Skyvia Query. Skyvia can recognize SQL parameters in custom SQL commands. Skyvia detects parameters in SQL commands by the parameter prefix, the : (colon) character. You can also use parameters as filters in Query Builder. More details on how to add parameters in Query Builder are available in the Configuring Queries with Query Builder topic.
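A colon-prefixed query parameter behaves like an ordinary named SQL parameter: the statement stays the same and only the supplied value changes. Since the docs note that SQLite syntax is used for cloud apps, a small sqlite3 sketch illustrates the idea; the orders table and values are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, None)])

# A colon-prefixed placeholder instead of a constant value:
rows = conn.execute("SELECT id FROM orders WHERE status = :status",
                    {"status": "open"}).fetchall()
print(rows)  # [(1,)]

# The same command can be rerun with a different value, no SQL edits needed:
rows = conn.execute("SELECT id FROM orders WHERE status = :status",
                    {"status": "closed"}).fetchall()
print(rows)  # [(2,)]
```

Note that a parameter compared with `=` never matches NULL values; retrieving those requires an IS NULL condition, which is why a separate null option for parameters is useful.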
You can assign values for parameters in the parameters list. You can set parameter values to null; to do this, select the corresponding checkbox for the needed parameter. Skyvia detects parameter types in Query Builder automatically. For SQL commands, you must set the parameter Type manually; Skyvia defines the parameter Type as String by default, because it parses the command without using object metadata.

Query Gallery

The Query Gallery provides you with a number of predefined public queries to different data sources. In the Query Gallery, you can find the most common use cases and query data quickly and easily. You can access public queries by clicking the More options icon in the toolbar on the left of the query editor. In the drop-down menu, select Open from Gallery. See how it looks in the screenshot below. You can find more information on how to use public queries in the Query Gallery section.

Another way to access public queries is via the general Gallery of Skyvia. For this, click the +NEW button in the top menu and switch to Gallery. This Gallery contains both predefined integrations and queries in one place, on the All tab. Optionally, you can switch to the Integrations or Queries tabs to see only integrations or only queries, respectively.

Configuring Queries with Query Builder

Query Builder is a visual query editor that allows composing SELECT queries visually without typing code. It is available on the Builder tab of a query. It consists of four panes: Result Fields, Filters, Sort Fields, and Details. The Result Fields pane displays the fields (columns) the query returns.
You can drag tables and columns from the Connection Object List to this pane to query their data (add them to the SELECT and FROM clauses of the SELECT statement). You can read more about it in the How to Add or Remove Columns from Query subsection. The Filters pane displays query filters and filter groups. You can drag columns to this pane to filter data by these columns (add them to the WHERE clause of the query). Find more in the How to Configure Filters subsection. The Sort Fields pane displays the columns the returned data are ordered by, and the sorting order. You can drag columns from the Connection Object List or from the Result Fields pane to this pane in order to sort the queried data by these columns (add them to the ORDER BY clause of the query). Read more in the How to Sort Data subsection. The Details pane is located to the right of the other three panes. It allows configuring the query in more detail and displays settings for the currently selected item in one of the other panes. When you click an item in the Result Fields pane, the Details pane allows you to apply functions to the result fields. For an item in the Filters pane, the Details pane allows you to apply a filter or filter group; for an item in the Sort Fields pane, it allows changing the sorting order.

How to Create Query

When no query is opened, you can create a new query by clicking +NEW in the top menu and selecting Builder or SQL under Query. In our example, we select Builder. A new query is created with the respective view opened. To create a new query when one or more queries are already open, click the button on the query page tab bar. A new query is created with the same view opened as the previously active query. You can switch to the necessary view by clicking the corresponding button on the right side of the query toolbar. Additionally, you can clone an open query by clicking Clone on the Query toolbar. The query is cloned with all its settings, connections, and returned data.
Created queries are saved to the OBJECTS page for future use.

How to Rename Query

If you want to rename a newly opened query or an already existing one, hover over the corresponding query tab and click the edit icon as shown in the screenshot below.

How to Add or Remove Columns from Query

Adding Columns to Query

To query data from a table, simply drag this table from the Connection Object List to the Result Fields pane. To add only some of the table fields to the query, click a table in the Connection Object List, and it will display the list of table fields. You can also drag several table fields at the same time. For this, select them in the Connection Object List using the Ctrl or Shift key and drag them.

Removing or Disabling Columns from Query

To remove a column or table from the query, click it on the Result Fields pane and press the Delete key, or click the button in the Details pane header on the right side of the query. You can also temporarily disable a column in a query without removing it. For this, click the icon in the Details pane header. This removes the column from the generated SQL statement but keeps it in the Query Builder so that you can re-enable it. To enable it back, select it on the Result Fields pane and click the same button in the Details pane header again.

Setting Column Alias

You can also set an alias for a column. For this, click the column on the Result Fields pane and then click the Rename field button under the Details pane header. Enter the alias and click the Apply rename button, or click the Cancel rename button to cancel the action.

How to Add Aggregations and Functions

After you have added the necessary fields or objects to the query, you can apply aggregation or other functions to them. To do it, simply click the field in the Result Fields pane and then select the necessary function in the Details pane on the right side of the query.
Note that the Details pane displays only the functions applicable to the type of the selected column. When you apply aggregation functions to selected columns, the columns without an aggregation function selected are automatically added to the GROUP BY clause; you don't need to configure this clause manually. To remove a function from a column, simply select the column in the Result Fields pane and then, in the Details pane, click Value.

How to Configure Filters

Adding, Removing, and Disabling Filters

To filter data by a certain column, drag this column from the Connection Object List to the Filters pane. Then click it in the Filters pane and configure the filter in the Details pane on the right side of the query as described below. Skyvia Query filters are type-specific: the available filters depend on the data type of the column. To remove a filter, click it in the Filters pane and then press the Delete key or click the icon in the Details pane. You can also temporarily disable a filter in a query without removing it. For this, click the icon in the Details pane header. This removes the filter from the generated SQL statement but keeps it in the Query Builder so that you can re-enable it. To enable it back, select it in the Filters pane and click the same button in the Details pane header again. Read more in the How to Configure Filters in Queries tutorial.

When you add several filters to a query, they can be united into groups and subgroups. Each group consists of several filters and/or subgroups united with a logical operator (AND or OR). For more information on how to create and configure filter groups, check the Filter Groups subsection.

Configuring Filters

Skyvia supports four kinds of filters: value filters, list filters, relative filters, and Parameters. Value filters are supported for all field types. List filters are supported for fields with string data only. Relative filters are supported for fields with datetime data only.
A parameter is a placeholder for varying values that you can use instead of constant values. You can change parameter values without modifying the query itself.

Value Filters

Value filters simply compare the field value with the specified one using different comparison operators, or check whether the field value is NULL. For value filters, the configuration is pretty straightforward: on the Details pane, in the Filters list, select the necessary filter. Then, if the selected filter requires a value to compare the field with (like equal to or between), specify this value or values. More information can be found here.

List Filters

List filters allow you to quickly select one or more values from a list of the distinct values available in the data source for this field, and get or exclude records with the selected values. They are especially useful when a field in the data source can have one of a fixed set of values. To configure a list filter, perform the following steps:

1. In the Details pane, click List.
2. Select checkboxes for the required values in the list. If you don't see all the required values, click the load more link at the end of the list until all the required values are loaded. If there are too many values in the list, you can use the Type to filter box to quickly find the necessary values: start typing the value, and only values containing the entered string will be displayed.
3. If you want to filter out records with the selected values, click is in list and select is not in list. If you want to retrieve the records with such values, omit this step.

Relative Filters

Relative filters allow quickly filtering records with datetime values within a specified period. To configure a relative filter, perform the following steps:

1. In the Details pane, click Relative.
2. If you want to filter out values within the period, click is in and select is not in from the list. If you want to retrieve the records with values within this period, omit this step.
3. Select the necessary period in the bottom part of the Details pane.

Parameters

You can use query parameters when you need to run the same query often with different values, for example, building the same report for different periods. To add a parameter to a query, do the following: in the Details pane, click Parameter. Skyvia automatically creates the parameter with a name corresponding to the field name. To assign a value to the parameter, click Parameters above and enter the value.

Filter Groups

If you add multiple filters, they can be united in groups. Each group consists of several filters and/or subgroups united with a logical operator (AND or OR). The group header in the Filters pane indicates which logical operator is used in the group. All means that rows must satisfy all the group's filter conditions and subgroups (the AND operator is used). Any means that rows must satisfy at least one filter condition or subgroup (the OR operator is used). Initially, one root filter group is created, and rows should satisfy all its filters and subgroups.

To add a subgroup to a group, click the group header and then click +Add subgroup in the Details pane. To change the group operator, click the group header and then click All or Any depending on the operator you want to apply. To remove a group, click its header in the Filters pane and then press the Delete key or click the icon in the Details pane header on the right side of the page. You can also temporarily disable a filter group in a query without removing it. For this, click the icon in the Details pane header. This removes the filter group from the generated SQL statement but keeps it in the Query Builder so that you can re-enable it. To enable it back, select it in the Filters pane and click the same button in the Details pane header again. Note that you can copy a filter from one group to another by dragging it to the target group.
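The All/Any groups described above map onto parenthesized AND/OR conditions in the generated WHERE clause. A sqlite3 sketch of the kind of SQL such a configuration produces; the schema and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, type TEXT, rating TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", [
    ("Acme", "Customer-Direct", "Hot"),
    ("Globex", "Customer-Channel", "Cold"),
    ("Initech", "Prospect", "Hot"),
])

# Root group = All (AND); a nested subgroup = Any (OR), in parentheses:
sql = """
SELECT name FROM accounts
WHERE rating = 'Hot'                      -- filter in the root 'All' group
  AND (type = 'Customer-Direct'           -- 'Any' subgroup: conditions
       OR type = 'Customer-Channel')      -- joined with OR in parentheses
"""
print(conn.execute(sql).fetchall())  # [('Acme',)]
```

Only Acme satisfies both the root-group condition and at least one subgroup condition, which is exactly the All-plus-Any semantics described above.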
Read the How to Configure Filters in Queries tutorial to find out more.

How to Sort Data

To sort data by a column, drag this column to the Sort Fields pane. You can drag it either from the Connection Object List or from the Result Fields pane. By default, the data are sorted in ascending order. To change the sort order, click the column in the Sort Fields pane and then, in the Details pane, click Asc or Desc. You can add as many columns to sorting as you need. To change the priority of the columns in sorting, use drag-and-drop in the Sort Fields pane to order the columns by the necessary priority. To remove sorting by a column, click the column in the Sort Fields pane and then press the Delete key or click the Delete icon in the Details pane.

How to Create Joins

In order to create joins, you need to use navigation by relations in the Connection Object List. When you open a table in the Connection Object List, you can navigate to its parent table by clicking the icon to the right of the corresponding foreign key field. If you drag a field or several fields from the parent table in the Connection Object List to the query builder after navigating to it via a relation, the query builder generates a JOIN by this relation in the FROM clause of the SELECT statement. If you have navigated to a table via several relationships in a table hierarchy, JOINs for all of the relationships used are added to the query. To navigate back to the child table, click the button in the Connection Object List header. Alternatively, you can click the button in the Connection Object List header to open a list with all the tables by whose relationships you have navigated to the current table, together with the corresponding foreign key fields used for navigation. Then click the necessary table in the list.
To add a join by a relationship to your query, perform the following steps:

1. When you want to add data from several tables having foreign key relationships, start with the "most child" table you want to include in the query, that is, a table not referenced by the foreign keys you want to use for joins. Click this table in the Connection Object List in order to access its fields and drag the necessary fields from it to the Result Fields pane.
2. Then navigate to the necessary parent table in the following way: in the Connection Object List, find the foreign key field of the child table belonging to the foreign key you want to use for the join, and click the icon to the right of this field.
3. Drag the necessary parent fields to the Result Fields pane. Query Builder automatically creates a JOIN on the foreign key you have used for navigating to the parent table.
4. If you need to add fields from a higher-level parent table, find a foreign key field of the corresponding foreign key in the current table and click the icon to the right of it to navigate to the higher-level parent table. Then drag the necessary fields to the Result Fields pane. Repeat this until you get to the "most parent" table you want to add fields from.

When you add a join, the joined parent table is assigned an alias based on the relationship names you have used for navigation. To get a better understanding of adding joins, you can read the How to Create Joins in Queries tutorial.

How Query Builder Generates SQL

While the Query Builder user interface does not have an exact 1-to-1 correspondence to the structure of the SELECT statement, it's easy to understand how actions performed in the query builder influence SQL generation. Note that different data sources use different SQL syntax, and thus, the SQL generated for different data sources from a similar design may differ. (For cloud apps, SQLite syntax is used.)
SELECT and FROM Clauses

Dragging columns from the Connection Object List to the Result Fields pane adds these columns to the SELECT clause and adds the corresponding tables to the FROM clause if they are not already present there. Note that if you drag columns from multiple tables while navigating by table relations, as described above, JOINs are generated automatically in the FROM clause. However, if you switch between tables in the Connection Object List not by relations, but just by switching back from the list of table columns to the list of tables, no joins are built. In this case, the tables are simply listed in the FROM clause, separated by commas, without any limiting conditions, and you will get a combination of every record from the first table with every record of the second table, and so on. When you apply an expression function to a field, you get an expression in SQL. Note that different databases may have different sets of functions, so the function added to the SQL may be called differently than in the Query Builder or may use a different syntax.

Grouping

When you apply an aggregation function, Query Builder adds the corresponding aggregation function to the column in the SELECT clause and creates a GROUP BY clause with all the other, unaggregated columns, if they are present. If you add aggregation functions to more columns, they are removed from the GROUP BY clause, and the corresponding functions are applied in the SELECT clause.

WHERE Clause

Every filter in the Filters pane is added to the WHERE clause. Filters are united by AND and OR logical operators and taken into parentheses according to the configured filter groups. Value filters are converted to the corresponding SQL operators. List filters are converted to IN operators. Parameters are converted to SQL parameters, indicated by a colon character before the parameter name. For relative date filters, however, there are no direct SQL analogues.
For these filters, Query Builder sometimes generates complex expressions corresponding to the filter condition, using data source-specific datetime functions.

ORDER BY Clause

The ORDER BY clause is generated based on the columns added to the Sort Fields pane. Note that if you drag a column from the Connection Object List, Query Builder uses the corresponding column name in the ORDER BY clause, but if you drag it from the Result Fields pane to the Sort Fields pane, the result column alias is used if present. If you have changed the default order from ASC to DESC, DESC is added to the generated SQL.

Managing Existing Queries

Like other Skyvia objects, saved queries are available in the OBJECTS list, and you can manage them in the same way: organize them into folders, edit or delete them, filter them by name or data source, etc. To switch to the OBJECTS list, click OBJECTS in the top menu and select the Queries tab. To open an existing query from the query editor, click the More options icon in the toolbar on the left of the query editor. In the drop-down menu, select Open existing query. See how it looks in the screenshot below. In the opened Objects Manager window, you will see all the queries you have created. Hover over the necessary query and click Select on the right. The query opens in the query editor. Please note that it will contain the previously selected connection and configured settings.

Saving Queries for Future Use

If you have opened a query and configured its settings but do not want to execute it right now, you can save it to the OBJECTS list to reuse later. For this, click Save on the Query page toolbar and enter the name of the query in the Name box.
Optionally, specify a description for the query in the Description box. Optionally, clear the Save connection reference checkbox if you want to store the query without binding it to a specific connection. In this case, you will need to select a connection for it again after you open it from the OBJECTS list. When you save a query to the OBJECTS list, all the query settings are saved, including changes made in the query builder, the SQL statement, etc. You can open or delete a saved query later in the OBJECTS list. To delete a stored query, click it and then click the Delete icon.

Query Gallery

The Query Gallery helps you automate the most common query tasks using predefined templates in just a few clicks. These templates are known as public queries, and Skyvia provides a wide range of them for different data sources, covering most common data querying needs: simply choose the required public query from the available ones. Additionally, you can study some aspects of the SQL language using these queries. To open the Query Gallery, click the More options icon in the toolbar on the left of the query editor. In the drop-down menu, select Open from Gallery. See how it looks in the screenshot below. In the opened Gallery Manager window, you will see all available public queries. You can filter queries by their names using the Filter by name box, or by the used connector, selecting it in the Connector drop-down list. When you click Use, the predefined query you have chosen opens in the query editor. Public queries are not linked to a specific connection, so you need to select a connection manually in the query editor. After that, you can use the public query immediately or create your own query using the public query as a template.
The opened query keeps the name it has in the Gallery Manager. On the basis of the public query templates, you can create your own queries and make them as simple or as complex as you need by adding or deleting extra fields and filters.

Tutorials

In this section, we provide step-by-step information on how to create joins in your queries, how to apply different filters, aggregation and expression functions, and more. This section consists of the following tutorials:

How to Configure Filters in Queries
How to Create Joins in Queries

How to Configure Filters in Queries

In this tutorial, we show how to easily configure filters using a specific example. Suppose we need to get Salesforce accounts with the Customer-Direct or Customer-Channel type added in the previous month. We can do it in two ways: with standard value filters or with list and relative filters. Using relative and list filters is more convenient in this case; however, we show both ways in order to demonstrate the usage of value filters and filter subgroups.

Value Filters

With value filters, we create the query in the following way:

1. If there are no open queries, click +NEW in the top menu and select Builder under QUERY. If there are open queries, click the button to open a tab with a new query.
2. Click Select connection and select Salesforce as the connection.
3. Drag the Accounts table from the Connection Object List to the Result Fields pane.
4. Click the Accounts table in the Connection Object List to access its fields.
5. From the Connection Object List, drag the CreatedDate field to the Filters pane.
6. Click the CreatedDate field on the Filters pane and select between in the Filter list on the right.
7. Specify the start and end date of the previous month in the corresponding boxes.
8. In the Filters pane, click All.

Now we need to specify the filter conditions to select records with the Customer-Direct or Customer-Channel type. Note that we select records with any of these values, so we need to create a filter subgroup, which is satisfied if any of its conditions is met.

9. For this, in the Details pane, click +Add subgroup.
10. All is selected in the added subgroup by default. Click Any in the Details pane.
11. From the Connection Object List, drag the Type field to the new filter group in the Filters pane.
12. Click this field in the Filters pane.
13. The equal to filter is selected in the Details pane by default, so we only need to enter the necessary value, Customer-Direct, in the box.
14. Repeat steps 12 and 13 for the Customer-Channel value.

After this, our query is ready. It looks like the following in the editor:

List and Relative Filters

With list and relative filters, the same query can be created in the following way:

1. If there are no open queries, click +NEW in the top menu and select Builder under QUERY. If there are open queries, click the button and then click Builder on the Query toolbar in order to create a new query.
2. Click Select connection and select Salesforce as the connection.
3. Drag the Accounts table from the Connection Object List to the Result Fields pane.
4. Click the Accounts table in the Connection Object List to access its fields.
5. From the Connection Object List, drag the CreatedDate field to the Filters pane.
6. Click the CreatedDate field on the Filters pane and then click Relative on the Details pane. Then click Previous month in the Details pane.
7. From the Connection Object List, drag the Type field to the Filters pane.
8. Click the Type field on the Filters pane and then click List on the Details pane.
9. Select checkboxes for the Type values to query rows with.
In our case, these are the Customer-Direct and Customer-Channel values. If the necessary values are not displayed, click Load more at the end of the checkbox list to load more values from the data source. That\u2019s all, our query is ready. It looks like the following in the editor:" }, { "url": "https://docs.skyvia.com/query/tutorials/how-to-create-joins-in-queries.html", "product_name": "Query", "content_type": "Documentation", "content": "Product: Query. Documentation Query Tutorials How to Create Joins in Queries Joins are used to combine rows from two or more tables, based on a related column between them. In this tutorial, we provide you with two examples of how to create joins in queries. Simple Example First, let us consider a simple example of joins with two tables \u2014 parent and child. For this, let\u2019s query the number of Salesforce accounts each user owns. Since there are a number of relations between the account and user tables, we need to specify the relation to join the tables by. In order to create this query, you need to perform the following steps: Click +NEW in the top menu. Click Builder under QUERY . In the Select connection list, select your connection to Salesforce. First, we need to open the child table. In our case, it is the Account table. Click the Account table in the Connection Object List. You can quickly find it by typing \u201cAccount\u201d in the Type to filter box in the top part of the Connection Object List. Drag the Records count pseudo-field from the Connection Object List to the Result Fields pane. Navigate to the OwnerId field and click the icon to the right of this field (you can use the Type to filter box to quickly find it as well). The User table will open. Drag the Name field from the Connection Object List to the Result Fields pane. That\u2019s all, our query is ready. It correctly joins the Account and User tables by the OwnerId field.
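For reference, the simple join above corresponds to SQL along these lines — a sketch assuming standard Salesforce object and field names (Account, User, OwnerId); the statement Skyvia actually generates may differ in detail:

```sql
-- Accounts counted per owning user; the joined User table gets the
-- alias Owner, matching the name of the relation between the tables.
SELECT Owner.Name, COUNT(Account.Id) AS RecordsCount
FROM Account
  INNER JOIN User AS Owner ON Account.OwnerId = Owner.Id
GROUP BY Owner.Name;
```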
You can switch to the SQL view by clicking SQL on the Query toolbar and check the generated SQL statement. Note that the User table is assigned the alias Owner, corresponding to the name of the relation between the tables. Complex Example Now let us consider a more complex example with several tables and multiple relations in the hierarchy. To demonstrate this case, we will use SQL Server and Microsoft\u2019s standard sample database \u2014 AdventureWorks. We will query the number of orders by assigned employee and customer type. Let us take a look at the tables participating in the query. We will take the number of orders from the SalesOrderHeader table and the customer type from the Customer table. Employee names are stored in the Contact table. However, we cannot simply add fields from the Contact table, because in this case the direct foreign key FK_SalesOrderHeader_Contact_ContactID by the ContactID field will be used. The tables must be joined all the way via the SalesPerson and Employee tables by the corresponding relations. In order to create this query, you need to perform the following steps: Click +NEW in the top menu. Click Builder under QUERY . In the Select connection list, select your connection to the AdventureWorks SQL Server database. When we look at the table relations, we see that the \u201cmost child\u201d table in our case is SalesOrderHeader. In our query, there will be no table for which SalesOrderHeader is a parent table. So click SalesOrderHeader in the Connection Object List. You can quickly find it by typing \u201cSalesOrderHeader\u201d in the Type to filter box in the top part of the Connection Object List. Drag the Records count pseudo-field from the Connection Object List to the Result Fields pane. Navigate to the SalesPersonId field and click the icon to the right of this field (you can use the Type to filter box to quickly find it as well). The SalesPerson table will open. Click the icon to the right of the SalesPersonId field again.
This opens the Employee table. Navigate to the ContactId field and click the icon to the right of this field. The Contact table will open. Drag the FirstName and LastName fields from the Connection Object List to the Result Fields pane. Now we want to add the CustomerType column from the Customer table to the query. Let us navigate back to our SalesOrderHeader table, which contains a foreign key to the Customer table. For this, click the button in the Connection Object List header. The breadcrumbs list displays the tables we have navigated through, along with the foreign key fields used for navigation. Click the SalesOrderHeader table. Navigate to the CustomerId field and click the icon to the right of this field (you can use the Type to filter box to quickly find it as well). The Customer table will open. Drag the CustomerType field from the Connection Object List to the Result Fields pane. That is all, our query is ready. It correctly joins the queried tables. You can switch to the SQL view by clicking SQL on the Query toolbar and check the generated SQL statement. Note that the joined tables have aliases generated by concatenating the foreign key names used for joins." }, { "url": "https://docs.skyvia.com/recent-releases/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases This section contains the list of Skyvia releases grouped by years and months. Releases include new connectors, features, and functionality. Select the corresponding topic to find information about all the new features for the needed month of the current year. You can find the releases for the past years in the corresponding folders." }, { "url": "https://docs.skyvia.com/recent-releases/2014/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases This section contains the list of Skyvia releases for the year 2014 grouped by months."
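The AdventureWorks walkthrough above joins five tables through the relation chain rather than by the direct ContactID foreign key. The generated statement is roughly equivalent to the following sketch (table and key names follow the AdventureWorks sample schema; the short aliases here are illustrative, whereas Skyvia concatenates the foreign key names):

```sql
-- Orders counted per assigned employee and customer type; Contact is
-- reached via SalesPerson and Employee, not via the direct ContactID key.
SELECT c.FirstName, c.LastName, cu.CustomerType,
       COUNT(soh.SalesOrderID) AS RecordsCount
FROM Sales.SalesOrderHeader soh
  INNER JOIN Sales.SalesPerson sp ON soh.SalesPersonID = sp.SalesPersonID
  INNER JOIN HumanResources.Employee e ON sp.SalesPersonID = e.EmployeeID
  INNER JOIN Person.Contact c ON e.ContactID = c.ContactID
  INNER JOIN Sales.Customer cu ON soh.CustomerID = cu.CustomerID
GROUP BY c.FirstName, c.LastName, cu.CustomerType;
```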
}, { "url": "https://docs.skyvia.com/recent-releases/2014/june-2014.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2014 June 2014 New Skyvia Releases The release version of Skyvia increases the list of supported online CRMs, greatly extends data import functionality, and adds support for bidirectional data synchronization between cloud CRMs. Microsoft Dynamics CRM Support Skyvia increases the list of supported online CRMs with Microsoft Dynamics CRM. Now you can import data to Dynamics CRM, export its data to CSV files, replicate Dynamics CRM data to a relational database, and synchronize Dynamics CRM data with other data sources. Import Improvements Import functionality is now greatly extended. Any of the supported data sources - CSV files, relational databases, and cloud CRMs - can be used as the source of data to import. And you can now import data not just to cloud CRMs, but also to relational databases. Additionally, you can now import binary data as a zip archive with a set of binary files together with the CSV file. Export Improvements Now Skyvia allows exporting relational database tables and views to CSV files. Synchronization Skyvia also provides powerful data synchronization functionality. Now you can easily synchronize data between relational databases and cloud CRMs in both directions. Skyvia can synchronize data with different structures, automatically tracks data changes between synchronizations, and does not require changing the data structure - no adding custom fields or columns of any kind. Secure FTP Support SFTP and FTPS connections are supported in the release version of Skyvia. Now you can use secure FTP connections for importing or exporting data to CSV files."
}, { "url": "https://docs.skyvia.com/recent-releases/2014/march-2014.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2014 March 2014 We present the new domain of Devart Data Loader and the second beta of Skyvia. New Domain Devart announces that the release version of Devart Data Loader is now available on another domain: [https://skyvia.com/](https://www.skyvia.com/) . The old link dataloader.devart.com redirects there. Devart Data Loader has also been renamed to Skyvia. The new name does not affect development, support, or any other product policies. Skyvia Beta 2 We are glad to present the second beta of Skyvia \u2014 our online service for cloud data integration. Skyvia Beta 2 is more than just a Salesforce data loader service. In addition to Salesforce import and export operations, it supports Salesforce data replication - replicating your cloud CRM data to a relational database and keeping it in sync with the CRM. Salesforce Data Export In Skyvia Beta 2 you can create Export packages that export data from Salesforce to CSV files. You can export data from any Salesforce object, including custom ones, export data from several related objects, and use powerful data filtering. You can apply multiple filters and create complex filtering conditions, order exported data, etc. Salesforce Data Replication Skyvia Beta 2 allows you to create replication packages that replicate Salesforce data to a relational database (SQL Server, MySQL, or PostgreSQL). With Skyvia you can create a copy of your Salesforce CRM data, including creation of the corresponding database tables, and keep this copy up-to-date automatically. Scheduling Packages Now you can schedule execution of Skyvia packages. Skyvia Beta 2 provides powerful scheduling settings that allow you to specify any kind of schedule.
For example, you may run your package every 10 minutes during a specified interval on Mondays and Tuesdays, or run your package every third Wednesday of a month. Other Improvements Skyvia Beta 2 allows you to load an archive with attachments and import them to Salesforce. Additionally, you can now clone Skyvia packages." }, { "url": "https://docs.skyvia.com/recent-releases/2014/september-2014.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2014 September 2014 SugarCRM and Zoho CRM Support The new Skyvia version increases the list of supported online CRMs and adds more convenience features. Support for More CRMs Skyvia increases the list of supported online CRMs with SugarCRM and Zoho CRM. Now you can import data to and from these CRMs, export their data to CSV files, replicate them to a relational database, and synchronize with other data sources. Lookup Improvements When mapping columns, you can now use lookup to get a target value not just by a source column, but also by a constant value. It can be useful, for example, when you need to map the column to an ID value of an object with a known name. Better Packages and Package Details Pages The Package Details page has become more convenient. Now it allows you to preview the connection settings on the Package Details page and quickly navigate to the details page of the package connections. The Packages page also displays the data source logos of the connections used in the package, which allows you to quickly distinguish your packages." }, { "url": "https://docs.skyvia.com/recent-releases/2015/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2015 grouped by months."
}, { "url": "https://docs.skyvia.com/recent-releases/2015/august-2015.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2015 August 2015 Cloud Accounting, Marketing, and E-commerce Apps Support The new Skyvia version offers beta support for widely used cloud accounting, marketing, and e-commerce connectors: Accounting QuickBooks \u2014 a cloud-based accounting solution for invoicing, expense tracking, and financial reporting. FreshBooks \u2014 the #1 accounting software in the cloud designed exclusively for service-based small business owners and independent professionals. The app provides easy-to-use invoicing, time tracking, expense management features, etc. Marketing MailChimp \u2014 a cloud-based email marketing solution that allows designing and sending marketing emails. ExactTarget \u2014 a provider of digital marketing automation and analytics software and services. E-commerce Bigcommerce \u2014 a leading e-commerce platform for running an online store. Now you can easily integrate your cloud accounting, marketing, and e-commerce applications with cloud applications and relational databases, back up their data, and manage their data via SQL. You can also visit our pages about QuickBooks integration: [QuickBooks Online & HubSpot CRM integration](https://skyvia.com/data-integration/integrate-hubspot-quickbooks) [QuickBooks Online & Salesforce integration](https://skyvia.com/data-integration/integrate-salesforce-quickbooks) [QuickBooks Online & Shopify integration](https://skyvia.com/data-integration/integrate-quickbooks-shopify)" }, { "url": "https://docs.skyvia.com/recent-releases/2015/december-2015.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2015 December 2015 OneDrive Support This month we have supported OneDrive in Skyvia. OneDrive is a well-known file hosting service offered by Microsoft.
Now you can export cloud and database data to CSV files on OneDrive and import CSV files from OneDrive to cloud applications and databases. Read more about OneDrive here ." }, { "url": "https://docs.skyvia.com/recent-releases/2015/february-2015.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2015 February 2015 Cloud Backup This month, the new Skyvia version goes beyond cloud data integration and offers cloud backup and restore functionality along with many other features. Cloud Data Backup Now Skyvia can prevent cloud data loss caused by user errors, account hijacking, etc. with cloud data backup. It offers both automatic daily backup and anytime manual backup for Salesforce, Dynamics CRM, SugarCRM, and Zoho CRM. Configuring and performing backup and restore operations is very easy. When restoring your data, you don't need to tinker with CSV files or any other applications - you just need to choose what data to restore and start the restore process. Skyvia does the rest for you. With Skyvia you can restore whole tables, separate records, or even separate fields if needed. Additionally, you can view and undo data changes between backups. You can always access your backed up data. Skyvia allows viewing them directly in the browser or exporting them to CSV files. Additionally, Skyvia provides powerful monitoring of your backup and restore operations. Skyvia displays all your backup and restore operations with the total numbers of backed up or restored records and the number of changes since the previous backup. If you need details, you can compare your backups and see which records were changed between them. Predefined Mapping Templates With Skyvia 3.0, cloud data migration is easier than ever before! Skyvia 3.0 introduces predefined mapping templates in Import and Synchronization packages for quick migration between different cloud sources.
Now you can quickly add tasks with predefined mapping for importing and synchronizing data between cloud data sources. Instead of configuring tasks manually, you can simply select tasks from the predefined templates. This allows you to configure data import or synchronization between cloud data sources in less than a minute! If needed, you can edit predefined tasks after adding them to your package. Replication Improvements Replicating cloud CRM data becomes easier with Skyvia 3.0. There is no longer a need to create a replication task for each cloud CRM object to replicate. Now you can just select the necessary objects in the list and start replication, unless you need to fine-tune replication with data filters or replicate only part of the object fields. Improvements of Packages and Connections Pages Now, when there are no packages or connections on the page, it displays links that allow quick creation of a package or connection of the type you need. Adding a new package or connection is also more convenient when there are packages or connections on the page. Additionally, you can now sort and filter packages or connections on the page, and these settings are preserved when you leave Skyvia. Lookup Mapping Support in Synchronization Packages Now lookup mapping is not limited to import; you can also use it in synchronization packages." }, { "url": "https://docs.skyvia.com/recent-releases/2015/may-2015.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2015 May 2015 Cloud SQL Tool The new Skyvia version offers a cloud SQL client for cloud and relational data sources and improved control of cloud source metadata cache. Online SQL Tool With Skyvia Query you will be able to use SQL to query and manage your cloud and relational data directly from a web browser.
It allows you to query data with SELECT statements and see the query results in the browser or export them to CSV files, work with several queries on the same page simultaneously, perform DML statements against both cloud and relational data, etc. Skyvia offers full support for SQL-92 syntax for cloud sources, allowing you to perform complex queries with aggregations, SQL functions, etc. against your cloud sources and get exactly the data you need, presented in the most informative way. Full DML support makes Skyvia a perfect tool for mass cloud data updates, and it takes care of protecting your data from user errors by allowing you to preview all the changes before actually applying them. For databases, Skyvia fully supports all the SQL they support, including DDL statements if necessary, and allows you to access and manage your data whenever you need and from anywhere. For more information visit the Query page . Better Cloud Metadata Cache Management By default, Skyvia caches metadata from your cloud connections and works with this cache afterwards. If a cloud object's structure has changed since then, the change may be ignored until the cache is refreshed. Skyvia 3.5 offers more options for controlling when the cache is refreshed and allows per-object manual cache refresh. First, you may configure the interval between cache refreshes for a cloud connection in the connection editor and clear the connection cache on the connection details page. Second, when editing an integration package, you can refresh metadata for a cloud object in the task editor of the task that uses this object." }, { "url": "https://docs.skyvia.com/recent-releases/2016/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2016 grouped by months."
}, { "url": "https://docs.skyvia.com/recent-releases/2016/december-2016.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2016 December 2016 Connect Product Release This month, we are glad to present Skyvia Connect \u2014 a cloud service for connecting various data-oriented applications to databases and cloud applications via a unified standard interface. Skyvia Connect is a connectivity-as-a-service solution that allows you to expose your cloud and database data via the OData protocol. OData is a widely accepted open standard for data access over the Internet, supported in a number of applications and data-oriented technologies, such as BI tools, integration solutions, office suites, mobile applications, etc. Skyvia Connect allows you to create OData endpoints for your cloud and relational data visually. After this, OData consumers can access the exposed data over the web, and Skyvia does all the background work: it translates the OData requests to your endpoints into SQL or API calls and executes them, manages endpoint security, and logs all the OData requests to your data. No Coding Required Create a RESTful OData service for your data in minutes without typing a line of code! Expose your tables via drag-n-drop, include or exclude fields visually, and Skyvia Connect will do the rest. Built-in Security Layer You don't need to give out the credentials of your database or cloud application accounts. Instead, define user names and passwords for your OData endpoint on the Skyvia side. If this is not enough, you can also limit access to allowed IPs and IP ranges. Detailed Logging Skyvia logs all connections to your Connect OData services with user accounts and IP addresses, all their requests and queries. Whenever necessary, you can monitor which users access your data and what they do with it. You can learn more about Skyvia Connect here .
Gallery of Predefined Integrations The new version of Skyvia provides a gallery of predefined integrations, automating the most common cloud application integration tasks. This makes configuring your integration even easier - just select one of the preconfigured packages from the gallery, specify connections to the cloud apps, set up a schedule for automatic execution, and that's all! The Skyvia integration gallery allows you to easily automate [creation of MailChimp subscribers from Salesforce contacts](https://skyvia.com/gallery/create-new-mailchimp-subscriber-from-salesforce-contact) or [leads](https://skyvia.com/gallery/create-new-mailchimp-subscriber-from-salesforce-lead) and [vice versa](https://skyvia.com/gallery/create-new-salesforce-contact-from-mailchimp-subscriber), [creating MailChimp subscribers from Shopify customers](https://skyvia.com/gallery/create-new-mailchimp-subscriber-from-shopify-customer), [adding new Marketo leads as Salesforce contacts](https://skyvia.com/gallery/create-new-salesforce-contact-from-marketo-lead), and many more. Let Skyvia do all these repetitive tasks for you, and free your time for more important and less mundane things! How to Use Gallery You can access the gallery with predefined integration packages in two ways: The first way is to start creating a new integration package in Skyvia. After you click the New button for creating a package, click Gallery . This will open the integration packages gallery, where you can quickly find the necessary template via search and immediately start using it. Additionally, you can open the [Gallery](https://skyvia.com/gallery/) page on this site. The Gallery page displays both the predefined queries and predefined integration packages. You can quickly filter the gallery by data sources and click the necessary package or query to open its details page. On the opened page, you can read the package or query description and see which additional configuration steps the package requires.
The description page for queries also contains their SQL code. On this page you can also immediately open the package or query in Skyvia and start using it. Data Integration Pricing Please also note that we plan to introduce pricing for data integration starting from January 3, 2017. You can see our pricing plans on our [Pricing page](https://skyvia.com/pricing/) . Skyvia will also offer a free limited pricing plan, including 5,000 records per month for any operation and an additional 100,000 rows per month for CSV export/import operations. It will allow two scheduled packages and won't allow scheduling packages to run automatically more often than once per day. Our users can already purchase subscriptions for paid pricing plans. If you buy a subscription now, it will start on January 3, 2017. Users without purchased subscriptions will be switched to the free limited pricing plan on January 3, and all its limitations will be applied. You can purchase your subscription via your Account page in Skyvia. For more details, see Subscriptions, Payments and Trials . Until January 3, you can still use Data Integration for free with no limits. Hurry up to use Skyvia for free! NetSuite Support This month, we have also released support for NetSuite \u2014 a unified cloud business management solution including ERP/financials, CRM, and e-commerce. Now you can import data from various sources to NetSuite, export its data as CSV files, replicate its data to relational databases, and synchronize it with cloud applications and databases. Skyvia also allows you to back up and restore NetSuite data and to query NetSuite data via SQL or with the visual query builder."
}, { "url": "https://docs.skyvia.com/recent-releases/2016/february-2016.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2016 February 2016 Improved Online SQL Query Tool and Pricing Introduction This month, we are glad to present the new version of [Skyvia](https://skyvia.com/) \u2014 our online service for cloud data integration, backup, and management. The new Skyvia version offers a completely redesigned and greatly improved Query \u2014 our online SQL editor for cloud and relational data. The SQL editor is now accompanied by a visual query builder for designing SQL queries without typing code. Skyvia Query also now has tools to visualize data as charts and provides the Query Gallery - a collection of predefined queries for common use cases. Visual Query Builder Skyvia Query does not require you to know SQL anymore. In Skyvia 4.0, you can design your queries visually without typing code in the new Query Builder tool. It has an intuitive interface and is not SQL-centric. Query Builder can help even an inexperienced user to quickly query data from cloud and relational sources. Whenever necessary, you can switch from the Query Builder to the generated SQL statement and tweak it in the SQL editor. Data Visualization and Export In Skyvia 4.0, we have introduced tools that allow visualizing the retrieved data as charts. You can display your data as different chart types, create charts with several series, and clearly see the true meaning of your data. The returned data can now be exported not only to CSV format but also to PDF. Charts can also be exported as PNG images or PDF files. Query Gallery Skyvia 4.0 now allows storing composed queries in the Query Gallery. If you have created a good query and want to reuse it in the future, you can save it to the Query Gallery and then load and reuse it whenever necessary. It can help you automate your routine data-related tasks.
The Query Gallery already stores a number of predefined queries for different data sources for the most common use cases. Take a look at the predefined queries; some of them may be useful to you or help you create your own queries. Read more about the Query Gallery here . Query Pricing In version 4.0, we introduce pricing for Skyvia Query . There is a free plan limited to 5 queries per day. The unlimited plan costs $19.00/month. You can see more details on our [Pricing](https://skyvia.com/pricing/) page." }, { "url": "https://docs.skyvia.com/recent-releases/2016/july-2016.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2016 July 2016 New Backup Features The new Skyvia version offers lots of new features for Backup \u2014 our cloud data backup product. New functionality allows you to see and browse data relations between backed up records and makes backed up data available in the Data Integration and Query products. Browsing Data Relations In cloud data, relations between objects are a very important aspect. When restoring several related records, Skyvia restores all the relations between these records. The new Skyvia version provides a new Record Preview page that conveniently displays all the field values of a record and all the related records. If this record has foreign keys, these foreign keys are displayed as links to the corresponding referenced records. This page allows you to quickly browse record relations and select all the related records easily. Backed Up Data Available in Data Integration and Query Skyvia now allows creating connections to backed up data, which can be used in Data Integration and Query in the same way as other connections! Now Query allows selecting your backed up data via SQL or the visual query builder and viewing it in tabular form or as charts. Now you can use such advanced Query features as complex joins, filters, aggregations, grouping, etc. with your backed up data!
And the ability to use SQL against backed up data provides even more advanced functionality. With Skyvia Backup and Query united, you will be able to query and visualize backed up data from the past and compare it with current data to get a better understanding of your business evolution. Applying Skyvia's data integration tools to the backed up data enables you to perform advanced export with filtering and joining related data. You can also load backed up data directly to a wide range of supported data sources or perform your own custom, deeply configured restore operations when standard Skyvia restore is not enough. For example, with Skyvia Import you will be able to restore data matching certain conditions, instead of picking records to restore manually. Additionally, when you need one-way data integration, you can reduce the cloud app API call usage by integrating other data sources with backed up data instead of the original cloud application. Your backed up data is read-only, so backup connections cannot be used in import integrations as a target, and you cannot perform DML statements in it. Restore Details Dialog Now, after clicking restore, you can see how many records are restored and from which objects. Additionally, you may select a connection other than the original backup connection to restore data to. Backup Pricing In version 4.2, we introduce pricing for Skyvia Backup . Skyvia can back up various data sources, and to make pricing easier to understand regardless of the data source, our pricing plans differ in storage space available. You can back up as many sources as you need and as many connections as you need, regardless of the pricing plan; the only limit is the storage space used. To improve storage space usage management, Skyvia now displays space used for every backup and allows you to remove old unnecessary backups whenever you need.
There is a free plan limited to 1 GB of space used, and it does not provide backup data search or backup comparison. You can see other plans on our [Pricing](https://skyvia.com/pricing/) page." }, { "url": "https://docs.skyvia.com/recent-releases/2016/may-2016.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2016 May 2016 Greatly Improved Data Import We are glad to present the new version of Skyvia \u2014 our online service for cloud data integration, backup, and management. The new Skyvia version offers lots of new features for Import \u2014 a member of our Data Integration toolkit for cloud and relational data. The new Import features include UPSERT support for all data sources, greatly improved Lookup functionality, data filtering support, importing data from several related source objects to one target object, etc. UPSERT Operation Support Now Skyvia Import can perform the UPSERT operation not just for Salesforce, but for all supported data sources. If a Null value is specified for the ID or primary key, the UPSERT operation inserts the record, and if a non-null value is specified, the UPSERT operation tries to update the record with the specified ID or primary key. This allows you to avoid inserting duplicate data. The primary key value can be specified in various ways: as a source column value or via lookup or external ID, etc., so you can perform UPSERT even without knowing the ID values. Powerful Lookup Mapping Lookup mapping now offers powerful functionality to get values for target fields from both target and source objects. Its new features are available for both Import and Synchronization packages. Source Lookup Now Skyvia Import supports lookup not only on objects from the target connection but also on objects from the source connection.
Composite Lookup Key Support When one lookup key column is not enough to uniquely identify the necessary record from the lookup object, you can use a lookup key that consists of multiple columns. Lookup Error Processing Now you can specify what to do when no matching record is found by Lookup - whether to throw an error or to assign the Null value to the mapped target column. Additionally, you can control what to do when multiple records are found - throw an error or take the first found record. Case-insensitive Lookup The new lookup can perform both case-sensitive and case-insensitive comparisons. Lookup Cache Lookup can optionally select the necessary fields of all the rows from the lookup object once, cache them, and then search for the necessary records in the cache. If you import a large number of rows with lookup, it can be more efficient to select all the data from a lookup object once than to select the necessary records from it for each imported row. For cloud data sources with API call limits, it can decrease the number of API calls used. Two-level Lookup For advanced mapping cases, you can use a second-level lookup to specify values for the lookup key of the first-level lookup. For example, suppose you perform an import between two CRMs or from two instances of the same CRM. Let it be an import from one Salesforce organization to another. You have already imported accounts and now want to import contacts so that they belong to the same accounts in the target as they do in the source. In the target Salesforce organization, imported accounts already have different IDs, so you cannot simply import the AccountID values of source contacts. We will use Target Lookup mapping to find the IDs of the imported accounts by account names. However, the source Contact table does not contain the names of the referenced Accounts; it contains only their IDs. So we will use the Source Lookup to supply the account name values to the first-level lookup from the source accounts by their IDs. 
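The lookup cache and error-processing options described above can be sketched as plain Python. This is an illustrative model under stated assumptions, not Skyvia's implementation: the `rows` list stands in for the lookup object, and the `on_missing`/`on_multiple` flag names are hypothetical.

```python
def build_lookup_cache(rows, key_field, case_insensitive=False):
    """Fetch all lookup rows once and index them by the key field, so
    later lookups hit the cache instead of issuing per-row queries."""
    cache = {}
    for row in rows:
        key = row[key_field]
        if case_insensitive and isinstance(key, str):
            key = key.lower()
        cache.setdefault(key, []).append(row)
    return cache

def lookup(cache, key, on_missing="error", on_multiple="first",
           case_insensitive=False):
    """Resolve a key against the cache, modeling the error-processing
    options: raise on a missing key or return None (Null); raise on
    multiple matches or take the first found record."""
    if case_insensitive and isinstance(key, str):
        key = key.lower()
    matches = cache.get(key, [])
    if not matches:
        if on_missing == "error":
            raise LookupError(f"no record found for {key!r}")
        return None  # assign Null to the target column instead of failing
    if len(matches) > 1 and on_multiple == "error":
        raise LookupError(f"multiple records found for {key!r}")
    return matches[0]

accounts = [{"Id": "A1", "Name": "Acme"}, {"Id": "A2", "Name": "Globex"}]
cache = build_lookup_cache(accounts, "Name", case_insensitive=True)
row = lookup(cache, "ACME", case_insensitive=True)  # one fetch, many lookups
```

The design point mirrors the text: one bulk fetch builds the cache, and every imported row afterwards resolves against it locally, which is what saves API calls on rate-limited cloud sources.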
Another use case of the two-level lookup is when you need to get values from a table that is the parent of the source table's parent table. You can take the foreign key value of the first parent table in the second-level lookup, and then take the necessary value from the grandparent table in the top-level lookup. Data Filtering Skyvia 4.1 provides data filtering for the source data. It can filter data in two ways. First, it provides advanced filter settings that allow you to configure custom filters as complex as you may ever need. Additionally, it allows importing only recently changed or inserted data. In a single click, Skyvia allows configuring an import task to load only the records that were inserted or updated since the last successful package run (or since package creation, if it was never run). This allows you to schedule an import package for automatic execution, and this package will load only the data changes each time instead of loading all the source data. Additionally, this feature can be used for creating a trigger-action-like integration. You can configure an import operation that will perform some data change in a target when a record is created or updated in a source and schedule it to execute every few minutes. Import Joined Tables Another new Import feature is the ability to import data not just from a single object or table, but to join several related tables and use their columns in mapping directly. For example, when importing data from Salesforce contacts, you can join the fields from their accounts, etc. File Mask Support For importing CSV files from FTP/SFTP and Dropbox, the new Skyvia Import allows specifying not only file names but also file masks that include a date variable. When such an Import package is scheduled for automatic execution, it will search for the file to import by substituting the current date into the mask. 
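The file-mask behavior above can be sketched as a small Python helper. Note the assumptions: the `{date}` placeholder and the ISO date format are illustrative stand-ins, since the release notes do not show Skyvia's actual mask syntax.

```python
from datetime import date

def resolve_file_mask(mask, run_date=None):
    """Substitute the current date into a file mask before searching
    for the file to import. The '{date}' placeholder and YYYY-MM-DD
    format are assumptions for illustration, not Skyvia's syntax."""
    run_date = run_date or date.today()
    return mask.format(date=run_date.isoformat())

# A scheduled run on 2016-05-10 would look for this file on FTP/SFTP:
name = resolve_file_mask("orders_{date}.csv", date(2016, 5, 10))
```

Each scheduled execution resolves the mask against that day's date, so the package picks up a freshly uploaded daily file without any manual renaming.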
Support for Self-Referencing Relations When using the Relation mapping, Skyvia now supports relations where an object references itself. An example of such a relation is the Salesforce Account object, which references itself via the ParentId field. Now Skyvia can correctly build a self-referencing relation in the target, based on such a relation in the source." }, { "url": "https://docs.skyvia.com/recent-releases/2016/october-2016.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2016 October 2016 New Connectors The new Skyvia version offers beta support for widely used marketing, e-commerce, and e-support tools and Google Apps: Google Apps \u2014 including Google Contacts, Google Calendar, and Google Tasks. Marketo \u2014 a cloud lead management and marketing solution. HubSpot \u2014 a leading CRM, marketing, sales, and customer service platform. Freshdesk \u2014 a cloud customer support ticketing system. Zendesk \u2014 a cloud customer support ticketing system with customer satisfaction prediction. Shopify \u2014 an e-commerce platform for selling online, at a retail location, and everywhere in between. Now you can easily integrate your cloud marketing and e-support applications with other cloud applications and relational databases, back up their data, and manage their data via SQL. You can also work with data from Google Contacts, Google Calendar, and Google Tasks." }, { "url": "https://docs.skyvia.com/recent-releases/2017/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2017 grouped by months." 
}, { "url": "https://docs.skyvia.com/recent-releases/2017/april-2017.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2017 April 2017 SQL Data Warehouse Support Skyvia continues to extend the list of supported data sources, particularly cloud data warehouses. After the recent introduction of support for Google BigQuery and Amazon Redshift, we are glad to announce beta support for the Azure SQL Data Warehouse service in the new Skyvia version. Cloud data warehouse services, like Azure SQL Data Warehouse , are fully managed and offer greater performance and scalability, plus support for petabyte-size databases, as well as the ability to quickly analyze huge volumes of data. Azure SQL Data Warehouse provides fast provisioning and release of computational and storage resources. Thus, Azure SQL Data Warehouse can greatly reduce database operational costs, because customers only pay for the resources they use. However, initial migration (porting or mirroring) of large volumes of on-premise and/or cloud data to Azure SQL Data Warehouse will likely require significant time and effort. Skyvia offers users a simple and pain-free way to load data from a cloud application or in-house database to Azure SQL Data Warehouse. Skyvia helps its users automate the replication operation, allows visual configuration of the replication operation (with no coding!), and uses the most efficient technique \u2014 PolyBase \u2014 to move data to Azure SQL Data Warehouse. Skyvia can dramatically reduce the time and resources required for porting large volumes of data!" }, { "url": "https://docs.skyvia.com/recent-releases/2017/june-2017.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2017 June 2017 Amazon S3 and Azure File Storage Support Now Skyvia supports file storage services from the most popular cloud service providers: Amazon S3 and Azure File Storage . 
You can import CSV files from these storage services to cloud applications and databases or export cloud and database data to CSV files uploaded to these services. Magento 2 and FreshBooks Alpha API Support We are glad to announce support for the second version of the Magento e-commerce platform. Now our users can integrate, back up, and work with data of both Magento versions. Additionally, Skyvia now supports the new FreshBooks Alpha API. Now our users can move on and stop using the deprecated FreshBooks Classic API and switch their connections to the latest Alpha API." }, { "url": "https://docs.skyvia.com/recent-releases/2017/march-2017.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2017 March 2017 Support for Amazon Redshift, Google BigQuery and MailChimp API 3.0 We are glad to announce beta support for such data warehouse services as Amazon Redshift and Google BigQuery, as well as MailChimp API 3.0, in the new Skyvia version. Cloud data warehouse services, like Amazon Redshift or Google BigQuery , are fully managed and offer greater performance and scalability, plus support for petabyte-size databases, as well as the ability to quickly analyze huge volumes of data. Both Amazon Redshift and Google BigQuery provide fast provisioning and release of computational and storage resources. Thus, they can greatly reduce database operational costs, because customers only pay for the resources they use. However, initial migration (porting or mirroring) of large volumes of on-premise and/or cloud data to Amazon Redshift/Google BigQuery will likely require significant time and effort. Skyvia offers users a simple and pain-free way to load data from a cloud application or in-house database to Amazon Redshift/Google BigQuery. 
Skyvia helps its users automate the replication operation, allows visual configuration of the replication operation (with no coding!), and uses the most efficient techniques to move data to Amazon Redshift/Google BigQuery. Support for MailChimp API 3.0 Skyvia can now optionally use MailChimp API 3.0 to connect to MailChimp. Now you can access subscriber notes and goals, campaign data, and other objects which weren\u2019t accessible before via API 2.0. By default, Skyvia still uses API 2.0; you can switch to API 3.0 when creating or editing a connection." }, { "url": "https://docs.skyvia.com/recent-releases/2017/may-2017.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2017 May 2017 Online Support System We are glad to announce the launch of the new Skyvia online support system at [https://support.skyvia.com](https://support.skyvia.com) . The new support system allows you to create tickets with your bug reports, questions, and feature requests online, browse, read, vote on, and comment on tickets of other users, follow not just your own tickets but also those of other users, and get notifications about answers by email. If your ticket contains sensitive information, you can create a private ticket, visible only to our support team. We hope our new support system will help you get better support faster. To improve user experience with the new support system, we have also made some changes to our authentication system. We introduced Single Sign-On authentication: you only need to sign in to Skyvia once, and then you can access both the Skyvia service at [https://app.skyvia.com](https://app.skyvia.com) and the Skyvia support system at [https://support.skyvia.com](https://support.skyvia.com) with the same account. Skyvia now redirects you to id.skyvia.com for authentication." 
}, { "url": "https://docs.skyvia.com/recent-releases/2017/october-2017.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2017 October 2017 New Connectors In October, Skyvia has released Oracle and [OData](https://skyvia.com/connectors/odata) connectors. With the new Oracle connector, you can integrate Oracle with a number of already supported cloud apps and databases. You can use it to connect to both Oracle Cloud and on-premise Oracle Database servers, if the latter are available from the Internet. Skyvia allows import and synchronization of Oracle databases, replication of cloud apps' data to Oracle, and CSV import/export from/to Oracle. It also includes an online SQL tool for Oracle \u2014 Query . The OData connector enables you to connect to any of a wide variety of data sources that allow access to their data via the universal OData interface. It enables you to import and export data to and from these sources as well as to work with their data using SQL queries." }, { "url": "https://docs.skyvia.com/recent-releases/2018/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2018 grouped by months." }, { "url": "https://docs.skyvia.com/recent-releases/2018/august-2018.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2018 August 2018 New Connectors In August, we added support for Podio, SendPulse, Slack, and Streak. With the new connectors, you can quickly integrate, back up, manage, and share your data. More information about the cloud applications can be found below: Podio \u2014 a cloud collaboration service for organizing team communication, automating workflows, task management, etc. 
SendPulse \u2014 a cloud-based multi-channel marketing platform for sending marketing messages via email, SMS, Viber, etc. Slack \u2014 a cloud team collaboration service with a number of tools for online calls, file sharing, etc. Streak \u2014 a CRM platform for Gmail that adds features to send personalized emails and manage your conversations." }, { "url": "https://docs.skyvia.com/recent-releases/2018/january-2018.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2018 January 2018 Google Sheets Add-on This month, we are proud to release [Skyvia Query Google Sheets Add-on](https://workspace.google.com/marketplace/app/skyvia_query/134536098526) \u2014 a tool for querying cloud and database data to Google Sheets . The add-on allows you to use the visual Query Builder or SQL to query data from a wide variety of data sources to Google Sheets and works directly from the Google Sheets interface. Thus, you can quickly get the required data for your reports to a Google Sheets workbook and then share your data via the secure Google Docs sharing mechanism. Skyvia Query Google Sheets Add-on works via Skyvia Query . It uses your data source connections created on the Skyvia platform, uses the Query Gallery to store your saved queries, and shares [pricing plans and limitations](https://skyvia.com/pricing/) with Skyvia Query. Connect Pricing We have also released the latest version of Skyvia \u2014 the universal cloud platform for data integration, backup, management, and access. The [Connect](https://skyvia.com/connect/) OData cloud server is now out of Beta status and has become commercial. There is a free pricing plan available, which allows you to create one OData endpoint for cloud data only and test it within the limit of 100 KB of traffic per month. 
Higher pricing plans allow you an unlimited number of endpoints for cloud and database data, much more traffic, endpoint access with user authentication, and the ability to specify IP addresses and ranges for which access to an endpoint is allowed or denied. You can see the details of our pricing on our [Pricing](https://skyvia.com/pricing/) page. For all current Skyvia Connect users who already have OData endpoints created, we provide a two-week trial period with no limitations that will last up to January 18, 2018, so you have time to choose a pricing plan that best suits your needs. After January 18, 2018, all pricing plan limitations will be applied to our existing Connect users too." }, { "url": "https://docs.skyvia.com/recent-releases/2018/june-2018.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2018 June 2018 New Connectors In June, Skyvia has added support for Jira, WordPress and Pipedrive. With the new connectors, you can quickly integrate, back up, manage, and share your data. More information on the new cloud applications: Jira \u2014 an issue tracking and project management solution. WordPress \u2014 a free and open-source CMS. Pipedrive \u2014 a cloud CRM for scaling companies." }, { "url": "https://docs.skyvia.com/recent-releases/2018/march-2018.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2018 March 2018 We are proud to release the latest version of [Skyvia](https://skyvia.com/) \u2014 the universal cloud platform for data integration, backup, and management. Now it includes Stripe, ShipStation and ActiveCampaign connectors and improved package run history. 
New Data Sources With the new Skyvia connectors, you can quickly integrate, back up, manage, and share data of the following cloud applications: Stripe \u2014 an online payment provider ActiveCampaign \u2014 an e-mail marketing automation solution ShipStation \u2014 a web-based multi-carrier shipping solution Run History Improvements Now you can analyze the log of your integration runs in a more convenient way, especially if your integrations run often. You can simply specify the interval you want to see package runs for instead of scrolling and loading package runs from the most recent ones to older ones." }, { "url": "https://docs.skyvia.com/recent-releases/2018/october-2018.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2018 October 2018 Email Notification Feature and New Connectors Released We have released the latest version of [Skyvia](https://skyvia.com/) . Now it supports more Zoho Apps and offers the email notifications feature in beta. In addition to Zoho CRM, we have added support for Zoho People and Zoho Inventory . Email Notifications With the new Email Notifications feature, you will always stay informed in case of failures in your Data Integration and Backup packages. You won't need to open your packages and monitor their run history anymore in order to be sure that everything works. In case of a failure in your package, you will be immediately informed by an email sent to the email address of your account or any other email address of your choice. Please note that in order not to spam your email address, Skyvia won't send multiple emails if there are multiple failures within a brief period of time. You can find more details about email notifications and their frequency in our documentation . Support for Zoho Apps In addition to Zoho CRM, Zoho People and Zoho Inventory have also been supported. 
Brief information about the new cloud apps is provided below: Zoho People \u2014 a cloud HR processes automation software with easy time and attendance tracking. Zoho Inventory \u2014 a cloud inventory management software for growing businesses." }, { "url": "https://docs.skyvia.com/recent-releases/2019/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2019 grouped by months." }, { "url": "https://docs.skyvia.com/recent-releases/2019/august-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2019 August 2019 Data Integration in Skyvia New We worked hard over the past half a year, and now we are ready to present data integration on Skyvia New! Feel free to try the new data integration features at new.skyvia.com. Package Editor Improvements The package editor on Skyvia New was also redesigned for your convenience. Source and target settings are now located on the left, and tasks - on the right. Schedule and parameters moved to separate dialog windows and became clearer. Setting the package source and target became much more convenient, as you don't need to select source and target type anymore and can just select the corresponding connections from the list. Package tasks can now be reordered simply via drag-n-drop. Advanced Package Execution Logging The new interactive log features, previously available for Skyvia Connect, are now also available for data integration. Now you can view package executions for a month or another period, quickly narrow the period down to the one you are interested in or specify another custom period, filter package runs by success/error, and view the details of the necessary event in just a couple of clicks. 
Besides the Log tab with a package activity diagram, Skyvia New also provides the Monitor tab with a list of the most recent package runs, which also shows the active package run status if the package is running. Profile Management In addition to data integration, Skyvia New also gets profile management functionality. You can now manage your profile settings on Skyvia New as well as on Skyvia Classic." }, { "url": "https://docs.skyvia.com/recent-releases/2019/december-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2019 December 2019 New Connector \u2014 Insightly CRM The Skyvia team has added support for Insightly CRM, a cloud CRM platform for marketing, sales, and project management automation. Integrate Insightly CRM with Other Data Sources Now one can easily migrate data between Insightly CRM and other CRMs, like HubSpot, Salesforce, Dynamics CRM, Zoho CRM, etc., in any direction with Skyvia, or even configure a bi-directional synchronization between them. For example, in just a few minutes you can configure a Skyvia integration loading Insightly Organizations and Contacts to HubSpot, preserving relations between them. The same can easily be done in the reverse direction too, and of course, the list of objects that can be loaded with Skyvia is not limited to only contacts and organizations. Skyvia also allows integrating Insightly CRM with other cloud applications \u2014 accounting and marketing tools, e-commerce and e-support solutions, etc. You can load data from Insightly to relational databases and vice versa, export Insightly data to CSV files and import CSV files to Insightly, and much more. 
The full list of the various cloud applications and databases supported by Skyvia can be found here . Insightly CRM Data Backup Insightly performs regular daily backups of their data for disaster recovery, but they don't provide individual backup or data restoration services to recover your data after, for example, an accidental deletion or account hijacking. They offer such options as CSV export of some main information or export to XML, which cannot then be reimported to Insightly CRM automatically. Fortunately, they also provide an API for integration with third-party solutions, and integrating Insightly CRM with Skyvia provides you with a choice of options for backing up Insightly CRM data. For backing up and restoring Insightly CRM data, Skyvia can provide various tools \u2014 from much wider CSV export capabilities (allowing you to export nearly all of your Insightly CRM data automatically on schedule) to copying Insightly CRM data to your own database or cloud data warehouse and keeping this copy up-to-date automatically, and finally to a dedicated cloud data backup and restore solution . The latter option is a convenient and easy-to-use cloud service, which allows you to configure Insightly CRM backup in just a couple of minutes and perform automatic daily and anytime manual backups. While having some of its own limitations, Skyvia Backup can back up more Insightly CRM data than the native CSV export. Besides, it allows viewing and restoring data of objects that can be written via API directly from a web browser. You simply select your data in the browser and click the Restore button. No need to fiddle with CSV files or configure complex imports \u2014 just restore data in a couple of clicks. Working with Insightly CRM Data Skyvia also offers the Query product to query your Insightly data using either the visual Query Builder or the SQL language directly, without the need to prepare and export data to a database first. 
You can perform complex SQL statements to get data prepared for the necessary report and then export the returned data to a CSV or PDF file. Skyvia Query also allows running simple DML SQL statements \u2014 and thus easily performing mass Insightly CRM data updates. Skyvia Connect allows publishing Insightly CRM data via the OData protocol without the need for coding, hosting, and maintenance. Connecting Skyvia to Insightly CRM Connecting Skyvia to Insightly CRM is very easy \u2014 you only need to specify your API Key. You can find this API key in your Insightly CRM user settings \u2014 at the bottom of the User Settings page. Multi-Delete Feature This month, we have also released the Multi-Delete feature in Skyvia New. With this feature, it will take you just a few seconds to delete old connections or agents with all the dependent objects you don\u2019t want to use anymore. Bulk Delete of Objects You can bulk delete unneeded objects, such as old connections with dependent integration packages, queries, and backups, or unused agents with their dependent items, on the OBJECTS page \u2014 on the All tab or the corresponding tabs \u2014 Connections , Agents , Packages and others. First, scroll down and check the created objects. Second, select either all or only specific objects for deletion, using the Ctrl key and your mouse. Third, simply click the small wastebasket icon in the upper right corner of the page to delete the objects. You can also use drag-n-drop to send selected objects directly to the Trash on the left. It\u2019s important to remember that if you choose to delete a certain connection or agent, it may often have dependent objects (packages, queries, backups or endpoints) which you haven\u2019t selected. In this case, an additional warning message pops up, prompting you to delete the dependent objects that are not selected. If you are sure that you want to delete them, click the OK button in all the messages. 
Deleted objects and their dependencies are stored in the wastebasket for 2 weeks . After this period, they will be permanently deleted, and you will not be able to restore them. Safe Restore of Deleted Objects In case you find out that you have accidentally removed the objects you need, you can always restore them right away or within the 2-week period. For this, simply click the wastebasket on the bottom left of the page to look through deleted objects with their dependencies \u2014 packages, queries, endpoints, etc. Select and restore those you deleted accidentally by clicking the Restore from trash icon highlighted in blue on the screenshot. Your data and objects will be restored, and you can use them again at any time convenient for you. You can also force permanent deletion of objects from the wastebasket right away. For this, select either all or some objects inside the wastebasket and click the Delete from trash icon." }, { "url": "https://docs.skyvia.com/recent-releases/2019/january-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2019 January 2019 Welcome to Skyvia New The new Skyvia is coming! It will have a new, faster internal implementation with more capabilities, and a redesigned user interface, more convenient and intuitive. While there is still a long way to go, you can immediately get your hands on our newest features that are ready to use with Skyvia New, available in beta. Basic Concepts Access Object Visibility Pricing Aspects Skyvia New is available at https://new.skyvia.com/. You can access it with the Skyvia account that you normally use to sign in to our service, as well as with a Google or Salesforce account. No additional registration or actions are required. Note that all supported objects that you create on our current Skyvia service are immediately available on Skyvia New. Currently, these objects include connections , user's queries , and endpoints . 
However, objects created on Skyvia New may use features not available in the current Skyvia service version, and thus, they are available only on Skyvia New. Your pricing plans and subscription limits are shared between the current Skyvia service and Skyvia New. The actions that are counted toward a pricing plan limit on the current Skyvia service are counted toward the same pricing plan limit in the same way on Skyvia New. Redesigned User Interface A completely redesigned interface allows you to work faster in Skyvia, providing easier access to frequently used features and being more intuitive in general. It will definitely make you more productive. Quick Creation of New Objects In Skyvia New you can start creating any object from any page. The quick + New menu, which allows creating a new object of any kind, is always at hand. Full-page editors for connections, replacing connection editor dialog boxes, provide a more intuitive and convenient way to create and edit connections. Organizing Your Objects New Object List Folders to Group Objects Quick Actions Skyvia New displays all the objects \u2014 connections, endpoints, etc. \u2014 together, allowing you to quickly find the necessary object and view its most important details. You can view objects as you prefer \u2014 as a list or as cards, with or without grouping by the object kind. With folders, you can organize your objects so that all objects related to a specific task or project are in the same folder, and objects for other tasks/projects are placed in other folders. Then you may click a folder to see only the objects that are in this folder and avoid clutter if you have too many objects. Test connections, enable or disable endpoints, or open them for editing in a single click, directly from the new object list. The most used actions for the objects can be performed just by pointing at an object and clicking the necessary button. 
Connect to On-premise Data Sources via Skyvia Agent Skyvia New introduces a new way to connect to your on-premise data, hosted on your local computer or in your local network \u2014 Skyvia Agent. Skyvia Agent is an application that is installed locally and serves as a secure gateway to load data between on-premise data sources and Skyvia. Connect to local data sources not available from the Internet directly. No need to configure a firewall in most cases \u2014 Skyvia Agent uses the default HTTP port. Load data between your data source and Skyvia securely \u2014 strong TLS/SSL encryption is used. Run Agent as an application or as a service to always stay connected. Store your connection parameters on Skyvia or locally, on the Agent side. Just create an Agent, download and install the Agent application and key, start the Agent, and create connections via this Agent on Skyvia." }, { "url": "https://docs.skyvia.com/recent-releases/2019/july-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2019 July 2019 BigCommerce API v3 Support Recently, we have introduced support for BigCommerce API v3 in [Skyvia](https://skyvia.com) \u2014 the universal cloud platform for data integration, backup, and management. Skyvia does not enforce the use of API v3, however. It allows users to select the API version for their connection when creating it. API Differences The new BigCommerce API was introduced to simplify product management, reduce the number of API calls used, and improve API performance. There are a number of differences between API v2 and API v3 regarding product management. EXAMPLE PRODUCTS AND PRODUCT VARIANTS DIFFERENCES API v3 API v2 In API v3, product variants are available via the ProductVariants object. This object contains records for all the products. Even if a product does not have multiple variants, it will have a base variant record. 
For API v2 connections, the information about product variants is available via the ProductSKUs object. It contains records only for products with multiple variants. Option management via API v3 is also significantly changed. The options you create via API v3 are product-specific. They cannot be reused for other products, and they are not available via the BigCommerce Control Panel. Options and their values can be accessed via the ProductVariantOptions and ProductVariantOptionValues objects, and each record must reference the product for which this option is used. To create a product with multiple variants via API v3, you need to create a record in the Product object first, then add the necessary ProductVariantOptions and ProductVariantOptionValues records, and finally add the required ProductVariant records. To create a product with multiple variants, based on product options, via API v2, one should instead create records in the Options object, then create values for these options via the OptionValues object. Then the user needs to create an option set for the product via the OptionSets object and assign the necessary options to it via the OptionSetOptions object. After this, the user can load a record with a reference to the corresponding OptionSet into the Product object and add records for its variants to the ProductSKUs object, providing values for the options from the option set via its Options field. Unlike in API v3, the options and option sets created via API v2 can be reused for other products, and are available via the BigCommerce Control Panel. Other New Features in API v3 As for new features introduced with API v3, modifier options (product options that don't create new variants/SKUs) are now available via the separate ProductModifiers and ProductModifierValues objects. It is now also possible to work with metafields. These are fields that are available only via API, and can be made specific to an application or available to other applications. 
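The two variant-creation sequences described above differ mainly in ordering and in which objects must exist before which. A small Python sketch makes the ordering explicit; the object names come from the text above, while the helper function itself is purely illustrative.

```python
def variant_creation_steps(api_version):
    """Return the ordered list of objects one creates to build a
    product with multiple variants, per BigCommerce API version.
    The ordering matters: each later object references earlier ones."""
    if api_version == "v3":
        # v3: options are product-specific, so the Product comes first.
        return ["Product", "ProductVariantOptions",
                "ProductVariantOptionValues", "ProductVariant"]
    if api_version == "v2":
        # v2: reusable options and option sets exist before the Product,
        # which then references the OptionSet; SKUs come last.
        return ["Options", "OptionValues", "OptionSets",
                "OptionSetOptions", "Product", "ProductSKUs"]
    raise ValueError(f"unknown API version: {api_version}")
```

The contrast is visible at a glance: v3 starts from the Product and hangs everything off it, while v2 builds reusable option machinery first and attaches the Product near the end.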
Metafields are available via the BrandMetafields, CategoryMetafields, ProductMetafields, and ProductVariantMetafields objects. Object Support and Differences in API v3. BigCommerce API v3 is not complete and does not yet cover all BigCommerce features. When using a BigCommerce API v3 connection, Skyvia works with some BigCommerce objects, supported in API v3, via API v3, while other objects are accessed via API v2. With an API v3 connection, Skyvia provides access to the objects related to products, product variants, options, brands, metafields, and price lists via API v3. The documentation shows the schema of these objects, as well as the schema of the corresponding objects available via BigCommerce API v2 connections in Skyvia. Please note that these schemas may not reflect all the relations between BigCommerce records, because some of them are implemented via JSON and JSON array fields. For example, the relation between ProductVariants/ProductSKUs records and options is not reflected there, because it is stored within the JSON array values of the OptionValues/Options field. All other objects are accessed via BigCommerce API v2 when using both API v2 and API v3 connections, because BigCommerce has not yet released API v3 for them. Here is the list of these objects: BlogPosts, BlogTags, Redirects, Customers, CustomerCustomFields, CustomerAddresses, CustomerAddressesCustomFields, CustomerGroups, Countries, States, Coupons, GiftCertificates, Orders, OrderCoupons, OrderMessages, OrderProducts, OrderProductOptions, OrderShippingAddresses, OrderStatuses, OrderTaxes, OrderShipments, PaymentMethods, ShippingMethods, ShippingZones, Store, TaxClasses. Authentication Changes. For API v3, only OAuth authentication is supported. You can get parameters for OAuth (Store Credentials) authentication by creating an API account in the Advanced Settings of your BigCommerce store Control Panel.
For API v2, Skyvia supports both OAuth authentication and legacy HTTP Basic authentication. Note that BigCommerce strongly recommends using OAuth authentication, and Basic authentication is not available at all for newer BigCommerce stores registered after July 31, 2018. Please note that connections using App Credentials cannot be created manually. We are going to implement such connections soon, and they will be created automatically when the user installs Skyvia into BigCommerce via the BigCommerce App Marketplace." }, { "url": "https://docs.skyvia.com/recent-releases/2019/november-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "November 2019. Releasing Backup in Skyvia New. Great news for our regular users and for those willing to give our products a try: Backup is now released in Skyvia New with improved functionality, a new ergonomic design, and the latest useful features! We expect you to benefit greatly from these updates and back up your data safely and easily each day. New Backup Terminology. We have made changes to our Backup terminology: backup packages have been renamed to backups, and backups to snapshots. Check these updates in Skyvia New and get familiar with the new terms in practice while making your backup. New Ergonomic Design. The new design is one of the most notable features, as it has become simpler and more user-friendly. It offers easier and quicker connection and backup creation and a convenient layout for the new backup features, tabs, and timelines. Convenient Backup Wizard. Backup packages are now created in the convenient Backup Wizard through a series of well-defined steps. To create a new Backup, simply click the +NEW button in the top menu and switch to the Backup Wizard in Skyvia New. The Backup Wizard now contains a progress bar that shows how far you have progressed in creating your backup.
First, select a connection or create a new one by clicking +Add new in the Backup Wizard. Second, select the objects you want to back up (in our example, the Salesforce Account object) and proceed by clicking Next step. The 3rd and 4th steps are easy: enable the backup to run on a certain schedule (if needed) and name your backup package. Having completed all the steps and created a package, you can start backing up your data by clicking Run at the top right of the page. New Backup Features. The backup functionality has been greatly improved for users' comfort. Now each backup page has four tabs with advanced features: Overview, Settings, Snapshots, and Activities. They help our clients check their backup activity and success history. In Skyvia New, we have removed the calendar display of snapshots and restores. Now you can display them only in a list format, which makes the display of events visually more appealing. The old timeline of snapshots and restores has been redesigned into two tabs, Activities and Snapshots, each with extra possibilities for users. The Activities tab displays all backup snapshots and restores in chronological order, with the time and date of their completion and the number of records, errors, and modifications obtained during the backup run. If you want to view more details on a certain snapshot, click it and check the info in the Details window, which opens on the right. If you have a long list of snapshots and restores, simply scroll down and choose any snapshot or restore to view its details without leaving the tab. The Snapshots tab shows all snapshots you have made and their status, as well as the time and date of their execution. When you click a certain snapshot, a page opens with detailed information on this snapshot. Here you can also find 3 tabs, All, Failed, and Changed, that show details on all records, or on those which failed or were modified.
Knowing this information, you can restore changed records if needed or check why some records have failed. We have also completely redesigned navigation in the backed-up data. You no longer need to use inconvenient breadcrumb navigation to return to the list of backed-up objects or the list of snapshots. When you view your snapshot details or backed-up data, you always see the list of objects and the list of snapshots on the left and can switch between them instantly. The new navigation allows you to easily switch between objects using the object list without the risk of clicking a wrong breadcrumb and losing your selection. The Overview tab is intended for viewing general information about the backup (the cloud connection used, storage space, total number of snapshots, details of the last snapshot, etc.). What is convenient now is that you can view or edit a cloud connection and save changes to it without leaving the tab. Moreover, storage space is visualized much better. Now you can see your total storage for backups, the storage you have already used, and the space taken by each backup. Everything is shown in colors, so you can easily understand when to proceed with your next cleanup. The Settings tab shows the objects selected for a backup and your current schedule. You can modify the objects at any time, save the latest changes, and back up your data again. On this tab, you can also enable, disable, or remove the backup schedule." }, { "url": "https://docs.skyvia.com/recent-releases/2019/october-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "October 2019. Jira Connector Updates. The Skyvia team is glad to announce fresh updates to the Jira connector. We have significantly improved support for Jira Cloud by adding the ability to connect via Jira API v3 and to use OAuth authentication. Now you need to explicitly specify whether you are connecting to Jira Cloud or Jira Server when creating a connection.
The procedure for connecting to Jira Server remains the same. For Jira Cloud, you will need to specify several additional parameters. Jira API Versions. Skyvia now supports Jira API v3 in your connections. Jira API v3 is currently available in beta, and its main new feature is support for the Atlassian Document Format in issue descriptions and environments, comments, and custom text area fields. You can find more information about the Atlassian Document Format in [Jira documentation](https://developer.atlassian.com/cloud/jira/platform/apis/document/structure/). Skyvia keeps support for both API v2 and API v3, and you can select the API version to use when creating or editing a connection. The main difference between the API versions in Skyvia is that, via API v2 connections, the above-mentioned fields are available as usual text fields storing text in [Jira formatting notation](https://jira.atlassian.com/secure/WikiRendererHelpAction.jspa?section=all). In API v3 connections, they are presented as JSON objects in the Atlassian Document Format. The root node properties of these JSON objects are available as separate fields: Version, Type, and Content. The latter contains the actual content as a JSON array of document nodes. Authentication. Skyvia now supports both Basic and OAuth authentication for Jira Cloud. When using OAuth authentication, you just need to sign in to Jira and allow Skyvia to access your Jira data. For Basic authentication, you need to enter your Atlassian account email and an [API token](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/). If you do not have a token, you can generate a new one at [https://id.atlassian.com/manage/api-tokens](https://id.atlassian.com/manage/api-tokens). Note that in either case, your Atlassian account password is not stored on Skyvia. OAuth authentication, while more convenient because there is no need to fiddle with tokens, has certain limitations.
Some Jira objects, for example, ApplicationRoles, Dashboards, and AuditRecords, don't support OAuth authentication. Such objects can be accessed only via connections using Basic authentication. If you try to access them via a connection using OAuth authentication, you will get the error "OAuth 2.0 is not enabled for this method."" }, { "url": "https://docs.skyvia.com/recent-releases/2019/september-2019.html", "product_name": "Unknown", "content_type": "Documentation", "content": "September 2019. New Feature in Skyvia New: Dependencies Between Objects. We would like to announce that we have added the DEPENDENCIES feature on the OBJECTS page in Skyvia New. This feature helps you track dependencies between objects. You can easily find out which connections and agents are important for you and have relevant dependent objects, and which are out of use and can be deleted from your list without affecting the work of your integrations. To try the new feature, go to the OBJECTS page. Hover over a specific Connection or Agent to see quick action buttons. By clicking the View Dependencies button, you apply filter settings to dependent objects. When you apply filter settings, you see the Show dependencies for box in the toolbar, with the dependent objects listed below. The more objects you have created with a certain connection, the larger the list will be. We would also like to draw your attention to the different levels of dependencies between objects in Skyvia New. For example, connections in the system may depend on agents, while packages, endpoints, backups, and queries depend on connections. You can easily check it yourself on the website by opening the page of an agent or connection you use frequently and viewing its dependent objects. On the Connection page, click the View Dependencies button to open the list. Select the object (for example, an import package) you want to switch to.
You can edit or delete the package if necessary. However, you cannot delete a connection or agent from your list without removing its dependent objects first. If you want to manage related objects or organize objects into folders, click the View on Objects Page button at the bottom of the Dependencies list and switch to the OBJECTS page with different view options. Release of Gallery in Skyvia New. This month we are also delighted to announce the launch of Gallery in Skyvia New. Gallery helps you automate the most common integration and query tasks using predefined templates in just a few clicks! Useful use cases are right at hand for your convenience! You can enter the Gallery directly [from the website](https://skyvia.com/gallery/). Alternatively, you can sign in to Skyvia and open it from the new menu. The completely updated interface of the Skyvia service makes accessing the internal Gallery easier: simply click the +NEW button in the top menu and switch to Gallery. New Gallery Design. In our vision, both Galleries should look alike and make your life easier, as we put the comfort of our clients first! With this in mind, we have greatly improved the internal design of the Gallery. From now on, all predefined integrations and queries are available in one place, on the All tab. Optionally, you can switch to the Integrations or Queries tabs to see only integrations or only queries, respectively. You can also select predefined queries directly in the Query interface. This functionality remains the same in Skyvia New. However, the Folders list will no longer be there. Now you can organize your queries into folders in the same way as your integrations, connections, endpoints, and other objects: on the OBJECTS page. New Filtering Tool. We also draw your attention to the Gallery's new filtering option, the Show connectors used by me checkbox.
When you select it, you shorten the overall connectors list to only the connectors you use. For example, you select MailChimp from your used connectors and easily apply the template Create new MailChimp Subscriber from Salesforce Contact. We hope you enjoy our endeavors!" }, { "url": "https://docs.skyvia.com/recent-releases/2020/", "product_name": "Unknown", "content_type": "Documentation", "content": "Recent Releases. This section contains the list of Skyvia releases for the year 2020, grouped by months." }, { "url": "https://docs.skyvia.com/recent-releases/2020/april-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "April 2020. New Data Integration Engine for Replication and Export. Our team keeps working hard on updating Skyvia and making it better. Recently we released a new, more convenient UI for our users, and now it's time for changes under the hood. Today we announce a new data integration engine that extends Skyvia's capabilities for integrating data between various cloud applications and databases. Unlike our current data integration engine, the new engine is designed and written from scratch. The current engine somewhat limits us in extending data integration features, but this is no longer the case for the new engine. We designed it with extensibility in mind, considering all the user-suggested features and requests that we could not implement previously, and we laid a solid foundation for future data integration improvements. Availability. Currently, the new engine is available for replication and export only. It is not applied by default. In order to use the new engine, you need to select the Use new runtime check box in your package details. This gives you the choice of whether to use the new engine and get new features or stay with the tried current engine.
The SQL engine for cloud data allows much more flexible data access and manipulation than native cloud APIs and lets you work with your cloud data just like with a database. New Engine Features. Apart from the mentioned check box, there are no noticeable changes to the user interface. All the changes are under the hood, and you may only notice how your integration works better. Better Package Execution and Error Control. The new engine provides better package execution and error control. For replication, in particular, this means that a new replication package won't fail anymore if it finds that some tables for replication already exist in the target database, or, vice versa, if an existing replication package finds that some tables have been deleted. The package is now able to check whether the target table has a suitable structure and use it, or, in case of a missing table, re-create it and reload all the data automatically. This can greatly reduce the number of errors and the manual work of setting up replication. Agent Connection Support. Another noticeable and long-awaited feature is support for agent connections in the new data integration engine. Agent connections greatly simplify connecting to on-premise databases in local networks, using a secure, locally installable Agent application as a tunnel for loading data. A lot of our Connect and Query users already appreciate this feature, and now it finally becomes available to our replication and export users. You can easily copy your cloud data to your on-premise databases without allowing direct access to them from the Internet or making firewall exceptions. Additionally, you can now easily export your data to CSV files and upload these CSV files automatically to file storage services or FTP servers. New Expression Engine. The new data integration engine also includes a new engine for the expressions used in data transformation. It provides many more features and functions than SSIS expressions.
However, since expressions are not used in export and replication packages, this feature is not yet available to our users. You will be able to assess it when we add support for import and synchronization packages to the new engine." }, { "url": "https://docs.skyvia.com/recent-releases/2020/august-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "August 2020. Expression Editor and New Data Integration Engine for Import. We keep developing our new data integration engine, and now it is finally available for Import packages. It finally allows agent connections in import and brings back the expression editor, along with a completely new expression syntax. The new engine for import is still in beta status and is not used by default. As with export and replication, you need to explicitly enable it in your packages by selecting the Use New Runtime check box. Please note that the new engine uses a completely different expression syntax. Thus, if you want to switch an existing package that uses expression mapping to the new runtime, you will need to review the expressions used and rewrite them in the new syntax. New Expression Syntax. The new data integration engine uses a completely new syntax. You can find its description in our documentation; here we just describe the main differences between the new and old syntax that you need to consider. Quoting. One of the main differences is the quoting of column names and string constants. In the old engine, column names that require quoting are quoted with square brackets. In the new syntax, they are quoted with double quotation marks. String constants (literals), on the other hand, are quoted with double quotation marks in the old engine. In Skyvia's new expression syntax, they are quoted with single quotation marks.
So the expression uniting the First Name and Last Name fields and inserting a space between them looks like the following for the old runtime:

```
[First Name]+" "+[Last Name]
```

and here is how it should look for the new runtime:

```
"First Name" + ' ' + "Last Name"
```

Data Types and Type Conversion. Another important difference is that the new expression engine uses different data types and type conversion syntax. In the old runtime, you need to type the required type name and additional parameters, separated with commas. In the new expression syntax, you simply use type conversion functions and pass the value to convert to them, without any additional parameters. So an expression converting, for example, a numeric value to a string looks like the following for the old runtime:

```
(DT_WSTR,38)accountid
```

In the new expression syntax, it looks simply like this:

```
string(accountid)
```

You can find the complete list of type conversion functions in our documentation. Functions. The new expression engine provides all the functions available in the old engine, plus some additional functions. The function names are also very similar; however, there are the following differences: Function names are lowercase, and they are case-sensitive. You cannot just take uppercase function names from your old expressions and use them as is; you need to convert them to lowercase. When a function name consists of two words, an underscore character is added between these words in Skyvia's new expression syntax; in the old engine, no character is added. For example, the old REPLACENULL function corresponds to the replace_null function in the new expression syntax. Corresponding functions mostly have the same argument lists in both the old and new runtimes.
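The quoting, cast, and function-name differences above can be illustrated with a rough translation sketch. This is not a Skyvia migration tool, just a toy converter that handles the simple cases from the examples above, under the assumptions noted in the comments:

```python
import re

# Rough, illustrative translator for the syntax differences described above.
# It is NOT a Skyvia tool: it handles only simple expressions (no escaped
# quotes, only DT_WSTR/DT_STR casts, and a tiny rename table).
RENAMES = {"REPLACENULL": "replace_null"}  # old name -> new lowercase name

def old_to_new(expr: str) -> str:
    # Old double-quoted string literals become single-quoted.
    expr = re.sub(r'"([^"]*)"', r"'\1'", expr)
    # Old [bracketed] column names become double-quoted.
    expr = re.sub(r"\[([^\]]+)\]", r'"\1"', expr)
    # Old casts like (DT_WSTR,38)accountid become string(accountid).
    expr = re.sub(r"\((?:DT_WSTR|DT_STR),\s*\d+\)(\w+)", r"string(\1)", expr)
    # Known function names are lowercased (with underscores where needed).
    for old, new in RENAMES.items():
        expr = re.sub(rf"\b{old}\b", new, expr)
    return expr

old_to_new('[First Name]+" "+[Last Name]')  # "First Name"+' '+"Last Name"
old_to_new("(DT_WSTR,38)accountid")         # string(accountid)
```

Note the order of the steps: literals must be re-quoted before column names, since the bracket rule introduces the very double quotes the literal rule is looking for.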
However, you don't need to worry much about studying all these differences, because together with the new expression syntax, Skyvia brings back a convenient expression editor for the new runtime. Expression Editor. The expression editor helps you enter your expressions quickly and make fewer mistakes. Instead of typing them into a small box when editing mapping, you can conveniently edit them in a multiline editor with a number of useful features: The expression editor lists all the available columns and functions, and you can just click them to add them to your expressions. No more manual typing errors! The expression editor offers syntax highlighting for your expressions, making editing more convenient. Code completion allows you to type in functions and column names faster; it also shows the argument list for a function and the list of overloads, if any. Agent Connection Support. With the new data integration engine, you get support for agent connections in your import packages. Agent connections greatly simplify connecting to on-premise databases in local networks, using a secure, locally installable Agent application as a tunnel for loading data. Now you can easily import data to your local databases with Skyvia or import their data to other sources. New Connectors. In August, we are also glad to announce support for more Zoho apps and for EmailOctopus. With the new Skyvia connectors, you will be able to quickly integrate, back up, manage, and share your data. You can find brief information about the new cloud apps below: Zoho Invoice, a cloud invoicing and billing software for freelancers and small business owners. Zoho Books, a cloud accounting software for small businesses. EmailOctopus, an easy-to-use and inexpensive email marketing platform for anyone."
}, { "url": "https://docs.skyvia.com/recent-releases/2020/december-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "December 2020. The Skyvia team has released several new connectors and features this month. Among them are the Returning feature and the new workspace functionality. Besides that, five new connectors (Constant Contact, Acuity Scheduling, Timely, Drip, and Facebook Ads) have become available in Skyvia. New Returning Feature. This feature allows obtaining IDs or any other fields from records inserted by an import and assigning them back to the corresponding source records. It can be very useful if you want, for example, to import records from a database to Salesforce and store the Salesforce IDs of the imported records in the source database. You may also need to import Salesforce records to some other source and store the target IDs in a Salesforce External ID field. This feature is not limited to Id/primary key values. It allows using any target columns in mapping, as well as constants, expressions, or relation mapping. Thus, you may use the Returning feature, for example, to mark successfully inserted source records with some constant values. You can read more about Returning here. Returning Requirements. The new Returning feature is available only in Import packages that use the new data integration engine. You should select the Use New Runtime checkbox in your package to use it. Besides, the package must import data from a cloud app or database, and the source object must support updates. The feature also returns values only for records that are created in the target; thus, it is available only for the insert and upsert operations. How to Use Returning. If your package meets the requirements above, you will see the Use returning check box in the mapping settings. Select it to make the Returning settings available.
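The Returning round trip described above (insert into the target, then write the returned IDs back to the corresponding source records) can be illustrated with a toy simulation. This is not Skyvia code; all names here are invented for illustration:

```python
# Toy simulation (not Skyvia internals) of the Returning idea: insert source
# rows into a target that generates IDs, then write each new ID back to the
# corresponding source row. FakeSalesforce and the field names are invented.

class FakeSalesforce:
    """Stands in for a target that assigns an ID on insert."""
    def __init__(self):
        self.rows = []
        self._next = 1

    def insert(self, record):
        record_id = f"SF{self._next:03d}"
        self._next += 1
        self.rows.append({**record, "Id": record_id})
        return record_id  # the value that "Returning" sends back

def import_with_returning(source_rows, target):
    for row in source_rows:
        new_id = target.insert({"Name": row["Name"]})
        row["TargetId"] = new_id  # update the source record with the returned ID
    return source_rows

target = FakeSalesforce()
rows = import_with_returning([{"Name": "Acme"}, {"Name": "Globex"}], target)
# rows[0]["TargetId"] == "SF001", rows[1]["TargetId"] == "SF002"
```

This also shows why the source object must support updates: the second half of the round trip is an update of the source rows.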
In the Returning settings, you configure mapping for source columns as usual, and this mapping is applied when updating source records for successfully created target records. New Connectors. Among the new connectors this month are Constant Contact, Acuity Scheduling, Timely, Drip, and Facebook Ads. Read more about the new cloud applications below: Constant Contact, an easy-to-use email marketing service, which helps to create effective email marketing and other online marketing campaigns to meet customers' business goals. Acuity Scheduling, an online appointment scheduling software where clients can schedule appointments, pay, and complete intake forms online 24/7. Timely, a fully automatic time tracking tool for individuals and companies that records your activity and time spent in different work apps and helps track everything your team works on with minimal effort. Drip, a marketing automation platform built for e-commerce that helps you gather data, create better customer experiences, build long-term loyalty, and drive more revenue. Facebook Ads, Facebook's service for promoting products, services, websites, and more. New Workspace Functionality. This month, all our users have also been transferred to the updated workspace functionality. Our development team did their best to create a highly convenient, collaborative environment where you can team up with colleagues and managers, performing tasks and other actions based on the roles assigned in the workspace. The workspace functionality has a completely new design, rather complex but at the same time user-friendly and intuitive, and many new features that are useful for working and collaborating with each other. Just as important, your data is saved and protected even more securely than ever before, as you access a certain account with a specific status and join a certain workspace with a specific assigned role.
Collaboration and Its Key Concepts. In the Skyvia documentation, we have introduced and described the key concepts of collaboration. These concepts include user, account, workspace, personal workspace, workspace role, subscription, etc. From now on, users can share not only Skyvia subscriptions and resource limitations in the same account, but also workspaces and workspace objects: packages, backups, queries, etc. Multiple users from one company or team can easily work on the same Skyvia objects in one workspace at the same time. Users access the workspace and perform tasks or other actions strictly based on the roles they are assigned in the workspace. To learn how it all works, read the Collaboration topic. New Features of the Account Page. The Account functionality has been greatly improved for users' comfort. Now you can name your account the way you want and rename it later if needed. The Account page has two new tabs, Workspaces and Workspace Roles. They help our clients manage existing workspaces or add new ones, check the standard (predefined) workspace roles, and create new (custom) ones. Read more about it in the Account Management topic. Workspaces and Workspace Roles. Workspaces and workspace roles have been introduced to facilitate collaboration among users in Skyvia. Depending on your business needs, you can use a single workspace, for example, your default workspace, to create packages, backups, queries, etc., or you can create multiple workspaces (two or more). Multiple workspaces can be beneficial for medium-sized companies with numerous departments that are quite isolated from one another, or for large project-oriented companies. Such companies usually develop multiple projects, and each project requires a separate workspace where a project manager can collaborate with other project team members (developers, supporters, etc.), sharing common objects (connections, packages, etc.) with them.
To create a new workspace, you simply need to click +Workspace on the right, enter a unique workspace name in the opened window, and save it. Since users can now share workspaces with each other, workspace roles have been implemented to allow an admin to manage who can do what in the workspace, so a team can collaborate more easily and effectively. In Skyvia, we offer 4 standard workspace roles with a predefined set of permissions: Administrator, Developer, Member, and Supporter. Skyvia also allows you to create and assign custom roles to invited users and grant them individual permissions, depending on the tasks they are supposed to fulfill. Read more about standard and custom roles in the Workspace Roles topic of our documentation. To create a custom role, click +Role on the right, set permissions in the opened window, enter a role name, and save it. When a new custom role is created, it is added to the bottom of the role list. Later, you can edit the role name or permissions anytime you want, or delete the custom role completely. Changing Workspace Settings and Assigning Workspace Roles. Users can be invited to multiple workspaces. They can be invited to workspaces of the same account, if this account has multiple workspaces, or to workspaces of different accounts, but first they should become members of those accounts. In different workspaces, users can have different permissions and can be assigned different workspace roles. A user can be a workspace admin for one workspace and a developer or member for another. To add users to a workspace and assign workspace roles to them, click the more options icon next to the necessary workspace and select Change settings from the drop-down list. On the Workspace page, you can add users to the workspace, provided they are already among the account members, and assign roles to them. Here you can also set the workspace as the default one to land in it directly after signing in to Skyvia.
Personal Workspace Functionality. Personal workspaces have been developed for security reasons and for compatibility with the old Skyvia. When migrating users to the new workspace functionality, Skyvia automatically changed users' default workspaces to personal ones. In old Skyvia, users could not collaborate in a common workspace by sharing connections, packages, queries, and other Skyvia objects with each other, but they could share the same subscriptions to Skyvia products after being invited to the account. Now we have implemented the possibility for users to team up in common workspaces. When migrating several users from one account to the new Skyvia interface, we separate their workspaces so that they cannot see each other's objects until they allow it themselves. That is why the system automatically enabled the Personal Workspace toggle for the workspaces of current Skyvia users when migrating them to the new workspace structure. Please note that you can switch one of your workspaces to personal on your own to limit access to it by other users, including other account admins. After that, other users will not be able to view, edit, or do any other actions with your personal workspace. You can make a workspace personal only if you are the single user of this workspace. If there are other workspace members, you will not be able to make it personal unless you delete the other members first. You can check the accounts you were invited to and the workspaces you were assigned roles in by clicking the Workspace drop-down list in the top menu and selecting Change workspace."
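The role model described above (a user has a role per workspace, and each role grants a set of permissions) can be sketched as follows. Only the four standard role names come from the text; the permission names and their assignment to roles are invented for illustration:

```python
# Toy model (not Skyvia's implementation) of workspace roles: each role is a
# set of permissions, and an action is allowed only if the user's role in
# that workspace grants it. Permission names here are assumptions.
STANDARD_ROLES = {
    "Administrator": {"view", "edit", "run", "manage_users"},
    "Developer": {"view", "edit", "run"},
    "Member": {"view", "run"},
    "Supporter": {"view"},
}

def can(user_roles, workspace, permission, roles=STANDARD_ROLES):
    """user_roles maps workspace name -> the user's role in that workspace."""
    role = user_roles.get(workspace)
    return role is not None and permission in roles.get(role, set())

# The same user can hold different roles in different workspaces.
alice = {"Marketing": "Administrator", "Dev": "Member"}
can(alice, "Marketing", "manage_users")  # True
can(alice, "Dev", "edit")                # False
```

Custom roles fit the same shape: they are just additional entries in the role-to-permissions mapping.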
}, { "url": "https://docs.skyvia.com/recent-releases/2020/july-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2020 July 2020 Simple, Effective and Fast: Skyvia Query Excel Add-In Released Our team is proud to announce the release of [Skyvia Query Excel Add-in](https://skyvia.com/excel-add-in/), a fast and effective web tool for querying cloud and database data to Excel. You can easily find Skyvia Query Add-in in the Office Store and add it to your Excel worksheets. The add-in is intended for clients who use Excel 2016 or newer for Windows or Mac, or Excel Online. It allows you to reliably handle large datasets and quickly and easily query data from a wide variety of data sources into Excel spreadsheets. You can also refresh the data from the source in Excel at any time in just a couple of clicks. With Skyvia Query Excel Add-in, you can automate tedious manual work and save time. Moreover, no coding knowledge is required. The add-in queries data using SQL SELECT statements. However, you don't need to know SQL: you can simply configure a query visually with our Query Builder tool, i.e. configure data aggregation, add extra functions, filter data or configure filter groups, sort data, add extra columns, and more. If you are familiar with SQL, you may simply enter the needed SQL statement. What is also important is that you can query data with SQL from both databases and cloud sources. Please note that Skyvia Query Excel Add-in works via [Skyvia Query](https://skyvia.com/query/), which uses your data source connections created on the [Skyvia platform](https://app.skyvia.com/), uses the Query Gallery to store your saved queries, and shares [pricing plans and limitations](https://skyvia.com/pricing/) with Skyvia Query."
}, { "url": "https://docs.skyvia.com/recent-releases/2020/march-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2020 March 2020 New Interface of Skyvia Today all our users have been transferred to the new Skyvia interface. From now on, when you log in to [https://app.skyvia.com](https://app.skyvia.com), you automatically access the new app version, which features a better, more convenient and intuitive design, new features, and greatly improved services. All your data remains securely saved and protected. For your convenience, the old interface will remain available at [https://old.skyvia.com](https://old.skyvia.com) for several months. However, no product updates will be made there, and no new features or connectors will be added. Recent Changes in Skyvia New We would like to briefly remind you of the recent updates made by the Skyvia team to provide each of our users with far better navigation and functionality across the website. Among these updates were: New Interface. It allows creating any object from any page using the quick +NEW menu, which is always at hand. The new interface now displays all objects, such as connections, endpoints, etc., together, allowing any user to quickly find the necessary object and view its most important details. You can view objects as you prefer: as a list or as cards, with or without grouping by object kind. Full-page editors for connections have replaced the connection editor dialog boxes and provide a more intuitive and convenient way to create and edit connections. The new folder structure allows grouping objects related to a specific project or company into the same folder and finding such objects later with ease. More information is available here. Redesigned Connect. Skyvia Connect has been updated and now offers a new simple mode, intended for straightforward creation of endpoints with no relations between entities.
The new interactive log features in Connect display endpoint activity visually in bar charts. You can select any period and display endpoint activity for this period with ease. The new Agent application, developed not long ago, helps to connect your Skyvia account to local, on-premise databases with ease and to establish secure communication bypassing the firewall. Need more details? Read this news in our blog. Improved Data Integration. The package editor in Data Integration has been redesigned: source and target settings are now located on the left, and tasks on the right. Schedule and parameters have moved to separate dialog windows and became clearer. New interactive log features have also been added. Now you can view package executions for a month or another period, quickly narrow the period down to the one you are interested in or specify a custom period, filter package runs by success/error, and view the details of the necessary event in just a couple of clicks. Besides the Log tab with the package activity diagram, Skyvia New also provides the Monitor tab with a list of the most recent package runs, which also shows the active package run status if the package is running. You can find more information about this update in our blog post Data Integration on Skyvia New. New Gallery. It now offers predefined templates of integrations and queries together in one place. Since the update, users can access the new gallery more easily, simply by clicking the +NEW button in the top menu, and can enjoy the renewed internal design of the gallery, new filtering tools, and better organization of space. Go to Releasing Gallery in Skyvia New to find out more. Updated Backup. Backups are now created in the convenient Backup Wizard through a series of well-defined steps. The backup page has also been redesigned. It now contains four tabs with advanced features: Overview, Settings, Snapshots, Activities.
These tabs help users check their backup activity and success history. The calendar format for displaying snapshots and restores has been removed. Users can now display them in a list format only, which makes the display of events visually more appealing. The old timeline of snapshots and restores has been redesigned into two tabs, Activities and Snapshots, each offering extra possibilities for users. Find more about the update in Releasing Backup in Skyvia New. New Features Added to Skyvia New Among the new features added to the new interface were: Multi-Delete, Dependencies Between Objects, and the Agent Application. Using the Multi-Delete feature, you can easily delete old connections or agents, with all dependent objects, that you do not want to use anymore. For more information, visit our previous news item Multi-Delete Feature. Using the Dependencies feature, you can conveniently track dependencies between objects and easily find out which connections and agents are important for you and have relevant dependent objects, and which are out of use and can be deleted from your list without affecting the work of your integrations. More detailed information can be found in Dependencies Between Objects. Using the Skyvia Agent, you can connect to your on-premise data, hosted on your local computer or in your local network, with ease and establish secure communication bypassing the firewall. That is a short description of the variety of features and improvements we have added. We would appreciate any [feedback](https://skyvia.com/company/contacts) on our new interface. Connect Extends Beyond OData: New SQL API Endpoints Today we have released an important new feature in our Connect product: SQL endpoints. These endpoints provide a web API that allows executing SQL statements against data sources and working with the returned data. SQL API for Cloud Data SQL endpoints offer the same advanced security and logging features as OData endpoints.
This completely new endpoint kind allows you to publish your cloud data without publishing credentials for the corresponding data sources or even providing source API tokens. You can create your own set of users with passwords at the endpoint level. Convenient logs allow you to monitor all access to your endpoint. The SQL engine for cloud data allows much more flexible data access and data manipulation than native cloud APIs and lets you work with your cloud data just as you would with a database. SQL API for Databases SQL endpoints provide even more features for databases. They can serve as a secure gateway to your database that allows you to fully manage it with SQL over the web. Since they run the posted SQL against the database engine itself, you can perform any actions against the database over the Internet that you could perform with a native database client. The secure Agent application allows you to easily connect to your on-premise databases. You can create SQL endpoints for database servers that have no external IP address and sit behind a firewall, and control your database with SQL statements from anywhere. All the traffic between the database, the endpoint service, and clients is encrypted with secure SSL/TLS encryption. Client Software Available SQL endpoints provide a custom API for executing SQL and retrieving data. This API is simple enough to get started with in minutes. However, we also provide client libraries that simplify using SQL endpoints even more. We offer a ready-to-use ODBC driver and an ADO.NET provider for SQL endpoints. With them, you can connect existing software that supports these well-known interfaces to your SQL endpoints, or develop your own solutions. We also plan to create a Google Data Studio connector for SQL endpoints in the near future.
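As a sketch of the client-side pattern: an SQL endpoint accepts an SQL statement and returns rows, which standard database clients consume in the usual way. The example below uses Python's built-in sqlite3 as a runnable stand-in for that pattern; the pyodbc connection string in the comment is purely hypothetical and only indicates where the real ODBC driver would plug in.

```python
import sqlite3

# sqlite3 stands in for a real connection so this sketch is runnable.
# Against a Skyvia SQL endpoint you would open the connection through
# the provided ODBC driver instead, e.g. (hypothetical DSN/credentials):
#   conn = pyodbc.connect("DSN=MySkyviaEndpoint;UID=user;PWD=secret")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Name TEXT, Revenue REAL)")
conn.executemany("INSERT INTO Account VALUES (?, ?)",
                 [("Acme", 120.0), ("Globex", 80.0)])

# From here on, the client code follows the same DB-API pattern either
# way: post an SQL statement, then work with the returned rows.
rows = conn.execute(
    "SELECT Name, Revenue FROM Account WHERE Revenue > 100"
).fetchall()
```

Because the interface is plain SQL over a standard driver, existing reporting or ETL software can consume the endpoint without any endpoint-specific code.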
Easy Endpoint Creation The convenient SQL Endpoint Wizard allows creating SQL endpoints in just three simple steps: connect to a data source, configure endpoint user accounts and IP access limitations, and specify the endpoint name. That's all: in less than a couple of minutes you have a ready-to-use endpoint. We take care of hosting, maintenance, and everything else; all you need to do is specify the data source that you need to access via the SQL API and optionally configure security settings. Note that creating endpoints for database data and the security features are not available in the free Connect plan, but you can start a two-week trial of a paid Connect plan in your account settings and try all the advanced features." }, { "url": "https://docs.skyvia.com/recent-releases/2020/november-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2020 November 2020 New Connectors The Skyvia team is glad to announce support for Google Analytics and Zoho Desk connectors. With these connectors, you can now work with data from ZOHO Corporation's cloud help desk software and Google's well-known free web analytics service. Zoho Desk Support Zoho Desk is cloud help desk software for managing customer conversations across multiple channels, such as email, chat, phone, and social media. Now you can easily integrate your Zoho Desk data with other cloud apps, such as Salesforce or Dynamics CRM, replicate its data to relational databases, query it with SQL, or share it via the OData protocol. Please note that in Data Integration packages, Zoho Desk is supported only in the new runtime. When creating a package, select its Use new runtime checkbox if you want to be able to use Zoho Desk connections in it. Additionally, Zoho Desk is not supported in Skyvia Backup. Google Analytics Support Connector Peculiarities The Google Analytics connector works differently from other connectors in our products.
Its peculiarities are caused by the way the Google Analytics API works. While other connectors represent data as a set of objects, each with a number of records, the Google Analytics connector presents its data as a single table named CompleteAnalytics. Google Analytics metrics and dimensions are available as columns of this table. You select the dimensions and metrics that you need, and the Google Analytics connector returns the corresponding report data. Besides, Google Analytics data is read-only: you cannot modify it or load data into it. Google Analytics Support in Skyvia Products Skyvia supports loading Google Analytics data to other cloud applications and databases. You can also export Google Analytics reports. Since Google Analytics data is read-only, you cannot use it for two-way synchronization or as a target for importing data. As with Zoho Desk, this connector is available only in the new runtime. Skyvia Connect and Query support Google Analytics, so you can make Google Analytics data available via OData and SQL endpoints and query it from the browser. With the Skyvia Query Google Sheets Add-on and Excel Add-in, you can easily get Google Analytics reports in Google Sheets or Excel spreadsheets and update the data in them in just a couple of clicks! Backup of Google Analytics data is not supported." }, { "url": "https://docs.skyvia.com/recent-releases/2020/october-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2020 October 2020 New Data Integration Engine for Synchronization. Expression Editor Improvements We are glad to announce that our new data integration engine is finally available for Synchronization packages. With this update, you can benefit from Agent connections and other engine improvements in all kinds of data integration packages: export, replication, import, and synchronization. Additionally, this update includes significant improvements to the expression editor.
The new data integration engine is still in beta and is not used by default. You need to explicitly enable it in your packages by selecting the Use New Runtime check box. Note that if you want to switch an existing package that uses expression mapping to the new runtime, you will need to review the expressions used and rewrite them in the new syntax. Expression Editor Improvements The expression editor has become more functional than ever. It now allows you not only to design your expressions, but also to validate them and check them against test data. You can check whether your expression is valid with a single click of a button. The editor not only immediately displays whether the expression is valid, but also underlines the parts of your expression that cause errors. You can point at these parts to see error messages and quick fixes, if available. Additionally, you can now immediately try your expressions with test data. Just click the Preview button, specify test values for the columns used in the expression or select the null check box for them, and immediately see the expression result. You can thus check your expression and see whether its results correspond to your expectations without running the package. The expression editor is now available for synchronization packages using the new runtime, as well as for import packages. Agent Connection Support Agent connections greatly simplify connecting to on-premise databases in local networks, using a secure, locally installable Agent application as a tunnel for loading data. And finally, you can use agent connections in all kinds of integration packages, including synchronization packages using the new runtime." }, { "url": "https://docs.skyvia.com/recent-releases/2020/september-2020.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2020 September 2020 Skyvia keeps improving the usability and look-and-feel of the new interface.
Our software development team has added quite a lot of useful features to the new interface. This update makes the object list more informative and allows you to work with Skyvia objects faster and more efficiently. Besides that, we have also released autocleaning of snapshots. More Informative Object List The new object list displays more useful information about objects in both grid and list views. Moreover, you can add filters and sort objects by the same parameters as you could in the old interface. Note that this advanced filtering and sorting is available when you display only a specific object kind: only packages, only backups, etc. It is not available when you view all objects on the All tab. Packages The new interface displays more useful information about integration packages on the packages tab. You can see the state of every package: whether it is running, or how the last package run finished. You can also easily distinguish packages scheduled for automatic execution. And finally, packages now display the last run time instead of the package creation time. The new interface allows filtering packages by their type (import, export, sync, etc.) and their state (new, succeeded, failed, etc.). We have also added the ability to sort the package list by the newly added parameters: type, state, and date of the last run, as well as package name and date of last modification. Backups When viewing the list of backups, you can immediately see how much space each backup consumes, determine which backups take the most space, and clean them if necessary. You can also see when the most recent snapshot was made and when the next snapshot is scheduled. In the list view, Skyvia additionally displays the total number of backed-up records and the number of records modified since the previous snapshot. Backups can be sorted by size, name, connector, and date of the most recent snapshot.
Endpoints Endpoints now display their token, status, and whether they are public (without authentication/IP restrictions) in both the grid and list views. Note that you now activate/deactivate them via their status icon, not via a quick action button. Connections Connections are now grouped by their type (direct or agent) instead of by connector. New Backup Feature: Autocleaning of Snapshots This is another feature released this month. We expect you to benefit greatly from this update and back up your data safely and easily without worrying about the storage space limit. Meeting customers' needs has always been and will always remain a top priority for the Skyvia team. Please note that this feature is available for pricing plans starting from Backup Standard. Snapshot Deleting Algorithm The Backup autoclean functionality allows users to store a certain number of fresh snapshots in accordance with the storage limits of their subscription plans. When the autocleaning mode is on, once every 24 hours Skyvia launches automatic deletion of snapshots, starting from the oldest ones (i.e. cleaning snapshots older than 2 weeks). The autocleaning lasts until enough storage space is freed to stay within the subscription limit. When deleting snapshots, Skyvia retains at least one successful snapshot of a backup. For example, if your last snapshot failed or was canceled, the previous successful snapshot will be kept. Backup Autoclean: Where to Find and Activate The Backup autoclean functionality can be activated on the Subscriptions tab of your Account page. For this, click the Storage details link in the Backup subscription plan and turn on the Autoclean toggle in the opened Storage Manager window. When the autocleaning mode is turned on for the first time, Skyvia selects all backups from the list automatically (check boxes next to all backups will be selected). If you want to disable the autocleaning mode for specific backups, clear the check boxes next to them.
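The snapshot deleting algorithm described above can be sketched as follows. This is a simplified model under stated assumptions (a single backup, snapshots ordered oldest first), not Skyvia's actual code.

```python
def autoclean(snapshots, limit):
    """One simulated autoclean pass for a single backup.

    snapshots: list of dicts {"size": ..., "ok": ...}, oldest first.
    Deletes snapshots from the oldest end while the total size exceeds
    the storage limit, but always retains the newest successful one.
    """
    # Index of the most recent successful snapshot, which must survive.
    protected = max(i for i, s in enumerate(snapshots) if s["ok"])
    total = sum(s["size"] for s in snapshots)
    kept = []
    for i, snap in enumerate(snapshots):        # oldest first
        if total <= limit or i == protected:    # fits, or must survive
            kept.append(snap)
        else:
            total -= snap["size"]               # "delete" this snapshot
    return kept
```

Note that when even the one protected snapshot exceeds the limit, the sketch still keeps it, matching the rule that at least one successful snapshot is always retained.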
The Storage Manager window also shows information on the total size of free and used storage space in the subscription and how much space each backup consumes. More information about the new feature can be found in the Skyvia documentation, in the Managing Backup Storage Space and Subscription Limits and Plans topics. New Connectors The Skyvia team has also released three new connectors. You will be able to quickly integrate, back up, manage, and share data of the following new cloud applications: Asana, a collaborative web tool that helps teams organize, track, and manage their work. Freshsales, a sales CRM from Freshworks with AI-based lead scoring, phone, email, activity capture, and more. DEAR Inventory, an online accounting and inventory management system designed to boost the operational efficiency and productivity of a business." }, { "url": "https://docs.skyvia.com/recent-releases/2021/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2021 grouped by months." }, { "url": "https://docs.skyvia.com/recent-releases/2021/august-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 August 2021 From now on, you can use the results of visually built queries and composed SQL statements as a source in the tasks of your Import packages and perform data transformations in a more complex way than before. We expect the new feature to bring additional benefits to those who use Skyvia Data Integration. Read more about the Import package improvements below. Import Package Improvements: SQL Statements and Custom Queries New Task Editor Mode We have developed a new mode for import packages: Advanced mode. This mode is available when the new data integration runtime is selected.
You need to select the Use new runtime checkbox in your import package before adding a task. In the Advanced task editor mode, you can choose which action to use: either type and edit an SQL statement of any complexity on your own (Execute Command) or create custom queries by means of our visual query builder (Execute Query). The new mode is equally convenient for SQL professionals and SQL beginners, and the design is ergonomic and user-friendly. Advanced Data Extraction The new task editor mode provides powerful data extraction, allowing you to get prepared and preformatted data for import. For example, you can now perform aggregations, change capitalization, add various calculated columns, etc. You can import several tables joined in a more complex way than before. Please note that Skyvia supports only SQL SELECT statements in Import packages. When you import data using the Advanced task editor mode, the Returning feature and its settings are not available on the Mapping Definition tab." }, { "url": "https://docs.skyvia.com/recent-releases/2021/december-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 December 2021 Data Loading from/to CSV Files in Data Flow and Control Flow We have added support for data loading to/from CSV files in Data Flow and Control Flow and added CSV Source and Target components to Data Flow. For some time, Data Flow was limited to loading data between data sources directly, but now CSV files are supported in it as well. The CSV Source component allows you to ingest data from CSV files, either manually uploaded or obtained from file storages. The CSV Target component allows you to load records to a CSV file. It can save the resulting CSV to file storages automatically and provides advanced control over file naming.
Compared to Skyvia's traditional Import functionality, Data Flow allows importing CSV files into multiple data sources at once, along with much more flexible data transformations in the process. As for the CSV export features, Data Flow also provides flexible data transformations as well as full control over the result file name, which is not available in Export. In Export, you can either specify a constant custom CSV name or add a timestamp to the object name. CSV Target allows you not only to combine a timestamp with a custom name, but also provides all the power of Skyvia's expression language for file name customization. Since Control Flow loads data using data flows, it can now also work with CSV files." }, { "url": "https://docs.skyvia.com/recent-releases/2021/july-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 July 2021 New Connectors We have released new connectors in Skyvia: SendGrid, Sendinblue, and LinkedIn Ads. Note that SendGrid, Sendinblue, and LinkedIn Ads are supported only in the new data integration runtime. SendGrid, a cloud-based platform for delivering transactional and marketing emails. Sendinblue, a cloud-based email marketing solution for small and medium-sized businesses that want to automate email marketing campaigns on a limited budget. LinkedIn Ads, a powerful advertising tool for reaching your audience. Using LinkedIn Ads, you can promote your organization's updates to targeted audiences on desktop, mobile, and tablet. Control Flow Beta We are also glad to announce the beta version of Control Flow, a new integration kind in Skyvia. Control Flow is currently available in beta and is absolutely free for users. A control flow allows you to control the execution of multiple data flows and other actions against data sources.
You can run them in a specific order or in parallel, use the results of previous data flows in subsequent data flows and actions, or even run them depending on specific conditions, etc. Control flow provides you with tools for: running multiple data flows in a specific order, doing pre- and post-integration tasks, and building custom error processing logic. Easy-to-Use Designer Like data flows, control flows are designed visually on a convenient diagram. The diagram allows building control flows of any configuration, size, and complexity with no coding. You just drag components onto the diagram and configure their settings. When you start building a control flow, there are the Start and Stop components and the execution branch between them. You can drag components onto specific places on the execution branches, between other components. Some components may add more branches, conditional or for parallel execution, and you may drag other components onto them. With our convenient diagram navigation, you can move the entire control flow, zoom in or out, move components to other places on execution branches, change their order, etc. Control Flow Components Some components manage the execution flow itself: they add branches, redirect execution between them depending on conditions, offer tools for error processing, etc. These components can be called control blocks. Other components perform interactions with data sources: they run data flows or one-time actions, or set variables. These are the task components. Control flow includes components that allow sequential, parallel, or conditional execution, a component for implementing error processing, etc. With these components, you will be able to configure practically any integration scenario. Viewing Control Flow Results In a control flow, you determine the number of success and error records yourself, according to the result settings.
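The sequential, parallel, and conditional execution that the components above provide can be made concrete with a small sketch. Skyvia configures all of this visually with no coding; the Python below only illustrates the execution semantics, with tasks modeled as plain functions over a shared state.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential(tasks, state):
    """Run tasks one after another; each task receives the state
    produced by the previous one, so results flow downstream."""
    for task in tasks:
        state = task(state)
    return state

def parallel(tasks, state):
    """Run independent tasks at the same time against the same state."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task, state) for task in tasks]
        return [f.result() for f in futures]

def conditional(condition, if_true, if_false, state):
    """Redirect execution to one of two branches based on a condition."""
    branch = if_true if condition(state) else if_false
    return branch(state)
```

For example, `sequential([load, check], {})` runs a load task and then a check task that can inspect the load's results, which mirrors "use results of the previous data flows in the subsequent data flows and actions."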
Note that by default, it shows 0 success and 0 error rows if you don't define result settings. You can also create and write your own logs in its data flows. When you click a certain package run, the History Details window pops up. If you have created logs, the data flow generates them, and you can download their results from this window. Our Control Flow documentation will help you get familiar with all the advanced features the control flow offers." }, { "url": "https://docs.skyvia.com/recent-releases/2021/june-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 June 2021 Email Notifications Improved: Get Notified about Reaching Subscription Limit Thresholds We are glad to announce new features related to our email notifications. From now on, they can be configured on a per-account basis instead of per user. Besides, you can now get notifications when you approach your subscription limits for data integration records or backup storage space. Usage Notifications Now you can receive notifications not only about errors in your packages and backups, but also when you are getting closer to your subscription limits for data integration records or backup storage space. You can select the thresholds for which a notification email will be sent. Notification Configuration Since one user can participate in many accounts, we have moved the notification settings to the account settings. Now you can configure error and usage notification settings per account. However, a user can still modify settings in their profile: unsubscribe from notifications completely or specify an email address for them different from the address in their profile. These settings are applied for all accounts where this user is an account admin or workspace admin, and notifications are sent to the admin email. Testing Notification Settings Additionally, you can now send a test notification email if you set a custom email address.
This allows you to make sure that notification emails can be delivered to your email address and that they won't go into spam. You can also reset the error notification limits in the Error Notification settings. This resets both the one-email-per-hour limit, so you can immediately get a new notification email, and the one-email-per-day limit for all packages. New Connectors We have released four new connectors in Skyvia this month: Twitter Ads, Freshworks CRM, RepairShopr, and Square. Note that they are supported only in the new data integration runtime. So far, Skyvia does not support backup for these connectors. Twitter Ads, a service for gathering direct responses, increasing website traffic, promoting content or app downloads, or simply increasing views and followers on Twitter. Freshworks CRM, a cloud-based CRM (customer relationship management) solution that includes sales force automation, marketing automation, chat, and telephony, all in one solution. RepairShopr, an all-in-one computer repair shop software, CRM, and invoice system. Square, a cloud-based POS system that allows processing payment transactions and securely storing sales history in the cloud." }, { "url": "https://docs.skyvia.com/recent-releases/2021/march-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 March 2021 Data Flow Beta: New Data Integration Type in Skyvia This month, we are super excited to announce the release of Data Flow, a new kind of integration package in Skyvia. So far, data flow is available only in beta. A data flow allows you to integrate multiple data sources in one package and enables advanced data transformations on the way from source to final destinations. Despite being intended for complex integration scenarios, data flow is quite an easy-to-use tool, which requires no coding and can be used by both IT and non-IT professionals.
You can use a data flow in scenarios where data replication or import is not enough. These are scenarios when you need to: load data into multiple sources at once; use complex, multistage transformations like lookup by an expression; or obtain data from one data source, enrich it with data from another data source, and finally load it into a third one. Easy-to-Use Designer For users' comfort and better usability, the Skyvia team has developed a diagram editor tool, the data flow working area, which allows arranging the entire process of data movement and any layout of components on the diagram visually. It allows building data flows of any configuration, size, and complexity and managing them easily. When you start building a data flow, you drag components onto the diagram and configure their settings in the details sidebar, which automatically opens on the right as soon as you drop a component onto the diagram. You connect components with links, displayed as arrows on the diagram, to show the direction of data movement between components from source to target, and, if necessary, you apply data transformations in between. With our convenient diagram navigation, you can move the entire data flow, zoom in or out, move separate components around, change their location, etc. Data Flow Components Data flow components are divided into 3 categories: sources, targets, and transformation components. Using a source, you extract data from a cloud app or database and bring it into the data flow. For various data modifications, you use transformation components, among which are Extend, Lookup, Conditional Split, Row Count, Split, etc. Transformations help you add new columns to the input records, delete old columns or change variable values, split data according to specified conditions, and send modified data to multiple destinations, i.e. targets (databases or cloud apps).
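To make the transformation components concrete, here is a rough sketch of what Extend, Lookup, Conditional Split, and Row Count do to a stream of records. The component names come from the text above; the implementations are illustrative only, modeling records as plain dicts.

```python
def extend(rows, column, fn):
    """Extend: add a calculated column to each input record."""
    return [{**r, column: fn(r)} for r in rows]

def lookup(rows, key, table, default=None):
    """Lookup: enrich records with a value from another source by key.
    (The enrichment column name 'lookup' is made up for the sketch.)"""
    return [{**r, "lookup": table.get(r[key], default)} for r in rows]

def conditional_split(rows, predicate):
    """Conditional Split: route records to two outputs by condition."""
    true_out = [r for r in rows if predicate(r)]
    false_out = [r for r in rows if not predicate(r)]
    return true_out, false_out

def row_count(rows):
    """Row Count: count the records passing through."""
    return len(rows)
```

Chaining these, a pipeline like "extend with a flag, split on the flag, count each branch, send each branch to its own target" corresponds to connecting the same components with arrows on the diagram.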
We do not limit the number of targets you can create in the data flow. On the screenshot, among other package kinds, you can see a data flow package with multiple targets. Scheduling the Data Flow Package Skyvia allows scheduling your data flow packages for automatic execution, as with any other type of data integration package. All the advanced scheduling features are preserved in the new data integration type. Scheduling might be very useful if you want to configure a data flow package to run periodically on certain days and at a particular time or if you want to delay a data flow execution to a later time. Viewing the Data Flow Runs Skyvia displays the data flow runs in the same way as for other data integration packages. On the Monitor tab, you can view the current data flow status and the 5 most recent runs, and, on the Log tab, you can view older runs as well as all runs for a specified period. In data flows, you determine the number of success and error records yourself according to the result settings. You can also create and write logs on your own. When you click a certain package run, the History Details window pops up. If you have created logs, the data flow generates them, and you can download their results from this window. Our Data Flow documentation will help you get familiar with all the advanced features the data flow offers. New Connectors We have also released four new connectors in Skyvia this month. Among them are Snowflake, Google Ads, Mailjet and Harvest. Note that they are supported only in the new data integration runtime. So far, Skyvia does not support backup for these connectors. Snowflake \u2014 a scalable cloud-based data platform, which enables data storage, processing and analytic solutions that are faster, more flexible and easier to use.
The Snowflake data platform is not built on any existing database or \u201cbig data\u201d technology; instead, it combines a completely new SQL query engine with an innovative architecture natively designed for the cloud. Google Ads \u2014 an online advertising platform developed by Google. Using Google Ads, you can create online ads to reach people exactly when they are interested in the products and services that you offer. Mailjet \u2014 a secure and reliable cloud-based email delivery and tracking platform, which allows users to send marketing and transactional emails. Harvest \u2014 a convenient cloud-based time tracking and invoicing tool, which has been designed for businesses of all sizes." }, { "url": "https://docs.skyvia.com/recent-releases/2021/may-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 May 2021 SQL Statements and Custom Queries in Export Packages This month, we are excited to announce a new feature available in export packages. From now on, Skyvia users can query data in export packages and run the queries on schedule. It greatly simplifies the work when you need to create a custom query that formats the data the way you need in a source and automatically export the queried results as CSV to your PC or an FTP site. Export Package Improvements We have developed a new mode for export packages \u2014 Advanced mode. This mode is available when the new data integration runtime is selected. You need to select the Use new runtime checkbox in your export package before adding a task. In the Advanced task editor mode, you can choose what action to use \u2014 either type and edit an SQL statement of any complexity on your own (Execute Command) or create custom queries by means of our visual query builder (Execute Query). The new mode is going to be equally convenient for both SQL professionals and SQL beginners.
Moreover, we have tried to make the advanced mode design as ergonomic and user-friendly as possible. What Benefits the New Functionality Brings There are several benefits Skyvia offers users in the new export package functionality. Previously, when you selected a table for export from a database or cloud app, the resulting CSV file had the same column order as the source. Now you can influence the column order you receive in the CSV file. In the advanced mode, the generated CSV file will have the same column order as you personally define in the Query Builder or SQL statement. Moreover, you can rename table columns if necessary. For this, you need to add aliases when entering SQL statements or select a column you\u2019ve added to the Results pane of the Query Builder and edit the column name by clicking the corresponding icon on the right. It is also worth mentioning that now you can create a query that uses an aggregation function to group data by any required column. For this, in the Query Editor, you simply add an aggregation function to the column in the SELECT clause and create a GROUP BY clause with all the other, unaggregated columns. Please note that Skyvia supports only SQL SELECT statements in Export packages. Read our updated documentation to find out more about the new feature." }, { "url": "https://docs.skyvia.com/recent-releases/2021/november-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 November 2021 Updated Query Experience We are delighted to announce the launch of the updated Query design. According to our vision, the old Query Gallery design needed changes. Both the Query Gallery and the General Gallery of Skyvia had to look alike to make it easier for users when searching predefined templates. The Query Gallery has been redesigned for the comfort of our clients, which always comes first!
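The alias and aggregation behavior described in the May 2021 export notes above follows standard SQL SELECT semantics: the result (and hence an exported CSV) takes its column names and order from the SELECT list. A minimal sketch using Python's built-in sqlite3; the table and column names are hypothetical, not Skyvia objects:

```python
import sqlite3

# Hypothetical "orders" table standing in for a source object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("East", 100.0), ("East", 50.0), ("West", 70.0)],
)

# Aliases rename the output columns, and the aggregated column is
# accompanied by a GROUP BY over every unaggregated column.
cursor = conn.execute(
    "SELECT region AS Territory, SUM(amount) AS TotalAmount "
    "FROM orders GROUP BY region ORDER BY region"
)
columns = [d[0] for d in cursor.description]
rows = cursor.fetchall()
print(columns)  # ['Territory', 'TotalAmount']
print(rows)     # [('East', 150.0), ('West', 70.0)]
```

The same SELECT shape (aliases plus GROUP BY) is what the Query Builder generates behind the scenes, so the exported file's header and column order are fully under your control.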
Now, when you want to use a predefined query scenario in your query builder, you simply click the Open from Gallery option in the drop-down menu and switch to the Gallery Manager, where we have combined all useful query scenarios, accessible with a single click. Private queries will no longer be displayed in the Query Gallery. When you create and save private queries for future use, they are saved directly to the Objects list in your Skyvia account and are accessible from there. To open a previously created query, select Open existing query in the drop-down menu and switch to the Objects Manager window. Find the query you want to use and click Select to open it with all its configured settings. Improvements in Import Packages New Batch Size Option From now on, you are able to manage the number of records in a batch when importing data! This allows you to better control API use when loading data to data sources. When creating or editing import packages, you can set the Batch Size option. This option is available in import packages that use the New runtime. It regulates the number of records sent to the target as one batch. By default, Skyvia processes data in batches of up to 2,000 rows. Batches of this size are read from the source at once. However, while processing records, using filtering, lookups, etc., the size of batches can decrease, and data can be sent to the target in smaller batches. The Batch Size option allows you to cache the records in a buffer before finally sending them to a target, and send batches of a fixed size. Benefits This can be useful both to increase the batch size in order to load more records per API call and improve API call efficiency, and to decrease the batch size if the target has some additional custom limitation or if smaller batches provide better performance. API Batch Size Limitations Note that each data source has its own API batch size limitations, and sometimes different objects have different limitations.
Some of them don\u2019t support sending records in batches at all. Skyvia cannot exceed data source API limits, so if the max batch size allowed by the target data source and table is less than the specified Batch Size, the buffered records are split into multiple internal batches. Enabling/Disabling Tasks in Import Packages Another new option in Import packages \u2014 Enable/Disable tasks. This option is available in import packages that use the New runtime. When you select Disable, you disable one of the tasks in the package. It brings considerable benefits when you have several tasks in your package and need to stop executing one of them for a while. Previously, if you had several tasks in your import package and you wanted to execute only one of them, you had to clone the whole package in order not to lose the mapping settings of all the tasks. In the cloned package, you deleted unnecessary tasks and ran the task you needed. The whole process was a bit complicated. Now everything is much easier and more comfortable: you simply disable the task you don\u2019t want to run and later, if necessary, enable it back with its mapping settings intact. So now it is as simple as \u201cturning on/off\u201d a separate task and including/excluding its execution from the integration for a while. No other effort needed! Adding Description to Tasks in Packages Now, you are also able to add comments and notes to all tasks or to a specific task if you need to highlight task peculiarities or emphasize its importance among others in writing. To enable commenting, you need to select Add Description in the drop-down menu of a certain task and enter the required text in the box under the task as shown in the screenshot: Improvements in Backups Deleting Separate Snapshots in Backups We have also improved the cleanup function in Backups.
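The Batch Size buffering and splitting described in the November 2021 import-package notes above can be sketched roughly as follows; the function and parameter names are illustrative, not Skyvia's internals:

```python
def split_into_batches(records, batch_size, target_max_batch):
    """Buffer records into batches of batch_size, then respect the
    target API's own maximum: the effective batch is the smaller of
    the requested size and the size the target allows per call."""
    effective = min(batch_size, target_max_batch)
    return [records[i:i + effective] for i in range(0, len(records), effective)]

# 5,000 buffered records, requested batch size 2,000, but a target
# object that only accepts 200 records per API call.
batches = split_into_batches(list(range(5000)), 2000, 200)
print(len(batches))     # 25 internal batches
print(len(batches[0]))  # 200 records each
```

This is why a large configured Batch Size still produces multiple internal batches against a target with a stricter API limit, while a smaller configured size simply caps the batches directly.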
Before, you could delete snapshots automatically by turning on the Autoclean toggle next to the backups you need to autoclean, or you could select snapshots for some period in a backup and delete them manually. This functionality is convenient; however, we\u2019ve decided to make it even better. Now, you can clean separately selected snapshots of the backup. For this, you go to the required backup, switch to the Snapshots tab, click the Clean up button and select checkboxes next to the snapshots you want to delete. It is as simple as that. Tracking Changes by the SystemModstamp Field in Salesforce Now you can optionally track changes by the SystemModstamp field in Salesforce. For this, you need to find your Salesforce connection and select the Use SystemModstamp for data changes tracking checkbox on the Connection Editor page. By default, replication, synchronization and import functionality in Skyvia uses the LastModifiedDate field to track changed records. However, if a record is changed by a trigger or any other automated process, the LastModifiedDate field is not updated. Now this issue is solved: you can switch your package to use the SystemModstamp field, and any updates by triggers and automated processes will also be taken into account. New Connectors Last but not least is the release of 2 new connectors \u2014 Capsule CRM and Aha!. Read brief information about them below: Capsule CRM \u2014 an online customer relationship management system intended to build stronger customer relationships, make more sales and save time. Aha! \u2014 web-based roadmap software. With Aha! you can set strategies, prioritize features, share visual plans as well as crowdsource feedback. You can also use Aha! as a fully extendable agile development tool."
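The SystemModstamp switch described in the November 2021 notes above comes down to which Salesforce timestamp field the incremental query filters on. A hypothetical sketch of such a filter builder (illustrative code, not Skyvia's implementation; only the two field names come from the notes):

```python
def change_filter(last_run_iso: str, use_system_modstamp: bool = False) -> str:
    """Build a SOQL-style WHERE clause for incremental change tracking.

    LastModifiedDate misses records changed by triggers or other
    automated processes; SystemModstamp is updated in those cases too.
    """
    field = "SystemModstamp" if use_system_modstamp else "LastModifiedDate"
    return f"{field} > {last_run_iso}"

print(change_filter("2021-11-01T00:00:00Z"))
# LastModifiedDate > 2021-11-01T00:00:00Z
print(change_filter("2021-11-01T00:00:00Z", use_system_modstamp=True))
# SystemModstamp > 2021-11-01T00:00:00Z
```

With the checkbox enabled, every run effectively swaps the tracking field in this way, so trigger-driven updates are picked up on the next incremental load.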
}, { "url": "https://docs.skyvia.com/recent-releases/2021/september-2021.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2021 September 2021 Meet the new Skyvia update with a number of major improvements. We are glad to present the release of our Data Flow solution, Salesforce Reports support in Data Flow, Export, and Import, and the ability to map nested object fields for QuickBooks in Import. We have also released three new connectors. Data Flow Release This month, our Data Flow solution is out of Beta status. With its release, we have also added the possibility to use Salesforce reports as a source in your data flows. You don\u2019t need to build a query or create an SQL SELECT statement if you have a ready Salesforce report that already returns all the necessary data. After entering the release status, rows processed by data flows will be counted toward the data integration subscription limit. Since the number of success rows is defined by the user in a data flow, this is not the number that will be considered for the subscription limit. Instead, a new billed rows number will be available in the Result Details of each data flow run in addition to success and failed rows. The number of billed rows is counted in the following way: we sum the number of success rows for all the Target components that load data NOT to cache or log connections. This change, however, won\u2019t take effect immediately. You can use Data Flow for free for another week, and we will start billing the records on October 8th, 8:00 UTC. Salesforce Reports Support One of the features our users asked for is support for Salesforce Reports. While Skyvia already provided query-building tools that return the same data as reports, allowing users to achieve the same results, using a ready Salesforce report is more convenient, as you don\u2019t need to configure anything. Now you can easily select a Salesforce report to run.
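The billed-rows rule from the Data Flow release notes above (the sum of success rows over Target components that load to anything other than cache or log connections) can be sketched as follows; the component structure and field names are hypothetical, not Skyvia's data model:

```python
def billed_rows(targets):
    """Sum success rows over targets whose connection is neither cache nor log."""
    return sum(
        t["success_rows"]
        for t in targets
        if t["connection_type"] not in ("cache", "log")
    )

# A hypothetical data flow run with four Target components.
run_targets = [
    {"connection_type": "salesforce", "success_rows": 120},
    {"connection_type": "cache", "success_rows": 120},  # not billed
    {"connection_type": "log", "success_rows": 3},      # not billed
    {"connection_type": "mysql", "success_rows": 80},
]
print(billed_rows(run_targets))  # 200
```

Note how intermediate cache and log writes are excluded, so only rows actually delivered to real destinations count toward the subscription limit.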
This feature is available as a Source action in Data Flow, as well as in Import and Export packages. Note that in order to query data from a Salesforce report in Import and Export, the package must use the new data integration runtime, and you must use advanced mode in the package tasks. Nested Objects in QuickBooks We now support nested object collections in mapping for QuickBooks. It makes importing invoices to/from QuickBooks easy, and there is no need to fiddle with JSON and other complex things. Now you can simply map fields of the nested objects in the same way as the usual fields. Configuring Nested Object Mapping Nested object mapping is only available for import packages on the new data integration runtime. You need to select the Use new runtime checkbox. Then, in package settings, select the Nested Objects checkbox. After this, you can use nested objects in your mapping. Nested object fields can be mapped in both directions. When nested objects are in a target object, you can map them to child source objects, queried via lookup. When they are in the source, you can map them to target child objects when importing data into multiple related objects at once. Current Implementation and Future Plans Currently, this feature is implemented for QuickBooks invoices. QuickBooks is one of the most popular data sources with nested objects, and importing QuickBooks invoices is one of the most popular scenarios. However, in the near future we plan to make this feature available for other QuickBooks objects with nested object collections and other data sources. We also have plans to support nested objects in other data integration scenarios, in addition to import. Informative Tutorials To get started with the new feature faster, you can read our detailed tutorials.
We have prepared three tutorials for the most popular use cases \u2014 Salesforce \u2014 QuickBooks integration: How to Import Data from Salesforce Opportunities to QuickBooks Invoices How to Import Data from QuickBooks Invoices to Salesforce Opportunities How to Move Customers and Invoices Between QuickBooks Accounts If you need to integrate your QuickBooks invoices with other data sources, you are welcome to try Skyvia! New Connectors The release of new connectors has also been on our roadmap for September. We have managed to support the following connectors: Starshipit \u2014 a provider of integrated and automated shipping and tracking solutions for various online businesses. Klaviyo \u2014 an SMS and email marketing automation platform for ecommerce, which helps businesses grow. Google Sheets \u2014 a cloud-based solution, which allows users to edit, organize, and analyze different types of information online, and collaborate with each other." }, { "url": "https://docs.skyvia.com/recent-releases/2022/april-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 April 2022 New Connectors In April, we are glad to announce the release of 8 new connectors. They have been added to the following categories: Email Marketing Mailgun \u2014 an email delivery service. Mailgun\u2019s email API for application-driven emails gives you an easy way to automate your sending based on user actions in your app or connect with any platform. Forms and Surveys Typeform \u2014 a cloud-based solution for online form building and online surveys. The solution is designed for companies of all sizes.
Typeform features survey design, where users can design customized surveys simply via the drag-and-drop interface. Formstack \u2014 a workplace productivity platform providing no-code online forms. Formstack Forms simplifies the data collection process and allows you to build convenient online forms quickly. Documents Formstack Documents \u2014 document generation software to merge data into custom-built documents. E-Commerce WooCommerce \u2014 the most popular and arguably best ecommerce plugin for WordPress. Being open source, it offers limitless customizations. Using WordPress with WooCommerce means you\u2019ll have the support of the robust open source community. Recharge \u2014 a leading subscription payment platform designed for merchants to set up and manage dynamic recurring billing across web and mobile. Payment Processing Chargify \u2014 subscription management software for B2B SaaS. The software is built for the evolving needs of fast-growth companies. Product Management Onfleet \u2014 a last-mile delivery management SaaS platform. It helps businesses manage local deliveries (seamlessly optimize routes, dispatch, track drivers, notify recipients, etc.)." }, { "url": "https://docs.skyvia.com/recent-releases/2022/august-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 August 2022 New Connectors [Survicate](https://docs.skyvia.com/connectors/cloud-sources/survicate_connections.html) \u2014 a customer feedback automation solution that supports email, in-product, web and mobile surveys. [Pinterest](https://docs.skyvia.com/connectors/cloud-sources/pinterest_connections.html) \u2014 an image and video sharing and social media service for finding ideas.
[UserVoice](https://docs.skyvia.com/connectors/cloud-sources/uservoice_connections.html) \u2014 a product feedback management solution that collects and organizes feedback from multiple sources to provide a clear, actionable view of user feedback for product teams. [CleverReach](https://docs.skyvia.com/connectors/cloud-sources/cleverreach_connections.html) \u2014 an email marketing tool that makes sending newsletters fast and easy. [Twilio](https://docs.skyvia.com/connectors/cloud-sources/twilio_connections.html) \u2014 a service that provides programmable communication tools for making and receiving phone calls, sending and receiving text messages, and performing other communication functions using its web service APIs. Connector Updates Zoho CRM Skyvia now supports [COQL Query for Zoho CRM](https://docs.skyvia.com/connectors/cloud-sources/zoho_connections.html) to speed up query execution. COQL is available for the following objects: Accounts, Contacts, Deals, Campaigns, Tasks, Cases, Events, Calls, Solutions, Products, Vendors, PriceBooks, Quotes, SalesOrders, PurchaseOrders, Invoices. For more details, check the [Zoho CRM connector documentation](https://docs.skyvia.com/connectors/cloud-sources/zoho_connections.html#zoho-crm-connections)." }, { "url": "https://docs.skyvia.com/recent-releases/2022/december-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 December 2022 New Features Nested Types in Data Flow We added nested types support, new Data Flow components, and extended component settings. Now you can easily transform, rebuild, and map your nested objects and arrays. Visit our docs for more details and use case scenarios. Usage Summary From now on, you can see statistics on the records used by Data Integration packages. With Usage Summary, you can find out how many records your workspaces and packages consume for a specific period.
Note that the usage data is available from the 6th of September 2022. HubSpot Pipelines API Skyvia now supports the HubSpot [pipelines API V3](https://developers.hubspot.com/docs/api/crm/pipelines). We added four new objects to our HubSpot connector: DealPipelines, DealPipelineStages, TicketPipelines, TicketPipelineStages. All these objects support INSERT, UPDATE and DELETE actions. New Connectors FullStory \u2014 a web-based digital intelligence system that enables users to track each customer activity and use this data afterwards to optimize the customer experience. MailerLite \u2014 a user-friendly email marketing and website building tool. TikTok Ads \u2014 a tool for TikTok business users designed for targeting, ad creation, insight reports, and ad management. NetSuite v2 (REST) \u2014 the new connector for NetSuite, a unified cloud business management solution, including ERP/financials, CRM, and e-commerce. This connector works via the NetSuite REST API (unlike the previous one, which uses the SOAP API). It provides support for NetSuite fields storing array data but does not support backup. ConvertKit \u2014 a hub for audience growth, marketing automation, and digital product sales. Jibble \u2014 a time tracking and attendance app that helps companies stay on top of their teams\u2019 activities." }, { "url": "https://docs.skyvia.com/recent-releases/2022/february-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 February 2022 New Connectors In February, Skyvia released 5 new connectors. Among them are Delighted, Okta, Help Scout, AfterShip and Jira Service Management. Delighted \u2014 a cloud-based customer experience management solution intended to help small to large-sized enterprises gather and analyze feedback from customers and target audiences by conducting surveys across a variety of platforms.
Okta \u2014 an Identity-as-a-Service (IDaaS) platform, which provides cloud software that helps enterprises manage and secure user authentication into applications, and helps developers build identity controls into websites, web services, apps, etc. Help Scout \u2014 a cloud-based SaaS (software as a service) HIPAA-compliant help desk solution that helps businesses and teams manage their customer relationships with flexibility and affordability. AfterShip \u2014 an automated cloud-based shipment tracking solution that helps eCommerce businesses track their online shipments and keep their customers updated on the status of their deliveries from their online shops. Jira Service Management \u2014 a service management solution for IT and service teams, developed by Atlassian. Built on Jira, it encompasses deeper service management practices across service request, incident, problem, asset, and configuration management." }, { "url": "https://docs.skyvia.com/recent-releases/2022/january-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 January 2022 New Connectors In January, the release of Freshservice and QuickBooks Time took place. Find out more about the new cloud apps below. Freshservice \u2014 a cloud-based IT Service Management solution. Freshservice helps IT organizations streamline their service delivery processes with a strong focus on user experience and employee happiness. QuickBooks Time \u2014 a web-based and mobile time tracking and employee scheduling app. It helps to simplify your business and focus on results. Control Flow Released, Execute Integration Component Added Moreover, this month our Control Flow solution is out of Beta status. With its release, we have also added the new Execute Integration component that allows running other data integration packages.
Now you are able to orchestrate not only data flows specifically created within the control flow but also run all kinds of data integration packages, including other control flows. Control Flow Billing Since Control Flow has entered release status, rows processed by the control flow are counted toward subscription limits. In the control flow run details, you can see the number of billed records in addition to the number of success and failed records. This number shows the rows loaded by the data flows that are created in the control flow via the Data Flow components. To be more precise, it is the sum of all success rows for all the Target components in these data flows that use any connections other than cache or log. This number does not include records processed by other integration packages that are executed via the Execute Integration components. You can find the number of records they processed in the logs of the respective packages. They are counted toward subscription limits as well. Execute Integration Component The main feature of the control flow release is the new Execute Integration component. It allows running existing data integration packages \u2014 data flows, import, export, synchronization, and replication packages, and even other control flows. Now you can truly orchestrate all your integrations \u2014 run packages one after another or depending on specific conditions, etc. For example, you can run your replication to a data warehouse and then immediately run a data flow building a report on the updated data and send it to Google Sheets. And with the ability to run other control flows, you can create truly powerful and modular integrations, which you can easily design and test in parts. Note that it is currently not possible to run a control flow recursively from within itself or via intermediate control flows. Two-Factor Authentication This month, we have also implemented two-factor authentication for accounts.
If you want to keep your Skyvia account safe, adding two-factor authentication (2FA) is a step you should take. With 2FA, you add an extra layer of security to your account, which means you use something in addition to your username and password to log in: a single-use code generated by an authenticator app on your phone. Downloading an Authenticator App Before enabling two-factor authentication in your Skyvia account, download and install on your phone one of the authenticator apps mentioned below. Skyvia allows authentication through several dedicated apps \u2013 Google Authenticator, Microsoft Authenticator, and Twilio Authy. Enabling Two-Factor Authentication It doesn\u2019t take long to set up 2FA in your account. Simply go to your Profile, switch to the Two-Factor Authentication tab and click the corresponding button under the Disabled status. To successfully complete the two-factor authentication procedure, in the next windows, scan the QR code with the authenticator app, enter the 6-digit verification code in the Skyvia setup window, confirm the connection between your Skyvia account and the authenticator app, and download the recovery codes provided by Skyvia. That is it, you\u2019ve done it. Recovery Codes Skyvia generates 12 unique recovery codes during your 2FA setup, which are specifically tied to your Skyvia account. You can use them to verify your identity when you have no access to your two-factor authentication method, for example, if you have changed your app or lost your mobile device. Every recovery code can be used only once. You can always generate a new set of recovery codes if you have used many of the existing ones and their total number has significantly decreased. Please remember that recovery codes protect you from being locked out of your Skyvia account when something happens to your 2FA device.
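The single-use codes mentioned above are standard TOTP values (RFC 6238): the authenticator app derives a short decimal code from a shared secret and the current time, which is why any of the listed apps works interchangeably. A minimal sketch of the algorithm, not Skyvia's implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter,
    dynamically truncated to a short decimal code."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8-digit code.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code depends only on the shared secret (delivered via the QR scan) and the clock, the server can recompute and verify it without any further communication with the phone.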
Trusted Devices You can add your secured work or personal laptop as a trusted device, which means you temporarily disable two-factor authentication for it in Skyvia. How can you do it? Everything is simple. The next time you sign in to Skyvia, select the corresponding checkbox in the Two-Factor Authentication window. This allows the device to log in without two-factor authentication for 30 days. We hope you\u2019ll enjoy the improvements we\u2019ve made to Skyvia. You can read more on how to set up 2FA here." }, { "url": "https://docs.skyvia.com/recent-releases/2022/july-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 July 2022 New Features Clone OData and SQL Endpoints We added a Clone button to the endpoints page. Use it to clone your endpoint in two clicks. New Connectors Zuora \u2014 the leading monetization platform for businesses to launch and manage subscription-based services. Confluence Cloud \u2014 the cloud corporate knowledge management software (wiki) by Atlassian, built for collaboration from any location. Agile CRM \u2014 a cloud-based SaaS CRM system that automates sales, marketing, and customer service while avoiding data leaks and enabling consistent messaging. Nimble \u2014 a social sales and marketing CRM that enables you to store and organize contacts, create tasks for follow-up reminders, send trackable templates to targeted lists, track to-do\u2019s, manage multiple pipelines at once, and more. AWeber \u2014 a cloud sales email automation software for small teams with templates for any occasion, a drag-n-drop email designer, dynamic content and more. PersistIQ \u2014 an easy-to-use and affordable cloud software for sales email automation designed for small teams. Connector Updates HubSpot Skyvia now supports custom objects for HubSpot. Select the corresponding checkbox in your HubSpot connection and refresh the metadata cache in order to work with them.
BigCommerce The BigCommerce connector now supports the newly added Wishlists, WishlistItems, PriceListAssignments, and OrderMetaFields objects. This allows users to work with BigCommerce wishlists, price lists, and metafield attributes for orders. Oracle We extended the list of advanced connection settings for the Oracle connector with an SSH option. You can use it to establish a secure connection between Skyvia and your Oracle server. Breaking Changes Google Drive Import Skyvia no longer allows users to import source files that were moved from Google Drive to Trash after the import creation. You will receive an error message instead." }, { "url": "https://docs.skyvia.com/recent-releases/2022/june-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 June 2022 New Connectors Skyvia announces the release of 6 new connectors this month. Among them are Amplitude, CallRail, Todoist, Reply, Calendly and Greenhouse: Amplitude \u2014 a product analytics platform that helps businesses track visitors by means of collaborative analytics. It helps teams understand users, drive conversions, and increase engagement, revenue and growth. CallRail \u2014 call tracking and marketing analytics software. It helps match inbound calls, texts, forms, and live chats to marketing campaigns to find out what\u2019s working and what\u2019s not. Todoist \u2014 a web service and software for task management for individuals and small businesses. It helps combine tasks, projects, comments, attachments, notifications, etc. Reply \u2014 a sales engagement platform that helps you automate and scale multichannel outreach, so you can generate more leads, acquire new customers and grow revenue faster. Calendly \u2014 powerful scheduling software to organize meetings and appointments between individuals and organizations. Calendly eliminates email back and forth, helps save time, provides better service and increases sales.
Greenhouse \u2014 an applicant tracking and recruitment management solution that helps companies hire and onboard the right talent." }, { "url": "https://docs.skyvia.com/recent-releases/2022/march-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 March 2022 New Connectors More new connectors have been released. This month, among them are Close, Intercom, Zendesk Sell, and Shippo. Close \u2014 an all-in-one high-performance CRM for growing sales teams and turning more leads into revenue. Intercom \u2014 a complete customer communications platform with bots, apps, product tours, etc. that enables targeted communication with customers on the website, inside web and mobile apps, etc. Zendesk Sell \u2014 a customer relationship management (CRM) system that enhances productivity, processes, and pipeline visibility for sales teams. It helps sales teams streamline all their operations, track their contacts, increase sales, etc. Shippo \u2014 a multi-carrier shipping software provider for e-commerce businesses that offers discounted shipping rates and the ability to track packages, schedule pickups and print shipping labels. The above connectors can be easily integrated with other data sources and queried. Their data can be published via OData or SQL endpoints." }, { "url": "https://docs.skyvia.com/recent-releases/2022/may-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 May 2022 New Connectors In May, we supported 7 new connectors in Skyvia \u2014 Wrike, Iterable, Recurly, Everhour, Float, Outbrain and GitHub. Find out more about them below: Wrike \u2014 a digital work management tool used for tracking and coordinating projects, streamlining the internal project management and collaboration processes between team members, whether they are in the same office or separated by an ocean.
Iterable \u2014 a cross-channel marketing platform that powers unified customer experiences and empowers you to create, optimize and measure every interaction across the entire customer journey. Recurly \u2014 a subscription management and recurring billing platform. It accelerates subscriber growth and scales your business. Everhour \u2014 powerful time tracking software with hassle-free integrations. It is an easy and accurate time tracker for budgeting, client invoicing and painless payroll. Float \u2014 a resource management, planning and scheduling software platform to plan a team\u2019s work. Outbrain \u2014 one of the leading native advertising platforms that helps emerging brands connect with consumers on the open web through engaging ad formats that inspire action. GitHub \u2014 a hosting service for software development and version control using Git." }, { "url": "https://docs.skyvia.com/recent-releases/2022/november-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 November 2022 New Connectors Smartsheet \u2014 a cloud service for work management and collaboration. Hive \u2014 a cloud service for work management and collaboration. Zoho Projects \u2014 a cloud-based solution for project planning, project tracking, team collaboration, and achieving project goals. Jotform \u2014 a cloud service for managing online forms. Discourse \u2014 an open-source cloud forum service." }, { "url": "https://docs.skyvia.com/recent-releases/2022/october-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 October 2022 New Features Manage Object Folders From now on, you can easily manage Skyvia object locations from the object details page. Additionally, you can select the folder for a Query during its creation. New Connectors ODBC \u2014 this connector allows connecting to various data sources via ODBC drivers.
It allows only Agent connections, and the Agent app must be installed on the same computer as the drivers. Azure Application Insights \u2014 a cross-platform service for software diagnostics data collection, analysis and visualization. Xero \u2014 an accounting app with automatic bank feeds, invoicing, accounts payable, expense claims, fixed asset depreciation, purchase orders, bank reconciliations, and other features. ChargeOver \u2014 a payment process management tool for billing automation. Chargebee \u2014 a subscription billing and revenue management platform. ProdPad \u2014 a product management tool helping its customers to cope with product roadmaps, priority charts, customer feedback, and workflows. GetResponse \u2014 an email marketing tool for managing contacts, planning marketing campaigns, analyzing results, and planning new marketing strategies." }, { "url": "https://docs.skyvia.com/recent-releases/2022/september-2022.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2022 September 2022 New Connectors [Customer.io](https://docs.skyvia.com/connectors/cloud-sources/customerio_connections.html) \u2014 an automated cloud messaging platform for marketing automation that allows crafting and sending data-driven emails, push notifications, and SMS messages. [Trello](https://docs.skyvia.com/connectors/cloud-sources/trello_connections.html) - a project, workflow and task management tool for team collaboration. [ClickSend](https://docs.skyvia.com/connectors/cloud-sources/clicksend_connections.html) - business communications software to send and receive SMS, email, voice messages and even post via web app or API. [Yotpo](https://docs.skyvia.com/connectors/cloud-sources/yotpo_connections.html) - a cloud-based content marketing platform for e-commerce businesses that enables users to collect user-generated content and accelerate direct-to-consumer growth.
[Azure DevOps](https://docs.skyvia.com/connectors/cloud-sources/azure_devops_connections.html) - an end-to-end development tool for version control, reporting, requirements management, project management, automated builds, testing and release management. [Affinity](https://docs.skyvia.com/connectors/cloud-sources/affinity_connections.html) - an online customer relationship management tool focused on automated contact and profile creation, easy deal handling and pipeline management." }, { "url": "https://docs.skyvia.com/recent-releases/2023/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2023 grouped by months." }, { "url": "https://docs.skyvia.com/recent-releases/2023/april-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 April 2023 Automation Early Beta Automation early beta is live. From now on, you can start automating your tasks with Skyvia. In the current state, the functionality is limited, and some features, such as Connection and Webhook triggers, are unavailable. Automation is a work-in-progress project, and we\u2019ll be introducing more and more features with each upcoming release. The user interface is also subject to change as we gather feedback and make improvements. To learn more about automation possibilities in Skyvia, visit the Automation section. And now, don\u2019t hesitate: go automate something. New Connectors ChartMogul \u2014 a real-time analytics and reporting tool for businesses with subscription billing. Airtable \u2014 a powerful cloud database-spreadsheet hybrid platform for collaboration and data management. Connector Updates Google Ads API v13 is supported in the Google Ads connector. Zoho CRM API v4 is supported in the Zoho CRM connector. Note that there are certain performance and API call consumption considerations regarding API v4.
An API call is performed for each record when querying values from the Photo field and when querying attachment content. See Performance Optimization for more details. Custom fields are supported for Pipeline CRM." }, { "url": "https://docs.skyvia.com/recent-releases/2023/august-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 August 2023 New Connectors FormCrafts \u2014 an online tool for building forms. Zammad \u2014 a web-based open-source helpdesk/customer support system. AlloyDB \u2014 a PostgreSQL-compatible high-performance database service from Google Cloud. Segment \u2014 a customer data platform (CDP). Productive.io \u2014 a comprehensive agency management platform. Connector Updates Zoho CRM Skyvia now offers full support for Zoho CRM attachments and photos. We have supported [Importing Binary Data](https://docs.skyvia.com/data-integration/import/how-to-guides/importing-binary-data.html) for the Photo field of the Contacts and Leads objects and the Content field of the attachment objects: LeadAttachments, AccountAttachments, CampaignAttachments, CaseAttachments, ContactAttachments, DealAttachments, MeetingAttachments, InvoiceAttachments, PriceBookAttachments, ProductAttachments, PurchaseOrderAttachments, QuoteAttachments, SalesOrderAttachments, SolutionAttachments, TaskAttachments, VendorAttachments. Note that loading data to these fields is supported only for Zoho CRM API v4. To access the fields, you need to make sure your Zoho CRM connection uses API v4 and then clear its metadata cache. HelpScout You can now work with the Customers object custom fields (properties). You can insert data into these fields but not update them. The UPDATE operation support is on our roadmap. You can find more details in the Help Scout documentation. Sendinblue We have significantly optimized work with the Contacts table. Now Skyvia loads data from and to this table much faster.
GitHub We have added the RepositoryContents object to our GitHub connector. You can now insert, update, and delete files using stored procedures. More details are available in the GitHub connector documentation. Wrike New Objects We have added the FoldersAndSpaces and Projects objects to our Wrike connector. Skyvia supports Incremental Replication for them. Comments We added a new Type field to the Comments object. We removed the Start and End fields earlier used for filtering.\nFrom now on, you get comments for the last seven days by default when querying. To get comments for other periods, you can set a filter by the PeriodDate field in the following format: {\"start\":\"2023-06-28T14:50:31Z\",\"end\":\"2023-06-30T14:50:31Z\"} . Smartsheet We have added a new Rows object to our Smartsheet connector. You can select data from this object and perform INSERT and DELETE operations. \nFor more details, please refer to the Smartsheet documentation. Klaviyo We have added the SubscriptionsEmail, SubscriptionsEmail_Consent, SubscriptionsEmail_Timestamp, SubscriptionsEmail_Method, SubscriptionsEmail_MethodDetail, SubscriptionsEmail_DoubleOptin, SubscriptionsEmail_Suppressions, SubscriptionsEmail_ListSuppressions, SubscriptionsSMS, SubscriptionsSMS_Consent, SubscriptionsSMS_Timestamp, SubscriptionsSMS_Method, SubscriptionsSMS_MethodDetail fields to the Profiles object. New procedures are now available to suppress and unsuppress profiles. For more details, go to the Klaviyo documentation." }, { "url": "https://docs.skyvia.com/recent-releases/2023/december-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 December 2023 New Connectors Unbounce Unbounce \u2014 an AI-powered marketing platform that enables businesses to quickly build personalized landing pages, popups, and sticky bars, enhancing visitor experiences and driving increased sales and signups.
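The Wrike PeriodDate filter format described above lends itself to being generated programmatically. A minimal Python sketch (the period_date_filter helper name is ours, not part of Skyvia or the Wrike API):

```python
import json
from datetime import datetime, timezone

def period_date_filter(start: datetime, end: datetime) -> str:
    """Build a PeriodDate filter value in the
    {"start":"...","end":"..."} format from the release notes."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return json.dumps(
        {
            "start": start.astimezone(timezone.utc).strftime(fmt),
            "end": end.astimezone(timezone.utc).strftime(fmt),
        },
        separators=(",", ":"),  # compact form, matching the documented example
    )

# Reproduces the example value from the release notes:
print(period_date_filter(
    datetime(2023, 6, 28, 14, 50, 31, tzinfo=timezone.utc),
    datetime(2023, 6, 30, 14, 50, 31, tzinfo=timezone.utc),
))  # {"start":"2023-06-28T14:50:31Z","end":"2023-06-30T14:50:31Z"}
```

Passing timezone-aware datetimes and converting to UTC keeps the `Z` suffix honest regardless of the caller's local time zone.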
Thinkific Thinkific is a platform where you can create, manage and sell education products like online courses and learning communities. Connector Updates Yotpo Skyvia now supports the [Yotpo Loyalty API](https://loyaltyapi.yotpo.com/reference/reference-getting-started#welcome-to-yotpos-loyalty--referrals-api-reference) . \nTo be able to work with Loyalty objects, enter the Api key and Guid connection parameters in the Connection Editor. \nMore details are available here. Aha! From now on, to connect to Aha!, you need the Subdomain and API Key. The details on how to obtain the credentials are available in the Aha! documentation. Brevo (formerly Sendinblue) Due to a Brevo API update, we have updated our Brevo connector. We added new stored procedures for sending and updating email and SMS campaigns. We have also added new Tasks and Deals objects to our connector. See the detailed procedure and object descriptions in the [Brevo article](https://docs.skyvia.com/connectors/cloud-sources/sendinblue_connections.html#connector-specifics) . EmailOctopus Our EmailOctopus connector now supports custom fields for the Contacts object. \nWe have also added a new ListTags object to our connector. See more details in the EmailOctopus topic." }, { "url": "https://docs.skyvia.com/recent-releases/2023/february-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 February 2023 New Connectors Pipeliner CRM \u2014 an interactive CRM platform for sales and marketing management. Elasticsearch \u2014 a distributed search and analytics engine used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases. Connector Updates Klaviyo Updates Skyvia now supports a new API version for the Klaviyo connector. This allows Skyvia to work with more Klaviyo data: tags, tag groups, catalog items, segments, etc.
We also improved support for some other objects; for example, we supported incremental replication and synchronization for lists. Besides, the metrics objects changed with the switch to the new API. The MetricDataDaily, MetricsDataMonthly, MetricsDataWeekly, MetricValuesDataDaily, MetricValuesDataMonthly, MetricValuesDataWeekly objects have been removed. Instead, the following objects have been added: DailyMetrics, Monthlymetrics, WeeklyMetrics. These objects present the information in a more convenient form for data analysis." }, { "url": "https://docs.skyvia.com/recent-releases/2023/january-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 January 2023 New Features Flexible Table Naming in Replication Skyvia now allows setting up naming for target database tables in replication packages. You can configure naming settings for all the tables in a package, specify custom names for separate tables, and specify the schema to load data to for SQL Server. This can be useful in a number of cases: Replicating multiple instances of the same cloud app to one database but to different tables (or SQL Server schemas). Replication to already existing tables with custom names. Replication in cases when some other software expects specific table names not coinciding with cloud object names. New Connectors QuickBooks Desktop \u2014 accounting software for managing and controlling finance-related processes for inventory-based businesses that need to scale as they grow. Paymo \u2014 a comprehensive platform for project management, time tracking, and billing." }, { "url": "https://docs.skyvia.com/recent-releases/2023/july-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 July 2023 New Features Automation We reworked the Automation UI to free up space for the automation flow diagram and ease access to automation tools and features.
Update Schema in Replication We have added metadata change detection and target schema update functionality to replication. Now you can configure your replication integrations to track metadata changes in the source and sync them to the target database schema when using incremental updates. Note that not all source metadata changes are synced to the target schema. Some changes cannot be applied because of target database limitations, and some can be ignored without any issues with replication. This feature greatly reduces the need to perform full replication each time metadata is changed in a cloud app. Now you need to perform a full replication again only in case of a limited set of metadata changes that cannot be synced to target databases. Connector Updates DEAR Inventory Skyvia now supports the CRM group objects Leads, Opportunities, Tasks, TaskCategories, Workflows for the [DEAR Inventory](https://docs.skyvia.com/connectors/cloud-sources/dearinventory_connections.html#establishing-connection) connector.\nYou can perform the SELECT, INSERT, and UPDATE operations with these objects. These objects do not support Incremental Replication and Synchronization. From now on, you can manipulate the nested data of the Contacts and Addresses fields in the Leads object and the Steps field in the Workflows object. For this, enable the Nested Objects feature in Import integration. Klaviyo We added two more stored procedures to our Klaviyo connector: CreateEvent and CreateProfileAndEvent.\nYou can find more details on how to use stored procedures in the [Klaviyo](https://docs.skyvia.com/connectors/cloud-sources/klaviyo_connections.html#establishing-connection) documentation. BigCommerce We supported the BigCommerce [Custom Template Associations API](https://developer.bigcommerce.com/docs/rest-content/custom-template-associations) .
Template associations are now available via the general CustomTemplateAssociations object as well as via specific objects: ProductTemplateAssociations, CategoryTemplateAssociations, BrandTemplateAssociations, PageTemplateAssociations. HubSpot We supported HubSpot Engagements API 3. Now the Engagements records are also available via the Calls, Communications, Emails, Meetings, Notes, PostalMail, and Tasks objects. Note that the old Tasks object, which was available previously, is renamed to EventTasks. We have also added the CallProperties, CommunicationProperties, EmailProperties, MeetingProperties, NoteProperties, PostalMailProperties, and TaskProperties objects that store information about the corresponding object columns. Besides, the Quotes object now supports loading data into it." }, { "url": "https://docs.skyvia.com/recent-releases/2023/june-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 June 2023 New Features Source Values in the Import\u2019s Error Log Now you can select what data will appear in the Import\u2019s error log. Choose between initial data from the source and processed values that you load to the target. For more details, visit the [Import](https://docs.skyvia.com/data-integration/import/configuring-import.html) topic. New Connectors Monday Monday.com \u2014 a collaboration platform that helps businesses manage tasks, projects, and workflows to streamline communication, enhance productivity, and track progress. Connector Updates Salesforce We added Bulk API v2 support to improve the user experience while performing integration monitoring inside the Salesforce app. Now you can choose which version of Bulk API to use in the advanced settings of the Salesforce connection. Pipeliner CRM Skyvia now supports custom fields in the following Pipeliner CRM objects: Account, Contacts, Leads, Opportunities, Tasks, Appointments, Products, Projects.
Square Now you can select all data from the Square Orders object, including nested objects: LineItems, Fulfillments, Discounts, Taxes, ServiceCharges. Customer.io Skyvia now supports loading data into the Customer.io Customers object. Note that Skyvia uses the Customer.io Tracking API for working with Customer.io, and thus, it now requires the Tracking Site ID and Tracking API Key for connecting." }, { "url": "https://docs.skyvia.com/recent-releases/2023/march-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 March 2023 New Connectors Pipeline CRM \u2014 an easy-to-use cloud CRM platform for small and mid-size businesses. Outreach \u2014 a sales execution platform designed to increase sales performance. Connector Updates ConvertKit Updates We added new fields to the Sequences object in the ConvertKit connector: Hold, Repeat, SendTime, SendTimeZone, RecipientRules_LandingPages, RecipientRules_Courses, RecipientRules_Tags, RecipientRules_Lists, Mon, Tue, Wed, Thr, Fri, Sat, Sun, SendTimeZoneAbbr, EmailTemplates. For user convenience, the EmailTemplates field of the Sequences object is represented as a separate object, SequencesEmailTemplates." }, { "url": "https://docs.skyvia.com/recent-releases/2023/may-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 May 2023 Query Parameters From now on, Skyvia allows configuring parameters in Query . A query parameter is a placeholder for varying values that you can use instead of constant values in query filters. You can change query parameter values without modifying the query itself. You can use parameters in the Query Builder , in filters, or in SQL code. Query parameters are also supported for the Execute Query action in [Automation](https://docs.skyvia.com/automation/) , Import , Export integrations, and Source components of Data Flow and Control Flow integrations.
You can add parameters to your queries and preview a query before running the integration. Copy and Paste Components in Control Flow and Automation Now you can copy and paste components using hotkeys and the component management menu in [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and [Automation](https://docs.skyvia.com/automation/) . New Connectors TMetric TMetric \u2014 a powerful time management tool that helps companies increase productivity, monitor team performance, and manage projects and tasks. Connector Updates SendGrid The SendGrid Contacts object now supports INSERT, UPDATE and DELETE operations. Skyvia also now supports custom fields for the Contacts object. Zoho CRM Subforms are supported for Zoho CRM . You can work with them either as fields of the corresponding parent objects or as separate objects. The Nested Objects feature in import and data flow is also supported for them. Note that you need to select API version Ver4 in the Zoho CRM connection settings to work with subforms. HubSpot You can now work with HubSpot custom object properties via the corresponding *Properties objects. Zoho Books We added nested objects support for the Zoho Books connector. Now you can map complex structured fields more easily. Visit the [ZohoBooks connector](https://docs.skyvia.com/connectors/cloud-sources/zohobooks_connections.html) page for more details. Magento Magento store views are supported. You can now select the store view against which the API requests are executed in the connection settings. This parameter is available for Magento 2.x. Confluence Cloud Skyvia now supports the [API V2](https://developer.atlassian.com/cloud/confluence/rest/v2/intro/#about) version for the Confluence Cloud connector. More Confluence objects are now available. The Content object is replaced with BlogPosts, Pages and related objects."
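Conceptually, the query parameters described above are named placeholders whose values are supplied at run time, so the query text itself never changes between runs. A rough Python illustration of the idea (this is our sketch, not Skyvia's actual parameter engine; real engines bind values safely rather than splicing strings):

```python
def apply_params(sql: str, params: dict) -> str:
    """Substitute :name placeholders with literal values, showing how a
    parameterized filter can be re-run with different values while the
    query text stays constant."""
    for name, value in params.items():
        # Quote strings; leave numbers bare (naive, for illustration only).
        literal = f"'{value}'" if isinstance(value, str) else str(value)
        sql = sql.replace(f":{name}", literal)
    return sql

query = "SELECT Id, Name FROM Contacts WHERE CreatedDate >= :since"
# The same query text serves different runs with different parameter values:
print(apply_params(query, {"since": "2023-05-01"}))
print(apply_params(query, {"since": "2023-06-01"}))
```

The point is the separation: the filter condition is written once, and only the parameter values vary per execution.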
}, { "url": "https://docs.skyvia.com/recent-releases/2023/november-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 November 2023 New Connectors HelpDesk \u2014 a simple ticketing system for tracking, prioritizing and resolving customer support tickets, developed by LiveChat Software. Zoho Billing \u2014 an invoicing, expense management, project billing, and recurring billing tool from Zoho. ServiceNow \u2014 is an intelligent platform for digital transformation that enables companies to digitize any process across their organization with pre-built and customizable workflow solutions. Acumatica \u2014 is an enterprise resource planning (ERP) platform helping small and midsize businesses with financial management, customer relationship management (CRM), project accounting, and distribution management. Connector Updates AWeber We have added new objects to our AWeber connector, SubscriberActivities, SentBroadcastOpens , and SentBroadcastClicks .\nThese objects are read-only. BigCommerce New Objects We have added new objects to our BigCommerce connector: CategoryTrees, Channels, CustomerFormFieldValues, CustomerAddressFormFieldValues, CustomerAttributes, Carts, CartLineItems, CartCustomItems, CartGiftCertificateItems, CartMetafields, SystemLogs, TaxRates, TaxZones, TaxProperties . The CustomerCustomFields and CustomerAddressCustomFields objects are obsolete. They remain in the connector for compatibility purpose. Instead the latest update we added the CustomerFormFieldValues and CustomerAddressFormFieldValues . Existing Object Updates We have added the following new fields to the Customers object: Channels, FormFields, Attributes , and Addresses . Besides, the CustomFields field is removed from the Customers object. It is replaced with the FormFields field. 
Please note that this is a breaking change, and you may need to modify your existing BigCommerce integrations and backups and manually exclude the CustomFields field from them. For more details, please see our BigCommerce connector topic. Todoist The Todoist API has migrated from version v1 to v2. Due to this, we updated our Todoist connector. \nYou can find detailed information about the updates [here](https://developer.todoist.com/rest/v2/#migrating-from-v1) . Salesforce We have added the Ignore Blank Values for Update checkbox to the Salesforce advanced connection parameters. It determines whether the Update operation ignores blank (null) values and keeps data in the corresponding Salesforce fields, or writes null values to Salesforce. Google BigQuery We have supported authentication via service accounts for Google BigQuery. For more details, please see our Google BigQuery connector topic. FreshBooks Now you can work with the Projects and Services data in FreshBooks. We support both reading from and writing to these objects." }, { "url": "https://docs.skyvia.com/recent-releases/2023/october-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 October 2023 New Connectors Sendcloud \u2014 a shipping platform that helps e-commerce businesses manage deliveries and provides a seamless shipping workflow. Connector Updates Customer.io We have added new stored procedures to allow working with custom objects. From now on, you can create or remove custom object records, and add, modify or remove record associations with customers. For more details, see the Customer.io connector topic. Affinity We have added the Suppress Extended Requests connection parameter to the Affinity connection. Use this parameter to control whether additional web requests are used to query some special fields from the Persons and Organizations objects. \nFor more details, see the Affinity connector topic.
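The effect of the Salesforce Ignore Blank Values for Update checkbox mentioned above can be shown with a small Python sketch (the build_update helper and record shape are our invention, purely for illustration of the behavior the release note describes):

```python
def build_update(record: dict, ignore_blank: bool) -> dict:
    """When ignore_blank is True, drop None values so the existing data in
    the corresponding target fields is kept; when False, pass None through,
    which writes null and clears the target field."""
    if ignore_blank:
        return {k: v for k, v in record.items() if v is not None}
    return dict(record)

source = {"Phone": None, "Email": "jane@example.com"}
print(build_update(source, ignore_blank=True))   # {'Email': 'jane@example.com'}
print(build_update(source, ignore_blank=False))  # {'Phone': None, 'Email': 'jane@example.com'}
```

The choice matters when your source legitimately contains empty values: with the checkbox on, they are treated as "no change"; with it off, they overwrite target data.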
Shippo Selecting data from the TrackingStatus object now requires filtering by the Carrier and TrackingNumber fields. HubSpot We have added the Column-wise chunking checkbox to the HubSpot connection editor. This checkbox enables splitting requests to the ContactListContacts, Contacts, and Deals objects into multiple requests with different fields when the request URI length exceeds 8000 characters. This allows you to avoid the 414 Request-URI Too Large error. Greenhouse We have added the CreatedDate and UpdatedDate fields to the Applications object. From now on, Skyvia supports Incremental Replication and Synchronization for the Applications object. Productive.io Deals and Budgets The Deals object has been divided into two separate objects, Deals and Budgets, for correct custom field processing.\nThus, the foreign keys that earlier referred to the Deals object were removed from the Comments, Invoices, Expenses and Activities objects. Custom Fields We have added support for Productive.io custom fields. The following Productive.io objects contain custom fields: Bookings, Budgets, Companies, Deals, Expenses, Invoices, People, Projects, Tasks. Klaviyo Due to the latest Klaviyo API updates, we have updated our Klaviyo connector. Object Updates We added the following objects to the connector: Accounts, EmailCampaignTags, SMSCampaignTags, EmailCampaignMessages, SMSCampaignMessages, Coupons, CouponCodes, Images. The EmailCampaigns and SMSCampaigns objects replaced the Campaigns object. Procedure Updates The SuppressProfiles and UnsuppressProfiles procedures require a new format for the SuppressionEmail parameter.\nSee more details in our Klaviyo connector topic."
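The column-wise chunking idea above (splitting one wide request into several narrower ones so each URI stays under the limit) can be sketched generically. This is our illustration of the technique, not Skyvia's implementation; the URL and field names are made up:

```python
def chunk_fields(base_url, fields, max_len=8000):
    """Group fields so each request URI (base_url plus a comma-separated
    field list) stays within max_len characters."""
    chunks, current = [], []
    for field in fields:
        candidate = current + [field]
        # Start a new chunk once adding this field would exceed the budget.
        if current and len(base_url) + len(",".join(candidate)) > max_len:
            chunks.append(current)
            current = [field]
        else:
            current = candidate
    if current:
        chunks.append(current)
    return [base_url + ",".join(chunk) for chunk in chunks]

# A thousand hypothetical custom properties split across several requests:
urls = chunk_fields("https://api.example.com/contacts?fields=",
                    [f"custom_{i}" for i in range(1000)])
```

Each resulting request returns the same records with a different slice of columns; the client then merges the slices by record id.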
}, { "url": "https://docs.skyvia.com/recent-releases/2023/september-2023.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2023 September 2023 New Features Automation Versions We added Automation versioning to give you better control over past automation modifications. Now you can check all previous versions of the current automation, filter and comment on them, and restore your automation to any previous version. To learn more, visit the Managing Automation Versions documentation page. New Connectors Teamwork \u2014 a project management tool for resource and workload planning, progress and milestone monitoring, and collaboration. Teamwork CRM \u2014 a CRM tool that helps users effectively manage their sales, focused on ease of use and visibility. Teamwork Desk \u2014 a Teamwork-integrated help desk tool for customer communication. Zulip \u2014 a communication tool providing messaging services. Microsoft Ads \u2014 a tool for advertising on the Bing search network and its partner networks (Yahoo and AOL). Connector Updates HubSpot Since the HubSpot Calendar API was sunset on August 31, 2023, we have removed the Events and EventTask objects and all the relations involving them. Zendesk We have switched to [cursor-based pagination (CBP)](https://developer.zendesk.com/documentation/api-basics/pagination/paginating-through-lists-using-cursor-pagination/) for Zendesk objects that support it. This fixes any errors with the [new offset-based pagination limits](https://support.zendesk.com/hc/en-us/articles/5591904358938-New-limits-for-offset-based-pagination) ." }, { "url": "https://docs.skyvia.com/recent-releases/2024/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases Recent Releases This section contains the list of Skyvia releases for the year 2024 grouped by months."
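With cursor-based pagination, the client does not request page offsets; it follows an opaque cursor returned with each page until none is left, which sidesteps offset limits entirely. A generic, self-contained Python sketch (the stubbed fetch_page stands in for a real HTTP call and is not Zendesk's actual response shape):

```python
RECORDS = list(range(10))  # pretend server-side data

def fetch_page(cursor=None, page_size=4):
    """Stub for an HTTP call: returns one page of items plus the cursor
    for the next page, or None when the data is exhausted."""
    start = cursor or 0
    end = start + page_size
    return {
        "items": RECORDS[start:end],
        "next_cursor": end if end < len(RECORDS) else None,
    }

def fetch_all():
    """Follow cursors until the server signals there are no more pages."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page["next_cursor"]
        if cursor is None:
            return items

print(fetch_all())  # all ten records, gathered page by page
```

Because the cursor is opaque, the server can key it on an indexed column instead of counting rows, which is why it scales where deep offsets do not.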
}, { "url": "https://docs.skyvia.com/recent-releases/2024/april-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 April 2024 Trigger Conditions in Automation We added a Trigger condition setting to connection triggers in Automation. Now you can create an expression to filter and validate your events before the automation execution. Check the Triggers documentation for more details. Retrospective Updates for Incremental Replication We added an attribution window parameter to the Facebook Ads, Google Ads, LinkedIn Ads, X Ads, TikTok Ads and Microsoft Ads connectors. Now, when you run an incremental replication, it takes retrospectively updated reporting data into account. New Connectors Excel Online \u2014 a cloud version of a robust spreadsheet editor for various tasks, from simple data entry and basic calculations to complex financial modeling and statistical analysis. Motion \u2014 a cloud productivity platform using AI for planning and time management. Connector Updates GetResponse The Contacts object now supports the INSERT operation in integrations. Now you can insert records using procedures and integrations. We also support the Contacts custom fields. More details are available in the GetResponse topic. ShipStation You can now map the Items field in the Orders object using the Nested Objects option in import. You can also replicate them into separate tables with our new replication runtime. We simplified the Items field data structure for more convenient mapping. This update impacts existing integrations that use the Items field. We support native ShipStation filters in our connector. Use them to improve performance and save API calls. More details are available in the ShipStation topic. Shopify We updated our Shopify connector to API version [2024-01](https://shopify.dev/docs/api/release-notes/2024-01) .
Smartsheet You can now select data from the *_Rows objects' fields containing data of different types without errors. The Typed Sheet Cells connection parameter allows you to set Skyvia behavior when querying such fields. \nMore information is available in Smartsheet topic. Square The GiftCardActivities object is now available for use in integrations and automations. This object supports the INSERT operation. Find more details in Square topic. Trello The IdBoard field in the CardMembers object was deprecated. We also updated the BoardMembers object. Skyvia performs additional extended requests for part of its fields.\nFind more details in the Trello topic. Zoho Inventory We added new PurchaseReceives, PurchaseReceivesLineItems, ShipmentOrders, and ShipmentOrdersLineItems objects to our Zoho Inventory connector. You can add, modify and delete records in the PurchaseReceives and ShipmentOrders objects. The PurchaseReceivesLineItems and ShipmentOrdersLineItems objects are read-only. More details are available in Zoho Inventory topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/august-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 August 2024 Export Improvements We have added the Create Empty Files check box to Export integration. It determines whether to create result files if there are no records to export (the exported object has no records, no records match the filter conditions, etc.). Replication from MySQL Skyvia extends the list of databases supported in replication as a source with MySQL. It supports incremental replication, where you can replicate recent changes from MySQL tables if the tables have suitable timestamp or autoincrement columns. Alternatively, you can opt for full resync. You can learn more about replication from databases here . New Connectors Clockify \u2014 a time tracking service that lets you track team activity and work hours across projects. 
SmartSuite \u2014 a collaborative work management platform designed to help teams plan, track, and manage workflows, including projects, ongoing processes, and daily tasks at all organizational levels. Toggl Track \u2014 a time-tracking and reporting solution that helps individuals and teams manage projects. Connector Updates Amazon S3 We have added Working Directory parameter to the Advanced Settings in Amazon S3 connection editor . It allows you to specify the folder in the Amazon S3 bucket to use as a root folder in Skyvia. You may use this parameter, for example, if you only have access to a specific folder in the Amazon S3 bucket, and cannot access its root folder. Azure DevOps We supported the custom fields in our Azure DevOps connector. Also, we extended the WorkItemRevisions object with the Fields_System.AssignedTo_* objects. More information is available in Azure DevOps topic. Cin7 Core Inventory We updated the Sale object. It has a complex structure and now supports the Nested Objects feature in Import and the Unwind Nested Objects feature in Replication. More information about this and other objects is available in Cin7 Core Inventory topic. Close Our Close connector now supports the OAuth 2.0 authentication. You can choose the authentication type in the Connection Editor.\nMore information is available in the Close topic. Constant Contact We improved support for the custom fields for our Constant Contact connector and added new stored procedures. See more information in Constant Contact topic. Facebook Ads From now on you can integrate lead generation forms with other platforms and apps using Skyvia. We supported the Pages, LeadgenForms and FormLeads objects in our Facebook Ads connector. More information is available in Facebook Ads topic. Float The Status field is now available in the TimeOffs object in our Float connector. This field supports filtering using the = operator. \nSee more details in Float topic. 
FreshBooks We added new objects to our FreshBooks connector: BillVendor, Bill, BillLine, BillPayment . More information about the objects and the connector is available in the FreshBooks topic. Freshsales Suite We supported the custom fields in our Freshsales Suite connector. See more details in the Freshsales Suite topic. Front We have supported native sorting and filtering (via Front API) for more Front objects and fields. This makes the corresponding sorting and filtering operations faster, and they require fewer Front API calls. HubSpot We updated our connector due to the Owners API sunset in August 2024. Due to this update, the Owners object structure was changed. If you use the Owners object in your integrations, refresh the fields mapping to avoid errors. The Teams object is now available in our HubSpot connector. This object is read-only. Log in to HubSpot again to use this object. More information is available in the HubSpot topic. Insightly CRM We have improved support for custom fields and custom objects for Insightly CRM . Now custom fields can be accessed as usual object fields, not as a single field storing an array of JSON objects. Custom objects are now also available as separate objects. If you prefer the old behavior, you can clear the Use Custom Fields check box. Outreach The Audits object will be deprecated in Outreach API from September 2024. According to this update, we removed the Audits object and added the AuditLogs object instead. More information is available in the Outreach topic. Pipedrive The Deals object is now available in the API V2 version. \nWe also supported native sorting for Pipedrive objects. See more information in the Pipedrive topic. Productive.io We updated our Productive.io connector according to the latest changes in Productive.io API. These changes are internal and won\u2019t impact your Productive.io connections. 
QuickBooks Desktop The following objects are now available in our QuickBooks Desktop connector: AccountTaxLineInfo, ARRefundCreditCard, BillingRate, BillingRateItemLine, BuildAssembly, BuildAssemblyLineItem, Company, CompanyActivity, Currency, CustomerMsg, DateDrivenTerms, EmployeeNote, Form1099CategoryAccountMapping, Host, InventoryAdjustment, InventoryAdjustmentLine, RefundAppliedToTxnLine, ReceivePayment, ReceivePaymentLine, VendorBillToPay, VendorCreditToApply . More information about these and other objects is available in the QuickBooks Desktop topic. Shopify We supported importing binary attachments in our Shopify connector. You can now import products with images. More information about our Shopify connector is available in the Shopify topic. SurveyMonkey The SurveyResponsesAnswers object now supports the Incremental Replication. Skyvia can track new and updated records in this object. More information is available in SurveyMonkey topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/december-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 December 2024 Replication from Oracle Skyvia extends the list of databases supported in replication as a source with Oracle. It supports incremental replication, where you can replicate recent changes from Oracle tables if the tables have suitable timestamp or autoincrement columns. Alternatively, you can opt for full resync. You can learn more about replication from databases here . New Connectors Jira Software Cloud \u2014 a project management tool to plan and track work across every team. Sage Accounting \u2014 a cloud-based accounting solution for small and medium-sized businesses, with features such as invoicing, expense tracking, cash flow management, and financial reporting. Zoho WorkDrive \u2014 a secure online file storage and collaboration platform for modern teams, small businesses, and large enterprises. 
Connector Updates Hive We have updated the Hive connector to remove the limitation on the number of returned records from the Actions table. The ActionComments table now displays a maximum of 200 comments for each Action . Learn more about the Hive connector . Marketo From now on, the Use Bulk Extract parameter affects not only the Leads object. We supported Marketo Bulk Extract for the Activities object and the custom objects. See the details in the Marketo topic. NetSuite (SOAP) We have updated the WSDL version for the NetSuite SOAP connector to 2022.2. Learn more about the NetSuite SOAP connector . Snowflake Now you can connect to Snowflake using the OAuth 2.0 authentication. More details are available in Snowflake topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/february-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 February 2024 User Invitations We reworked the user invitation process. Now you can invite users to your account and provide access to specific workspaces at the same time. Also, we added separate tabs for the current and invited users overview, a Resend invitation button, and a Copy invitation link option for quick sharing. Hashing Functions Skyvia\u2019s expression syntax now includes sha256_encrypt and sha512_encrypt functions. They hash binary or string values using [SHA-256 and SHA-512 algorithms](https://en.wikipedia.org/wiki/SHA-2) respectively, adding a string or binary salt to the value as a suffix. This allows you to anonymize sensitive data fields when loading data to other systems. See the example here . Usage Summary Now on the Usage Summary page you can also track the billed traffic statistics for the Connect product. New Connectors LionOBytes LionOBytes is a cloud platform providing AI-based customer relationship management (CRM), field service management (FSM), and enterprise resource planning (ERP) solutions for small businesses. 
DigitalOcean DigitalOcean is an infrastructure as a service (IaaS) provider that simplifies cloud computing for developers. Connector Updates Acumatica Skyvia now supports Agent connection for Acumatica. You can now connect to Acumatica servers located in local networks or on users\u2019 computers behind the firewall with Skyvia Agent application .\nSee how to create an agent connection in Acumatica topic. ChargeOver Skyvia now supports the Incremental Replication for Transactions object. We supported native ChargeOver filters in our connector. You can now map nested fields of complex structured objects using the Nested Objects option in Import integrations. You can also replicate them into separate tables with our new replication runtime. See more details in ChargeOver topic. Cin7 Core Inventory We added two new objects to our Cin7 Core Inventory connector: SaleOrderLines and PurchaseOrderLines . These objects are read-only. ClickUp You can now use filter by the Assignee field in TimeEntries object. Find more details in the ClickUp topic. Float We added the PublicHolidays object to our Float connector. Find more details on this object specifics in the Float connector topic . Freshservice Now you are able to create child tickets for existing tickets using a new CreateChildTicket stored procedure. \nSee more details in Freshservice topic. Google Ads Skyvia now supports the latest Google Ads API version. We added new AssetGroupProductGroupView, CustomerLifecycleGoals, CampaignLifecycleGoals objects to our Google Ads connector. These objects are read-only. Skyvia supports Incremental Replications for the AssetGroupProductGroupView object. Harvest Skyvia can now parse the complex structured JSON fields. Using the Nested Object feature, you can map them in your import integrations. We also added new stored procedures to our Harvest connector. Now, you can change event types for invoice and estimate messages, as well as restart and stop time entries using Skyvia. 
See the Harvest connector topic for more details. HubSpot Skyvia now supports querying property history in HubSpot. History data is available in the DealsHistory, ContactsHistory, CompaniesHistory, TicketsHistory , and LineItemsHistory objects. Intercom We supported the latest Intercom API version. Skyvia now supports custom fields for Intercom. We added new Tickets, TicketTypes, ActivityLogs, ContactSubscriptions, HelpCenters objects, updated the Conversations object and removed the Sections object. More Intercom objects now support DML operations. We also added stored procedures which help working with messages, conversations and ticket types. See more details in Intercom topic. Klaviyo You can now connect to Klaviyo via OAuth 2.0 using your Klaviyo credentials. Just select the OAuth 2.0 authentication type in the Connection Editor. More details are available in Klaviyo topic. ShipStation You can now perform Incremental Replication for the following ShipStation objects: Customers, Fulfillments, OrderItems, Orders, Products, Shipments, Stores, Warehouses . And synchronize the Orders and Warehouses . More information is available in ShipStation topic. X Ads We supported the latest X Ads API version." }, { "url": "https://docs.skyvia.com/recent-releases/2024/january-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 January 2024 Replication We\u2019re excited to introduce enhancements to Replication, featuring a new replication runtime and a range of new features to elevate your data replication experience. New Replication Runtime Enjoy a boost in performance and improved usability by selecting the new replication runtime checkbox in the replication settings. New replication runtime comes in with the object-specific LastSyncTime , improved error handling, and new replication modes. Object-Specific LastSyncTime Gain better control over your data replication with the object-specific LastSyncTime . 
Unlike the previous replication-wide setting, you can now set LastSyncTime for each object independently. This approach provides several benefits: Prevents the failure of the whole replication in case of errors with one or more objects. Allows you to add new objects for replication and create tables for them in Target without recreating tables for other objects marked for replication. Streamlines the error handling by providing errors for each object that failed to replicate and separating them from the integration-level errors. The new replication modes, combined with the object-specific LastSyncTime setting, allow for better replication customization on the object level. For example, when you use the incremental replication, you no longer need to create a separate replication for the objects that do not support incremental updates. Replication Modes Now you can use the replication modes to define the behaviour of the next replication for each object: Standard mode is assigned to each object automatically and applies the default replication behaviour. Resync mode forces the full replication of the object by dropping and creating a table in Target during each replication run. Resync on demand mode is created for objects that are part of your replication but are not required for each replication run. In the manual mode, you can include the object in the next replication run by resetting its LastSyncTime . Improved Performance, Removed Direct ID Check We reworked the mechanism of applying changes to data in Target to boost the replication performance. The new approach does not include the ID check, so we removed this option. No More Foreign Keys As the objects in the new replication runtime do not depend on each other, we no longer create foreign keys for the replicated objects. 
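The object-specific LastSyncTime and per-object error handling described above can be sketched roughly as follows. This is an illustrative model only, not Skyvia's implementation; `fetch_changes` and `apply_to_target` are hypothetical callbacks standing in for the source query and target write steps:

```python
from datetime import datetime, timezone

# Hypothetical per-object replication state: each object keeps its own
# LastSyncTime, so a failure in one object does not abort the whole run.
last_sync: dict[str, datetime] = {}  # object name -> LastSyncTime

def replicate_object(name, fetch_changes, apply_to_target):
    """Replicate one object; its LastSyncTime advances only on success."""
    since = fetch_start = last_sync.get(name)  # None -> full initial load
    rows = fetch_changes(name, since)
    apply_to_target(name, rows)
    last_sync[name] = datetime.now(timezone.utc)

def run_replication(objects, fetch_changes, apply_to_target):
    """Run all objects; errors are collected per object, not raised globally."""
    errors = {}
    for name in objects:
        try:
            replicate_object(name, fetch_changes, apply_to_target)
        except Exception as exc:
            errors[name] = exc  # other objects keep replicating
    return errors
```

The key design point mirrored here is error scoping: one failing object yields one entry in `errors` while every other object still replicates and advances its own LastSyncTime.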
New Replication Features We are happy to introduce new replication features that provide object history analysis capabilities, alternative ways to work with the nested objects, and the functionality to address compliance issues by hashing your data. History Mode Track the history of the entries in your database with the History mode. Enable it for the chosen objects in Source to store every change made to the objects\u2019 rows in Target using the type 2 slowly changing dimension concept. In this case Skyvia will replace the Update operation with Insert and will add three more columns (status, start date, end date) to the table in Target, so you can perform the data analysis for the specific date or period of time. Nested Objects Replication Now you can choose how to represent fields with complex structured data with the help of the Unwind Nested Objects option. Select JSON Columns to replicate nested object fields as columns with JSON data into the target table or select Separate Tables to replicate nested object fields into additional tables in the database. Once you\u2019ve selected the preferred Unwind Nested Objects option you can manage it for each object separately in the Task Editor . Data Hashing Address compliance requirements with data hashing. Select specific object fields to hash, ensuring the secure transfer of sensitive data. For example, hash the customers\u2019 emails data to provide an added layer of privacy without sacrificing analytical utility. To ensure that the data cannot be decoded with the knowledge of the default hashing algorithm we use a unique salt. Automation We are excited to announce that Automation is out of beta now. Thank you for your valuable feedback and support during the beta period. We continue to work on further development of the Automation product and will keep you informed regarding all the upcoming updates. As for now, we invite you to try it out and automate your workflows. 
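The data hashing described above (like the sha256_encrypt function from the February 2024 notes) appends a salt to the value before hashing. A minimal sketch of that idea; the function name is illustrative and the salt handling is an assumption based on the "salt as a suffix" description, not Skyvia's actual code:

```python
import hashlib

def sha256_salted(value: str, salt: str) -> str:
    """Hash a value with the salt appended as a suffix (illustrative only)."""
    return hashlib.sha256((value + salt).encode("utf-8")).hexdigest()

# A unique salt means the same email hashed in two accounts yields
# different digests, so the hash cannot be reversed via a rainbow table
# built against the bare algorithm.
```

For example, `sha256_salted("alice@example.com", salt)` stays stable within one pipeline (so the field remains joinable for analytics) while revealing nothing about the original address.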
New Connectors Tempo Tempo is a time management platform offering various valuable tools such as timesheets, planner, cost tracking tool, resource management, etc. Front Front is a cloud-based customer operations platform that enables support, sales, and account management teams to streamline communication and deliver service at scale. ClickUp ClickUp is a cloud solution offering project management and collaboration tools such as task and time tracking, team chat, project whiteboard, etc. Connector Updates Klaviyo Authorization The Public API Key parameter is deprecated. You need only the Private API Key to connect to Klaviyo. New Objects We have added BulkProfiles and ImportBulkProfiles objects to our connector. Updated Objects We have added the CreatedDate and UpdatedDate fields to the Lists object. From now on, Skyvia supports the Synchronization and Incremental Replication for this object. Deprecated Objects and Procedures The following objects are deleted from our Klaviyo connector: CampaignRecipients, Campaigns, EmailTemplates, ListExclusions, ListMembers, ListSubscribe, MetricTimeline, PeopleExclusions . The following procedures are deleted from the connector: Track, Identify, IdentifyProperties . Brevo New Connection Parameters We added Suppress Extended Requests and Use Custom Fields connection parameters to the Brevo connector. Enable Suppress Extended Requests to reduce the number of API calls and increase the speed of processing Contacts and ContactLists objects. Note that enabling this parameter will disable the incremental replication of the ContactLists object. Use Custom Fields parameter defines whether Skyvia will process the custom fields of the Contacts object. Enable it to Insert and Update custom field values. Brevo has precreated Email, Lastname, Firstname, and SMS custom fields that were previously displayed in Skyvia with the Attributes_ prefix. Now, they will be named according to the Contact Attribute Name. 
Xero We have supported [Xero Project API](https://developer.xero.com/documentation/api/projects/overview) in our connector. The Project, ProjectTask, ProjectTime , and ProjectUser objects are now available in Skyvia. More details are available in the Xero documentation. Podio We have optimized the Items object and supported filters by the AppItemId, Title, CreatedVia_AuthClientId , and ExternalId .\nFor user convenience, we added the separate read-only ItemFields object that stores the Fields field content in a tabular format. \nSkyvia automatically creates the separate <App Name>Items object, containing items specific to each application. More details are available in the Podio topic. QuickBooks Desktop We have added new objects to our QuickBooks Desktop connector: Charge, Check, CheckExpenseLine, CheckLineItem, CreditCardCharge, CreditCardChargeExpenseLine, CreditCardChargeLineItem, CreditCardCredit, CreditCardCreditExpenseLine, CreditCardCreditLineItem .\nMore details on the new object specifics are available in the QuickBooks Desktop topic. Monday.com Due to the Monday.com migration to the new 2023-10 API version, we have updated our connector. More details are available in the Monday.com topic. Square We have added a new GiftCards object and new LinkCustomerToGiftCard and UnlinkCustomerToGiftCard stored procedures to our connector. See more details in the Square connector topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/july-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 July 2024 Automation: Webhook Trigger We are excited to announce the addition of Webhook Triggers to Skyvia! Now you can send requests to Skyvia to trigger an automation execution. Once Webhook Trigger is set up, Skyvia generates a webhook URL that can be called from any external app or service, and starts listening for the incoming requests. Once Skyvia receives a request, it executes the automation immediately. 
To learn more, visit the Triggers page. SQL Server Replication Now you can replicate your data from SQL Server to another database or data warehouse. We added three Ingestion modes enabling you to choose how Skyvia will track changes in records during the incremental replication. Learn more about database replication and Ingestion modes here . New Connectors Gmail \u2014 a free cloud mailing service provided by Google. Google Analytics 4 \u2014 a free web analytics service by Google that tracks and reports website traffic. Connector Updates Azure DevOps The Profile and Boardrows objects are now available in our Azure DevOps connector. Both objects are read-only. \nSee more details on these and other objects in the Azure DevOps topic. ClickUp We supported the native ClickUp filters in the TeamTasks and TimeEntries objects to save API calls and increase query performance. More details are available in ClickUp topic. DigitalOcean Now you can connect to DigitalOcean using the API token. The OAuth authentication for this connector is not available anymore. \nExisting DigitalOcean connections with OAuth authentication become invalid. You have to reconnect to DigitalOcean using an API token. Details on how to obtain an API token are available in DigitalOcean topic. HubSpot The Invoices, InvoiceLineItems, LineItemInvoices, and Users objects are now available in our HubSpot connector. These objects are read-only. See more information about the connector in HubSpot topic. Jotform We made the form submission answers structure more convenient for users. Skyvia creates a separate object for each form with a form name prefix in its name <FormName>_FormSubmissions . Each submission is a separate record in such an object. Form boxes are the object fields. Form answers are the field values. Specifics are described in Jotform topic. 
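The Webhook Trigger from the July 2024 notes above exposes a generated URL that any HTTP client can call. A minimal sketch with the standard library; the URL is a placeholder and the JSON payload shape is an assumption for illustration (Skyvia generates the real URL, and its expected request format may differ):

```python
import json
import urllib.request

# Placeholder only -- Skyvia generates the actual webhook URL when you
# configure a Webhook Trigger for an automation.
WEBHOOK_URL = "https://example.invalid/skyvia/webhook/placeholder"

def build_trigger_request(url: str, payload: dict) -> urllib.request.Request:
    """Build a POST request that would trigger the automation."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a live webhook URL:
# with urllib.request.urlopen(build_trigger_request(WEBHOOK_URL, {"event": "row.created"})) as resp:
#     print(resp.status)
```

Because the trigger is just an HTTP endpoint, the same call can come from a cron job, a CI pipeline, or another SaaS tool's outbound webhook.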
QuickBooks Desktop Our QuickBooks Desktop connector now supports custom fields for the following objects: Customer, Employee, Vendor, Item .\nSee more in QuickBooks Desktop topic. SendGrid Now you can send emails to the specific recipients with our new stored procedure. More information is available in SendGrid topic. Thinkific The Orders object now supports the Incremental Replication. See the full connector description in Thinkific topic. Zoho Inventory We supported the native Zoho Inventory filters in our connector. Use the >= and > operators when filtering by the UpdatedDate field in the Items, SalesOrders, Contacts, Invoices, PurchaseOrders , and Bills objects to save API calls and increase query performance. See more information in Zoho Inventory topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/june-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 June 2024 New Connectors Follow Up Boss \u2014 a CRM platform designed for real estate business management. Connector Updates Asana We supported associating new or existing tasks to a specific project section. See how to do it in the Asana connector topic. Azure DevOps We added a new WorkItemRevisions object to our Azure DevOps connector. This object is read-only. More information about this and other objects is available in Azure DevOps topic. Freshservice We supported the multiple workspace accounts. Now you can get records from the Tickets, Assets, AgentGroups, Changes, Releases, Problems objects belonging to different workspaces. Global custom fields and custom fields belonging to specific workspaces are now available in our Freshservice connector. See more information about the update and other connector specifics in Freshservice topic. Google Ads We updated our Google Ads connector to v16.1 API version. We added new fields to the *Report objects. More information about the connector is available in Google Ads topic. 
Iterable From now on Skyvia supports EU endpoints of the Iterable API. To switch to the EU data center, set the corresponding parameter in the Iterable connection. More details are available in the Iterable topic. Insightly CRM Now, you can conveniently map the nested objects of the custom fields in the Opportunities, Contacts, Leads, OpportunityLineItems, Products, Tasks, and Projects objects. More information is available in Insightly CRM topic. Jira Service Management We added more stored procedures to our Jira Service Management connector. Now, you can add customers and organizations to the service desk and remove organizations from it. Information about these and other stored procedures is available in the Jira Service Management topic. Klaviyo We updated our Klaviyo connector to the latest API version, v2024-05-15. Due to the deprecation of the v2023-01-24 version, we removed the CreateProfileAndEvent stored procedure. The detailed connector description is available in Klaviyo topic. Magento You can now map the nested objects of the complex structured fields in the Customers, SalesOrders , and Products objects using the Nested Objects option in Import integrations. You can also replicate them into separate tables with our new replication runtime. Use the Unwind component to map them in data flow integrations. See more details in the Magento topic. Onfleet You can now work with worker analytics in Skyvia using the new WorkerAnalytics object. We added the From and To fields to the TeamTasks, WorkerAnalytics and WorkerTasks objects. Use these fields to filter when querying data from these objects. See more information in the Onfleet topic. Outreach We have upgraded the native filters for our Outreach connector. The CreatedDate and UpdatedDate fields in specific objects support the < , <= , > , >= filter operators. Earlier they supported only the = operator. The details on filtering specifics are available in Outreach topic. 
Pipedrive We updated our Pipedrive connector metadata according to the v1 version of Pipedrive API: We added a new CallLogs object. It supports the INSERT and DELETE operations. We also added a new NoteComments object. It supports the INSERT, UPDATE, and DELETE operations. When updating this object, map the NoteId and the Id fields for better performance. We supported the INSERT, UPDATE, and DELETE operations for the DealFields, OrganizationFields, PersonFields, and ProductFields objects, which were earlier read-only. We changed the VisibleTo field data type to numeric in all objects where it is available. We added new fields to many objects according to the API. We did not remove the obsolete fields for compatibility. We supported the stored procedures in our Pipedrive connector. We also supported the v2 API version for the following Pipedrive objects: DealProducts, Products, ProductVariations, Stages, Pipelines . You can switch between the API versions in the Connection Editor. Detailed information about the connector and its objects is available in the Pipedrive topic. Podio Now you can map the Groupings_Groups field in the Views object in convenient format using the Nested Objects Feature.\nSee more details in Podio topic. ShipStation We supported more operators for filters in our ShipStation connector. From now on, you can use the > , >= , < , and <= operators in filters by date fields in the Orders, OrderItems, Shipments and ShipmentItems objects. See more details in the ShipStation topic. Shopify The GiftCards object is now available in our Shopify connector. To apply the update, log in to Shopify again in the Connection Editor. More information about the connector is available in the Shopify topic. Smartsheet Skyvia now supports EU endpoints of the Smartsheet API. To switch to the EU data center, set the corresponding parameter in the Smartsheet connection. More details are available in the Smartsheet topic. 
Zammad You can now connect to Zammad using self-hosted URLs and Zammad URLs. We changed the connection parameter in Zammad connection editor from Subdomain to Domain. To connect to Zammad, specify the full domain, for example mysite.mydomain.com or mysite.zammad.com . The existing connections with the subdomain parameter specified become invalid. Update your existing connection by specifying the full domain. You can find more details about Zammad connector in Zammad topic." }, { "url": "https://docs.skyvia.com/recent-releases/2024/march-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 March 2024 New CSV Source file modes in Data Flow We added alternative file modes to the Data Flow CSV Source component. Now you can use masks to work with dates in file names and upload the newest files automatically. Also, you can use expressions for a more flexible file name configuration. It allows you to build custom file name templates using available functions, variables and parameters. \nSee the CSV Source topic for more details. New Connectors Zoho Sprints Zoho Sprints is a cloud-based solution for agile teams designed for collaborative project planning. Connector Updates Constant Contact You can now replicate the following Constant Contact objects with the Incremental Updates option enabled: EmailBouncesReport, EmailDidNotOpensReport, EmailForwardsReport, EmailOpensReport, EmailOptoutsReport, EmailSendsReport, EmailUniqueOpensReport . \nSee more details in the Constant Contact topic. Freshservice Skyvia now supports custom objects in all existing workspaces, which belong to an account. GitHub We added new objects to our GitHub connector: Branches, Gists, OrganizationMembers, PublicGists, UserMemberships, RepositoryPullRequestReviews . We changed the primary key in the RepositoryPullRequests object to the composite key RepositoryName + Number . More information is available in the GitHub topic. 
ShipStation We added new fields for filtering in the Orders object. Use them to improve the performance of your queries and integrations. \nMore details are available in the ShipStation topic. Smartsheet Now you can use your sheets in your integrations. Skyvia represents Smartsheet sheets as separate objects with the *_Rows suffix in their names. Use these objects in your integrations, conveniently map their columns, replicate and synchronize them. We also supported new DML operations for existing objects, added new AutomationRules, Dashboards and Webhooks objects and new procedures to the connector. \nSee more details in Smartsheet topic. Zammad Skyvia now supports Zammad custom fields. Custom fields represent custom attributes for the Tickets, Users, Organizations , and Groups objects. Custom fields support the INSERT and UPDATE operations. See the Zammad topic for more details." }, { "url": "https://docs.skyvia.com/recent-releases/2024/may-2024.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Recent Releases 2024 May 2024 New Connectors Dynamics 365 Business Central \u2014 an ERP solution that provides extensive financial management tools for accurate expense and inventory tracking and budget management. My Hours \u2014 a project time-tracking solution designed to manage performance and efficiency. Connector Updates LinkedIn Ads We supported the latest LinkedIn Ads API [Marketing April 2024](https://learn.microsoft.com/en-us/linkedin/marketing/integrations/recent-changes?view=li-lms-2024-04#april-2024) version. The AdForms, CampaignRecommendations and CampaignInsights objects are deprecated. More details about the updates are available in LinkedIn Ads topic. SendPulse SendPulse objects now support Incremental Replication and Synchronization. You can now work with ListEmails custom fields, using the separate dynamic <MailingListName>_ListEmails objects. More details are available in SendPulse topic. 
Shopify
We updated our Shopify connector to API version [2024-04](https://shopify.dev/docs/api/release-notes/2024-04).

Wrike
You can now work with custom fields for the Tasks, Projects, and FoldersAndSpaces objects. The FoldersAndSpaces and Projects objects now support the INSERT, UPDATE, and DELETE operations. We added the Timelogs object to our connector. The Dates_Start and Dates_Due fields in the Tasks object are now available for import. Skyvia no longer performs additional requests to the Folders object when querying. However, it now does for some fields in the Projects and Timelogs objects. You can find more details about these and other Wrike connector specifics in the Wrike topic.

November 2024

New Connectors
BambooHR — an all-in-one HR platform that provides an employee database service and reporting, payroll, time, and benefits, hiring and onboarding solutions.
Heymarket — a messaging app providing services for team communication and collaboration.
PagerDuty — a SaaS incident management platform designed to help businesses handle alerts, automate workflows, and notify teams about issues.

Connector Updates
Clockify
Custom fields are now available in our Clockify connector. See more information in the Clockify topic.

Freshsales Suite
We extended the Contacts object with the following fields: FirstCampaign, FirstMedium, FirstSeenChat, FirstSource, LastCampaign, LastMedium, LastSeenChat, LastSource, LatestCampaign, LatestMedium, LatestSource, Locale, McrId, OtherUnsubscriptionReason, RecordTypeId, SmsSubscriptionStatus, SystemTags, TotalSessions, UnsubscriptionReason, WhatsappSubscriptionStatus.

Freshservice
Our Freshservice connector now supports binary attachments.
Binary attachments are located in the Content field of the SolutionArticleAttachments and SolutionArticleImages objects. More information about the Freshservice connector is available in the Freshservice topic.

Google BigQuery
Our Google BigQuery connector now supports flexible column names. Enable the Flexible Column Names connection parameter to accept object names with any character when replicating to Google BigQuery. More details are available in the Google BigQuery topic.

Kit (formerly ConvertKit)
Now you can work with custom fields in Kit. More information is available in the Kit topic.

Microsoft Ads
We added the following objects to our connector: AdDynamicTextPerformanceReport, AdDynamicTextPerformanceDailyReport, AdDynamicTextPerformanceMonthlyReport, ConversionPerformanceReport, ConversionPerformanceDailyReport, ConversionPerformanceMonthlyReport, DestinationUrlPerformanceReport, DestinationUrlPerformanceDailyReport, DestinationUrlPerformanceMonthlyReport, DSAAutoTargetPerformanceReport, DSAAutoTargetPerformanceDailyReport, DSAAutoTargetPerformanceMonthlyReport, DSASearchQueryPerformanceReport, DSASearchQueryPerformanceDailyReport, DSASearchQueryPerformanceMonthlyReport, DSACategoryPerformanceReport, DSACategoryPerformanceDailyReport, DSACategoryPerformanceMonthlyReport. See more information about the connector in the Microsoft Ads topic.

Monday.com
We've added a new parameter, Board Item Columns Naming Rule, and renamed the previous parameter to Board Item Tables Naming Rule for clearer customization of table and column names. For more details, see the Monday.com topic.

Paymo
We supported native filters in our Paymo connector. You can now optimize your queries without exceeding API limits. See more information in the Paymo topic.

Podio
The Calculated type of Podio custom fields now inherits the type of the variable selected for this field. More information is available in the Podio topic.
Stripe
We added two new objects, AllSubscriptions and AllSubscriptionItems, which contain all subscriptions, including inactive ones. See more information in the Stripe topic.

Trello
The Boards object custom fields are now available in Skyvia. More details are available in the Trello topic.

Zoho Books
We added the ItemBatches object to the Zoho Books connector. More information is available in the Zoho Books topic.

Zoho Inventory
We added the ItemBatches object to the Zoho Inventory connector, and the Batches field is now available in both the Items and ItemAdjustmentLineItems objects. See more information in the Zoho Inventory topic.

October 2024

Skyvia API Beta
We are excited to introduce the Skyvia API Beta. It provides programmatic access to a variety of Skyvia platform features. The beta version enables User, Workspace, Integration, Automation, Backup, and endpoints management. The API supports a set of features across Skyvia products and will be extended in future releases. For more details, check the API Reference.

Automation Test Mode
We implemented a Test mode in Automation. Now you can test and debug your automation while building or editing it, with no billed tasks involved. Learn more about the Test mode here.

Backup UI Updates
The Backup UI has been updated:
- All backup details are now available in one place.
- The search through the snapshots is more convenient.
- You can add a description to your backups and check the history of changes.
- New filter behavior in backup objects.
- Changes to the Schedule and Last Snapshot functionalities.

New Connectors
Facebook Pages — a Facebook account for a business, organization, or institution that allows users to advertise and track performance.

Connector Updates
Harvest
The Harvest connector now supports native filtering for some of the objects.
For more details, see the Harvest connector page.

Podio
We enabled metadata caching for Podio. It helps to avoid rate limit errors in your integrations. Full information about this connector is available in the Podio topic.

Productive.io
We have added two new objects to the Productive.io connector: Reports_FinancialItem and Reports_Time. For more details, see the Productive.io connector page.

September 2024

Target File Name
We reworked and slightly extended the Target File Name functionality in Export. We added a separate Target Definition step, where you can use pre-built file name templates and file name masks to quickly generate custom names for the files you export.

New Connectors
Booqable — an all-in-one rental software for order management, online booking, and inventory.

Connector Updates
ActiveCampaign
Now you can conveniently map the nested objects of the complex structured OrderProducts field in the ECommerceOrders object. See more details in the ActiveCampaign topic.

AfterShip
We updated our AfterShip connector to the [07-2024](https://www.aftership.com/docs/tracking/changelog#2024-07-2024-07-11) API version. The length of the tracking ID has changed from 24 characters to 32 characters. The Id field values in the Trackings object now differ from the values in previous API versions. We added new Stores, Orders, Product, and Fulfillments objects. The Orders, Product, and Fulfillments objects have a complex structure and now support the Nested Objects feature in Import and the Unwind Nested Objects feature in Replication. The Notifications object is deprecated. The Emails and SMSes fields are now available in the Trackings object. The LastCheckPoint object is deprecated. All checkpoints are now stored in the CheckPoints object. More information is available in the AfterShip topic.
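Metadata caching, as in the Podio note above, boils down to memoizing a costly API call for a limited time so repeated integrations don't burn rate-limited requests. A minimal sketch, assuming a stand-in `fetch` callable rather than any real Skyvia or Podio API:

```python
import time

class MetadataCache:
    """Cache fetched metadata for ttl_seconds to avoid rate limits."""

    def __init__(self, fetch, ttl_seconds=3600):
        self._fetch = fetch          # stand-in for the real API call
        self._ttl = ttl_seconds
        self._store = {}             # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # served from cache, no API call
        value = self._fetch(key)
        self._store[key] = (time.monotonic() + self._ttl, value)
        return value

calls = []
cache = MetadataCache(lambda key: calls.append(key) or {"object": key})
cache.get("Tasks")
cache.get("Tasks")
print(len(calls))  # 1
```

The second lookup is served from the cache, so only one underlying request is made within the TTL window.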
ChargeOver
We optimized the custom fields setup in our ChargeOver connector. The custom fields are now available under their UI names instead of the inconvenient Custom1, Custom2, Custom3, …, Custom20 names. More information about custom fields is available in the ChargeOver topic.

Freshdesk
We supported custom Freshdesk objects in Skyvia. More information about the support and the connector is available in the Freshdesk topic.

HubSpot
From now on, you can get all the needed HubSpot object associations using the Customize Associations parameter in the Connection Editor. See more details in the HubSpot topic.

Jira Service Management
Now you can remove users from an organization via API using our new stored procedure. More details are available in the Jira Service Management topic.

Shopify
We supported Admin API Access Token authentication for Shopify. To use it, create and install a custom app in your Shopify account and obtain the corresponding access token.

April 2023

Automation Early Beta
The Automation early beta is live. From now on, you can start automating your tasks with Skyvia. In the current state, the functionality is limited, and some features, such as Connection and Webhook triggers, are unavailable. Automation is a work-in-progress project, and we'll be introducing more and more features with each upcoming release. The user interface is also subject to change as we gather feedback and make improvements. To learn more about automation possibilities in Skyvia, visit the Automation section. And now, don't hesitate: go automate something.

New Connectors
ChartMogul — a real-time analytics and reporting tool for businesses with subscription billing.
Airtable — a powerful cloud database-spreadsheet hybrid platform for collaboration and data management.
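The Admin API Access Token authentication in the Shopify note above amounts to sending the custom app's token in the `X-Shopify-Access-Token` header of each versioned Admin API request. A sketch of the request shape, where the shop name, token, and pinned version are placeholders:

```python
def shopify_request_parts(shop: str, token: str, resource: str,
                          version: str = "2024-01") -> tuple:
    """Build the URL and headers for a Shopify Admin REST API call."""
    url = f"https://{shop}.myshopify.com/admin/api/{version}/{resource}.json"
    headers = {
        "X-Shopify-Access-Token": token,  # the custom app's access token
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = shopify_request_parts("example-store", "shpat_example", "orders")
print(url)  # https://example-store.myshopify.com/admin/api/2024-01/orders.json
```

Pinning the dated version in the path is how Shopify keeps Admin API behavior stable between its quarterly releases.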
Connector Updates
Google Ads API v13 is supported in the Google Ads connector.
Zoho CRM API v4 is supported in the Zoho CRM connector. Note that there are certain performance and API call consumption considerations regarding API v4: an API call is performed for each record when querying values from the Photo field and when querying attachment content. See Performance Optimization for more details.
Custom fields are supported for Pipeline CRM.

April 2024

Trigger Conditions in Automation
We added a Trigger condition setting to connection triggers in Automation. Now you can create an expression to filter and validate your events before the automation execution. Check the Triggers documentation for more details.

Retrospective Updates for Incremental Replication
We added an attribution window parameter to the Facebook Ads, Google Ads, LinkedIn Ads, X Ads, TikTok Ads, and Microsoft Ads connectors. Now, when you run an incremental replication, it takes retrospectively updated reporting data into account.

New Connectors
Excel Online — a cloud version of a robust spreadsheet editor for various tasks, from simple data entry and basic calculations to complex financial modeling and statistical analysis.
Motion — a cloud productivity platform using AI for planning and time management.

Connector Updates
GetResponse
The Contacts object now supports the INSERT operation in integrations. Now you can insert records using procedures and integrations. We also supported the Contacts custom fields. More details are available in the GetResponse topic.

ShipStation
You can now map the Items field in the Orders object using the Nested Objects option in import. You can also replicate them into separate tables with our new replication runtime. We simplified the Items field data structure for more convenient mapping.
This update impacts existing integrations that use the Items field. We supported native ShipStation filters in our connector. Use them to improve performance and save API calls. More details are available in the ShipStation topic.

Shopify
We updated our Shopify connector to API version [2024-01](https://shopify.dev/docs/api/release-notes/2024-01).

Smartsheet
You can now select data from the *_Rows object fields containing data of different types without errors. The Typed Sheet Cells connection parameter allows you to set Skyvia behavior when querying such fields. More information is available in the Smartsheet topic.

Square
The GiftCardActivities object is now available for use in integrations and automations. This object supports the INSERT operation. Find more details in the Square topic.

Trello
The IdBoard field in the CardMembers object was deprecated. We also updated the BoardMembers object. Skyvia performs additional extended requests for part of its fields. Find more details in the Trello topic.

Zoho Inventory
We added new PurchaseReceives, PurchaseReceivesLineItems, ShipmentOrders, and ShipmentOrdersLineItems objects to our Zoho Inventory connector. You can add, modify, and delete records in the PurchaseReceives and ShipmentOrders objects. The PurchaseReceivesLineItems and ShipmentOrdersLineItems objects are read-only. More details are available in the Zoho Inventory topic.

April 2025

Trial and Subscription Changes
After 2025/04/16, new accounts on Skyvia get a 14-day trial period immediately after registration, instead of the Free plan. If necessary, you can request a second trial period later. Your subscriptions are also no longer automatically switched to the Free pricing plan after your trial expires or if you don't pay for the paid plan subscription in time.
Instead, they switch to the expired state. In this state, you can create new integrations, backups, automations, etc., edit them, or delete them, but they cannot be executed, and all schedules are disabled. If you want to switch to the Free plan, you can request it manually in your subscription settings. You can request the Free plan only after your initial trial period has expired. For more information, refer to the Subscriptions, Payments and Trials topic.

New Connectors
Azure Blob Storage — a secure cloud object storage designed for cloud-native workloads, archives, data lakes, HPC, and machine learning.
Azure Data Lake Storage — a massively scalable and secure data lake for your high-performance analytics workloads.

Connector Updates
Asana
A new TimeTrackingEntries object is now available in our connector. See more details in the Asana topic.

Azure DevOps
We added a new Relations field to the WorkItems, <ProjectName>_WorkItems, WorkItemRevisions, and <ProjectName>_WorkItemRevisions objects. Learn more about this connector in the Azure DevOps topic.

Cin7 Core Inventory
Nested Objects support was enabled for the Purchase object, whose Order_Lines, Order_AdditionalCharges, and Order_Prepayments fields contain arrays of nested objects. We also added Nested Objects support for the SaleOrders object, in the Lines and AdditionalCharges fields. For more information, see the Cin7 Core Inventory topic.

ClickUp
We fixed the TeamTasks object replication issues caused by pagination specifics in the ClickUp API. We also supported filtering by the Status_Status, Project_Id, SpaceId, and ListId fields using the IN operator for this object. More information about the connector is available in the ClickUp topic.

Exact Online
We added three new options to the Region setting: The Netherlands, Belgium (Flemish/French), and United Kingdom. See more about this connector in the Exact Online topic.
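An IN filter, such as the new ClickUp TeamTasks support above, collapses many equality checks into one condition the source API can evaluate natively. A sketch of composing such a filter expression; the SQL-like syntax is illustrative, not Skyvia's query language:

```python
def in_filter(field: str, values) -> str:
    """Compose a field IN (...) condition from a list of values."""
    quoted = ", ".join(f"'{v}'" for v in values)
    return f"{field} IN ({quoted})"

print(in_filter("Status_Status", ["open", "in progress"]))
# Status_Status IN ('open', 'in progress')
```

Sending one IN condition instead of issuing a separate request per value is what keeps such filters within API rate limits.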
HubSpot
Skyvia now supports the [File Manager API](https://developers.hubspot.com/docs/guides/api/library/files). More details are available in the HubSpot topic.

Jira
Nested Objects support was enabled for the Issues and IssueChangeLogs objects, whose IssueLinks and Items fields contain arrays of nested objects. For more information, see the Jira topic.

Podio
We added a new Extended Dynamic Table Names connection parameter for our Podio connector. When enabled, it changes the naming format for dynamic Podio objects. See more information in the Podio topic.

ShipStation
The Shipments object now supports filtering by the OrderId, CarrierCode, and TrackingNumber fields. Learn more in the ShipStation topic.

Survicate
Skyvia now supports Survicate API v2:
- The Responses and ResponseAnswers objects have a different structure in API v2. We have added the ResponsesV1 and ResponseAnswersV1 objects, accessed via API v1, for compatibility purposes. You may switch your existing Survicate integrations to these objects, but note that Survicate API v1 is considered deprecated.
- The Surveys object structure was changed with the new API. The Points field is no longer available. Instead, there are the following new fields in this object: AuthorName, AuthorEmail, Folder, FirstResponseAt, and LastResponseAt. Skyvia performs additional extended requests to query values of these fields. To reduce the number of API calls used, you can select the Suppress Extended Requests checkbox, and Skyvia won't retrieve their values.
- The SurveyPoints object is available only via API v1.
Skyvia also supports the new SurveyQuestions and RespondentAttributes objects for Survicate. For more information, see the Survicate topic.

Zoho Billing
We supported the INSERT, UPDATE, and DELETE DML operations for the Invoices object. This object now also supports Synchronization. Learn more in the Zoho Billing topic.
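Nested Objects support, as in the Jira note above, deals with fields that hold arrays of sub-records (for example, IssueLinks inside Issues). Unwinding replicates such arrays as child rows keyed by the parent id. A minimal sketch with illustrative field names, not the connector's actual mapping:

```python
def unwind(record: dict, nested_field: str, key: str = "Id"):
    """Split a record with an embedded array into parent and child rows."""
    parent = {k: v for k, v in record.items() if k != nested_field}
    children = [{"ParentId": record[key], **sub}
                for sub in record.get(nested_field, [])]
    return parent, children

issue = {"Id": "PROJ-1", "Summary": "Crash on start",
         "IssueLinks": [{"Type": "blocks", "Target": "PROJ-9"}]}
parent, links = unwind(issue, "IssueLinks")
print(parent)  # {'Id': 'PROJ-1', 'Summary': 'Crash on start'}
print(links)   # [{'ParentId': 'PROJ-1', 'Type': 'blocks', 'Target': 'PROJ-9'}]
```

The child rows land in a separate table, with ParentId giving the join key back to the parent object.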
Zoho Books
The Invoices object now supports native filtering by the following fields: InvoiceNumber (=), CustomerName (=), ReferenceNumber (=), Date (>, >=, <, <=), DueDate (>, >=, <, <=), UpdatedDate (>, >=, <, <=). Inactive accounts became available in the ChartOfAccounts object. You can now query active and inactive accounts from this object. More details about the connector are available in the Zoho Books topic.

Zoho Desk
We significantly optimized the processing time for the Tickets object. The AccountId, Classification, IsEscalated, CreatedBy, ModifiedBy, ModifiedTime, and SecondaryContacts fields are now available in the Tickets object and don't require additional API requests when querying. More details are available in the Zoho Desk topic.

August 2023

New Connectors
FormCrafts — an online tool for building forms.
Zammad — a web-based open source helpdesk/customer support system.
AlloyDB — a PostgreSQL-compatible high-performance database service from Google Cloud.
Segment — a customer data platform (CDP).
Productive.io — a comprehensive agency management platform.

Connector Updates
Zoho CRM
Skyvia now offers full support for Zoho CRM attachments and photos. We have supported [Importing Binary Data](https://docs.skyvia.com/data-integration/import/how-to-guides/importing-binary-data.html) for the Photo field of the Contacts and Leads objects and the Content field of the attachment objects: LeadAttachments, AccountAttachments, CampaignAttachments, CaseAttachments, ContactAttachments, DealAttachments, MeetingAttachments, InvoiceAttachments, PriceBookAttachments, ProductAttachments, PurchaseOrderAttachments, QuoteAttachments, SalesOrderAttachments, SolutionAttachments, TaskAttachments, VendorAttachments.
Note that loading data to these fields is supported only for Zoho CRM API v4. To access the fields, make sure your Zoho CRM connection uses API v4 and then clear its metadata cache.

HelpScout
You can now work with the Customers object custom fields (properties). You can insert data into these fields but not update them. The UPDATE operation support is in our roadmap. You can find more details in the Help Scout documentation.

Sendinblue
We have significantly optimized work with the Contacts table. Now Skyvia loads data from and to this table much faster.

GitHub
We have added the RepositoryContents object to our GitHub connector. You can now insert, update, and delete files using stored procedures. More details are available in the GitHub connector documentation.

Wrike
New Objects. We have added the FoldersAndSpaces and Projects objects to our Wrike connector. Skyvia supports Incremental Replication for them.
Comments. We added a new Type field to the Comments object. We removed the Start and End fields earlier used for filtering. From now on, you get comments for the last seven days by default when querying. To get comments for other periods, you can set a filter by the PeriodDate field in the following format: {"start":"2023-06-28T14:50:31Z","end":"2023-06-30T14:50:31Z"}.

Smartsheet
We have added a new Rows object to our Smartsheet connector. You can select data from this object and perform the INSERT and DELETE operations. For more details, please refer to the Smartsheet documentation.

Klaviyo
We have added the SubscriptionsEmail, SubscriptionsEmail_Consent, SubscriptionsEmail_Timestamp, SubscriptionsEmail_Method, SubscriptionsEmail_MethodDetail, SubscriptionsEmail_DoubleOptin, SubscriptionsEmail_Suppressions, SubscriptionsEmail_ListSuppressions, SubscriptionsSMS, SubscriptionsSMS_Consent, SubscriptionsSMS_Timestamp, SubscriptionsSMS_Method, and SubscriptionsSMS_MethodDetail fields to the Profiles object.
New procedures are now available to suppress and unsuppress profiles. For more details, go to the Klaviyo documentation.

August 2024

Export Improvements
We have added the Create Empty Files check box to the Export integration. It determines whether to create result files when there are no records to export (the exported object has no records, no records match filter conditions, etc.).

Replication from MySQL
Skyvia extends the list of databases supported in replication as a source with MySQL. It supports incremental replication, when you can replicate recent changes from MySQL tables if the tables have suitable timestamp or autoincrement columns. Alternatively, you can opt for a full resync. You can learn more about replication from databases here.

New Connectors
Clockify — a time tracking service that lets you track team activity and work hours across projects.
SmartSuite — a collaborative work management platform designed to help teams plan, track, and manage workflows, including projects, ongoing processes, and daily tasks at all organizational levels.
Toggl Track — a time-tracking and reporting solution that helps individuals and teams manage projects.

Connector Updates
Amazon S3
We have added the Working Directory parameter to the Advanced Settings in the Amazon S3 connection editor. It allows you to specify the folder in the Amazon S3 bucket to use as a root folder in Skyvia. You may use this parameter, for example, if you only have access to a specific folder in the Amazon S3 bucket and cannot access its root folder.

Azure DevOps
We supported custom fields in our Azure DevOps connector. Also, we extended the WorkItemRevisions object with the Fields_System.AssignedTo_* objects. More information is available in the Azure DevOps topic.

Cin7 Core Inventory
We updated the Sale object.
It has a complex structure and now supports the Nested Objects feature in Import and the Unwind Nested Objects feature in Replication. More information about this and other objects is available in the Cin7 Core Inventory topic.

Close
Our Close connector now supports OAuth 2.0 authentication. You can choose the authentication type in the Connection Editor. More information is available in the Close topic.

Constant Contact
We improved custom fields support for our Constant Contact connector and added new stored procedures. See more information in the Constant Contact topic.

Facebook Ads
From now on, you can integrate lead generation forms with other platforms and apps using Skyvia. We supported the Pages, LeadgenForms, and FormLeads objects in our Facebook Ads connector. More information is available in the Facebook Ads topic.

Float
The Status field is now available in the TimeOffs object in our Float connector. This field supports filtering using the = operator. See more details in the Float topic.

FreshBooks
We added new objects to our FreshBooks connector: BillVendor, Bill, BillLine, BillPayment. More information about the objects and the connector is available in the FreshBooks topic.

Freshsales Suite
We supported custom fields in our Freshsales Suite connector. See more details in the Freshsales Suite topic.

Front
We have supported native sorting and filtering (via the Front API) for more Front objects and fields. This makes the corresponding sorting and filtering operations faster, and they require fewer Front API calls.

HubSpot
We updated our connector due to the Owners API sunset in August 2024. Due to this update, the Owners object structure was changed. If you use the Owners object in your integrations, refresh the fields mapping to avoid errors. The Teams object is now available in our HubSpot connector. This object is read-only. Relogin to HubSpot to use this object. More information is available in the HubSpot topic.
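Native (server-side) filtering, as in the Front update above, sends the predicate with the request instead of downloading every record and filtering locally. A sketch contrasting the two approaches, where `fake_api` stands in for a paginated API endpoint:

```python
def client_side(records, predicate):
    # Every record crosses the wire first, then is filtered locally.
    return [r for r in records if predicate(r)]

def server_side(query_api, field, value):
    # The API evaluates the filter and returns only matching records,
    # which means fewer pages fetched and fewer API calls consumed.
    return query_api({field: value})

data = [{"id": 1, "tag": "vip"}, {"id": 2, "tag": "spam"}]
fake_api = lambda params: [r for r in data if r["tag"] == params["tag"]]

print(client_side(data, lambda r: r["tag"] == "vip"))
print(server_side(fake_api, "tag", "vip"))
# both print [{'id': 1, 'tag': 'vip'}]
```

The results match, but the server-side variant never transfers the non-matching rows, which is where the API call savings come from.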
Insightly CRM
We have improved support for custom fields and custom objects for Insightly CRM. Now custom fields can be accessed as usual object fields, not as a single field storing an array of JSON objects. Custom objects are now also available as separate objects. If you prefer the old behavior, you can clear the Use Custom Fields check box.

Outreach
The Audits object will be deprecated in the Outreach API from September 2024. In line with this update, we removed the Audits object and added the AuditLogs object instead. More information is available in the Outreach topic.

Pipedrive
The Deals object is now available in the API V2 version. We also supported native sorting for Pipedrive objects. See more information in the Pipedrive topic.

Productive.io
We updated our Productive.io connector according to the latest changes in the Productive.io API. These changes are internal and won't impact your Productive.io connections.

QuickBooks Desktop
The following objects are now available in our QuickBooks Desktop connector: AccountTaxLineInfo, ARRefundCreditCard, BillingRate, BillingRateItemLine, BuildAssembly, BuildAssemblyLineItem, Company, CompanyActivity, Currency, CustomerMsg, DateDrivenTerms, EmployeeNote, Form1099CategoryAccountMapping, Host, InventoryAdjustment, InventoryAdjustmentLine, RefundAppliedToTxnLine, ReceivePayment, ReceivePaymentLine, VendorBillToPay, VendorCreditToApply. More information about these and other objects is in the QuickBooks Desktop topic.

Shopify
We supported importing binary attachments in our Shopify connector. You can now import products with images. More information about our Shopify connector is available in the Shopify topic.

SurveyMonkey
The SurveyResponsesAnswers object now supports Incremental Replication. Skyvia can track new and updated records in this object. More information is available in the SurveyMonkey topic.
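The Insightly change above replaces "one field holding an array of JSON objects" with ordinary per-field columns. The reshaping can be sketched like this; the CUSTOMFIELDS/FIELD_NAME/FIELD_VALUE keys mirror the shape of Insightly's custom-field payloads, but treat the exact names as an assumption:

```python
def flatten_custom_fields(record: dict) -> dict:
    """Turn an embedded custom-field array into ordinary columns."""
    flat = {k: v for k, v in record.items() if k != "CUSTOMFIELDS"}
    for field in record.get("CUSTOMFIELDS", []):
        flat[field["FIELD_NAME"]] = field["FIELD_VALUE"]
    return flat

contact = {"CONTACT_ID": 7, "CUSTOMFIELDS": [
    {"FIELD_NAME": "Tier__c", "FIELD_VALUE": "Gold"},
]}
print(flatten_custom_fields(contact))
# {'CONTACT_ID': 7, 'Tier__c': 'Gold'}
```

After flattening, each custom field maps like any other column, which is what makes the new per-field access more convenient than parsing the JSON array.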
December 2023

New Connectors
Google Analytics 4 — a free web analytics service by Google that tracks and reports website traffic.

Connector Updates
Yotpo
Skyvia now supports the [Yotpo Loyalty API](https://loyaltyapi.yotpo.com/reference/reference-getting-started#welcome-to-yotpos-loyalty--referrals-api-reference). To be able to work with Loyalty objects, enter the Api key and Guid connection parameters in the Connection Editor. More details are available here.

December 2024

Replication from Oracle
Skyvia extends the list of databases supported in replication as a source with Oracle. It supports incremental replication, when you can replicate recent changes from Oracle tables if the tables have suitable timestamp or autoincrement columns. Alternatively, you can opt for a full resync. You can learn more about replication from databases here.

New Connectors
Jira Software Cloud — a project management tool to plan and track work across every team.
Sage Accounting — a cloud-based accounting solution for small and medium-sized businesses, with features such as invoicing, expense tracking, cash flow management, and financial reporting.
Zoho WorkDrive — a secure online file storage and collaboration platform for modern teams, small businesses, and large enterprises.

Connector Updates
Hive
We have updated the Hive connector to remove the limitation on the number of returned records from the Actions table. The ActionComments table now displays a maximum of 200 comments for each Action. Learn more about the Hive connector.
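Incremental replication from databases such as Oracle or MySQL rests on a watermark: remember the highest timestamp (or autoincrement value) seen so far, then fetch only rows beyond it on the next run. A sketch of the query-building step, with illustrative table and column names:

```python
from datetime import datetime

def incremental_query(table: str, ts_column: str, watermark: datetime) -> str:
    """Build a query selecting only rows changed after the watermark."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {ts_column} > TIMESTAMP '{watermark:%Y-%m-%d %H:%M:%S}' "
        f"ORDER BY {ts_column}"
    )

print(incremental_query("orders", "updated_at", datetime(2024, 12, 1)))
# SELECT * FROM orders WHERE updated_at > TIMESTAMP '2024-12-01 00:00:00' ORDER BY updated_at
```

After each run, the watermark advances to the largest `updated_at` fetched; without a suitable timestamp or autoincrement column, only a full resync can detect changes, which matches the note above.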
Marketo
From now on, the Use Bulk Extract parameter affects not only the Leads object. We supported Marketo Bulk Extract for the Activities object and the custom objects. See the details in the Marketo topic.

NetSuite (SOAP)
We have updated the WSDL version for the NetSuite SOAP connector to 2022.2. Learn more about the NetSuite SOAP connector.

Snowflake
Now you can connect to Snowflake using OAuth 2.0 authentication. More details are available in the Snowflake topic.

February 2023

New Connectors
Pipeliner CRM — an interactive CRM platform for sales and marketing management.
Elasticsearch — a distributed search and analytics engine used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases.

Connector Updates
Klaviyo Updates
Skyvia supported a new API version for the Klaviyo connector. This allows Skyvia to work with more Klaviyo data: tags, tag groups, catalog items, segments, etc. We also improved support for some other objects, for example, supported incremental replication and synchronization for lists. Besides, metrics objects changed with the switch to the new API. The MetricDataDaily, MetricsDataMonthly, MetricsDataWeekly, MetricValuesDataDaily, MetricValuesDataMonthly, and MetricValuesDataWeekly objects have been removed. Instead, the following objects have been added: DailyMetrics, MonthlyMetrics, WeeklyMetrics. These objects present the information in a more convenient form for data analysis.

February 2024

User Invitations
We reworked the user invitation process.
Now you can invite users to your account and provide access to specific workspaces at the same time. Also, we added separate tabs for the current and invited users overview, a Resend invitation button, and a Copy invitation link option for quick sharing.

Hashing Functions
Skyvia's expression syntax now includes the sha256_encrypt and sha512_encrypt functions. They hash binary or string values using the [SHA-256 and SHA-512 algorithms](https://en.wikipedia.org/wiki/SHA-2) respectively, adding a string or binary salt to the value as a suffix. This allows you to anonymize sensitive data fields when loading data to other systems. See the example here.

Usage Summary
Now, on the [Usage Summary](https://docs.skyvia.com/account-management/usage-summary.html) page, you can also track the billed traffic statistics for the Connect product.

New Connectors
LionOBytes — a cloud platform providing AI-based customer relationship management (CRM), field service management (FSM), and enterprise resource planning (ERP) solutions for small businesses.
DigitalOcean — an infrastructure as a service (IaaS) provider that simplifies cloud computing for developers.

Connector Updates
Acumatica
Skyvia now supports Agent connections for Acumatica. You can now connect to Acumatica servers located in local networks or on users' computers behind a firewall with the Skyvia Agent application. See how to create an agent connection in the Acumatica topic.

ChargeOver
Skyvia now supports Incremental Replication for the Transactions object. We supported native ChargeOver filters in our connector. You can now map nested fields of complex structured objects using the Nested Objects option in Import integrations. You can also replicate them into separate tables with our new replication runtime. See more details in the ChargeOver topic.

Cin7 Core Inventory
We added two new objects to our Cin7 Core Inventory connector: SaleOrderLines and PurchaseOrderLines. These objects are read-only.
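The salt-as-suffix behavior described for sha256_encrypt and sha512_encrypt above can be reproduced with the standard library. This is an outside illustration of the scheme (salt appended to the value, then hashed), not Skyvia's implementation:

```python
import hashlib

def sha256_with_salt_suffix(value: str, salt: str) -> str:
    """Hash value with the salt appended as a suffix, per the note above."""
    return hashlib.sha256((value + salt).encode("utf-8")).hexdigest()

a = sha256_with_salt_suffix("jane.doe@example.com", "s3cret")
b = sha256_with_salt_suffix("jane.doe@example.com", "s3cret")
print(a == b)  # True: deterministic, so hashed keys still join across systems
print(len(a))  # 64 hex characters for SHA-256
```

Because the same value and salt always produce the same digest, anonymized fields remain usable as join keys in the target system, while the original values are not recoverable.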
#### ClickUp
You can now filter by the Assignee field in the TimeEntries object. Find more details in the ClickUp topic.

#### Float
We added the PublicHolidays object to our Float connector. Find more details on this object's specifics in the Float connector topic.

#### Freshservice
You can now create child tickets for existing tickets using the new CreateChildTicket stored procedure. See more details in the Freshservice topic.

#### Google Ads
Skyvia now supports the latest Google Ads API version. We added the new read-only AssetGroupProductGroupView, CustomerLifecycleGoals, and CampaignLifecycleGoals objects to our Google Ads connector. Skyvia supports Incremental Replication for the AssetGroupProductGroupView object.

#### Harvest
Skyvia can now parse complex structured JSON fields. Using the Nested Objects feature, you can map them in your import integrations. We also added new stored procedures to our Harvest connector. Now you can change event types for invoice and estimate messages, as well as restart and stop time entries using Skyvia. See the Harvest connector topic for more details.

#### HubSpot
Skyvia now supports querying property history in HubSpot. History data is available in the DealsHistory, ContactsHistory, CompaniesHistory, TicketsHistory, and LineItemsHistory objects.

#### Intercom
We supported the latest Intercom API version. Skyvia now supports custom fields for Intercom. We added the new Tickets, TicketTypes, ActivityLogs, ContactSubscriptions, and HelpCenters objects, updated the Conversations object, and removed the Sections object. More Intercom objects now support DML operations. We also added stored procedures that help you work with messages, conversations, and ticket types. See more details in the Intercom topic.

#### Klaviyo
You can now connect to Klaviyo via OAuth 2.0 using your Klaviyo credentials. Just select the OAuth 2.0 authentication type in the Connection Editor. More details are available in the Klaviyo topic.
#### ShipStation
You can now perform Incremental Replication for the following ShipStation objects: Customers, Fulfillments, OrderItems, Orders, Products, Shipments, Stores, Warehouses. You can also synchronize the Orders and Warehouses objects. More information is available in the ShipStation topic.

#### X Ads
We supported the latest X Ads API version.

## [February 2025](https://docs.skyvia.com/recent-releases/february-2025.html)

### New Connectors
- Exact Online — a cloud-based platform that supports all major business processes such as production, logistics, finance & administration, HR, sales, and marketing.
- Fireflies.ai — an AI tool that helps to transcribe, summarize, search, and analyze team conversations.
- Keap — a CRM software that automates your entire business.
- Paddle — a payment processing cloud connector for working with Paddle, a payment infrastructure provider for SaaS and software businesses.
- SharePoint Lists — a connector for working with lists in SharePoint.
- Zoho Analytics — a self-service business intelligence platform and analytics software by Zoho.

### Connector Updates

#### Follow Up Boss
We fixed the pagination issue that caused the "Deep pagination disabled" error. See more about this connector in the Follow Up Boss topic.

#### Freshservice
We supported obtaining the CustomFields, ShortDescription, and Description fields of the ServiceItems object when querying a single record with DisplayID specified in the filter. Learn more in the Freshservice topic.

#### Jira Service Management
We added a new read-only AssetsWorkspaces object. Learn more in the Jira Service Management topic.

#### RepairShopr
Our RepairShopr connector can now handle cases when integrations return a "Too Many Requests" error due to API rate limits. See more in the RepairShopr topic.
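Handling a "Too Many Requests" error, as the RepairShopr update above describes, usually means retrying the call after a growing delay. This is a generic sketch of that pattern, not Skyvia's internal implementation:

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a callable that may signal HTTP 429, doubling the wait each attempt."""
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:                 # success, or an error that retrying won't fix
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff before retrying
    raise RuntimeError("rate limit not lifted after retries")

# Simulated API that rejects the first two calls with 429, then succeeds.
responses = iter([(429, None), (429, None), (200, "ok")])
status, body = call_with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 ok
```

Respecting a `Retry-After` header, when the API provides one, is a common refinement of this loop.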
#### Shopify
We updated our Shopify connector to API version [2025-01](https://shopify.dev/docs/api/release-notes/2025-01). We removed the deprecated objects: Countries, Provinces, OpenAbandonedCheckouts, ClosedAbandonedCheckouts, OpenAbandonedCheckoutLineItems, ClosedAbandonedCheckoutLineItems. We also added new fields to the FulfillmentOrders, Orders, and Transactions objects. More details are available in the Shopify topic.

#### Slack
We supported the SendMessage custom action for our Slack connector. More details are available in the SendMessage topic.

#### Stripe
Skyvia now supports new fields for the Invoices object: AmountShipping, AutomaticTax_DisabledReason, AutomaticTax_Enabled, AutomaticTax_Liability, AutomaticTax_Status, Billing, Closed, EffectiveAt, FinalizedAt, FromInvoice, LatestRevision, Quote, PaidOutOfBand, ShippingCost, ShippingDetails, SubtotalExcludingTax, TestClock, TotalPretaxCreditAmounts, TotalDiscountAmounts, TotalExcludingTax. See more in the Stripe topic.

#### Xero
We now support attachments for Xero. The following new objects are available: AccountAttachment, BankTransactionAttachment, BankTransferAttachment, ContactAttachment, CreditNoteAttachment, InvoiceAttachment, ManualJournalAttachment, PurchaseOrderAttachment, QuoteAttachment, RepeatingInvoiceAttachment. Learn more in the Xero topic.

#### Zoho Books
We added the CreatedDate and UpdatedDate fields to the Journals object. These fields require extended requests. For more details, see the Zoho Books article.

#### Zoho Desk
Now you can get the IDs of deleted objects and send email replies using new stored procedures. Moreover, we added a SendEmailReply action to the Zoho Desk connector. See more in the Zoho Desk topic.

## [January 2023](https://docs.skyvia.com/recent-releases/january-2023.html)

### New Features

#### Flexible Table Naming in Replication
Skyvia now allows setting up naming for target database tables in replication packages.
You can configure naming settings for all the tables in a package, specify custom names for separate tables, and specify the schema to load data into for SQL Server. This can be useful in a number of cases:

- Replicating multiple instances of the same cloud app to one database but to different tables (or SQL Server schemas).
- Replication to already existing tables with custom names.
- Replication when some other software expects specific table names that do not coincide with cloud object names.

### New Connectors
- QuickBooks Desktop — an accounting software for managing and controlling finance-related processes for inventory-based businesses that need to scale as they grow.
- Paymo — a comprehensive platform for project management, time tracking, and billing.

## [January 2024](https://docs.skyvia.com/recent-releases/january-2024.html)

### Replication
We're excited to introduce enhancements to Replication, featuring a new replication runtime and a range of new features to elevate your data replication experience.

#### New Replication Runtime
Enjoy a boost in performance and improved usability by selecting the new replication runtime checkbox in the replication settings. The new replication runtime comes with the object-specific LastSyncTime, improved error handling, and new replication modes.

#### Object-Specific LastSyncTime
Gain better control over your data replication with the object-specific LastSyncTime. Unlike the previous replication-wide setting, you can now set LastSyncTime for each object independently. This approach provides several benefits:

- Prevents the failure of the whole replication in case of errors with one or more objects.
- Allows adding new objects for replication and creating tables for them in Target without recreating tables for other objects marked for replication.
- Streamlines error handling by providing errors for each object that failed to replicate and separating them from the integration-level errors.

New replication modes, combined with the object-specific LastSyncTime setting, allow for better replication customization on the object level. For example, when you use incremental replication, you no longer need to create a separate replication for the objects that do not support incremental updates.

#### Replication Modes
Now you can use the replication modes to define the behaviour of the next replication for each object:

- Standard mode is assigned to each object automatically and applies the default replication behaviour.
- Resync mode forces the full replication of the object by dropping and creating a table in Target during each replication run.
- Resync on demand mode is created for objects that are part of your replication but are not required for each replication run. In this manual mode, you can include the object in the next replication run by resetting its LastSyncTime.

#### Improved Performance, Removed Direct ID Check
We reworked the mechanism of applying changes to data in Target to boost the replication performance. The new approach does not include the check of the IDs, so we removed this option.

#### No More Foreign Keys
As the objects in the new replication runtime do not depend on each other, we no longer create foreign keys for the replicated objects.

### New Replication Features
We are happy to introduce new replication features that provide object history analysis capabilities, alternative ways to work with nested objects, and the functionality to address compliance issues by hashing your data.

#### History Mode
Track the history of the entries in your database with the History mode. Enable it for the chosen objects in Source to store every change made to the objects' rows in Target using the type 2 slowly changing dimension concept.
In this case, Skyvia will replace the Update operation with Insert and will add three more columns (status, start date, end date) to the table in Target, so you can perform data analysis for a specific date or period of time.

#### Nested Objects Replication
Now you can choose how to represent fields with complex structured data with the help of the Unwind Nested Objects option. Select JSON Columns to replicate nested object fields as columns with JSON data into the target table, or select Separate Tables to replicate nested object fields into additional tables in the database. Once you've selected the preferred Unwind Nested Objects option, you can manage it for each object separately in the Task Editor.

#### Data Hashing
Address compliance requirements with data hashing. Select specific object fields to hash, ensuring the secure transfer of sensitive data. For example, hash the customers' email data to provide an added layer of privacy without sacrificing analytical utility. To ensure that the data cannot be decoded by someone who knows the default hashing algorithm, we use a unique salt.

### Automation
We are excited to announce that Automation is out of beta now. Thank you for your valuable feedback and support during the beta period. We continue to work on further development of the Automation product and will keep you informed regarding all the upcoming updates. For now, we invite you to try it out and automate your workflows.

### New Connectors
- Tempo — a time management platform offering various valuable tools such as timesheets, a planner, a cost tracking tool, resource management, etc.
- Front — a cloud-based customer operations platform that enables support, sales, and account management teams to streamline communication and deliver service at scale.
- ClickUp — a cloud solution offering project management and collaboration tools such as task and time tracking, team chat, project whiteboards, etc.
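The History mode described in this release follows the type 2 slowly changing dimension pattern: instead of updating a row in place, the currently valid row is closed and a new row is inserted. A minimal sketch of that bookkeeping (the status/start date/end date columns come from the description above; the exact column names and values Skyvia writes are assumptions):

```python
from datetime import datetime, timezone

def scd2_upsert(history: list, key: str, new_values: dict) -> None:
    """Apply a change as a type 2 SCD: close the current row, insert a new one."""
    now = datetime.now(timezone.utc).isoformat()
    for row in history:
        if row["id"] == key and row["status"] == "current":
            row["status"] = "history"   # close the previously current row
            row["end_date"] = now
    history.append({"id": key, **new_values,
                    "status": "current", "start_date": now, "end_date": None})

history = []
scd2_upsert(history, "cust-1", {"email": "old@example.com"})
scd2_upsert(history, "cust-1", {"email": "new@example.com"})
# Two rows now exist for cust-1: one historical and one current,
# so the table can answer "what was the value as of date X?" queries.
```

This is why the release notes say the Update operation is replaced with Insert: every change becomes a new row, and the extra columns mark which row was valid when.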
### Connector Updates

#### Klaviyo
Authorization: the Public API Key parameter is deprecated. You need only the Private API Key to connect to Klaviyo.

New objects: we have added the BulkProfiles and ImportBulkProfiles objects to our connector.

Updated objects: we have added the CreatedDate and UpdatedDate fields to the Lists object. From now on, Skyvia supports Synchronization and Incremental Replication for this object.

Deprecated objects and procedures: the following objects are deleted from our Klaviyo connector: CampaignRecipients, Campaigns, EmailTemplates, ListExclusions, ListMembers, ListSubscribe, MetricTimeline, PeopleExclusions. The following procedures are deleted from the connector: Track, Identify, IdentifyProperties.

#### Brevo
We added the Suppress Extended Requests and Use Custom Fields connection parameters to the Brevo connector. Enable Suppress Extended Requests to reduce the number of API calls and increase the speed of processing the Contacts and ContactLists objects. Note that enabling this parameter disables the incremental replication of the ContactLists object. The Use Custom Fields parameter defines whether Skyvia will process the custom fields of the Contacts object. Enable it to insert and update custom field values. Brevo has precreated Email, Lastname, Firstname, and SMS custom fields that were previously displayed in Skyvia with the Attributes_ prefix. Now they are named according to the Contact Attribute Name.

#### Xero
We have supported the [Xero Project API](https://developer.xero.com/documentation/api/projects/overview) in our connector. The Project, ProjectTask, ProjectTime, and ProjectUser objects are now available in Skyvia. More details are available in the Xero documentation.
#### Podio
We have optimized the Items object and supported filters by the AppItemId, Title, CreatedVia_AuthClientId, and ExternalId fields. For user convenience, we added the separate read-only ItemFields object that stores the Fields field content in a tabular format. Skyvia automatically creates a separate <App Name>Items object containing items specific to each application. More details are available in the Podio topic.

#### QuickBooks Desktop
We have added new objects to our QuickBooks Desktop connector: Charge, Check, CheckExpenseLine, CheckLineItem, CreditCardCharge, CreditCardChargeExpenseLine, CreditCardChargeLineItem, CreditCardCredit, CreditCardCreditExpenseLine, CreditCardCreditLineItem. More details on the new objects' specifics are available in the QuickBooks Desktop topic.

#### Monday.com
Due to the Monday.com migration to the new [2023-10 API version](https://developer.monday.com/api-reference/docs/release-notes#2023-10), we have updated our connector. More details are available in the Monday.com topic.

#### Square
We have added a new GiftCards object and new LinkCustomerToGiftCard and UnlinkCustomerToGiftCard stored procedures to our connector. See more details in the Square connector topic.

## [January 2025](https://docs.skyvia.com/recent-releases/january-2025.html)

### New Regular Expression Functions
The extract, ismatch, ispresent, and replace_pattern functions are now available in the expression editor.

### New Connectors
- Facebook Pages — a Facebook account for a business, organization, or institution that allows users to advertise and track performance.
- SharePoint Files — a storage connector for working with SharePoint files. SharePoint is a cloud-based content management and collaboration platform from Microsoft.

### Connector Updates

#### ClickUp
We have added native filtering support for the Archived field in the TeamTasks object. Learn more in the ClickUp topic.
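The new expression functions listed under January 2025 correspond to standard regular-expression operations. A rough Python analogue of what ismatch, extract, and replace_pattern likely do (the exact Skyvia semantics, such as anchoring and capture-group handling, are assumptions, not confirmed by the notes):

```python
import re

def ismatch(value: str, pattern: str) -> bool:
    """True if the pattern occurs anywhere in the value (assumed semantics)."""
    return re.search(pattern, value) is not None

def extract(value: str, pattern: str):
    """Return the first substring matching the pattern, or None if absent."""
    m = re.search(pattern, value)
    return m.group(0) if m else None

def replace_pattern(value: str, pattern: str, replacement: str) -> str:
    """Replace every occurrence of the pattern."""
    return re.sub(pattern, replacement, value)

print(ismatch("order-4215", r"\d+"))        # True
print(extract("order-4215", r"\d+"))        # 4215
print(replace_pattern("a1b2", r"\d", "#"))  # a#b#
```

Functions like these are typically used in mapping expressions to pull an ID out of a free-text field or to scrub unwanted characters before loading.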
#### Float
We have supported the ProjectTasks object in the Float connector and extended the Tasks object with the TaskDays, FilterStartDate, FilterEndDate, and FilterExpand fields. The TaskDays field returns all the dates for recurring tasks when you add correct filters on the FilterStartDate, FilterEndDate, and FilterExpand fields. See more details in the Float topic.

#### FreshBooks
We have supported the OtherIncome object in the FreshBooks connector. See more details in the FreshBooks topic.

#### Freshservice
We have added the CannedResponseImages and CannedResponseAttachments read-only tables to the CannedResponse object. Learn more in the Freshservice topic.

#### G Suite
Google Calendar Extended Properties are now available in our G Suite connector. You can find them in the JSON field ExtendedProperties of the CalendarEvents object and in the separate CalendarEventExtendedProperties object. Learn more in the G Suite topic.

#### HubSpot
We have supported the HubSpot conversations API in the HubSpot connector. This adds support for the following objects: Messages, Threads, Inboxes, Channels, ChannelAccounts, Actors. Additionally, we have supported the v3 Lists API, which allows creating and managing HubSpot lists and their records from Skyvia. Now you can work with the Lists, ListFolders, ListMemberships, and <ObjectName>ListMemberships objects. See more details in the HubSpot topic.

#### Jira Software Cloud
The Jira Software Cloud connector now supports custom objects and fields. Learn more in the Jira Software Cloud topic.

#### Klaviyo
We have supported the new Klaviyo API version, v2024-10-15. This added support for new objects, including TrackingSettings, UniversalContents, Forms, Reviews, and SegmentFlowTriggers. Note that if you use OAuth authentication in your Klaviyo connection, you need to edit the connection and re-sign in. Learn more in the Klaviyo topic.

#### NetSuite v2
We have supported the Transactions and TransactionLines objects from the NetSuite SuiteAnalytics API.
Note that these objects are only available in the NetSuite v2 connector, not in the NetSuite connector.

#### Snowflake
We supported key-pair authentication for our Snowflake connector. The details on how to connect to Snowflake using key-pair authentication are available in the Snowflake topic.

#### Square
In the Orders object, the data type of all Amount fields has been changed from Int32 to Int64. These fields include: NetAmounts_DiscountMoney_Amount, NetAmounts_ServiceChargeMoney_Amount, NetAmounts_TaxMoney_Amount, NetAmounts_TipMoney_Amount, NetAmounts_TotalMoney_Amount, ReturnAmounts_DiscountMoney_Amount, ReturnAmounts_ServiceChargeMoney_Amount, ReturnAmounts_TaxMoney_Amount, ReturnAmounts_TipMoney_Amount, ReturnAmounts_TotalMoney_Amount, RoundingAdjustment_AmountMoney_Amount, TotalDiscountMoney_Amount, TotalMoney_Amount, TotalServiceChargeMoney_Amount, TotalTaxMoney_Amount, TotalTipMoney_Amount.

## [July 2023](https://docs.skyvia.com/recent-releases/july-2023.html)

### New Features

#### Automation
We reworked the Automation UI to free up space for the automation flow diagram and ease access to automation tools and features.

#### Update Schema in Replication
We have added metadata change detection and target schema update functionality to replication. Now you can configure your replication integrations to track metadata changes in the source and sync them to the target database schema when using incremental updates. Note that not all source metadata changes are synced to the target schema. Some changes cannot be applied because of target database limitations, and some can be ignored without any issues with replication. This feature greatly reduces the need to perform a full replication each time metadata is changed in a cloud app. Now you need to perform a full replication again only in case of a limited set of metadata changes that cannot be synced to target databases.
### Connector Updates

#### DEAR Inventory
Skyvia now supports the CRM group objects Leads, Opportunities, Tasks, TaskCategories, and Workflows for the [DEAR Inventory](https://docs.skyvia.com/connectors/cloud-sources/dearinventory_connections.html#establishing-connection) connector. You can perform the SELECT, INSERT, and UPDATE operations with these objects. These objects do not support Incremental Replication and Synchronization. From now on, you can manipulate the nested data of the Contacts and Addresses fields in the Leads object and the Steps field in the Workflows object. For this, enable the Nested Objects feature in Import integration.

#### Klaviyo
We added two more stored procedures to our Klaviyo connector: CreateEvent and CreateProfileAndEvent. You can find more details on how to use stored procedures in the [Klaviyo](https://docs.skyvia.com/connectors/cloud-sources/klaviyo_connections.html#establishing-connection) documentation.

#### BigCommerce
We supported the BigCommerce [Custom Template Associations API](https://developer.bigcommerce.com/docs/rest-content/custom-template-associations). Template associations are now available via the general CustomTemplateAssociations object as well as via specific objects: ProductTemplateAssociations, CategoryTemplateAssociations, BrandTemplateAssociations, PageTemplateAssociations.

#### HubSpot
We supported HubSpot Engagements API 3. The Engagements records are now also available via the Calls, Communications, Emails, Meetings, Notes, PostalMail, and Tasks objects. Note that the old Tasks object that was available previously is renamed to EventTasks. We have also added the CallProperties, CommunicationProperties, EmailProperties, MeetingProperties, NoteProperties, PostalMailProperties, and TaskProperties objects that store information about the corresponding object columns. Besides, the Quotes object now supports loading data into it.
## [July 2024](https://docs.skyvia.com/recent-releases/july-2024.html)

### Automation: Webhook Trigger
We are excited to announce the addition of Webhook Triggers to Skyvia! Now you can send requests to Skyvia to trigger an automation execution. Once a Webhook Trigger is set up, Skyvia generates a webhook URL that can be called from any external app or service and starts listening for the incoming requests. Once Skyvia receives a request, it executes the automation immediately. To learn more, visit the Triggers page.

### SQL Server Replication
Now you can replicate your data from SQL Server to another database or data warehouse. We added three Ingestion modes enabling you to choose how Skyvia will track changes in records during incremental replication. Learn more about database replication and Ingestion modes here.

### New Connectors
- Gmail — a free cloud mailing service provided by Google.
- Google Analytics 4 — a free web analytics service by Google that tracks and reports website traffic.

### Connector Updates

#### Azure DevOps
The Profile and Boardrows objects are now available in our Azure DevOps connector. Both objects are read-only. See more details on these and other objects in the Azure DevOps topic.

#### ClickUp
We supported the native ClickUp filters in the TeamTasks and TimeEntries objects to save API calls and increase query performance. More details are available in the ClickUp topic.

#### DigitalOcean
Now you can connect to DigitalOcean using an API token. OAuth authentication for this connector is no longer available. Existing DigitalOcean connections with OAuth authentication become invalid; you have to reconnect to DigitalOcean using an API token. Details on how to obtain an API token are available in the DigitalOcean topic.

#### HubSpot
The Invoices, InvoiceLineItems, LineItemInvoices, and Users objects are now available in our HubSpot connector.
These objects are read-only. See more information about the connector in the HubSpot topic.

#### Jotform
We made the form submission answers structure more convenient for users. Skyvia creates a separate object for each form, with the form name as a prefix in its name: <FormName>_FormSubmissions. Each submission is a separate record in such an object; form boxes are the object fields, and form answers are the field values. Specifics are described in the Jotform topic.

#### QuickBooks Desktop
Our QuickBooks Desktop connector now supports custom fields for the following objects: Customer, Employee, Vendor, Item. See more in the QuickBooks Desktop topic.

#### SendGrid
Now you can send emails to specific recipients with our new stored procedure. More information is available in the SendGrid topic.

#### Thinkific
The Orders object now supports Incremental Replication. See the full connector description in the Thinkific topic.

#### Zoho Inventory
We supported the native Zoho Inventory filters in our connector. Use the >= and > operators when filtering by the UpdatedDate field in the Items, SalesOrders, Contacts, Invoices, PurchaseOrders, and Bills objects to save API calls and increase query performance. See more information in the Zoho Inventory topic.

## [June 2023](https://docs.skyvia.com/recent-releases/june-2023.html)

### New Features

#### Source Values in the Import's Error Log
Now you can select what data will appear in the Import's error log. Choose between the initial data from the source and the processed values that you load to the target. For more details, visit the [Import](https://docs.skyvia.com/data-integration/import/configuring-import.html) topic.

### New Connectors
- Monday.com — a collaboration platform that helps businesses manage tasks, projects, and workflows to streamline communication, enhance productivity, and track progress.
### Connector Updates

#### Salesforce
We added Bulk API v2 support to improve the user experience while performing integration monitoring inside the Salesforce app. Now you can choose which version of Bulk API to use in the advanced settings of the Salesforce connection.

#### Pipeliner CRM
Skyvia now supports custom fields in the following Pipeliner CRM objects: Account, Contacts, Leads, Opportunities, Tasks, Appointments, Products, Projects.

#### Square
Now you can select all data from the Square Orders object, including the nested objects: LineItems, Fulfillments, Discounts, Taxes, ServiceCharges.

#### Customer.io
Skyvia now supports loading data into the Customer.io Customers object. Note that Skyvia uses the Customer.io Tracking API for working with Customer.io, and thus it now requires a Tracking Site ID and Tracking API Key for connecting.

## [June 2024](https://docs.skyvia.com/recent-releases/june-2024.html)

### New Connectors
- Follow Up Boss — a CRM platform designed for real estate business management.

### Connector Updates

#### Asana
We supported associating new or existing tasks with a specific project section. See how to do it in the Asana connector topic.

#### Azure DevOps
We added a new read-only WorkItemRevisions object to our Azure DevOps connector. More information about this and other objects is available in the Azure DevOps topic.

#### Freshservice
We supported multiple workspace accounts. Now you can get records from the Tickets, Assets, AgentGroups, Changes, Releases, and Problems objects belonging to different workspaces. Global custom fields and custom fields belonging to specific workspaces are now available in our Freshservice connector. See more information about the update and other connector specifics in the Freshservice topic.

#### Google Ads
We updated our Google Ads connector to the v16.1 API version. We added new fields to the *Report objects.
More information about the connector is available in the Google Ads topic.

#### Iterable
Skyvia now supports EU endpoints of the Iterable API. To switch to the EU data center, set the corresponding parameter in the Iterable connection. More details are available in the Iterable topic.

#### Insightly CRM
Now you can conveniently map the nested objects of the custom fields in the Opportunities, Contacts, Leads, OpportunityLineItems, Products, Tasks, and Projects objects. More information is available in the Insightly CRM topic.

#### Jira Service Management
We added more stored procedures to our Jira Service Management connector. Now you can add customers and organizations to the service desk and remove organizations from it. Information about these and other stored procedures is available in the Jira Service Management topic.

#### Klaviyo
We updated our Klaviyo connector to the latest API version, v2024-05-15. Due to the deprecation of the v2023-01-24 version, we removed the CreateProfileAndEvent stored procedure. The detailed connector description is available in the Klaviyo topic.

#### Magento
You can now map the nested objects of the complex structured fields in the Customers, SalesOrders, and Products objects using the Nested Objects option in Import integrations. You can also replicate them into separate tables with our new replication runtime. Use the Unwind component to map them in data flow integrations. See more details in the Magento topic.

#### Onfleet
You can now work with worker analytics in Skyvia using the new WorkerAnalytics object. We added the From and To fields to the TeamTasks, WorkerAnalytics, and WorkerTasks objects. Use these fields to filter when querying data from these objects. See more information in the Onfleet topic.

#### Outreach
We have upgraded the native filters for our Outreach connector. The CreatedDate and UpdatedDate fields in specific objects now support the <, <=, >, and >= filter operators. Earlier, they supported only the = operator.
The details on filtering specifics are available in the Outreach topic.

#### Pipedrive
We updated our Pipedrive connector metadata according to the v1 version of the Pipedrive API:

- We added a new CallLogs object. It supports the INSERT and DELETE operations.
- We added a new NoteComments object. It supports the INSERT, UPDATE, and DELETE operations. When updating this object, map the NoteId and the Id fields for better performance.
- We supported the INSERT, UPDATE, and DELETE operations for the DealFields, OrganizationFields, PersonFields, and ProductFields objects, which were earlier read-only.
- We changed the VisibleTo field data type to numeric in all objects where it is available.
- We added new fields to many objects according to the API. We did not remove the obsolete fields for compatibility.
- We supported stored procedures in our Pipedrive connector.

We also supported the v2 API version for the following Pipedrive objects: DealProducts, Products, ProductVariations, Stages, Pipelines. You can switch between the API versions in the Connection Editor. Detailed information about the connector and its objects is available in the Pipedrive topic.

#### Podio
Now you can map the Groupings_Groups field in the Views object in a convenient format using the Nested Objects feature. See more details in the Podio topic.

#### ShipStation
We supported more operators for filters in our ShipStation connector. From now on, you can use the >, >=, <, and <= operators in filters by date fields in the Orders, OrderItems, Shipments, and ShipmentItems objects. See more details in the ShipStation topic.

#### Shopify
The GiftCards object is now available in our Shopify connector. To apply the update, log in to Shopify again in the Connection Editor. More information about the connector is available in the Shopify topic.

#### Smartsheet
Skyvia now supports EU endpoints of the Smartsheet API. To switch to the EU data center, set the corresponding parameter in the Smartsheet connection.
More details are available in the Smartsheet topic.

#### Zammad
You can now connect to Zammad using self-hosted URLs and Zammad URLs. We changed the connection parameter in the Zammad connection editor from Subdomain to Domain. To connect to Zammad, specify the full domain, for example, mysite.mydomain.com or mysite.zammad.com. The existing connections with the subdomain parameter specified become invalid. Update your existing connections, specifying the full domain. You can find more details about the Zammad connector in the Zammad topic.

## [March 2023](https://docs.skyvia.com/recent-releases/march-2023.html)

### New Connectors
- Pipeline CRM — a cloud, easy-to-use CRM platform for small and mid-size businesses.
- Outreach — a sales execution platform designed to increase sales performance.

### Connector Updates

#### ConvertKit
We added new fields to the Sequences object in the ConvertKit connector: Hold, Repeat, SendTime, SendTimeZone, RecipientRules_LandingPages, RecipientRules_Courses, RecipientRules_Tags, RecipientRules_Lists, Mon, Tue, Wed, Thr, Fri, Sat, Sun, SendTimeZoneAbbr, EmailTemplates. For user convenience, the EmailTemplates field of the Sequences object is represented as a separate SequencesEmailTemplates object.

## [March 2024](https://docs.skyvia.com/recent-releases/march-2024.html)

### New CSV Source File Modes in Data Flow
We added alternative file modes to the Data Flow CSV Source component. Now you can use masks to work with dates in file names and upload the newest files automatically. Also, you can use expressions for a more flexible file name configuration, which allows building custom file name templates using the available functions, variables, and parameters. See the CSV Source topic for more details.
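Picking the newest dated file by a name mask, as the CSV Source modes above allow, boils down to matching a pattern against file names and comparing the embedded dates. A rough Python illustration of the idea (the pattern syntax and file names here are invented for the example; Skyvia's actual mask language is different and configured in the UI):

```python
import re
from datetime import datetime

def newest_by_mask(filenames: list, pattern: str, date_format: str):
    """Return the file whose date, captured by the pattern's group 1, is latest."""
    dated = []
    for name in filenames:
        m = re.fullmatch(pattern, name)
        if m:
            dated.append((datetime.strptime(m.group(1), date_format), name))
    return max(dated)[1] if dated else None  # latest date wins; None if no match

files = ["orders_2024-03-01.csv", "orders_2024-03-15.csv", "notes.txt"]
print(newest_by_mask(files, r"orders_(\d{4}-\d{2}-\d{2})\.csv", "%Y-%m-%d"))
# orders_2024-03-15.csv
```

Comparing parsed dates rather than raw strings keeps the selection correct even when the file names use a format that does not sort lexicographically.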
New Connectors

Zoho Sprints
Zoho Sprints is a cloud-based solution for agile teams, designed for collaborative project planning.

Connector Updates

Constant Contact
You can now replicate the following Constant Contact objects with the Incremental Updates option enabled: EmailBouncesReport, EmailDidNotOpensReport, EmailForwardsReport, EmailOpensReport, EmailOptoutsReport, EmailSendsReport, EmailUniqueOpensReport. See more details in the Constant Contact topic.

Freshservice
Skyvia now supports custom objects in all existing workspaces that belong to an account.

GitHub
We added new objects to our GitHub connector: Branches, Gists, OrganizationMembers, PublicGists, UserMemberships, RepositoryPullRequestReviews. We changed the primary key in the RepositoryPullRequests object to the composite key RepositoryName + Number. More information is available in the GitHub topic.

ShipStation
We added new fields for filtering in the Orders object. Use them to improve the performance of your queries and integrations. More details are available in the ShipStation topic.

Smartsheet
You can now use your sheets in your integrations. Skyvia represents Smartsheet sheets as separate objects with the *_Rows suffix in their names. Use these objects in your integrations, conveniently map their columns, and replicate and synchronize them. We also added support for new DML operations for existing objects, added the new AutomationRules, Dashboards, and Webhooks objects, and added new procedures to the connector. See more details in the Smartsheet topic.

Zammad
Skyvia now supports Zammad custom fields. Custom fields represent custom attributes for the Tickets, Users, Organizations, and Groups objects. Custom fields support the INSERT and UPDATE operations. See the Zammad topic for more details.
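Incremental replication, like the Constant Contact Incremental Updates option above, typically works by tracking a watermark: only records modified after the last saved timestamp are copied, and the watermark then advances. This is a generic sketch of that technique, not Skyvia's implementation; the record shape and field names are illustrative.

```python
from datetime import datetime

def incremental_sync(records, last_sync: datetime):
    # Copy only records changed since the last run (the watermark),
    # then advance the watermark to the newest change seen.
    changed = [r for r in records if r["ModifiedAt"] > last_sync]
    new_watermark = max((r["ModifiedAt"] for r in changed), default=last_sync)
    return changed, new_watermark

records = [
    {"Id": "a", "ModifiedAt": datetime(2024, 3, 1)},
    {"Id": "b", "ModifiedAt": datetime(2024, 3, 10)},
]
changed, watermark = incremental_sync(records, datetime(2024, 3, 5))
print([r["Id"] for r in changed], watermark)  # ['b'] 2024-03-10 00:00:00
```

The design keeps each run cheap: instead of re-reading the whole object, only the delta since the watermark is transferred.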
Recent Releases: March 2025

Control Flow and Data Flow Updates
We have changed the look of the Data Flow and Control Flow tools. Schedule settings and integration parameters have moved from the toolbar to the right. When you edit data flows and control flows, we provide more space for the diagram by hiding the menu and making the toolbar more compact. Read the corresponding documentation topics for more information.

Connection Validation for Active Automations
We added validation and notifications for active automations that use a connection trigger. When your automation is enabled but its connection is no longer valid, you receive an error message on the monitor tab and an email notification (if notifications are turned on).

Schedule Status Display for Integrations
You can now see whether integrations are scheduled for automatic execution while exploring them in the list view. Point to the schedule icon to see detailed information on the schedule.

New Connectors
Scoro — an all-in-one business management software that combines project management with time and team management, sales, billing, etc.

Connector Updates

Float
The new fields ProjectCode, BudgetPriority, Status, StartDate, EndDate, CalculatedStartDate, and CalculatedEndDate are now available in the Projects object. See more details about the connector in the Float topic.

Freshsales Classic
Added support for retrieving data from the SalesAccountID field in the Deals object. Full information about the connector is available in the Freshsales Classic topic.

Google Ads
We updated our Google Ads connector to the [v18 API version](https://developers.google.com/google-ads/api/docs/release-notes). Detailed information about the connector is available in the Google Ads topic.

HubSpot
The Orders object is now available in our HubSpot connector.
See the connector details in the HubSpot topic.

Jira Service Management
New objects are now available in our Jira Service Management connector: AssetSchemas, AssetSchemaObjecttypes, AssetSchemaObjecttypeAttributes, AssetObjects, AssetObjectAttributes, AssetSchemaAttributes, AssetSchemaObjecttypesFlat. See more details in the Jira Service Management topic.

Paymo
The SubTasks object has been added to the Paymo connector. The object supports the INSERT, UPDATE, and DELETE DML operations, as well as synchronization and incremental replication updates. For more details, see the Paymo topic.

Sage Accounting
You can now connect to the specific organization you are authorized to access. We added the Business connection parameter to the Connection Editor. The connector details are available in the Sage Accounting topic.

Stripe
We added a new PriceProductId field to the InvoiceLineItems object. See the connector details in the Stripe topic.

Zendesk
Skyvia now supports the new API version for Zendesk custom objects. You can switch to the new API in the Zendesk connection settings. Skyvia now also supports the Talk API. This change adds the following new objects: Address, GreetingCategory, DefaultGreeting, Greeting, PhoneNumber, DigitalLine, AccountOverview, AgentsActivity, AgentsOverview, CurrentQueueActivity, Call, CallLeg, AgentAvailabilityState, VoiceSettings, IVR, IVRMenu, IVRRoute. Learn more in the Zendesk topic.

Zoho Books
We changed the primary key for the InvoicePayments object from the Id field to the InvoicePaymentId field. InvoicePayments now supports the DELETE DML operation. The connector details are available in the Zoho Books topic.

Recent Releases: May 2023

Query Parameters
Skyvia now allows configuring parameters in Query.
A query parameter is a placeholder for varying values that you can use instead of constant values in query filters. You can change query parameter values without modifying the query itself. You can use parameters in the Query Builder, in filters, or in SQL code. Query parameters are also supported for the Execute Query action in [Automation](https://docs.skyvia.com/automation/), in Import and Export integrations, and in Source components of Data Flow and Control Flow integrations. You can add parameters to your queries and preview a query before running the integration.

Copy and Paste Components in Control Flow and Automation
You can now copy and paste components using hotkeys and the component management menu in [Control Flow](https://docs.skyvia.com/data-integration/control-flow/) and [Automation](https://docs.skyvia.com/automation/).

New Connectors

TMetric
TMetric — a powerful time management tool that helps companies increase productivity, monitor team performance, and manage projects and tasks.

Connector Updates

SendGrid
The SendGrid Contacts object now supports the INSERT, UPDATE, and DELETE operations. Skyvia now also supports custom fields for the Contacts object.

Zoho CRM
Subforms are supported for Zoho CRM. You can work with them either as fields of the corresponding parent objects or as separate objects. The Nested Objects feature in import and data flow is also supported for them. Note that you need to select API version Ver4 in the Zoho CRM connection settings to work with subforms.

HubSpot
You can now work with HubSpot custom object properties via the corresponding *Properties objects.

Zoho Books
Added nested objects support for the Zoho Books connector. You can now map complex structured fields more easily. Visit the [ZohoBooks connector](https://docs.skyvia.com/connectors/cloud-sources/zohobooks_connections.html) page for more details.

Magento
Magento store views are supported.
You can now select the store view against which the API requests are executed in the connection settings. This parameter is available for Magento 2.x.

Confluence Cloud
Skyvia now supports the [API V2](https://developer.atlassian.com/cloud/confluence/rest/v2/intro/#about) version for the Confluence Cloud connector. More Confluence objects are now available. The Content object is replaced with BlogPosts, Pages, and related objects.

Recent Releases: May 2024

New Connectors
Dynamics 365 Business Central — an ERP solution that provides extensive financial management tools for accurate expense and inventory tracking and budget management.
My Hours — a project time-tracking solution designed to manage performance and efficiency.

Connector Updates

LinkedIn Ads
We added support for the latest LinkedIn Ads API version, [Marketing April 2024](https://learn.microsoft.com/en-us/linkedin/marketing/integrations/recent-changes?view=li-lms-2024-04#april-2024). The AdForms, CampaignRecommendations, and CampaignInsights objects are deprecated. More details about the updates are available in the LinkedIn Ads topic.

SendPulse
SendPulse objects now support Incremental Replication and Synchronization. You can now work with ListEmails custom fields using the separate dynamic <MailingListName>_ListEmails objects. More details are available in the SendPulse topic.

Shopify
We updated our Shopify connector to API version [2024-04](https://shopify.dev/docs/api/release-notes/2024-04).

Wrike
You can now work with custom fields for the Tasks, Projects, and FoldersAndSpaces objects. The FoldersAndSpaces and Projects objects now support the INSERT, UPDATE, and DELETE operations. We added the Timelogs object to our connector. The Dates_Start and Dates_Due fields in the Tasks object are now available for import.
Skyvia no longer performs additional requests to the Folders object when querying. However, it now does so for some fields in the Projects and Timelogs objects. You can find more details about these and other Wrike connector specifics in the Wrike topic.

Recent Releases: May 2025

New Connectors
Avalara — a cloud-based platform that automates tax compliance for businesses, including tax calculation, filing, exemption management, and e-invoicing.

Connector Updates

ActiveCampaign
We added support for Custom Objects in the ActiveCampaign connector. We also added support for Custom Fields for the Accounts, Contacts, and Deals objects. See more in the ActiveCampaign topic.

BigCommerce
We added a new CustomerMetafields object to the BigCommerce connector. Learn more in the BigCommerce topic.

Exact Online
We added three new options to the Region setting: France, Germany, and Spain. Learn more about this connector in the Exact Online topic.

Freshsales Classic
New fields are now available in the Contacts and Deals objects:
Contacts: CampaignId, LeadSourceId, SalesAccountId, LifecycleStageId, ContactStatusId
Deals: LeadSourceId, SalesAccountId, DealStageId, DealTypeId, DealReasonId, DealPaymentStatusId, DealProductId, CurrencyId
See more information in the Freshsales Classic topic.

GetResponse
We added a new filtering field for the ContactActivities object: FilterCreatedOn. Learn more in the GetResponse topic.

Google Ads
We updated the API version to V19.1. The following objects are no longer available: AdGroupFeeds, CampaignFeeds, CustomerFeeds, ExtensionFeedItems, ExtensionFeedItemsReport, FeedItemSetLinks, FeedItemSets, FeedItemTargets, FeedItems, FeedItemsReport, FeedMap, FeedPlaceholderView, Feeds. For more information about this connector, refer to the Google Ads topic.

LinkedIn Ads
We updated the LinkedIn Ads connector to API version 202504.
As part of this update:
- The CampaignRecommendations and CampaignInsights objects have been removed.
- The legacy AdForms object is replaced by LeadForms, which is read-only for now.
- New fields have been added to:
  Campaigns: ConnectedTelevisionOnly, FrequencyOptimizationPreferenceTimeSpanDuration, FrequencyOptimizationPreferenceTimeSpanUnit, FrequencyOptimizationPreferenceOptimizationType, FrequencyOptimizationPreferenceFrequency
  CampaignGroups: BudgetOptimizationBidStrategy, BudgetOptimizationBudgetOptimizationStrategy, DailyBudget, ObjectiveType
  Ads objects: Name
  Report objects: AudiencePenetration, AverageDwellTime, SubscriptionClicks, ViralSubscriptionClicks
Learn more in the LinkedIn Ads topic.

MailerLite
We added a Suppress Extended Requests checkbox for the Groups field of the Subscribers object. Learn more in the MailerLite topic.

NetSuite V2 (REST)
We added support for custom fields in the Transactions, TransactionLines, RevenueElements, SystemNotes, and SystemNotesV2 objects. See more in the NetSuite V2 topic.

Snowflake
We added support for Internal storage in the Files Storage for Bulk setting. Learn more about this change in the Snowflake topic.

Zendesk
We added the SectionId and AuthorId fields to the Article object and the CategoryId field to the Section object. These fields support native filtering, so you can quickly filter Article and Section data by their parent IDs. Learn more in the Zendesk topic.

Recent Releases: November 2023

New Connectors
HelpDesk — a simple ticketing system for tracking, prioritizing, and resolving customer support tickets, developed by LiveChat Software.
Zoho Billing — an invoicing, expense management, project billing, and recurring billing tool from Zoho.
ServiceNow — an intelligent platform for digital transformation that enables companies to digitize any process across their organization with pre-built and customizable workflow solutions.
Acumatica — an enterprise resource planning (ERP) platform helping small and midsize businesses with financial management, customer relationship management (CRM), project accounting, and distribution management.

Connector Updates

AWeber
We have added new objects to our AWeber connector: SubscriberActivities, SentBroadcastOpens, and SentBroadcastClicks. These objects are read-only.

BigCommerce

New Objects
We have added new objects to our BigCommerce connector: CategoryTrees, Channels, CustomerFormFieldValues, CustomerAddressFormFieldValues, CustomerAttributes, Carts, CartLineItems, CartCustomItems, CartGiftCertificateItems, CartMetafields, SystemLogs, TaxRates, TaxZones, TaxProperties. The CustomerCustomFields and CustomerAddressCustomFields objects are obsolete. They remain in the connector for compatibility purposes; in the latest update, we added the CustomerFormFieldValues and CustomerAddressFormFieldValues objects instead.

Existing Object Updates
We have added the following new fields to the Customers object: Channels, FormFields, Attributes, and Addresses. In addition, the CustomFields field has been removed from the Customers object and replaced with the FormFields field. Please note that this is a breaking change, and you may need to modify your existing BigCommerce integrations and backups and manually exclude the CustomFields field from them. For more details, please see our BigCommerce connector topic.

Todoist
The Todoist API has migrated from version v1 to v2, so we updated our Todoist connector accordingly. You can find detailed information about the updates [here](https://developer.todoist.com/rest/v2/#migrating-from-v1).

Salesforce
We have added the Ignore Blank Values for Update checkbox to the Salesforce advanced connection parameters.
It determines whether the Update operation ignores blank (null) values and keeps the data in the corresponding Salesforce fields, or writes null values to Salesforce.

Google BigQuery
We have added support for authentication via service accounts for Google BigQuery. For more details, please see our Google BigQuery connector topic.

FreshBooks
You can now work with the Projects and Services data in FreshBooks. We support both reading and writing to these objects.

Recent Releases: November 2024

New Connectors
BambooHR — an all-in-one HR platform that provides an employee database service and reporting, payroll, time, and benefits, as well as hiring and onboarding solutions.
Heymarket — a messaging app providing services for team communication and collaboration.
PagerDuty — a SaaS incident management platform designed to help businesses handle alerts, automate workflows, and notify teams about issues.

Connector Updates

Clockify
Custom fields are now available in our Clockify connector. See more information in the Clockify topic.

Freshsales Suite
We extended the Contacts object with the following fields: FirstCampaign, FirstMedium, FirstSeenChat, FirstSource, LastCampaign, LastMedium, LastSeenChat, LastSource, LatestCampaign, LatestMedium, LatestSource, Locale, McrId, OtherUnsubscriptionReason, RecordTypeId, SmsSubscriptionStatus, SystemTags, TotalSessions, UnsubscriptionReason, WhatsappSubscriptionStatus.

Freshservice
Our Freshservice connector now supports binary attachments. Binary attachments are located in the Content field in the SolutionArticleAttachments and SolutionArticleImages objects. More information about the Freshservice connector is available in the Freshservice topic.

Google BigQuery
Our Google BigQuery connector now supports flexible column names.
Enable the Flexible Column Names connection parameter to accept object names with any characters when replicating to Google BigQuery. More details are available in the Google BigQuery topic.

Kit (formerly ConvertKit)
You can now work with custom fields in Kit. More information is available in the Kit topic.

Microsoft Ads
We added the following objects to our connector: AdDynamicTextPerformanceReport, AdDynamicTextPerformanceDailyReport, AdDynamicTextPerformanceMonthlyReport, ConversionPerformanceReport, ConversionPerformanceDailyReport, ConversionPerformanceMonthlyReport, DestinationUrlPerformanceReport, DestinationUrlPerformanceDailyReport, DestinationUrlPerformanceMonthlyReport, DSAAutoTargetPerformanceReport, DSAAutoTargetPerformanceDailyReport, DSAAutoTargetPerformanceMonthlyReport, DSASearchQueryPerformanceReport, DSASearchQueryPerformanceDailyReport, DSASearchQueryPerformanceMonthlyReport, DSACategoryPerformanceReport, DSACategoryPerformanceDailyReport, DSACategoryPerformanceMonthlyReport. See more information about the connector in the Microsoft Ads topic.

Monday.com
We've added a new parameter, Board Item Columns Naming Rule, and renamed the previous parameter to Board Item Tables Naming Rule for clearer customization of table and column names. For more details, see the Monday.com topic.

Paymo
We added support for native filters in our Paymo connector. You can now optimize your queries without exceeding API limits. See more information in the Paymo topic.

Podio
The Calculated type of Podio custom fields now inherits the type of the variable selected for this field. More information is available in the Podio topic.

Stripe
We added two new objects, AllSubscriptions and AllSubscriptionItems, which contain all subscriptions, including inactive ones. See more information in the Stripe topic.

Trello
The Boards object custom fields are now available in Skyvia. More details are available in the Trello topic.

Zoho Books
We added the ItemBatches object to the Zoho Books connector.
More information is available in the Zoho Books topic.

Zoho Inventory
We added the ItemBatches object to the Zoho Inventory connector, and the Batches field is now available in both the Items and ItemAdjustmentLineItems objects. See more information in the Zoho Inventory topic.

Recent Releases: October 2023

New Connectors
Sendcloud — a shipping platform that helps e-commerce businesses manage deliveries and provides a seamless shipping workflow.

Connector Updates

Customer.io
We have added new stored procedures to allow working with custom objects. You can now create or remove custom object records and add, modify, or remove record associations with customers. For more details, see the Customer.io connector topic.

Affinity
We have added the Suppress Extended Requests connection parameter to the Affinity connection. Use this parameter to control whether additional web requests are used to query some special fields from the Persons and Organizations objects. For more details, see the Affinity connector topic.

Shippo
Selecting data from the TrackingStatus object now requires filtering by the Carrier and TrackingNumber fields.

HubSpot
We have added the Column-wise chunking checkbox to the HubSpot connection editor. This checkbox enables splitting requests to the ContactListContacts, Contacts, and Deals objects into multiple requests with different fields when the request URI length exceeds 8000 characters. This allows you to avoid the 414 Request-URI Too Large error.

Greenhouse
We have added the CreatedDate and UpdatedDate fields to the Applications object. Skyvia now supports Incremental Replication and Synchronization for the Applications object.
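The column-wise chunking described for HubSpot above follows a common pattern: when requesting many fields at once would push the request URI past a length limit, the field list is split across several requests and the partial records are merged by Id. The sketch below is a hedged local illustration of that pattern; the tiny URI limit, `fetch` function, and sample data are hypothetical stand-ins for real API calls, not Skyvia internals.

```python
URI_LIMIT = 30  # tiny limit for demonstration; HubSpot's is 8000 characters

def chunk_fields(fields, base_len, limit=URI_LIMIT):
    # Greedily pack field names into chunks so that base URI length
    # plus the comma-separated field list stays under the limit.
    chunks, current, length = [], [], base_len
    for f in fields:
        if current and length + len(f) + 1 > limit:
            chunks.append(current)
            current, length = [], base_len
        current.append(f)
        length += len(f) + 1
    if current:
        chunks.append(current)
    return chunks

def fetch(ids, fields):
    # Stand-in for one API request returning only the requested fields.
    data = {1: {"Name": "Acme", "City": "Oslo", "Phone": "555"},
            2: {"Name": "Duo", "City": "Lviv", "Phone": "556"}}
    return {i: {f: data[i][f] for f in fields} for i in ids}

def fetch_chunked(ids, fields):
    # Issue one request per field chunk and merge partial records by Id.
    merged = {i: {} for i in ids}
    for chunk in chunk_fields(fields, base_len=20):
        for i, partial in fetch(ids, chunk).items():
            merged[i].update(partial)
    return merged

result = fetch_chunked([1, 2], ["Name", "City", "Phone"])
print(result[1])  # {'Name': 'Acme', 'City': 'Oslo', 'Phone': '555'}
```

In Skyvia you only tick the Column-wise chunking checkbox; the sketch shows why that avoids the 414 Request-URI Too Large error.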
Productive.io

Deals and Budgets
The Deals object has been divided into two separate objects, Deals and Budgets, for correct custom fields processing. Thus, the foreign keys that earlier referred to the Deals object were removed from the Comments, Invoices, Expenses, and Activities objects.

Custom Fields
We have added support for Productive.io custom fields. The following Productive.io objects contain custom fields: Bookings, Budgets, Companies, Deals, Expenses, Invoices, People, Projects, Tasks.

Klaviyo
Due to the latest Klaviyo API updates, we have updated our Klaviyo connector.

Object Updates
We added the following objects to the connector: Accounts, EmailCampaignTags, SMSCampaignTags, EmailCampaignMessages, SMSCampaignMessages, Coupons, CouponCodes, Images. The EmailCampaigns and SMSCampaigns objects replaced the Campaigns object.

Procedure Updates
The SuppressProfiles and UnsuppressProfiles procedures require a new format for the SuppressionEmail parameter. See more details in our Klaviyo connector topic.

Recent Releases: October 2024

Skyvia API Beta
We are excited to introduce the Skyvia API Beta. It provides programmatic access to a variety of Skyvia platform features. The beta version enables User, Workspace, Integration, Automation, Backup, and endpoints management. The API supports a set of features across Skyvia products and will be extended in future releases. For more details, check the API Reference.

Automation Test Mode
We implemented a Test mode in Automation. You can now test and debug your automation while building or editing it, with no billed tasks involved. Learn more about the Test mode here.

Backup UI Updates
The Backup UI has been updated:
- All backup details are now available in one place.
- The search through the snapshots is more convenient.
- You can add a description to your backups and check the history of changes.
- New filter behaviour in backup objects.
- Changes to the Schedule and Last Snapshot functionalities.

Connector Updates

Harvest
The Harvest connector now supports native filtering for some of the objects. For more details, see the Harvest connector page.

Podio
We enabled metadata caching for Podio. It helps to avoid rate limit errors in your integrations. Full information about this connector is available in the Podio topic.

Productive.io
We have added two new objects to the Productive.io connector: Reports_FinancialItem and Reports_Time. For more details, see the Productive.io connector page.

Recent Releases: September 2023

New Features

Automation Versions
We added Automation versioning to give you better control over past automation modifications. You can now check all previous versions of the current automation, filter and comment on them, and restore your automation to any previous version. To learn more, visit the Managing Automation Versions documentation page.

New Connectors
Teamwork — a project management tool for resource and workload planning, progress and milestone monitoring, and collaboration.
Teamwork CRM — a CRM tool that helps users effectively manage their sales, focused on ease of use and visibility.
Teamwork Desk — a Teamwork-integrated help desk tool for customer communication.
Zulip — a communication tool providing messaging services.
Microsoft Ads — a tool for advertising on the Bing search network and its partner networks (Yahoo and AOL).

Connector Updates

HubSpot
Since the HubSpot Calendar API was sunset on August 31, 2023, we have removed the Events and EventTask objects and all the relations involving them.
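Metadata caching, as enabled for Podio above, is a standard way to stay under API rate limits: the first metadata request hits the API, and repeats within a time-to-live window are served from the cache. This is a generic hedged sketch of the technique, not Skyvia's implementation; the loader function and TTL value are hypothetical.

```python
import time

class MetadataCache:
    def __init__(self, loader, ttl_seconds=300.0):
        self.loader = loader          # stand-in for a real API call
        self.ttl = ttl_seconds
        self._store = {}              # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]           # cache hit: no API request made
        value = self.loader(key)      # cache miss: one API request
        self._store[key] = (now + self.ttl, value)
        return value

calls = []
def load_metadata(app_id):
    calls.append(app_id)              # count outgoing "API requests"
    return {"app": app_id, "fields": ["Title", "Status"]}

cache = MetadataCache(load_metadata)
cache.get("app1")
cache.get("app1")                     # served from cache, no extra call
print(len(calls))                     # 1
```

The trade-off is freshness: within the TTL, schema changes in the source are not visible, which is acceptable for metadata that changes rarely.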
Zendesk
We have switched to [cursor-based pagination (CBP)](https://developer.zendesk.com/documentation/api-basics/pagination/paginating-through-lists-using-cursor-pagination/?_ga=2.45865444.1719505781.1695221851-701757852.1650540075) for the Zendesk objects that support it. This fixes errors caused by the [new offset-based pagination limits](https://support.zendesk.com/hc/en-us/articles/5591904358938-New-limits-for-offset-based-pagination).

Recent Releases: September 2024

Target File Name
We reworked and slightly extended the Target File Name functionality in Export. We added a separate Target Definition step, where you can use pre-built file name templates and file name masks to quickly generate custom names for the files you export.

New Connectors
Booqable — all-in-one rental software for order management, online booking, and inventory.

Connector Updates

ActiveCampaign
You can now conveniently map the nested objects of the complex structured OrderProducts field in the ECommerceOrders object. See more details in the ActiveCampaign topic.

AfterShip
We updated our AfterShip connector to the [07-2024](https://www.aftership.com/docs/tracking/changelog#2024-07-2024-07-11) API version.
- The length of the tracking ID has changed from 24 characters to 32 characters. The Id field values in the Trackings object now differ from the values in previous API versions.
- We added the new Stores, Orders, Product, and Fulfillments objects. The Orders, Product, and Fulfillments objects have a complex structure and support the Nested Objects feature in Import and the Unwind Nested Objects feature in Replication.
- The Notifications object is deprecated. The Emails and SMSes fields are now available in the Trackings object.
- The LastCheckPoint object is deprecated. All checkpoints are now stored in the CheckPoints object.
More information is available in the AfterShip topic.

ChargeOver
We optimized the custom fields setup in our ChargeOver connector. Custom fields are now available under their UI names instead of the inconvenient Custom1, Custom2, Custom3, …, Custom20 names. More information about custom fields is available in the ChargeOver topic.

Freshdesk
We added support for custom Freshdesk objects in Skyvia. More information about this support and the connector is available in the Freshdesk topic.

HubSpot
You can now get all the needed HubSpot object associations using the Customize Associations parameter in the Connection Editor. See more details in the HubSpot topic.

Jira Service Management
You can now remove users from an organization via the API using our new stored procedure. More details are available in the Jira Service Management topic.

Shopify
We added support for Admin API Access Token authentication for Shopify. To use it, create and install a custom app in your Shopify account and obtain the corresponding access token.

Skyvia Query Excel Add-in

Skyvia Query Excel Add-in is intended for those of our clients who use Office on Windows (subscription), Office on Mac (subscription), or Office on the web. The add-in allows you to easily build Excel reports based on your data from a wide variety of databases and cloud applications. It enables you to get these data directly into Excel and work with them in the Excel interface. You can share these reports with your colleagues and refresh the data in them with a single click whenever needed. The data can be retrieved as they are stored in the data source or undergo filtering, aggregation, calculations, joins, etc. Thus, the add-in allows you to immediately get live cloud or database data in the form you need for your reports.
How It Works
Skyvia Query Excel Add-in works with the data sources via Skyvia. The add-in uses Skyvia Query, our online SQL client and query builder tool for querying and managing cloud and database data. You create connections to the data sources you want to query in Skyvia, and Skyvia stores your connections. Queries are executed by the Skyvia service, which sends the returned data to Excel. The queries you save in Query Gallery are available both in the Query Add-in and in Skyvia Query: you can open, run, edit, or delete them in either place. This means that you need a Skyvia account to use the add-in. You can register at [https://app.skyvia.com/register](https://app.skyvia.com/register). Please note that registration in Skyvia is free, and there is a free plan available. You can also view Skyvia pricing on our [pricing page](https://skyvia.com/pricing/). Before using Skyvia Query Add-in, you need to log in to Skyvia from Excel.

Connections
To query data, you first need to create a connection to the corresponding data source. Connections are created and stored in Skyvia, not in Excel. Note that Skyvia supports creating OAuth connections for all the supported data sources that allow connecting via OAuth, such as Salesforce; for such data sources, you do not need to store your credentials in Skyvia. Connections are stored in an encrypted form. Skyvia Query Excel Add-in supports getting data from various cloud applications, databases, and cloud data warehouses. Please see the Cloud Sources and Databases topics for the complete list. You can find more information on creating the corresponding connections in the Connections section of our documentation.

Queries
In the Skyvia Query add-in, you get data from a data source into Excel using queries. A query determines what data you want to get and in which form. While the Skyvia Query add-in queries data using SQL SELECT statements, you are not required to know SQL.
You can configure a query visually in Query Builder. Of course, if you are familiar with SQL, you may simply enter an SQL statement. After you retrieve data, the query is linked to the Excel workbook. You can later modify or re-execute it and refresh the data in your workbook with the actual data from the data source in a single click. You can use different queries on different worksheets of an Excel workbook, and thus have data from different sources in one workbook. Queries can also be saved in Query Gallery to be reused later in other Excel workbooks. Query Gallery offers a convenient interface and a quick search for the necessary queries. It also contains a number of predefined queries to various data sources for common use cases, which you can use as is or study to better understand how to create your own queries.

Query Editor
Skyvia offers a powerful Query Editor with a visual Query Builder. It allows both creating a query visually and entering SQL statements. For this, it has two tabs, Builder and SQL. First, you select a connection you have created in Skyvia (or you can open the Connection page and start creating a new connection directly from the query editor). Then you can select an object (table) to query data from, select its fields, and configure data processing and filtering. Though the Query Builder provides a wide range of features, you can configure your query in much more detail using SQL. At any time, you can switch from visual query creation to editing SQL code, and you can see the generated SQL code for the query you have created visually. However, if you modify the SQL code, you will no longer be able to edit this query visually, because not every SQL feature can be interpreted in a visual editor. For databases, Skyvia Query uses their native SQL syntax. For cloud applications, you use SQLite SQL syntax.
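Since cloud sources are queried with SQLite SQL syntax, you can try the flavor of query the SQL tab accepts against Python's built-in sqlite3 module. This is a hedged local sketch: the Account table and its fields are hypothetical sample data, not a real Skyvia object.

```python
import sqlite3

# Build a throwaway in-memory database with illustrative sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Name TEXT, Industry TEXT, AnnualRevenue REAL)")
conn.executemany(
    "INSERT INTO Account VALUES (?, ?, ?)",
    [("Acme", "Manufacturing", 120.0), ("Globex", "Energy", 300.0),
     ("Initech", "Manufacturing", 80.0)],
)

# Aggregation and sorting of the kind Query Builder generates on the SQL tab.
rows = conn.execute(
    """SELECT Industry, COUNT(*) AS Accounts, SUM(AnnualRevenue) AS Revenue
       FROM Account
       GROUP BY Industry
       ORDER BY Revenue DESC"""
).fetchall()
print(rows)  # [('Energy', 1, 300.0), ('Manufacturing', 2, 200.0)]
```

The same SELECT text, pointed at a cloud connection in Skyvia Query, would return the aggregated result directly into your worksheet.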
After you have finished your query, either in Query Builder or as SQL code, you can immediately run it and get the returned data into your Excel workbook. Note that query execution can take some time, especially for cloud applications, if the query contains complex processing, queries objects with a lot of data in them, etc. To learn how to query data in more detail, see the How to Query Data to Excel topic.

## Query Gallery

Useful queries that you plan to reuse in other workbooks can be saved in Query Gallery. Later you can quickly open a saved query in just a few clicks. Query Gallery displays queries grouped by different criteria: by data sources, by connections, or ungrouped. To find the necessary query by any of these criteria, select the corresponding group from the drop-down list in the Skyvia Query toolbar. You can also enter a part of the query name in the Search box.

To use a query from the gallery, simply click it. Then select a connection for the query and click Run to run it immediately without opening it in the editor. You can also open it for editing, delete it from the gallery, or open it in the Skyvia service. In addition to queries saved by you, Query Gallery contains a number of predefined public queries for different data sources and use cases. If you do not want to see them in the gallery, click Hide public queries in the Skyvia Query toolbar. You can read more about Query Gallery in the Query Gallery in Excel Add-in topic.

# Filtering Data in Skyvia Query Add-in

Skyvia Query Add-in allows you to configure filters in Query Builder in order to filter queried data.
Skyvia Query Add-in supports three kinds of filters: value filters, list filters, and relative filters. Value filters are supported for all field types. List filters are supported only for fields with string data. Relative filters are supported only for fields with date or datetime data.

## Adding a Filter Condition

To add a filter condition, perform the following steps:

1. In the Filters section, add a new condition.
2. In the first drop-down list, select the object whose field to filter data by. By default, the current object, selected in the Object list, is used. You can also filter data by fields of its related objects.
3. In the second drop-down list, select the name of the field to filter the data by. The field used in a filter does not have to be present in the returned data; you can filter data by any field of the object selected in the previous list.
4. In the third drop-down list, select the kind of filter to use: value, relative, or list.
   - Value filters are the simplest ones. They compare the selected field with some value (or values) or check whether the field is null. For example, the following filter checks whether a string field "Type" begins with the word "Customer":
   - Relative filters are available only for fields with date or datetime data. They check whether the field value belongs to a time period relative to the current date, for example, whether the date belongs to the current, next, or previous week, month, etc. The following filter checks whether values of a datetime field "CreatedDate" belong to the previous month:
   - List filters are available only for fields with textual data. They allow you to specify a list of strings and check whether the field value belongs to this list. The following filter checks whether a string field "Type" equals one of the list values. To enter values, select them from the drop-down list.
5. In the next drop-down list, select the comparison operator to use: equal to, not equal to, greater than, is null, etc. Note that different comparison operators are available for different field types.
6. Depending on the selected kind of filter, specify a value to compare with, enter a list of values, or select a relative time period.

You can add as many filter conditions as you need. To remove a condition, click the remove button to the right of the corresponding condition line.

## AND and OR Filter Groups

By default, all filter conditions are united with the AND logical operator. This means that a row must meet every filter condition to be retrieved to Excel for the web. If you have several filter conditions and want to query rows that match only some of them, you can use the OR operator instead. For this, click the drop-down list with And and select Or.

## Adding or Removing Filter Groups

For more complex cases, conditions can be united into subgroups with their own AND or OR logical operators. Nested filter groups can contain their own nested filter groups, and so on. To add a condition group, click the add group button to the right of the parent group's logical operator. After this, you can add conditions and condition groups to the new group. To remove a group from the filter, click the remove button to the right of the corresponding group line.

## Filter Example

Let's show how to configure filters with a specific example. Suppose we need to get Salesforce accounts of the Customer - Direct or Customer - Channel type added in the previous month. We can do this in two ways: with plain value filters or with list and relative filters. Using relative and list filters is more convenient in this case, but we show both ways in order to demonstrate value filters and filter groups as well.

### Example with Relative and List Filters

1. If you are not logged in to Skyvia, log in: hover over Skyvia Query and click Sign In.
2. In the opened window, enter your Skyvia credentials (email and password) and click Sign In again.
3. In the Skyvia Query window, open the Query Editor.
4. In the topmost box of the Query Editor, select the connection to Salesforce.
5. In the Object box, select Account.
6. In the list below, select the Account fields to query.
7. In the Filters section, add a condition. Account is already selected in the first drop-down list. In the second drop-down list, select CreatedDate. In the third drop-down list, select relative. In the fourth drop-down list, keep is in. In the rightmost drop-down list, select Previous Month.
8. Add another condition. Account is already selected in the first drop-down list. In the second drop-down list, select Type. In the third drop-down list, select list. In the fourth drop-down list, keep is in list.
9. Enter Customer - Direct in the rightmost box and press Enter. Then enter Customer - Channel in the rightmost box and press Enter.

That's all; our query is ready. It looks like the following in the Query Editor:

### Example with Value Filters

1. If you are not logged in to Skyvia, log in: hover over Skyvia Query and click Sign In. In the opened window, enter your Skyvia credentials (email and password) and click Sign In again.
2. In the Skyvia Query window, open the Query Editor.
3. In the topmost box of the Query Editor, select the connection to Salesforce.
4. In the Object box, select Account.
5. In the list below, select the Account fields to query.
6. In the Filters section, add a condition. Account is already selected in the first drop-down list. In the second drop-down list, select CreatedDate. In the third drop-down list, value is already selected. In the fourth drop-down list, select between. In the rightmost boxes, select the start and end dates of the previous month.
7. Now we need to specify the filter conditions for selecting records with the Customer - Direct or Customer - Channel type. Since we select records with any of these values, we need a filter subgroup that is satisfied if any of its conditions is met. Click the add group button.
8. In the new group, add a condition. Account is already selected in the first drop-down list. In the second drop-down list, select Type. In the third drop-down list, value is already selected. In the fourth drop-down list, equal to is already selected. In the rightmost box, type Customer - Direct.
9. Add another condition in this group. Account is already selected in the first drop-down list. In the second drop-down list, select Type. In the third drop-down list, value is already selected. In the fourth drop-down list, equal to is already selected. In the rightmost box, type Customer - Channel.
10. In the drop-down list with the logical operator of this group, select Or.

After this, our query is ready. It looks like the following in the Query Editor:

# How to Query Data to Excel

In Skyvia Query Excel Add-in, you get data from a data source to Excel for the web by creating and running queries. A query is an inquiry into the data source using an SQL SELECT statement; it extracts data from the source in a readable form according to the user's request. Skyvia Query Excel Add-in allows creating a query either visually in Query Builder or simply by entering an SQL statement. You can create and run your query in a few simple steps.

## 1. Log In to Skyvia

You must log in to Skyvia before you start using the add-in. If you are not logged in yet, hover over Skyvia Query and click the Sign In button.
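The two Salesforce filter setups walked through above (relative plus list filters, or a date range plus an OR group of value filters) both boil down to a SQL WHERE clause. Here is a rough sketch in SQLite syntax, the syntax Skyvia uses for cloud applications; the local `Account` table is hypothetical, and a fixed reference date stands in for "now" so the previous-month window is deterministic.

```python
import sqlite3

# Hypothetical local stand-in for the Salesforce Account object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Name TEXT, Type TEXT, CreatedDate TEXT)")
conn.executemany("INSERT INTO Account VALUES (?, ?, ?)", [
    ("Acme", "Customer - Direct", "2024-02-10"),
    ("Globex", "Customer - Channel", "2024-02-20"),
    ("Initech", "Partner", "2024-02-05"),
    ("Umbrella", "Customer - Direct", "2024-01-15"),
])

# Fixed reference date; the add-in's relative filter computes the
# "previous month" window from the current date instead.
ref = "2024-03-15"
rows = conn.execute(
    """
    SELECT Name FROM Account
    WHERE CreatedDate BETWEEN date(:ref, 'start of month', '-1 month')
                          AND date(:ref, 'start of month', '-1 day')
      AND (Type = 'Customer - Direct' OR Type = 'Customer - Channel')
    ORDER BY Name
    """,
    {"ref": ref},
).fetchall()
print([r[0] for r in rows])  # ['Acme', 'Globex']
```

The parenthesized OR subexpression corresponds to the filter subgroup with the Or operator; the BETWEEN condition corresponds to the relative "is in Previous Month" filter or the explicit date range.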
After this, in the opened window, enter your Skyvia credentials (email and password) and click Sign In again. Then you can close this tab and return to your Excel workbook. If you do not have a Skyvia account yet, you can register for free at [https://app.skyvia.com/register](https://app.skyvia.com/register) . Note that you can query data 5 times per day completely free.

## 2. Open Query Editor

Open the Query Editor from the Skyvia Query window. In the Query Editor, you can select a connection to the data source and configure and run your query. The Query Editor has two tabs: Builder and SQL. The Builder tab allows you to configure a query visually, while the SQL tab allows you to specify an SQL statement to query your data.

## 3. Connect to Data Source

To query data, you first need to connect to the data source you want to query data from. For this, click the topmost drop-down list in the Query Editor. If you already have the necessary connection created in Skyvia, select it from the list. You can enter a part of the connection name to filter the list and quickly find the connection.

If you haven't created the necessary connection yet, you can create a new one. A connection allows Skyvia to access your data in a database or cloud application. Each connection has a name that you specify, and you find the necessary connection by this name. To create a connection, click the New link at the bottom of the opened list. The Select Connector page opens, listing all the available connectors with their names and types. Skyvia supports various connectors, including popular cloud applications, relational databases, cloud data warehouses, and file storage services. On this page, select the data source you want to work with. You can filter data sources by selecting a category in the All list on the left or by entering part of the connector name in the Type to filter box.
After you click a data source, a connection editor opens. In the connection editor, enter the connection name and other required parameters. Which parameters you need to enter depends on the cloud app or database you select. You can find more details on specific connections in the Cloud Sources and Databases sections. After you create a connection, it becomes available in the connection drop-down list of the Query Editor. Note that you can create an unlimited number of connections to various data sources in your Skyvia account.

## 4. Configuring Query Visually

Skyvia offers a convenient visual Query Builder to define your queries visually. It allows you to quickly select which data to retrieve and to configure data filters, sorting, and aggregations. When you configure a query visually, Skyvia Query Add-in automatically generates the underlying SQL SELECT statement. You can view it by switching to the SQL tab of the Query Editor.

### Selecting Data to Retrieve

When configuring a query, the first step is to determine what data you need. After you have selected a connection to query data from, select the necessary table or cloud object in the Object list. Then simply select the checkboxes for the object fields in the Columns box.

### Querying Data from Multiple Tables (Objects)

Query Builder allows you to query data from multiple tables (objects) of a data source. For this, the objects must be related in the data source by a foreign key. You can select columns not just from the object currently selected in the Object list, but also from all the objects to which the current object has foreign keys. Thus, if you want to select data from more than one object, select the object that is lower in the hierarchy in the Object list. For example, if you need to get product information with prices from Salesforce, you will need to select the PricebookEntry object in the Object list, not Product.
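In SQL terms, selecting columns across a foreign key produces a join from the child object to its parent. A minimal sketch of the idea, assuming hypothetical local `PricebookEntry` and `Product` tables (the real Salesforce objects differ) in SQLite syntax:

```python
import sqlite3

# Hypothetical schema: PricebookEntry references Product by a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE PricebookEntry (
    Id INTEGER PRIMARY KEY,
    ProductId INTEGER REFERENCES Product(Id),
    UnitPrice REAL
);
INSERT INTO Product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO PricebookEntry VALUES (10, 1, 9.99), (11, 2, 24.5);
""")

# Selecting PricebookEntry columns plus columns reached through the
# ProductId foreign key corresponds to a join like this:
rows = conn.execute("""
    SELECT p.Name, e.UnitPrice
    FROM PricebookEntry e
    JOIN Product p ON p.Id = e.ProductId
    ORDER BY p.Name
""").fetchall()
print(rows)  # [('Gadget', 24.5), ('Widget', 9.99)]
```

This is why you start from the child object (PricebookEntry): from there, every referenced parent's columns are reachable.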
When you select data from multiple objects, Skyvia Query Add-in automatically joins these objects by the corresponding foreign keys.

### Columns List

The Columns list displays the columns of the selected object, as well as the columns of the other objects to which the current one has foreign keys. The columns are grouped by their objects. You can select or clear checkboxes to include or exclude columns from the query. Note that the groups correspond to the foreign keys/relations, not to the objects. If an object has multiple foreign keys to the same object, the latter is present in the list multiple times, with names including the corresponding foreign key names. Thus, you can explicitly choose by which foreign key the objects are joined.

By default, the current table's column group is expanded, and the groups for all the foreign keys are collapsed. You can expand or collapse these groups by clicking the down/up arrow in the left part of their header. The checkboxes, also located in the left part of a group header, allow you to quickly select or clear all the columns in the group.

### Grouping and Aggregating Data

Skyvia Query Add-in allows you to group and aggregate data by some criteria. For example, you can quickly retrieve a list of customers with the numbers or sums of their orders. To group and aggregate data, you simply select aggregation functions for the columns you want to aggregate; Skyvia Query Add-in automatically groups data by all other selected columns. Skyvia supports the following aggregation functions: COUNT, AVG, MIN, MAX, SUM. COUNT is supported for most of the column types, while the other aggregations are supported only for columns with numeric data.

### Applying Aggregations

To apply an aggregation, click the Value list near the name of a selected column and select the necessary function from the list. Note that only the functions applicable to the type of the selected column are displayed in this list.
To remove a function from a column, click Value in the drop-down list.

### Grouping Data by Time Periods

Grouping and aggregating data by time periods is even more convenient because of the handy functions that can be applied to datetime data. Skyvia Query Add-in offers a number of functions that extract a portion of a date in a form convenient for grouping by different time periods. For example, the Day function returns only the date portion of a value, like "2020-03-23", allowing you to group results by day. The following table lists the functions available for datetime columns:

| Function Name | Description | Example of a Returned Value |
| --- | --- | --- |
| Day | Returns the date portion of a value | 2020-03-23 |
| Month | Returns the year and month portions of a value | 2020-03 |
| Quarter | Returns the year and quarter number | 2020,1 |
| Year | Returns the year portion of a value | 2020 |
| Day of Month | Returns the number of the day in a month | 23 |
| Month of Year | Returns the number of a month | 3 |
| Quarter of Year | Returns the number of a quarter | 1 |

To group and aggregate data by a time period, select the checkbox for the column (or columns) you want to aggregate and for the date or datetime column you want to group by. Then select the corresponding aggregation function for the column to aggregate and the corresponding date function (Day, Month, Quarter, or Year) for the datetime column. The result will be grouped by day, month, quarter, or year, respectively. See an example of creating such a report below.

### Example of Aggregating Data

Suppose we want to get the total sum of the QuickBooks invoices per month. We need to sum the invoices' total amounts, grouping the result by the invoice date. We can do it in the following way:

1. Log in to Skyvia and open the Query Editor as described above.
2. Select the QuickBooks connection and its Invoice object in the corresponding lists.
3. Clear the checkbox for the Invoice table.
4. We don't need all the Invoice fields, only TxnDate and TotalAmt. Select the checkboxes for the TxnDate and TotalAmt fields.
5. For the TotalAmt field, click Value and select Sum in the list.
6. For the TxnDate field, click Value and select Month in the list.

This is how we get the sums of all invoices per month. That's all; we can run our query.

### Sorting Data

Configuring data sorting in Query Builder is easy. All you need is to select one or more columns for sorting, either from the selected object or from its related objects, and specify the sorting order: ascending or descending. To add a sorting column, click the button under the Order By heading. Then select an object and its field to order data by in the first and second lists, respectively. You can sort data by any column of the selected object or the objects it references, regardless of whether you select data from this column or not. If necessary, you can add more columns to the Order By list. By default, Skyvia Query Add-in uses ascending sorting order. Click the DESC toggle if you need to change the order to descending.

### Filtering Data

To filter data, you can define various filter conditions for your queries in Query Builder. Read Filtering Data in Skyvia Query Add-in for more information about the available filters and how to add them.

## Working with Query SQL Code

Note that this step is not required, and you may skip it. While Query Builder is usually enough for most cases, sometimes you may need to configure your query more deeply and flexibly. In this case, you can switch to directly editing the query SQL code.

### Switching between the SQL and Builder Tabs

You can switch to the SQL code at any time: either immediately, to create your SQL query from scratch, or after configuring your query in Query Builder, to fine-tune the automatically generated SQL code. To view and edit the SQL code of the query, click the SQL tab in the top right corner of the Query Editor.
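For the QuickBooks aggregation example above, the statement shown on the SQL tab would look roughly like the following. This is a sketch, not the exact generated code: the local `Invoice` table is a hypothetical stand-in, and `strftime('%Y-%m', ...)` plays the role of the builder's Month function, while grouping happens by the remaining non-aggregated column.

```python
import sqlite3

# Hypothetical stand-in for the QuickBooks Invoice object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Invoice (TxnDate TEXT, TotalAmt REAL)")
conn.executemany("INSERT INTO Invoice VALUES (?, ?)", [
    ("2024-01-05", 100.0),
    ("2024-01-20", 50.0),
    ("2024-02-11", 75.0),
])

# Month(TxnDate) + Sum(TotalAmt) in the builder corresponds to extracting
# the year-month portion and grouping by it:
rows = conn.execute("""
    SELECT strftime('%Y-%m', TxnDate) AS Month, SUM(TotalAmt) AS Total
    FROM Invoice
    GROUP BY Month
    ORDER BY Month
""").fetchall()
print(rows)  # [('2024-01', 150.0), ('2024-02', 75.0)]
```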
Please note that if you edit the generated SQL, you won't be able to edit this query visually in Query Builder anymore.

### SQL Syntax

When working with database data, use the SQL syntax of the corresponding database. For cloud applications, use SQLite SQL syntax.

## Running Your Query

After you finish creating your query, whether visually or by typing SQL code, run it by clicking the Run button. Skyvia will load the queried data into the Excel document. Note that this can take some time, depending on the number of records, the query complexity, and the speed of the data source itself.

## What's Next

### Refreshing Data

Skyvia Query Add-in remembers the query you have used for the sheet. Whenever necessary, you can refresh the queried data from the data source. For this, in the Skyvia Query toolbar, click the Refresh list. You can choose to refresh either Current Sheet or All Sheets.

### Modifying the Query

You can edit your query by reopening it from the Skyvia Query window on the left. The Query Editor with the current query opens, and you can modify it, add filters, re-execute it, etc.

### Skyvia Account Connection

If you share the workbook, your data source connection is not shared with it. Your data source connection is stored in Skyvia and linked to your Skyvia account. If the users you share your workbook with have Skyvia Query Excel Add-in installed, they will be able to see the query SQL code, but they won't be able to change it or refresh data unless they have the same connection in their own Skyvia accounts.

### Saving Queries for Future Use

If you want to reuse the query you created in other Excel workbooks, you can save it in our Query Gallery. Saved queries are stored in Skyvia and linked to your Skyvia account, just like connections. To save your query for future use, perform the following actions:

1. Click Save in the Query Editor.
2. If you use an already saved query, you can click Save to overwrite it or Save As to save the current query under a new name.
3. Enter the query name in the Query Name box.
4. Optionally, specify a description for the query in the Description box.
5. Optionally, select the Save as a New Query checkbox if you want to save the query as a new one.

You can later open or delete a saved query in Skyvia. You can also open a query from Query Gallery in Excel Add-in. Note that the Gallery already contains some predefined queries to different data sources for common use cases, which you can use directly or study to learn how to create your own queries.

# Query Gallery in Excel Add-in

When you need to reuse the same query in multiple Excel workbooks or just want to store it for future use, you can save it to Query Gallery. Saved queries are stored in Skyvia (not in Excel) and are available both from Skyvia and from Skyvia Query Excel Add-in. In addition to your own queries, Query Gallery stores a number of predefined public queries to different data sources.

## Saving Queries for Future Use

To save your query for future use, perform the following steps:

1. Click Save in the Query Editor.
2. If you use an already saved query, you can click Save to overwrite it or Save As to save the current query under a new name.
3. Enter the query name in the Query Name box.
4. Optionally, specify a description for the query in the Description box.
5. Optionally, select the Save as a New Query checkbox if you want to save the query as a new one.

## Running, Editing, or Deleting Queries

When you open a query in Query Gallery, Skyvia displays its name, description, and connection.
You can select another connection for your query in the Connector drop-down list, run the query, open it for editing, or delete it from Query Gallery by clicking the Run, Edit, or delete buttons, respectively, at the bottom of the query. You can also open it in Skyvia by clicking the Open Query in Skyvia button in the query header.

## Public Queries

Skyvia offers a number of predefined public queries in the Query Gallery. Our professionals have selected useful queries for the most common use cases for different data sources. You can use these queries immediately or create your own queries using them as templates. You can also study some aspects of the SQL language on these queries. Public queries are available to all query users, and they have some specific features:

- They cannot be deleted from Query Gallery. If they obstruct finding your own queries, you can hide them by clicking Hide public queries in the Skyvia Query toolbar; click Show public queries to display them again.
- Public queries are not linked to a specific connection, so you need to select a connection for them manually. This also means public queries are not displayed in the Connections category; you can access them only by clicking Sources or Queries in the Query Gallery list.

# Skyvia Query Google Sheets Add-on

Skyvia Query Google Sheets Add-on allows you to easily build Google Sheets reports based on your data from a wide variety of databases and cloud applications. It lets you get these data directly into Google Sheets and work within the Google Sheets interface. You can share these reports with your colleagues and refresh the data in them with a single click whenever needed. The data can be retrieved as it is stored in the data source or undergo filtering, aggregation, calculations, joins, etc.
Thus, the add-on allows you to immediately get live cloud or database data in the form you need for your reports.

## How It Works

Skyvia Query Google Sheets Add-on works with data sources via Skyvia. The add-on uses Skyvia Query, our online SQL client and query builder tool for querying and managing cloud and database data. First, you select a workspace to work with. Second, you select a connection to the data source you want to query in Skyvia. Third, you configure your query. The configured query is executed by the Skyvia service, and the returned data is sent to Google Sheets. The queries you save in the Query Gallery are available both in the Query Add-on and in Skyvia; you can open, run, edit, or delete them in either. Read more on how to query data to Google Sheets here.

This means that you need a Skyvia account to use the add-on. Registration at Skyvia is free: you can register at [https://app.skyvia.com/register](https://app.skyvia.com/register) . A free plan is available, and you can visit the [Pricing page](https://skyvia.com/pricing/) to get familiar with all Skyvia pricing plans. Before using our Google Sheets Add-on, you will need to log in to Skyvia from Google Sheets.

## Connections

To query data, you first need to create a connection to the corresponding data source. Connections are created and stored in Skyvia, not in Google Sheets, and they are stored in an encrypted form. Skyvia supports OAuth connections for all the supported data sources that allow connecting via OAuth, such as Salesforce, so you do not need to store your credentials in Skyvia for such data sources.

Skyvia Query Google Sheets Add-on supports getting data from various cloud applications, databases, and cloud data warehouses. Please check the Cloud Sources and Databases topics for the complete list.
You can find more information on creating the corresponding connections in the Connections section of our documentation.

## Queries

In the Skyvia Query add-on, you get data from a data source to Google Sheets using queries. A query determines what data you want to get and in which form. While the add-on queries data using SQL SELECT statements, you are not required to know SQL. You can configure a query visually in the Query Builder or, if you are familiar with SQL, simply enter an SQL statement.

After you retrieve data, the query is linked to the Google Sheets workbook. You can later modify or re-execute it and refresh the data in your workbook with the actual data from the data source in a single click. You can use different queries on different worksheets of a Google Sheets workbook and thus have data from different sources in one workbook.

Queries can also be saved in the Query Gallery for reuse in other Google Sheets workbooks. The Query Gallery offers a convenient interface that allows you to quickly find the necessary query. It also contains a number of predefined queries to various data sources for common use cases, which you can use as is or study to better understand how to compose your own queries.

## Query Editor

Skyvia offers a powerful query editor with a visual Query Builder. It allows both creating a query visually and entering SQL statements; for this, it has two tabs, Builder and SQL. First, you select a connection you have created in Skyvia (or open the Skyvia page and start creating a new connection directly from the query editor). Then you select an object (table) to query data from, select its fields, and configure data processing and filtering. While Query Builder provides a wide range of features, you can configure your query in much more detail using SQL.
At any time you can switch from visual query creation to editing SQL code, and you can see the generated SQL code for a query you created visually. However, if you modify the SQL code, you will no longer be able to edit this query visually, because not every SQL feature can be represented in the visual Query Builder. For databases, Skyvia Query uses their native SQL syntax. For cloud applications, you use SQLite SQL syntax.

After you finish your query, either in the Query Builder or as SQL code, you can immediately run it and get the returned data into your Google Sheets workbook. Please note that query execution can take some time, especially for cloud applications, because the query may contain complex processing or query objects with a lot of data in them. To find out how to query data in more detail, read the How to Query Data to Google Sheets topic.

## Query Parameters

When you need to use a query with some variables, for example, to create a report on orders for a specified period and then make the same report over different time periods, you can create a query that uses parameters. A parameter is a variable that you can set without modifying the query itself. Parameters can be used in the Query Builder, in filters, or directly in SQL code. You can find more information on how to use parameters in the Query Parameters in Google Sheets Add-on topic.

When you need to use the same query with different parameter values in different reports, save the query to Query Gallery. Whenever you need to use this query in the current or another workbook with different parameter values, simply specify the necessary parameter values directly in the Query Gallery and run the query without opening it for editing.

## Query Gallery

Useful queries that you plan to reuse in other workbooks can be saved in Query Gallery, so you can later open a saved query in just a few clicks. A query can be saved together with a connection reference.
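Mechanically, a query parameter behaves like a bound SQL variable: the statement stays fixed while the values change from run to run. A sketch of the idea using SQLite named parameters and a hypothetical `Orders` table (the parameter names and schema here are illustrative, not Skyvia's):

```python
import sqlite3

# Hypothetical orders data to report on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Id INTEGER, OrderDate TEXT, Amount REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [
    (1, "2024-01-10", 20.0),
    (2, "2024-02-14", 35.0),
    (3, "2024-03-02", 10.0),
])

# One fixed query with :start and :end parameters.
query = """
    SELECT Id, Amount FROM Orders
    WHERE OrderDate BETWEEN :start AND :end
    ORDER BY Id
"""
# Same query, different parameter values -- the SQL itself never changes.
january = conn.execute(query, {"start": "2024-01-01", "end": "2024-01-31"}).fetchall()
q1 = conn.execute(query, {"start": "2024-01-01", "end": "2024-03-31"}).fetchall()
print(january)  # [(1, 20.0)]
print(len(q1))  # 3
```

In the add-on, you set such values in the Query Builder or directly in the Query Gallery before running, without opening the query for editing.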
Queries in the Query Gallery can be grouped by data sources or by connections, or displayed ungrouped. You can always find the necessary query using any of these groupings or by simply entering part of the query name in the Search box. In addition to queries created and saved by you, Query Gallery contains a number of predefined public queries for different data sources and use cases. If you do not want to see them in the Gallery, use the drop-down list in the top-right corner of the Query Gallery to hide them.

To use a query from the Gallery, simply click it. Then you can select a connection for the query, specify parameters, and click Run to run it immediately without opening it in the editor. You can also open it for editing, delete it from the gallery, or open it in Skyvia. For more information about Query Gallery, go to the Query Gallery in Google Sheets Add-On topic.

# Filtering Data in Skyvia Query Add-on

Skyvia Query Add-on allows you to configure filters in Query Builder in order to filter queried data.

## Adding a Filter Condition

To add a condition, perform the following steps:

1. In the Filters section, click Add Filter Condition.
2. In the first drop-down list, select the object whose field to filter data by. By default, the current object, selected in the Object list, is used. You can also filter data by fields of its related objects.
3. In the second drop-down list, select the name of the field to filter the data by. This field does not need to be selected in the Columns list and present in the returned data; you can filter data by any field of the object selected in the previous list.
In the third drop-down list, select the kind of filter to use: value , parameter , relative , or list . Value filters are the simplest ones. They simply compare the selected field with some value (or values) or check whether the field is null. For example, the following filter checks if a string field “Type” begins with the word “Customer”: Parameter filters compare the field with a parameter instead of a constant value. You can later specify parameter values before running the query without modifying the query itself. See Query Parameters in Google Sheets Add-On for more information. For example, the following filter checks if values of a datetime field “CreatedDate” lie within the interval specified by two parameters: Relative filters are available only for fields with date or datetime data. They check if the field value belongs to a time period that is relative to the current date, for example, whether the date belongs to the current, next, or previous week, month, etc. For example, the following filter checks if values of a datetime field “CreatedDate” belong to the previous year: List filters are available only for fields with textual data. They allow you to specify a list of strings and check whether the field value belongs to this list. For example, the following filter checks if a string field “Type” is equal to one of the list values: To enter a list of values, press Enter after each value. In the fourth drop-down list, select the comparison operator to use — equal to , not equal to , greater than , is null , etc. Note that different comparison operators are available for different field types. Depending on the selected kind of filter, specify a value or the name of a parameter to compare with, enter a list of values, or select a relative time period. You can add as many filter conditions as you need. 
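In SQL terms, value, parameter, and list filters map onto ordinary WHERE clauses, and a relative filter ultimately resolves to a date range computed from the current date. A minimal local sketch using SQLite syntax, which Skyvia uses for cloud applications; the Account table and its sample rows are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Type TEXT, CreatedDate TEXT)")
conn.executemany(
    "INSERT INTO Account VALUES (?, ?)",
    [("Customer-Direct", "2023-06-15"),
     ("Customer-Channel", "2023-11-02"),
     ("Prospect", "2024-03-10")],
)

# Value filter: compare a field with a constant.
value_rows = conn.execute(
    "SELECT Type FROM Account WHERE Type LIKE 'Customer%'").fetchall()

# Parameter filter: the same comparison, but the bounds are bound at run time.
param_rows = conn.execute(
    "SELECT Type FROM Account WHERE CreatedDate BETWEEN :start AND :finish",
    {"start": "2023-01-01", "finish": "2023-12-31"}).fetchall()

# List filter: membership in a list of strings (IN).
list_rows = conn.execute(
    "SELECT Type FROM Account "
    "WHERE Type IN ('Customer-Direct', 'Customer-Channel')").fetchall()
print(len(value_rows), len(param_rows), len(list_rows))  # 2 2 2
```

A relative filter such as "is in Previous Month" would behave like the parameter filter above, with the two bounds computed from today's date instead of being supplied by the user.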
To remove a condition , click the Remove Filter cross button in the line of the corresponding condition, on the right. AND and OR Filter Groups By default, all the filter conditions are united with the AND logical operator. This means that a row must meet every filter condition to be retrieved to Google Sheets. If you have several filter conditions and want to query rows that match only some of them, you may use the OR operator instead. For this, click the drop-down list with And and select Or . Adding or Removing Filter Groups For more complex cases, conditions can be united into subgroups with their own AND or OR logical operators. Nested filter groups can contain their own nested filter groups, and so on. To add a condition group , click the Add Filter Group button to the right of the parent group logical operator. After this, you can add conditions and condition groups to the new condition group. Note that the logical operator of a group is not displayed if the group has only one filter condition or subgroup, or no conditions and groups at all. To remove a group from the filter, click the Remove Group cross button in the line of the corresponding group, on the right. Filter Example Let’s show how to configure filters with a specific example. Suppose we need to get Salesforce accounts with Customer-Direct or Customer-Channel types, added in the previous month. We can do it in two ways — with usual value filters or with list and relative filters. Using relative and list filters is more convenient in this case; however, we show both ways in order to demonstrate the use of value filters and filter groups. Example with Relative and List Filters If you are not logged in to Skyvia, log in. For this, on the Add-ons menu, point to Skyvia Query and then click Log In . Open the query editor — on the Add-ons menu, point to Skyvia Query and then click Query . In the topmost box of the query editor, select the connection to Salesforce. In the Object box, select Account. 
In the list below, select the Account fields to query. Click Add Filter Condition . In the first drop-down list of the condition, Account is already selected. In the second drop-down list, select CreatedDate . In the third drop-down list, select relative . In the fourth drop-down list, keep is in . In the rightmost drop-down list, select Previous Month . Click Add Filter Condition . In the first drop-down list of the condition, Account is already selected. In the second drop-down list, select Type . In the third drop-down list, select list . In the fourth drop-down list, keep is in list . Enter Customer-Direct in the rightmost box and press Enter. Enter Customer-Channel in the rightmost box and press Enter. That’s all, our query is ready. It looks like the following in the editor: Example with Value Filters If you are not logged in to Skyvia, log in. For this, on the Add-ons menu, point to Skyvia Query and then click Log In . Open the query editor — on the Add-ons menu, point to Skyvia Query and then click Query . In the topmost box of the query editor, select the connection to Salesforce. In the Object box, select Account. In the list below, select the Account fields to query. Click Add Filter Condition . In the first drop-down list of the condition, Account is already selected. In the second drop-down list, select CreatedDate . In the third drop-down list, value is already selected. In the fourth drop-down list, select between . In the rightmost boxes, select the start and end dates of the previous month. Now we need to specify the filter conditions for selecting records with Customer-Direct or Customer-Channel types. Note that we select records with any of these values, so we need to create a filter subgroup, which is satisfied if any of its conditions is met. Click the Add Filter Group button. In the new group, click Add Filter Condition (the upper one). 
In the first drop-down list of the condition, Account is already selected. In the second drop-down list, select Type . In the third drop-down list, value is already selected. In the fourth drop-down list, equal to is already selected. In the rightmost box, type Customer-Direct . Click Add Filter Condition in this group again. In the first drop-down list of the condition, Account is already selected. In the second drop-down list, select Type . In the third drop-down list, value is already selected. In the fourth drop-down list, equal to is already selected. In the rightmost box, type Customer-Channel . In the drop-down list with the logical operator of this group, select Or . After this, our query is ready. It looks like the following in the editor:" }, { "url": "https://docs.skyvia.com/skyvia-query-google-sheets-add-on/how-to-query-data-to-google-sheets.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Skyvia Query Google Sheets Add-on How to Query Data to Google Sheets In Skyvia Query add-on, you get data from a data source to Google Sheets by creating and running queries. A query is an inquiry into the data source, using an SQL SELECT statement. It extracts data from the source in a readable format according to the user’s request. Skyvia Query add-on allows creating a query to retrieve data to Google Sheets either visually, in the visual Query Builder, or simply by entering an SQL statement. You can create and run a query in a few simple steps: 1. Log In to Skyvia You must log in to Skyvia before you start using the add-on. If you are not logged in yet, on the Add-ons menu, select Skyvia Query and click Login . In the Login to Skyvia window that opens, click Login . After this, in the tab that opens, enter your Skyvia credentials (email and password) and click Sign In . Then you may close this tab and return to your Google Sheets workbook. 
If you do not have a Skyvia account yet, you can register for free at [https://app.skyvia.com/register](https://app.skyvia.com/register) . Note that you can query data 5 times per day completely for free. 2. Open Query Editor When you are logged in to Skyvia, on the Add-ons menu, select Skyvia Query and click Query to open the Query Editor. In the Query Editor, you select a workspace, select a connection to the required data source, and configure and run your query. The Query Editor has two tabs — Builder and SQL . The Builder tab allows you to configure the query visually, while the SQL tab allows you to specify an SQL statement to query data. 3. Select Workspace You need to select a workspace to work with. A workspace is a working area of a user or a team of users. A workspace may represent your private workspace if you work alone, or the workspace of the company/team you were invited to. A workspace combines all the objects (connections, integrations, etc.), created by you or by your company, in one place. To select a workspace, click the first drop-down list at the top of the Skyvia Query window and select the necessary workspace. 4. Select Connection To query data, you first need to connect to the data source you want to query data from. For this, click the second drop-down list at the top of the Skyvia Query window. If you have already created the necessary connection in Skyvia, select this connection from the list. You may enter part of the connection name to filter the list and find the necessary connection quicker. A connection allows Skyvia to access your data in a database or cloud application. If you haven’t created the necessary connection yet, create it by clicking New at the bottom of the drop-down list. The Select Connector page will open. On this page, you will see all the data sources supported by Skyvia. Read more on how to manage connections to your data sources here . 
You may filter data sources by selecting the data source category on the left. After you click a certain data source, the Connection Editor window will open. In this window, you specify the connection name and enter other parameters. The parameters you need to enter depend on the cloud app or database you select. You can find more details about specific connections and their parameters in the Cloud Sources and Databases topics. After you create a connection, it becomes available in the drop-down list of the Query Editor. Note that you can create an unlimited number of connections to various data sources in your Skyvia account. 5. Configure Query Visually Skyvia offers a convenient visual Query Builder for defining your queries visually. It allows you to quickly select which data to retrieve and configure data filters, sorting, and aggregations. When you configure a query visually, Skyvia Query Add-on automatically generates the underlying SQL SELECT statement. You can view it by switching to the SQL tab of the query editor. Selecting Data to Retrieve When configuring a query, the first step is to determine what data you need. After you’ve selected a connection to query data from, select the necessary table or cloud object in the Object list. Then simply select the checkboxes for the object fields in the Columns box. Querying Data from Multiple Tables (Objects) Query Builder allows you to query data from multiple tables (objects) of a data source. For this, the objects must have a relation between them in the data source — a foreign key. You can select columns not just from the object that is currently selected in the Object list, but also from all the objects to which the current object has foreign keys. Thus, if you want to select data from more than one table, you should select the table that is lower in the hierarchy in the Object list. 
For example, if you need to get product information with prices from Salesforce, you will need to select the PricebookEntry object in the Object list, not Product. When you select data from multiple objects, Skyvia Query Add-on automatically joins these objects by the corresponding foreign keys. Columns List The Columns list displays the selected object's columns, as well as the columns of other objects to which the current one has foreign keys. The columns are grouped by their objects. You can select or clear checkboxes for the columns to include or exclude them from the query. Note that the groups correspond to the foreign keys/relations, not to the objects. If an object has multiple foreign keys to the same object, the latter will be present in the list multiple times, with names including the corresponding foreign key names. Thus, you can explicitly choose by which foreign key the objects are joined. By default, the current object's column group is expanded, and the groups for all the foreign keys are collapsed. You can expand/collapse these groups by clicking the down/up arrow in the right part of their header. The checkbox in the left part of a group header allows you to quickly select and clear all the columns in the group. Grouping and Aggregating Data Skyvia Query Add-on allows you to group and aggregate data by some criteria. For example, you may quickly retrieve the list of customers with the numbers or sums of their orders, etc. To group and aggregate data, you simply select aggregation functions for the columns you want to aggregate. Skyvia Query Add-on automatically groups data by all other selected columns. Skyvia supports the following aggregation functions: COUNT, AVG, MIN, MAX, SUM. COUNT is supported for most of the column types, while the other aggregations are supported only for columns with numeric data. Applying Aggregations To apply an aggregation, click the value link near the name of a selected column and then select the necessary function from the list. 
Note that only the functions applicable to the type of the selected column are displayed in this list. To remove a function from a column, click the fn link near the column name and then click Value . Grouping Data By Time Periods It is even more convenient to group and aggregate data by time periods because of handy functions that can be applied to datetime data. Skyvia Query Add-on offers a number of functions that extract a portion of a date in a form convenient for grouping by different time periods. For example, the Day function returns only the date portion of a value, like “2017-02-16”, allowing you to group results by day. The following table lists the functions available for datetime columns:
Function Name | Description | Example of a returned value
Day | Returns the date portion of a value | 2017-02-16
Month | Returns the year and month portions of a value | 2017-02
Quarter | Returns the year and quarter number | 2017, 1
Year | Returns the year portion of a value | 2017
Day of Month | Returns the number of the day in a month | 16
Month of Year | Returns the number of a month | 2
Quarter of Year | Returns the number of a quarter | 1
To group and aggregate data by a time period, simply select the checkboxes for the column (or columns) you want to aggregate data from and for the date or datetime column you want to use for grouping. Then select the corresponding aggregation function for the column that you want to aggregate, and the corresponding date function (like Day, Month, Quarter, or Year) for the datetime column. The result will be grouped, respectively, by day, month, quarter, or year. See an example of creating such a report below. Example of Aggregating Data Suppose we want to get the total sum of the QuickBooks invoices per month. So we need to get and sum the invoices’ total amount, grouping the result by the invoice date. We can do it in the following way: First, we need to log in to Skyvia and open the query editor as described above. 
Then we select our QuickBooks connection and the Invoice object from it in the corresponding lists. Clear the checkbox for the Invoice object, since we don’t need all the Invoice fields, only the TxnDate and TotalAmt fields. Select the checkboxes for the TxnDate and TotalAmt fields. For the TotalAmt field, click the value link and select Sum in the list. For the TxnDate field, click the value link and select Month in the list. This is how we get the sums of all invoices per month. That’s all, we can run our query. Sorting Data Configuring data sorting in Query Builder is easy. All you need is to select one or more columns for sorting, either from the selected object or from its related objects, and specify the sorting order: ascending or descending. To add a sorting column, simply click the button under the Order By heading. Then select an object and its field to order data by in the first and second lists, respectively. You may sort data by any column from the selected object or the objects it references, regardless of whether you select data from this column or not. If necessary, you may add more columns to the Order By list. By default, Skyvia Query Add-on uses ascending sorting order. Click the Sort button if you need to change the order. Filtering Data To filter data, you can define various filter conditions for your queries in Query Builder. Read Filtering Data in Skyvia Query Add-on for more information about filter kinds, adding filters, etc. Working with Query SQL Code Note that this step is not required, and you may skip it. While the Query Builder is usually enough for most cases, sometimes you may need to configure your query more deeply and flexibly. In this case, you can switch to direct editing of the query SQL code. 
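The monthly invoice report configured above corresponds to a grouped SELECT statement. Since cloud applications use SQLite syntax, the same shape of query can be tried locally with Python's sqlite3 module; the Invoice rows below are invented sample data, and the exact SQL the builder generates may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Invoice (TxnDate TEXT, TotalAmt REAL)")
conn.executemany(
    "INSERT INTO Invoice VALUES (?, ?)",
    [("2017-02-16", 100.0), ("2017-02-20", 50.0), ("2017-03-01", 75.0)],
)

# strftime('%Y-%m', ...) plays the role of the add-on's Month function:
# it turns each TxnDate into a "2017-02"-style key to group and sum by.
rows = conn.execute(
    "SELECT strftime('%Y-%m', TxnDate) AS month, SUM(TotalAmt) "
    "FROM Invoice GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2017-02', 150.0), ('2017-03', 75.0)]
```

Selecting Sum for TotalAmt and Month for TxnDate in the builder produces this grouping automatically; the snippet only makes the underlying SQL visible.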
Switching between SQL and Builder Tabs You can switch to the SQL code at any time — either immediately, to create your SQL query from scratch, or after configuring your query in the Query Builder, to tinker with the automatically generated SQL code. To view and edit the SQL code of the query, click the SQL tab in the top right corner of the query editor. Please note, however, that if you edit the generated SQL, you won’t be able to edit this query visually in Query Builder any more. SQL Syntax When working with database data, use the SQL syntax of the corresponding database. For cloud applications, use SQLite SQL syntax. Running Your Query After you finish creating your query, whether visually or by typing SQL code, you can run it by clicking the Run button. Skyvia will load the queried data to the Google Sheets document. Note that this can take some time, depending on the number of records, query complexity, and the speed of the data source itself. What’s Next Refreshing Data Skyvia Query Add-on remembers the query you have used for the sheet. Whenever necessary, you can refresh the queried data from the data source. For this, on the Add-ons menu, select Skyvia Query and click Refresh Current Sheet or Refresh All Sheets . Modifying the Query You can edit the query in the Skyvia Query editor in the following way: on the Add-ons menu, select Skyvia Query and click Query . The query will open in the Query Editor, and you may modify it, set parameter values, re-execute it, etc. Skyvia Account Connection If you share this workbook, your data source connection is not shared with it. Your data source connection is stored on Skyvia and is linked to your Skyvia account. If the users you share your workbook with have the Skyvia Query Google Sheets Add-on installed, they will be able to see the query SQL code, but they won’t be able to change it or refresh data unless they have the same connection in their own Skyvia accounts. 
Saving Query for Future Use If you want to reuse the query you created in other Google Sheets workbooks, you can save it in our Query Gallery . Saved queries are stored in Skyvia and linked to your Skyvia account, just like connections. To save your query for future use, perform the following actions: Click Save in the Query Editor. If you use an already saved query, you may then click Save to overwrite it or Save As to save the current query under a new name. Enter the name of the query in the Name box. Optionally, specify a description for the query in the Description box. You can open or delete a saved query in Skyvia later. You can also open a query from the Query Gallery in the Google Sheets Add-on . Note that the Gallery already contains some predefined queries for different data sources for common use cases, which you can use directly or study in order to learn how to create your own queries." }, { "url": "https://docs.skyvia.com/skyvia-query-google-sheets-add-on/query-gallery-in-google-sheets-add-on.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Skyvia Query Google Sheets Add-on Query Gallery in Google Sheets Add-on When you need to reuse the same query in multiple Google Sheets workbooks or just want to store it somewhere for future use, you can save your query to the Skyvia Query Gallery. These saved queries are stored in Skyvia (not in Google Sheets) and are available both from Skyvia and from the Skyvia Query Google Sheets Add-on. In addition to your own queries, the Query Gallery stores a number of predefined public queries for different data sources. With public queries, you can automate the most common query tasks using predefined templates in just a few clicks. Saving Query for Future Use To save your query for future use, perform the following steps: Click Save in the Query Editor. If you use an already saved query, you may then click Save to overwrite it or Save As to save the current query under a new name. 
Enter the query name in the Name box. Optionally, specify a description for the query in the Description box. Query Gallery User Interface Elements The topmost part of the Query Gallery contains the Workspace drop-down list. It displays the accounts you own or were invited to, and the workspaces that belong to these accounts and that you have rights to work with. Note that you need to select the workspace you want to work with. Below, there are two other drop-down lists, which help you arrange queries the way you need. In the drop-down list on the left, you can select how you want to display the stored queries. You can display them grouped by data sources or by connections, or you can choose to show all queries ungrouped. To quickly find a necessary query in the list, select No grouping from the drop-down list and type part of the query name in the Search box. In the drop-down list on the right, you can choose either to display only private queries in the Query Gallery window or to display all queries — private and public ones together. Brief Description of Elements Grouping by Sources The stored queries are grouped by the data source they query data from. When you select the Group by Sources option from the drop-down list, the data sources for which there are stored queries in the gallery are displayed. After you click a certain data source, a list of queries available for this data source (public and private) is displayed. The list header will display the data source you have clicked. To navigate back to the list of data sources, click the button in the data source header. Grouping by Connections The stored queries are grouped by the connections they use. When you select the Group by Connections option from the drop-down list, the connections for which there are stored queries in the gallery are displayed. After you click a certain connection, a list of queries created with this connection is displayed. The list header will display the name of the connection you have clicked. 
To navigate back to the list of connections, click the button in the connection header. This option does not display queries saved without the connection information (with the Save connection reference checkbox not selected). No Grouping When you select the No grouping option from the drop-down list, all public and private queries are displayed at once, without grouping. When you select other options from the drop-down list, you need to click a data source or connection first, and only then are the queries displayed. You can enter part of a query name in the Search box to quickly find the necessary query. Click a query in the list to display its details in the Gallery. Running, Editing, or Deleting Queries When you open a query in the Gallery, Skyvia displays its name, description, and connection. Skyvia also displays the query parameters, if any, allowing you to configure their types and set their values without opening the query in the query editor. You can select another connection for your query in the Connection drop-down list, run the query, open it for editing, or delete it from the Gallery by clicking the Run , Edit , or Delete button, respectively, at the bottom of the Gallery. You can also open it in Skyvia by clicking the Open Query in Skyvia button in the Gallery header. Public Queries Skyvia offers a number of predefined public queries in the Query Gallery. Our professionals have selected a number of useful queries for the most common use cases for different data sources. You can use these queries immediately or create your own queries, using them as templates. Additionally, you can use these queries to study some aspects of the SQL language. Public queries are available to all Query users, and they have some specific features: They cannot be deleted from the Query Gallery. 
If they get in the way of finding your own queries, you can simply hide them by selecting Personal queries only from the right drop-down list in the Query Gallery header. Select Personal and public queries to display them again. Public queries are not linked to a specific connection. You will need to select a connection for them manually. Public queries are displayed when you select the Group by Sources or No grouping option from the drop-down list on the left in the Query Gallery window." }, { "url": "https://docs.skyvia.com/skyvia-query-google-sheets-add-on/query-parameters-in-google-sheets-add-on.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Skyvia Query Google Sheets Add-on Query Parameters in Google Sheets Add-on When you need to run the same query often, but with different values (for example, to build the same report for different time periods), you can use queries with parameters. Using Parameters A parameter is a variable that you can use in query filters or in the SQL code of a query instead of a constant value. After you configure your query, you will be able to set new parameter values without modifying the query itself. Parameters are especially useful if you save a query with parameters in the Query Gallery and then reuse it in this or another workbook. Skyvia Query add-on allows you to set parameter values directly from the gallery without even opening the query in the editor. Parameter Names When you specify a parameter name in a filter condition in the visual query builder, the name must contain only alphanumeric characters or underscores. Skyvia will automatically add the necessary prefix to it. When you use a parameter in the SQL code, prefix the alphanumeric name with the colon “:” character for MySQL or with the “@” character for all other data sources. You can use the colon “:” character for other sources too, if necessary. 
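Named parameters behave like SQL bind variables. As a local illustration, Python's sqlite3 module also uses the “:” prefix for named parameters; the Account table and its rows here are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (Name TEXT, Type TEXT)")
conn.executemany(
    "INSERT INTO Account VALUES (?, ?)",
    [("Acme", "Customer-Direct"), ("Globex", "Prospect")],
)

# The query text is written once; only the bound value changes per run.
query = "SELECT Name FROM Account WHERE Type = :acc_type"
direct = conn.execute(query, {"acc_type": "Customer-Direct"}).fetchall()
prospects = conn.execute(query, {"acc_type": "Prospect"}).fetchall()
print(direct, prospects)  # [('Acme',)] [('Globex',)]
```

This is the same mechanism the add-on exposes through the Parameters button: the stored query stays fixed while the values supplied for its parameters vary.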
Additionally, for most data sources you can use unnamed parameters in SQL statements, which are specified with the “?” character. Setting Parameter Values Parameter values can be set and modified whenever necessary without changing the whole query. You can set parameters for a query either in the Query Editor or in the Query Gallery (if you saved your query with parameters in it). To specify values for query parameters in the Query Editor, click the Parameters button. Skyvia Query will detect all the parameters used in the query and display their list, allowing you to select a data type and specify a value for each of them. After this, you can either apply the parameter changes or cancel them, using the Apply or Cancel button respectively. In the Query Gallery, the parameters are displayed when you click a query with parameters. You can quickly configure their types and values before running the query without even opening it in the Query Editor." }, { "url": "https://docs.skyvia.com/skyvia-query-google-sheets-add-on/skyvia-query-add-on-options.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Skyvia Query Google Sheets Add-on Skyvia Query Add-On Options Skyvia Query Add-on for Google Sheets allows you to customize some settings regarding the add-on behavior and data display. To configure these settings, on the Add-ons menu, select Skyvia Query and click Settings . The Skyvia Query Settings dialog box will open. It contains the following options: Use current cursor position as the first cell of the table Determines whether the top left cell of the result table with data in Google Sheets will be located at the current cursor position. Assign query name to worksheet name Determines whether to assign a query name to the worksheet name when you execute a query stored in the Query Gallery . Close the query window after execution Determines whether to close the query dialog box after the query is successfully executed."
}, { "url": "https://docs.skyvia.com/supported-sql-for-cloud-sources/", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Supported SQL for Cloud Sources Skyvia supports SQL SELECT , INSERT , UPDATE , and DELETE statements for cloud sources. Where SQL Is Supported You can execute SQL in different Skyvia tools: the Execute Command action , which can be used in different Data Flow components, in Control Flow , in Import and Export in advanced mode, etc.; the Query product; SQL endpoints. Basic SQL Aspects Before proceeding to the statements, here are some basic aspects of Skyvia’s SQL for cloud applications: Identifier Quoting Table and field names are quoted with double quotation marks. Note that quoting table and field names is required only when they contain spaces or other “exotic” characters or when they coincide with SQL keywords. For example (a query for Zoho CRM): SELECT "t"."Account Name", "t"."Account Site" FROM "Accounts" AS t Constant Quoting String and date constants are quoted with single quotation marks. For example (a query for Salesforce): SELECT "t"."Name" FROM "Account" AS t WHERE ("t"."CreatedDate" > '2014-11-30 21:57:32') Datetime Constant Format Date constants are written in the following format: 'YYYY-MM-DD hh:mm:ss'. The time part or the seconds may be omitted. See the example above. SQL Parameters SQL parameters for cloud apps are declared using the colon character. For example: SELECT "t"."Account Name", "t"."Account Site" FROM "Accounts" AS t WHERE ("t"."CreatedDate" > :Date) Supported SQL Statements SELECT Statements DML Statements CALL Statements and Stored Procedures" }, { "url": "https://docs.skyvia.com/supported-sql-for-cloud-sources/call-statements-and-stored-procedures.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Supported SQL for Cloud Sources CALL Statements and Stored Procedures Cloud app connectors can support operations that cannot be presented as objects with SELECT/INSERT/UPDATE/DELETE operations. Examples of such operations include sending an SMS in ClickSend , copying a document in Formstack Documents , etc. Skyvia allows running such operations as stored procedures. Skyvia presents certain features of some cloud apps as stored procedures. They can be specific operations that cannot be presented via objects and the usual SELECT/INSERT/UPDATE/DELETE operations, or DML operations for objects with specifics. You can find the list of supported stored procedures for a connector in the documentation of the corresponding connector. Using CALL Statements You can use CALL SQL statements against cloud apps to run stored procedures. You can use them in the same tools as any other supported SQL statement . However, there is no point in calling them where Skyvia expects an SQL statement to return data, because stored procedures in cloud app connectors do not return any data. They only perform DML or other operations in the data source. For example, it is not recommended to call them in: Export and Import tasks, in the Advanced mode of the task editor; Data Flow Source and Lookup components. Besides, using stored procedures in some of these places could lead to undesired stored procedure calls when designing the integration. Stored Procedure Parameters Stored procedures have parameters of different data types. Some parameters are required . 
This means that you must specify non-null values for them, otherwise the stored procedure call will fail. Others are optional (not required), and you may specify null values for them. However, even if a parameter is optional, you cannot omit it from the CALL statement entirely. CALL Statement Example Here is an example of calling a stored procedure that copies a document in Formstack Documents: CALL CopyDocument(903384, 'Cloned Document')" }, { "url": "https://docs.skyvia.com/supported-sql-for-cloud-sources/dml-statements.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Supported SQL for Cloud Sources DML Statements INSERT Skyvia supports INSERT statements. You don’t need to specify values for all table fields in the INSERT statement; you can omit fields that are not required by the target cloud source. The RETURNING clause in INSERT statements is supported. The INSERT … SELECT statement is NOT supported. The following example creates a new Salesforce Contact: INSERT INTO Contact (FirstName, LastName, Phone, Email) VALUES ('John', 'Smith', '(650) 450-8820', 'johnsmith@devart.com') UPDATE Skyvia supports UPDATE statements with complex WHERE clauses with numerous conditions combined with logical operators. It supports various comparison operators for conditions. The FROM clause in UPDATE statements is not supported. The RETURNING clause in UPDATE statements is supported. UPDATE statements can update more than one record in Skyvia. Specifying Id values in the WHERE clause is not required, though adding them improves the performance of UPDATE statements. The following example increases all prices in a specific Salesforce Pricebook2 by 5%: UPDATE PricebookEntry SET unitprice = round(unitprice * 1.05, 2) WHERE pricebook2id = '01sA000000026tGIAQ'
DELETE Skyvia supports DELETE statements with complex WHERE clauses with numerous conditions combined with logical operators. It supports various comparison operators for conditions. DELETE statements can delete more than one record in Skyvia. Specifying Id values in the WHERE clause is not required, though adding them improves the performance of DELETE statements. The following example deletes all Salesforce contacts not assigned to accounts: DELETE FROM Contact WHERE AccountID IS NULL The following example deletes several Salesforce accounts with specified Ids: DELETE FROM \"Account\" WHERE \"Id\" IN ('001A000001HSpZUIA1', '001A000001HSpZoIAL', '001A000001HSpdTIAT')" }, { "url": "https://docs.skyvia.com/supported-sql-for-cloud-sources/select-statements.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation Supported SQL for Cloud Sources SELECT Statements Skyvia supports most SQL features for SELECT statements. Here you can see some of the supported features and examples of queries. Simple SELECT Statements The following example selects account names and their types from Salesforce: SELECT \"Name\", \"Type\" FROM \"Account\" SELECT Statements with * The following example selects all the account information from Salesforce: SELECT * FROM \"Account\" Quoted Identifiers and Aliases The following example selects account names and sites from Zoho CRM: SELECT \"t\".\"Account Name\", \"t\".\"Account Site\" AS Site FROM \"Accounts\" AS t WHERE Clause, LIKE Operator Skyvia supports WHERE clauses and the LIKE operator. The following query selects all Salesforce users with names containing John: SELECT \"t\".* FROM \"User\" AS t WHERE t.\"Name\" LIKE '%John%' Operations with Dates Skyvia allows performing various operations with dates, using the date and strftime SQL functions. The following query demonstrates the use of these functions.
It queries Salesforce accounts that have been inactive for the last 30 days and returns the year of the last activity, the date of the first day of inactivity, and the account name: SELECT a.\"LastActivityDate\" 'Last Activity', strftime('%Y', a.\"LastActivityDate\") 'Last Activity Year', DATE(a.\"LastActivityDate\", '+1 day') 'Next Day', a.\"Name\" 'Account Name' FROM \"Account\" a WHERE DATE(\"LastActivityDate\") < DATE('now', 'localtime', '-30 days') ORDER BY and LIMIT Clauses Skyvia supports ORDER BY and LIMIT clauses. The following query takes the TOP 10 open opportunities based on the estimated revenue from Dynamics 365: SELECT o.\"name\" 'Topic', o.\"estimatedvalue\" 'Est. Revenue' FROM \"opportunity\" o INNER JOIN \"owner\" AS owner_o ON o.\"ownerid\" = owner_o.\"ownerid\" WHERE o.\"statecode\" = 'Open' AND owner_o.\"name\" = 'Devart Corp' ORDER BY o.\"estimatedvalue\" DESC LIMIT 10 SELECT Statements with JOINs Skyvia supports joining more than two tables with JOINs. Skyvia supports INNER, OUTER, and CROSS JOINs. The following query selects detailed information on the opportunities from Salesforce — which products at which quantity and which price are sold within each opportunity, the date when the opportunity is expected to close, and whether the opportunity is won: SELECT Opportunity.Name, Opportunity.CloseDate, Opportunity.IsWon, Product2.Name AS Product, OpportunityLineItem.Quantity, OpportunityLineItem.UnitPrice FROM OpportunityLineItem INNER JOIN Opportunity ON OpportunityLineItem.OpportunityId = Opportunity.Id INNER JOIN Product2 ON OpportunityLineItem.Product2Id = Product2.Id Complex WHERE Clauses Skyvia supports complex WHERE clauses with numerous conditions combined with logical operators.
The following query selects detailed information on the opportunities from Salesforce — which products at which quantity and which price are sold within each opportunity, and the date when the opportunity is expected to close. The results are filtered by the pricebooks used, close dates, and opportunity names: SELECT Opportunity.Name, Opportunity.CloseDate, Product2.Name, OpportunityLineItem.Quantity, OpportunityLineItem.UnitPrice FROM OpportunityLineItem INNER JOIN Opportunity ON OpportunityLineItem.OpportunityId = Opportunity.Id INNER JOIN Product2 ON OpportunityLineItem.Product2Id = Product2.Id INNER JOIN PricebookEntry ON OpportunityLineItem.PricebookEntryId = PricebookEntry.Id INNER JOIN Pricebook2 ON PricebookEntry.Pricebook2Id = Pricebook2.Id WHERE OpportunityLineItem.PricebookEntryId IS NOT NULL AND (Pricebook2.Name = 'Standard' OR Pricebook2.Name = 'Discounted') AND Opportunity.CloseDate BETWEEN '2015-01-01' AND '2015-12-31' AND Opportunity.Name LIKE '%order%' Aggregation Skyvia supports aggregation in SELECT statements using GROUP BY and HAVING clauses. The following query returns the number of opportunities per account where the account is of type ‘Customer’: SELECT Account.Name, COUNT(Opportunity.Id) AS expr1, Account.Type FROM Opportunity INNER JOIN Account ON Opportunity.AccountId = Account.Id GROUP BY Account.Name, Account.Type HAVING Account.Type LIKE 'Customer%' Expressions Including Several Columns The following query gets total revenue from Salesforce opportunities per account: SELECT Account.Name, Account.Type, SUM(OpportunityLineItem.UnitPrice * OpportunityLineItem.Quantity) AS Total FROM Opportunity INNER JOIN Account ON Opportunity.AccountId = Account.
Id INNER JOIN OpportunityLineItem ON OpportunityLineItem.OpportunityId = Opportunity.Id GROUP BY Account.Name, Account.Type CASE, IN Expressions, SQL Functions The following query takes all campaigns for the current fiscal year from Dynamics 365 and compares the budgeted amount with the actual costs recorded to run the campaign: SELECT (CASE WHEN (strftime('%m', c.\"actualend\") IN ('01', '02', '03')) THEN 'Q1' WHEN (strftime('%m', c.\"actualend\") IN ('04', '05', '06')) THEN 'Q2' WHEN (strftime('%m', c.\"actualend\") IN ('07', '08', '09')) THEN 'Q3' WHEN (strftime('%m', c.\"actualend\") IN ('10', '11', '12')) THEN 'Q4' ELSE 'blank' END) \"Quarter\", SUM(c.\"budgetedcost\") \"Budget Allocated\", SUM(c.\"totalactualcost\") \"Total Cost of Campaign\" FROM campaign c WHERE (strftime('%Y', c.\"actualstart\") = strftime('%Y', 'now') OR strftime('%Y', c.\"actualend\") = strftime('%Y', 'now')) AND c.\"statuscode\" NOT IN ('Canceled', 'Inactive', 'Suspended') GROUP BY 1 ORDER BY 1 You can find more query examples in our [Gallery](https://skyvia.com/gallery/)." }, { "url": "https://docs.skyvia.com/user-interface-basics.html", "product_name": "Unknown", "content_type": "Documentation", "content": "Documentation User Interface Basics The Skyvia interface offers easy object creation, fast execution, and convenient access to objects. Advanced navigation and various filtering options are also an advantage. You can easily switch between objects, filter objects by categories, display objects in different views, make changes to objects, and delete and restore them if necessary. In this topic, we briefly describe the basic interface functionality of Skyvia. Login to Skyvia When you sign in to Skyvia, you are automatically dropped into your Skyvia default workspace, which you receive when registering on our platform.
If you later create another workspace or are invited to the account and workspace of another user or company, you can select one of the available workspaces and set it as default. The next time you log in to Skyvia, you will be dropped into this workspace automatically. After you finish working with one workspace, you can switch to another one at any time. To do this, click the Workspace drop-down list and select Change workspace from it. You will see the Select Your Workspace page, which contains your default account and the accounts you were invited to, as well as your default workspace and the other workspaces you were assigned roles in. Select the workspace you want to work with from the list. New Objects in Skyvia You can create any new object in Skyvia via the +NEW menu. The Skyvia Gallery with common use cases is also accessible via the +NEW menu. The number of new objects you can create in Skyvia can be unlimited; mostly it depends on the pricing plan you or your company subscribed to. You can find more info about subscriptions in the Subscriptions, Payment and Trials topic. To create a new object, click +NEW in the top menu and select product options in the columns. Please note that there are two different approaches to creating a new object in Skyvia — through the Wizard or through the Editor. Find examples of these approaches below. Please also note that after you have created a new object, you can rename it and make changes to it. Creating Objects in the Convenient Wizard Backups and Endpoints in Skyvia are created in the Wizard through a series of well-defined steps. The steps slightly differ for backups and endpoints. In the Skyvia documentation, we will consider creating an OData Endpoint as an example. After you have selected an OData Endpoint in the +NEW menu and the mode you would like to use — Simple or Advanced — you are transferred to the OData Endpoint Wizard page, which looks like below.
The Wizard’s four steps include selecting a connection, setting up a model, configuring security, and adjusting settings. In the first step, you select an existing connection or create a new one by clicking the +Add new button. Then you define the endpoint model by selecting the objects to publish. When configuring endpoint security settings, you can optionally allow only authenticated users to access endpoint data and optionally limit the IP addresses from which the endpoint data can be accessed. In the last step, you specify the endpoint name and adjust other settings. After you have completed all four steps, the OData Endpoint is created. You can read more about endpoints in the Connect product. Creating Objects in the Convenient Editor Integrations and Queries in Skyvia are created in the Editor. The procedure for creating a new object in the Editor is more or less standard. In the Skyvia documentation, we will consider creating an export as an example. First, in the integration editor, you specify a source connection you load data from and a target connection you load data to. Second, you create task(s) with the objects you want to export. In our example, we have selected the Account and Contact objects to be exported as CSV files. Please note that the procedure for creating tasks within the integration is different for each integration type. Read more about it in the corresponding topic of the Data Integration section. Third, you can optionally schedule your integration for automatic execution. After you have specified all the data, you can create the integration. Viewing or Editing Newly Created Objects Let us consider an OData endpoint as an example of a newly created object. After you have created it, the page with the endpoint details opens automatically. The page consists of three tabs: Overview, Model, and Log. You can immediately copy the endpoint URL and use it in your OData consumer applications.
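As an illustration of the last point, here is a minimal sketch of how an OData consumer application might build a query URL against such an endpoint. The base URL and entity name below are hypothetical placeholders, not a real Skyvia endpoint:

```python
# Minimal sketch of building an OData query URL for a published endpoint.
# BASE_URL and the "Accounts" entity are placeholders for illustration only.
from urllib.parse import urlencode

BASE_URL = "https://odata.example.com/v4/endpoint"  # hypothetical endpoint URL

def build_query_url(entity, select=None, top=None):
    """Build an OData query URL with optional $select and $top query options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if top is not None:
        params["$top"] = str(top)
    # Keep '$' and ',' unescaped so the query options stay readable.
    query = urlencode(params, safe="$,")
    return f"{BASE_URL}/{entity}" + (f"?{query}" if query else "")

# A real consumer would fetch this URL with an HTTP GET, adding an
# Authorization header if the endpoint requires authenticated access.
print(build_query_url("Accounts", select=["Name", "Type"], top=10))
```

The same pattern extends to other OData query options such as $filter or $orderby, which an endpoint consumer can append in the same way.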
On the Overview tab, you can view and edit the created connection and modify endpoint security by clicking the down arrow buttons near the information about users and IP addresses. On the Model tab, you can add or remove entities, configure them, edit associations between them (if the endpoint was created in Advanced mode), and change the default OData version and endpoint access mode from the toolbar. The changes are applied immediately when you click the Save button on the toolbar. On the Log tab, you can monitor all the endpoint requests. Read more information about logs below. All Objects in Skyvia On the OBJECTS page, you can find everything that you created in Skyvia — agents, connections, integrations, backups, endpoints, queries, etc. The All tab displays all objects together on the page. By clicking the other tabs in a row, you can group objects by certain products, by connections, and by agents. Skyvia objects can be displayed differently on the page depending on the selected view. To select a certain view, click on the toolbar on the right. You can select Grid view or List view, with or without grouping. When you select Grid view or List view, the objects are displayed all together as a grid or a list in alphabetical order by their names. When you select Grid view with grouping or List view with grouping, all objects are grouped by categories — agents in the Agents group, connections in the Connections group, and so on. Whether in the Grid view or in the List view, the displayed objects share common interface elements such as object names, object kinds, connectors used, and creation and modification dates. Managing Object List When you hover over a certain object in the list, Skyvia displays the Quick Action buttons of this object. The quick action buttons are different for different object kinds. They allow you to work with objects immediately, for example, to test a connection or open an integration log.
Besides that, each Skyvia object has common interface elements such as the object name, object kind, connector used, and creation and modification dates. When you click the name of a certain object (backup, query, or any other) in the object list, Skyvia displays its details, allowing you to edit this object or view its additional information. Object details are different for different kinds of objects. Sorting and Filtering Objects Skyvia offers several ways to help you quickly find the necessary objects in the object list. You can add various filters or sort objects by a number of parameters. Please note that this advanced filtering and sorting functionality is available only when you display a specific object kind: only integrations or only backups, etc. It is not available when you view all the objects on the All tab. Connections, Queries You can filter both connections and queries by their names, using the Filter by name box, or by the used connector, selecting it in the Connector drop-down list. Sorting is not available for connections and queries. Integrations You can filter and sort integrations by a number of parameters. The Skyvia interface allows filtering integrations by type (Import, Export, Synchronization, etc.), by connector (Salesforce, Dynamics 365, Podio, etc.), and by integration state (new, succeeded, failed, etc.). You can also distinguish which integrations are scheduled for automatic execution and which are not by applying the Schedule filter. You can also sort the integration list by a number of parameters: integration name, type, state, latest integration run, and modification date. Endpoints Endpoints can be filtered by their types (OData/SQL), state (active/inactive), and connectors used. On the Endpoints tab, you can view the endpoint token, status, and information on endpoint security both in the grid and list views.
Security icons show what kind of access each endpoint has — private or public (with or without authentication/IP restrictions). You can also sort endpoints by a number of parameters by clicking the Unsorted drop-down list on the right. Backups Backups can be filtered either by their names or by the used connectors. You can also sort backups by a number of parameters, including their size. When viewing a list of backups, you can immediately see how much space each backup consumes, determine which backup takes the most space, and clean it if necessary. You can also check when the most recent snapshot was made and when the next snapshot is scheduled. In the list view, Skyvia additionally displays the total number of backed-up records and the number of records modified since the previous snapshot. Organizing Objects in Folders Folders allow you to organize Skyvia objects. Create folders and drag Skyvia objects to them to get quick access to specific object categories. For example, you may create separate folders for Skyvia objects related to each of your customers, or create folders that group objects related to specific tasks. By default, all objects are created in the workspace root folder. To create a new folder, hover your cursor over Workspace on the left and click the icon. You can create subfolders by clicking the icon next to the parent folder name. Use drag and drop to move objects between folders. Additionally, you can manage an object’s location from inside the object: click the arrow icon on the upper left and choose the folder to move the object to. To select an object, click its tile. If you click the name of the object, Skyvia will open it instead of selecting it. To select multiple objects, hold the SHIFT or CTRL key while selecting. To view all the workspace objects, regardless of the folders, click the All Objects link above the folder tree. Viewing Object Dependencies Skyvia objects may depend on other objects.
For example, integrations, backups, and endpoints depend on their connections, so if a connection is modified, it affects all these objects. Connections, in turn, depend on agents (if applicable). It is important to know the dependencies between objects, because if a connection becomes invalid, all the integrations, backups, endpoints, and queries that depend on it stop working. Dependencies also affect deleting objects — if you delete a connection, you automatically delete all the objects that depend on it. You can view the objects that depend on a connection or agent simply by pointing at it and clicking the View Dependencies quick action button. If you try to delete a connection or agent, and other objects depend on it, Skyvia offers to delete the dependent objects together with this connection or agent. If this doesn’t suit you, you may cancel the deletion. When you restore an object from Trash, and this object depends on other objects in Trash, Skyvia offers to restore all of them together. If you do not agree, the object won’t be restored. Deleting and Restoring Objects Any object in Skyvia can be deleted in several ways. You can use drag-and-drop to send selected objects directly to the Trash on the left. You can select an object in the object list and click the Delete to trash button on the toolbar on the right, or you can go to the details page of a certain object, connection, or agent and click the icon there. Bulk deletion of objects is also available: you can select either all or only specific objects for deletion. Objects are removed from the object list, but they are not immediately deleted. For two weeks after their deletion, they are available in Trash and can be safely restored from there. Simply click the Trash link on the left, below the folders. This displays the list of recently deleted objects. Here you can select objects and restore them.
To restore an object, select it and click the Restore from trash button on the toolbar. You can also delete an object completely by selecting it and clicking the Delete from trash button. Viewing Log of Objects Skyvia allows you to view the activity of the objects that have the Log functionality: integrations, backups, and endpoints. You can view their activity, i.e., integration runs, backup snapshots, and endpoint requests, by clicking the Log tab on the toolbar of the object details page or the quick action button. For endpoints, you can check all your data access and manipulation operations, as well as view all users, their operations, and the IPs from which these operations are performed. You can select different time periods to display your endpoint requests and filter requests depending on their final result — failed, succeeded, or all requests. User Profile The User Profile contains the Profile and Account tabs, which you can see when clicking the User icon in the top right corner of the Skyvia page. Profile On the Profile page, you can view and edit your personal information, such as first name, last name, company, phone number, and job title. You can subscribe/unsubscribe to Skyvia’s newsletters, change your password, and configure email notifications and two-factor authentication. Account On the Account page, you can view, manage, and upgrade your subscriptions, check used Skyvia resources, such as processed records and scheduled integrations, view and print out invoices, and manage account users, if any are invited. You can invite other users to your account. Users join your account as account members. Being account members, they cannot manage (upgrade, downgrade, or otherwise change) your subscriptions or view any payment details, but they can use your Skyvia subscriptions within their resource limitations. You can also add users to your workspace(s) and assign roles to them. They can use connections, integrations, queries, etc.
in the workspaces only to the extent allowed by their roles. For example, a user with the workspace admin role will have full rights in the workspace, while a developer will be able to manage workspace objects (create, run, and delete them) but not manage workspace settings. Read Account Management for more information. Skyvia Help Center When you click the icon in the top right corner of the Skyvia page, the Help Center window opens up. Here you can find links to useful resources such as the Skyvia documentation, the Skyvia blog with the latest news, and the Support portal with FAQs. If you have any technical questions and cannot find answers in the Skyvia documentation, you can contact the Skyvia support team by filling out the Contact form or writing directly to the Support chat." }, { "url": "https://skyvia.com/data-integration/alteryx", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Alteryx Alteryx Alteryx is a powerful and flexible end-to-end analytics platform for creating data partnerships between IT, analytic teams, and the lines of business. Analyze Your Data with Alteryx Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Alteryx to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/answerrocket", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia.
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. AnswerRocket AnswerRocket AnswerRocket is a data analytics solution designed to allow users to ask business questions in natural language and get answers in seconds. Analyze Your Data with AnswerRocket Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect AnswerRocket to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/beakerx", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. BeakerX BeakerX BeakerX is an open source collection of kernels and extensions to the Jupyter interactive computing environment with interactive plots, tables, forms, publishing, etc. Analyze Your Data with BeakerX Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect BeakerX to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/bime", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. BIME BIME BIME by Zendesk is an analytics platform to help businesses measure and understand the entire customer experience.
Analyze Your Data with BIME Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect BIME to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/birst", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Birst Birst Birst is a cloud BI and analytics solution to optimize the most complex business processes. Analyze Your Data with Birst Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Birst to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/board", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Board Board Board is a decision-making platform for analysis, simulation, and planning to make business decision-making more efficient and effective. Analyze Your Data with Board Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Board to this data warehouse and perform the in-depth analysis you need.
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/carto", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Carto Carto Carto is a platform for building powerful Location Intelligence apps and analyzing complex business problems with spatial data science techniques. Analyze Your Data with Carto Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Carto to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/chartio", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Chartio Chartio Chartio is a cloud-based data analysis tool with charts and interactive dashboards for business and data teams. Analyze Your Data with Chartio Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Chartio to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/cluvio", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia.
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Cluvio Cluvio Cluvio is a cloud analytics platform for startups and data-driven teams with SQL and R support. Analyze Your Data with Cluvio Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Cluvio to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/databricks", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. Databricks Databricks Databricks is a unified data analytics platform for massive-scale data engineering and collaborative data science. Analyze Your Data with Databricks Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Databricks to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/datapine", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about a BI tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia and later visualize it in this BI tool. datapine datapine datapine is a self-service BI tool with flexible charts and interactive dashboards to make data discovery simple.
Analyze Your Data with datapine Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect datapine to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/datarobot", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. DataRobot DataRobot DataRobot is an enterprise AI platform that is available in the cloud, on-premise, or as a fully-managed AI service. Analyze Your Data with DataRobot Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect DataRobot to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/datawrapper", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Datawrapper Datawrapper Datawrapper is a web service for creating interactive charts, maps and tables without coding. Analyze Your Data with Datawrapper Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Datawrapper to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/domo", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Domo Domo Domo is a cloud platform for providing direct, simplified, real-time access to business data for decision makers across the company without the need for IT specialists. Analyze Your Data with Domo Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Domo to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/dundas", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Dundas BI Dundas BI Dundas BI is a dashboard, reporting, and analytics platform to empower users with data insights. Analyze Your Data with Dundas BI Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Dundas BI to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/fusioncharts", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. FusionCharts FusionCharts FusionCharts is a JavaScript charting library for developers, suitable for web and mobile apps. Analyze Your Data with FusionCharts Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect FusionCharts to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/gooddata", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. GoodData GoodData GoodData is an enterprise-grade embedded intelligence platform as a service, providing high scalability and security. Analyze Your Data with GoodData Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect GoodData to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/grafana", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Grafana Grafana Grafana is an open source analytics and monitoring solution for databases. Analyze Your Data with Grafana Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. 
Then, it's easy to connect Grafana to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/grow", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Grow Grow Grow is a BI solution for connecting and visualizing business data from various marketing, CRM, and financial data sources and sharing the reports. Analyze Your Data with Grow Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Grow to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/h2o", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. H2O H2O H2O is an open-source distributed data science and machine learning platform with support for the most widely used statistical and machine learning algorithms. Analyze Your Data with H2O Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect H2O to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/highcharts", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. HighCharts HighCharts HighCharts is a charting library written in JavaScript, free for non-commercial use, that allows creating interactive JavaScript charts for web pages. Analyze Your Data with HighCharts Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect HighCharts to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/holistics", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Holistics Holistics Holistics is a data platform that allows setting up an end-to-end, scalable, and reusable data analytics stack without engineering resources. Analyze Your Data with Holistics Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Holistics to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/ibm-cognos", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. IBM Cognos IBM Cognos IBM Cognos is an AI-fueled BI platform that supports the entire analytics cycle, from discovery to operationalization. Analyze Your Data with IBM Cognos Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect IBM Cognos to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/idashboards", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. iDashboards iDashboards iDashboards is a self-service dashboard software for creating mobile-friendly dashboards, connected to live data, via drag-n-drop. Analyze Your Data with iDashboards Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect iDashboards to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/indicative", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Indicative Indicative Indicative is a powerful behavioral analytics platform for marketers, product managers, and analysts to better understand their customers. 
Analyze Your Data with Indicative Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Indicative to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/infogram", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Infogram Infogram Infogram is a web-based data visualization and infographics platform that allows making and sharing digital charts, infographics, and maps. Analyze Your Data with Infogram Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Infogram to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/izenda", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Izenda Izenda Izenda is an embedded BI and reporting tool that enables real-time data exploration through ad hoc reports and visually-rich dashboards. Analyze Your Data with Izenda Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Izenda to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/jackdb", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. JackDB JackDB JackDB is a database client for creating and executing SQL statements against databases and sharing SQL snippets with advanced access control and logging. Analyze Your Data with JackDB Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect JackDB to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/jupyter", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Jupyter Jupyter Jupyter is an open-source web-based environment for creating documents with live code, equations, and visualizations. Analyze Your Data with Jupyter Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Jupyter to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/knime", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. KNIME Analytics Platform KNIME Analytics Platform KNIME Analytics Platform is a free and open-source enterprise-grade data analytics, reporting, and integration platform for creating data science. Analyze Your Data with KNIME Analytics Platform Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect KNIME Analytics Platform to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/logi", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Logi Logi Logi is an embedded analytics platform for embedding dashboards and reports into your application. Analyze Your Data with Logi Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Logi to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/looker-studio", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. 
Looker Studio (formerly, Google Data Studio) Looker Studio Looker Studio (formerly, Google Data Studio) is a data analysis solution that enables users to convert their data into detailed reports or high-level dashboards. Analyze Your Data with Looker Studio Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Looker Studio to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/looker", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Looker Looker Looker is a data platform for data analysis, exploration, and insights for every business function. Analyze Your Data with Looker Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Looker to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/magento-bi", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Magento Business Intelligence Magento Business Intelligence Magento Business Intelligence is a BI solution that provides data pipeline, warehouse and visualization capabilities. 
Analyze Your Data with Magento Business Intelligence Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Magento Business Intelligence to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/metabase", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Metabase Metabase Metabase is an open-source business intelligence and analytics software to ask questions and learn from data. Analyze Your Data with Metabase Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Metabase to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/microstrategy", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. MicroStrategy MicroStrategy MicroStrategy is an analytics platform with business intelligence, predictive analytics, and other features that lets users explore data and answer questions. Analyze Your Data with MicroStrategy Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect MicroStrategy to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/mode", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Mode Mode Mode is a data analytics platform for in-depth analysis and visualization of business data. Analyze Your Data with Mode Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Mode to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/oracle-bi", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Oracle Business Intelligence Oracle Business Intelligence Oracle Business Intelligence is a set of business intelligence tools by Oracle that combines ML and AI to enrich decision-making processes. Analyze Your Data with Oracle Business Intelligence Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Oracle Business Intelligence to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/owox-bi", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. OWOX BI Smart Data OWOX BI Smart Data OWOX BI Smart Data is a BI solution for creating visualized reports on raw data from Google BigQuery and exporting them to Google Data Studio and Google Sheets. Analyze Your Data with OWOX BI Smart Data Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect OWOX BI Smart Data to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/pentaho", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Pentaho Pentaho Pentaho is a software for data integration, OLAP services, reporting, information dashboards, data mining and ETL. Analyze Your Data with Pentaho Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Pentaho to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/periscope", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Periscope Data Periscope Data Periscope Data is an end-to-end data analysis platform designed specifically for data scientists. 
Analyze Your Data with Periscope Data Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Periscope Data to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/popsql", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. PopSQL PopSQL PopSQL is a collaborative SQL editor for teams that allows writing queries, visualizing data, and sharing the result. Analyze Your Data with PopSQL Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect PopSQL to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/powerbi", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Power BI Power BI Power BI is a suite of business analytics tools from Microsoft for delivering insights throughout your organization. Analyze Your Data with Power BI Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Power BI to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/pyramid", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Pyramid Analytics Pyramid Analytics Pyramid enterprise analytics is an adaptive analytic platform that provides one analytics solution for everyone, across all user types and skill levels. Analyze Your Data with Pyramid Analytics Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Pyramid Analytics to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/python", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Python Python Python is an interpreted high-level programming language for general-purpose programming with a dynamic type system and automatic memory management. Analyze Your Data with Python Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Python to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/qliksense", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Qlik Sense Qlik Sense Qlik Sense is a platform for modern, self-service oriented analytics. Analyze Your Data with Qlik Sense Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Qlik Sense to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/qubole", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Qubole Qubole Qubole is a cloud data platform for self-service AI, machine learning, and analytics. Analyze Your Data with Qubole Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Qubole to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/quicksight", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Amazon QuickSight Amazon QuickSight Amazon QuickSight is a fast cloud business intelligence service from Amazon with pay-per-session pricing. Analyze Your Data with Amazon QuickSight Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. 
Then, it's easy to connect Amazon QuickSight to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/r", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. R R R is a programming language for statistical computing and graphics, developed by the R Foundation for Statistical Computing. Analyze Your Data with R Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect R to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/rapidminer", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. RapidMiner RapidMiner RapidMiner is a data science software platform, providing an integrated environment for data preparation, machine learning, deep learning, text mining, and predictive analytics. Analyze Your Data with RapidMiner Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect RapidMiner to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/razorsql", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. RazorSQL RazorSQL RazorSQL is a cross-platform SQL Editor and SQL database query tool with support for over 40 databases. Analyze Your Data with RazorSQL Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect RazorSQL to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/redash", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Redash Redash Redash is an open source data analytics tool for teams to query, visualize and collaborate. Analyze Your Data with Redash Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Redash to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/reveal", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. 
Reveal Reveal Reveal is an embedded data analytics solution for visualizing data via drag-n-drop. Analyze Your Data with Reveal Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Reveal to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/rockdaisy", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. RockDaisy RockDaisy RockDaisy is a responsive web app for connecting to, transforming, and visualizing data. Analyze Your Data with RockDaisy Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect RockDaisy to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/sagemaker", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Amazon SageMaker Amazon SageMaker Amazon SageMaker is a cloud machine-learning platform that allows users to create, train, and deploy machine-learning (ML) models. Analyze Your Data with Amazon SageMaker Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Amazon SageMaker to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/sap-lumira-discovery", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. SAP Lumira Discovery SAP Lumira Discovery SAP Lumira Discovery is self-service visualization analytics software with on-premise and cloud deployment capabilities. Analyze Your Data with SAP Lumira Discovery Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect SAP Lumira Discovery to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/seekwell", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. SeekWell SeekWell SeekWell is a Google Sheets add-on for querying database data using SQL and creating and sharing reports on the results in Google Sheets. Analyze Your Data with SeekWell Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect SeekWell to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/shiny", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Shiny Shiny Shiny is an R package for building interactive web apps straight from R. Analyze Your Data with Shiny Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Shiny to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/sigma", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Sigma Sigma Sigma is a live visual interface for data warehouses that enables users to explore, analyze, and visualize data without the help of a data specialist. Analyze Your Data with Sigma Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Sigma to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/sisense", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Sisense Sisense Sisense is single-stack BI and analytics software with personalized dashboards and interactive visualizations via a drag-and-drop interface. 
Analyze Your Data with Sisense Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Sisense to this data warehouse and perform the in-depth analysis you need. Read the detailed tutorial on how to move data to an on-premise database with Skyvia on our partner Sisense's [blog](https://www.sisense.com/blog/moving-data-to-an-on-prem-database-with-skyvia/?utm_campaign=sisensepartner&utm_medium=partner&utm_source=skyvia). Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/slemma", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Slemma Slemma Slemma is codeless data visualization software for creating simple, dynamic data reports and dashboards from multiple data sets. Analyze Your Data with Slemma Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Slemma to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/statsbot", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Statsbot Statsbot Statsbot is an intelligence platform for converting raw data to insights. Analyze Your Data with Statsbot Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. 
Then, it's easy to connect Statsbot to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/superset", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Apache Superset Apache Superset Apache Superset is a modern, enterprise-ready business intelligence web application, incubating at The Apache Software Foundation. Analyze Your Data with Apache Superset Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Apache Superset to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/tableau", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Tableau Tableau Tableau is a robust data analytics and visualization platform that can be run on a desktop, the cloud, or customer servers. Analyze Your Data with Tableau Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Tableau to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/thoughtspot", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. ThoughtSpot ThoughtSpot ThoughtSpot is a business intelligence and big data analytics platform for exploring, analyzing and sharing real-time business analytics. Analyze Your Data with ThoughtSpot Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect ThoughtSpot to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/tibcospotfire", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. TIBCO Spotfire TIBCO Spotfire TIBCO Spotfire is an enterprise-class analytics platform with built-in data wrangling, AI-driven cognitive search & data discovery. Analyze Your Data with TIBCO Spotfire Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect TIBCO Spotfire to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/trevor-io", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. 
Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Trevor.io Trevor.io Trevor.io is a no-code data querying and analytics solution, able to live-stream results to Excel and Google Sheets. Analyze Your Data with Trevor.io Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Trevor.io to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/ubiq", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Ubiq Ubiq Ubiq is a web-based reporting and analytics tool for MySQL and PostgreSQL with capabilities to build custom dashboards and reports and export them in multiple formats. Analyze Your Data with Ubiq Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Ubiq to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/webfocus", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. 
WebFOCUS BI and Analytics WebFOCUS BI and Analytics WebFOCUS BI and Analytics is a set of products, created by the Information Builders company, that help companies use data more strategically across and beyond the enterprise. Analyze Your Data with WebFOCUS BI and Analytics Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect WebFOCUS BI and Analytics to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/yellowfin", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Yellowfin Yellowfin Yellowfin Suite is an enterprise suite of BI and analytics products that combines industry-leading automated analysis, storytelling, and collaboration. Analyze Your Data with Yellowfin Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Yellowfin to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/zeppelin", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Apache Zeppelin Apache Zeppelin Apache Zeppelin is an open-source web-based notebook enabling data-driven, interactive data analytics and collaborative documents using SQL, Scala, etc. 
Analyze Your Data with Apache Zeppelin Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Apache Zeppelin to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/zoho-analytics", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Zoho Analytics Zoho Analytics Zoho Analytics is a self-service BI and data analytics solution for creating visually appealing data visualizations and dashboards. Analyze Your Data with Zoho Analytics Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Zoho Analytics to this data warehouse and perform the in-depth analysis you need. Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/data-integration/zoomdata", "product_name": "Data Integration", "content_type": "Website", "content": "This article is about BI Tool that does not have a direct source-specific connector in Skyvia. Visit this page to see the list of data sources and target databases or data warehouses you can use to integrate data with Skyvia, and later visualize it in this BI Tool. Zoomdata Zoomdata Zoomdata is a visual analytics platform for Big Data that provides interactive data visualization of up to billions of rows. Analyze Your Data with Zoomdata Skyvia can easily load data from all your cloud apps to a database or a cloud data warehouse. Then, it's easy to connect Zoomdata to this data warehouse and perform the in-depth analysis you need. 
Source Skyvia Data Warehouse BI Tool" }, { "url": "https://skyvia.com/etl-tools-comparison/airbyte-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Airbyte Skyvia and Airbyte both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Airbyte" }, { "url": "https://skyvia.com/etl-tools-comparison/alteryx-designer-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Alteryx Designer Skyvia and Alteryx Designer both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Alteryx Designer" }, { "url": "https://skyvia.com/etl-tools-comparison/apache-airflow-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Apache Airflow Skyvia and Apache Airflow both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Apache Airflow" }, { "url": "https://skyvia.com/etl-tools-comparison/apache-nifi-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Apache Nifi Skyvia and Apache Nifi both offer a data integration solution. 
Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Apache Nifi" }, { "url": "https://skyvia.com/etl-tools-comparison/aws-data-pipeline-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs AWS Data Pipeline Skyvia and AWS Data Pipeline both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs AWS Data Pipeline" }, { "url": "https://skyvia.com/etl-tools-comparison/aws-glue-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs AWS Glue Skyvia and AWS Glue both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs AWS Glue" }, { "url": "https://skyvia.com/etl-tools-comparison/azure-data-factory-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Azure Data Factory Skyvia and Azure Data Factory both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. 
Look at the side-by-side comparison chart Skyvia vs Azure Data Factory" }, { "url": "https://skyvia.com/etl-tools-comparison/boomi-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Boomi Skyvia and Boomi both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Boomi" }, { "url": "https://skyvia.com/etl-tools-comparison/cdata-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs CData Skyvia and CData both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs CData" }, { "url": "https://skyvia.com/etl-tools-comparison/celigo-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Celigo Skyvia and Celigo both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Celigo" }, { "url": "https://skyvia.com/etl-tools-comparison/dataloaderio-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Dataloader.io Skyvia and Dataloader.io both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. 
Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Dataloader.io" }, { "url": "https://skyvia.com/etl-tools-comparison/fivetran-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Fivetran Skyvia and Fivetran both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Fivetran" }, { "url": "https://skyvia.com/etl-tools-comparison/funnel-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Funnel Skyvia and Funnel both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Funnel" }, { "url": "https://skyvia.com/etl-tools-comparison/hevo-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Hevo Skyvia and Hevo both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Hevo" }, { "url": "https://skyvia.com/etl-tools-comparison/hightouch-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Hightouch Skyvia and Hightouch both offer a data integration solution. 
Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Hightouch" }, { "url": "https://skyvia.com/etl-tools-comparison/informatica-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Informatica Skyvia and Informatica both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Informatica" }, { "url": "https://skyvia.com/etl-tools-comparison/integrateio-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Integrate.io Skyvia and Integrate.io both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Integrate.io" }, { "url": "https://skyvia.com/etl-tools-comparison/jitterbit-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Jitterbit Skyvia and Jitterbit both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. 
Look at the side-by-side comparison chart Skyvia vs Jitterbit" }, { "url": "https://skyvia.com/etl-tools-comparison/matillion-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Matillion Skyvia and Matillion both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Matillion" }, { "url": "https://skyvia.com/etl-tools-comparison/mulesoft-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Mulesoft Skyvia and Mulesoft both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Mulesoft" }, { "url": "https://skyvia.com/etl-tools-comparison/pentaho-data-integration-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Pentaho Data Integration Skyvia and Pentaho Data Integration both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Pentaho Data Integration" }, { "url": "https://skyvia.com/etl-tools-comparison/rivery-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Rivery Skyvia and Rivery both offer a data integration solution. 
Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Rivery" }, { "url": "https://skyvia.com/etl-tools-comparison/snaplogic-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs SnapLogic Skyvia and SnapLogic both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs SnapLogic" }, { "url": "https://skyvia.com/etl-tools-comparison/ssis-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs SSIS Skyvia and SSIS both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs SSIS" }, { "url": "https://skyvia.com/etl-tools-comparison/stitchdata-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Stitchdata Skyvia and Stitchdata both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Stitchdata" }, { "url": "https://skyvia.com/etl-tools-comparison/supermetrics-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. 
Skyvia vs Supermetrics Skyvia and Supermetrics both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Supermetrics" }, { "url": "https://skyvia.com/etl-tools-comparison/talend-alternative-skyvia", "product_name": "Unknown", "content_type": "Website", "content": "Visit this page to see the detailed comparison. Skyvia vs Talend Skyvia and Talend both offer a data integration solution. Compare the features and benefits, data sources and destinations, and see which meets your needs. Look at the side-by-side comparison chart of the two data integration solutions. Look at the side-by-side comparison chart Skyvia vs Talend" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/a-call-to-sspi-failed-see-internal-exception", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: PostgreSQL: A call to SSPI failed, see internal exception\n\nAnswer: \" A call to SSPI failed, see internal exception \" indicates that the security protocol (SSL/TLS) used in your PostgreSQL connector does not match the version supported by your server. To resolve this, we recommend trying different SSL/TLS protocol versions in your connection settings in Skyvia. You can also contact your database administrator to confirm the SSL/TLS version configured on your PostgreSQL server and ensure the same version is specified in the connector. Additionally, please double-check that all other [connection settings](https://docs.skyvia.com/connectors/databases/postgresql_connections.html) are configured correctly." 
}, { "url": "https://support.skyvia.com/portal/en/kb/articles/a-quick-guide-to-importing-and-updating-audience-members-in-mailchimp", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: A Quick Guide to Importing and Updating Audience Members in Mailchimp\n\nAnswer: If your integration involves importing data into Mailchimp Audiences, this information might be helpful. To start, let\u2019s take a quick look at how the UPSERT operation works: If the Id is specified, the integration attempts an Update operation even if the Id is incorrect. If the Id is set to NULL, an Insert operation is triggered, and the Id is generated on the Mailchimp side. Therefore, mapping the Id field is necessary, but it can contain a NULL value. This applies if the source column is missing or if it\u2019s a lookup that returns NULL, in which case the Insert operation will proceed. For more details on UPSERT operations, please refer to [this article](https://docs.skyvia.com/data-integration/import/how-to-guides/performing-upsert-operation.html) in our documentation. When setting up the UPSERT operation for the ListMembers object, you have to map at least a few required fields: Id , ListId , and Email . Please follow these steps to configure UPSERT for the ListMembers table: 1. Set the Lookup Mapping for the Id Field: Lookup Object: ListMembers Resulted Column: Id Lookup Key Column: ListId Column: ListId (or set it as a constant value) Lookup Key Column: Email Column: Email 2. Open Options and enable \" Set Null When No Match Found \". Note : This option is necessary only for the UPSERT operation. \u200b 3.\u00a0 \u00a0Map ListId to the Constant (set the necessary ListId value) and set mapping for the Email field: 4. Set mapping for other fields, if necessary, etc. To get your ListId, you can use: [Find Your Audience article](https://mailchimp.com/help/find-audience-id/) on the Mailchimp website. 
[Skyvia Query Tool](https://docs.skyvia.com/query/) with this SQL command on your Mailchimp connection: SELECT \"Id\", \"Name\" FROM \"Lists\" If you run into any issues or have more questions, we\u2019re here to help with any technical questions you might have! You can reach out to us through our Support Portal, by Email, or via Live Chat." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/backup", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Backup Plan Storage Limits\n\nAnswer: When using a backup solution, monitoring storage capacity is critical for smooth operation. Once the backup plan\u2019s storage limit is reached, no further backups\u2014whether manual or scheduled\u2014will be performed until storage is freed, or the plan is upgraded. Here are the options available to manage storage: 1. Auto-Delete Snapshots: Available from the Standard plan onwards, auto-deletion can be enabled by navigating to the Subscriptions tab on your Account page, selecting Storage details in your Backup plan, and turning on the Autoclean toggle. You can customize which snapshots to delete or keep, and the Storage Manager provides an overview of storage usage. 2. Manual Deletion: You can delete snapshots manually in 2 ways: Multiple Snapshots: From the Overview tab of the backup, click Clean up and choose to delete all snapshots except the latest, or snapshots before or between specific dates. Single Snapshot: On the Snapshots tab, click Clean up and select the snapshot(s) to delete. 3. Deleting Entire Backup: If the backup is no longer needed, move it to the Trash or click Delete. Backups stay in Trash for 2 weeks, after which they can be permanently deleted. You can restore from Trash within that time period. If clearing storage is not sufficient or if frequent backups are required, consider upgrading your backup plan to a higher storage tier to avoid interruptions. 
You can check [available plans](https://skyvia.com/pricing) and choose one that best fits your needs. To stay informed about any issues, Skyvia [email notification feature](https://docs.skyvia.com/account-management/email-notifications.html) can alert you when backups fail or when your subscription nears storage limits. By default, these notifications are disabled, so ensure they are turned on to prevent any unexpected interruptions. For more details on managing storage space, please refer to [this\u00a0documentation](https://docs.skyvia.com/backup/working-with-backups/managing-storage-space-and-deleting-old-snapshots.html) ." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/bigcommerce-the-value-of-the-column-categories-has-incorrect-json-format", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: BigCommerce: \"The value of the column Categories has incorrect JSON format\" error\n\nAnswer: When working with the Categories field in the Product object within the BigCommerce API, it's important to understand how to format and import data correctly to ensure a successful import. In the BigCommerce API, the Categories field in the Product object is an array. For more information, on Products, please refer to [the following link](https://developer.bigcommerce.com/archive/store-operations/v2-catalog-products/v2-products#product-object-properties) . We have implemented the Categories field as a string in JSON format. Therefore, when importing data into the Products.Categories field, please specify the value in this format. The value should start with an opening bracket [ and end with a closing bracket ] , with each category ID separated by commas. Here are examples: [21, 23] \u2013 representing two category IDs. [22] \u2013\u00a0 representing one category ID. Make sure to pass values to the Categories field in this format for successful data import. 
You can pass data in array format in the following ways: - Format the value in brackets in your CSV file\u00a0as described above; - Use Constant mapping to pass fixed values to a target column. To do this, select \"Constant\" as the mapping type and specify the constant value directly: For more details, refer to the [Constant Mapping documentation.](https://docs.skyvia.com/data-integration/common-package-features/mapping/constant-mapping.html) - Use Expression Mapping as shown below. For example, if you have two Category IDs in separate columns, you can concatenate them into a JSON array format like this: '[' + Category1 + ',' + Category2 + ']' For more details on expression syntax, visit [Expression Syntax](https://docs.skyvia.com/expression-syntax/#identifiers) page." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/can-i-get-email-notifications-after-integration-is-executed", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Can I get email notifications after integration is executed?\n\nAnswer: Currently, there are no notifications specifically for the completion of an integration. However, you can enable email notifications for integrations by turning on error or usage notifications in your Skyvia account. These notifications are disabled by default, so you will need to go to your account settings and configure them under the Notifications tab. This will allow you to receive notifications when an integration fails or when certain usage thresholds are met. For detailed setup instructions, please refer to the [Email Notifications](https://docs.skyvia.com/account-management/email-notifications.html) documentation." 
}, { "url": "https://support.skyvia.com/portal/en/kb/articles/collection-of-resources", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: \ud83d\udcda Collection of Help Resources\n\nAnswer: Explore our comprehensive collection of resources designed to help you learn everything you need to know about our product, all in one place! [Documentation](https://docs.skyvia.com/) \u2013 A comprehensive and detailed knowledge base covering all aspects of our platform. [Support Portal](https://support.skyvia.com/portal/en/community/skyvia) \u2013 Access a range of support options and community resources to assist you. [Blog](https://skyvia.com/blog/) \u2013 Find insightful articles on key tools and common use cases. [Webinars](https://skyvia.com/webinars/) \u2013 Dive into various platform features with our informative webinars. [YouTube channel](https://youtube.com/@skyviaplatform?si=6rW3X2IzpYCV500u) \u2013 Watch video demos that showcase our products in action. [Gallery](https://skyvia.com/gallery) \u2013 Use queries and integrations for the most common scenarios. [Reference Materials](https://skyvia.com/learn) \u2013 Access valuable resources to enhance your knowledge and improve your data experience." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/connect-plan-does-not-support-secure-endpoints", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Connect: Plan Does Not Support Secure Endpoints\n\nAnswer: Secure endpoints (private endpoints with IP restrictions and user accounts) are available only in the Standard pricing plan. Public endpoints are the only option available in the Free and Basic pricing plans. If your endpoint is invalid or uses features not included in your subscription, it will be created in an inactive state and cannot be activated until you fix the issue or upgrade your Connect subscription to a plan that includes the necessary features. 
On the Free pricing plan for Skyvia Connect, you can enable only 1 endpoint to a cloud source\u2014not to a database or cloud data warehouse. This endpoint is limited to 100 KB of traffic per month and is designed to test the tool. Additionally, you may encounter an issue where an endpoint saved in an invalid or incomplete state (for example, lacking any data entity) is considered a draft endpoint. Draft endpoints are created inactive and cannot be activated until you correct their configuration or upgrade your Connect plan to include the required features. For more details, please refer to the Connect [Security Settings](https://docs.skyvia.com/connect/security-settings.html) documentation." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/data-integration", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Data Integration: You have reached the limit of records\n\nAnswer: The error \" You have reached the limit of records \" usually occurs when the number of records exceeds the limit specified in your current Data Integration product plan. To resolve this issue, you have a few options: Upgrade your plan to increase the record limit. If you're on the Free plan, you can request a free 14-day trial of any paid plan to gain access to additional records. Alternatively, you can wait for the records to reset according to your plan\u2019s reset schedule. For users on the Free plan , the limit is usually reset on your registration date each month. For users on Paid plans (Standard, Professional, or Enterprise), the limit is reset on your subscription date. If you\u2019re on the Standard , Professional , or Enterprise plan, you can process more records than your subscription includes for an additional cost. However, this feature is not enabled by default to prevent unexpected charges. To load records beyond your subscription limits, you'll need to enable the Paid Records feature on your Account page. 
To do so, go to the Subscriptions tab and toggle Data Integration on for Paid Records. This will allow you to process additional records as needed. Important: Please check whether you have set a limit on the number of available paid records in the settings ( 1000 by default) as it may affect your data processing flow. For this, click on the paid records toggle in the Subscriptions settings and check if a maximum amount of 1000 records is specified. If it is, you can either increase this amount or select the Unlimited checkbox to remove the limit. If you created your account some time ago and have not updated your plan, you might also encounter the error \"You have reached the limit of CSV records.\" This error means that the monthly limit for records used in export and import integrations from CSV files has been reached. To resolve this, consider renewing your plan to the updated version or waiting for the limits to reset. For more details on our plans, please visit the [Pricing page](https://skyvia.com/pricing) . This article [Subscription Limits and Plans in More Details](https://docs.skyvia.com/account-management/subscription-limits-and-plans-in-more-details.html) may also be useful." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/dynamics-crm-api-error-principal-user-is-missing-prvreadaccount-privilege", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Dynamics 365: Principal user is missing prvReadAccount privilege\n\nAnswer: The error message Principal user is missing prvReadAccount privilege is returned by the Dynamics 365 API. The main cause of this error is that the owner is assigned a security role other than System Administrator and lacks the necessary read-write privileges for the specific entity. 
When migrating data from one system to another, ensure that the security role assigned to the users is either System Administrator or one that provides sufficient privileges to own the data that the migration process is trying to assign. If you are already a System Administrator, we recommend checking if there are any conflicting licenses in the same environment. For instance, if you have both a Sales Enterprise license and a PowerApps license assigned, the lower privileges from the PowerApps license may take precedence, preventing your higher privileges from being recognized. If this does not resolve the issue, we recommend reaching out to Dynamics 365 support for more detailed information about the error and its resolution." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/freshdesk-the-limit-of-freshdesk-api-calls-per-hour-is-reached", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Freshdesk: The Limit of Freshdesk API Calls Per Hour Has Been Reached\n\nAnswer: This error occurs on the Freshdesk side, indicating that the API call limit for the hour has been exceeded. For more details on Freshdesk's API limits, refer to their [API documentation](https://developers.freshdesk.com/api/) . Freshdesk enforces a limit on the number of API calls that can be made per hour. Skyvia interacts with Freshdesk via its API, retrieving 100 records per API call when using Freshdesk API v2, and 30 records per API call when using Freshdesk API v1. When working with child objects (e.g., TicketNote, TicketReply, TicketConversation, etc.), Skyvia first retrieves all parent records (e.g., Tickets). Then, for each parent record, it retrieves all associated child records. This process can significantly increase the number of API calls required, leading to longer execution times. To mitigate this issue: Divide your backup/replication integration into multiple smaller integrations. 
Ensure these integrations do not run simultaneously, so the API calls are spread out over time. By implementing these steps, you can reduce API call usage and avoid exceeding the hourly limit." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/how-to-map-multi-select-field", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Mapping Multi-Select Fields\n\nAnswer: When working with Import, Synchronization, or Data Flow integrations and needing to map multi-select fields with different values between the source and target, you can use [Expression Mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/expression-mapping.html) . Expression mapping allows using simple and complex expressions and formulas to transform source data into target field values. For example, if the source column contains values such as 1 , 2 , 3 , and the target column requires corresponding values like A , B , C , you can use the following expression (adjusting it to match your specific column names and values): replace_null(string(SourceColumn), '') == '1' ? 'A' : (replace_null(string(SourceColumn), '') == '2' ? 'B' : (replace_null(string(SourceColumn), '') == '3' ? 'C' : '')) You can customize this expression for your use case by replacing the column name, modifying parameter values, or adding additional value mappings as needed." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/how-to-map-rowno", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Mapping the RowNo Column for UPDATE/UPSERT Operations in Google Sheets\n\nAnswer: When implementing UPDATE/UPSERT operations in [Google Sheets](https://docs.skyvia.com/connectors/cloud-sources/googlesheets_connections.html#establishing-connection) , you may need to set up mapping for the RowNo mandatory field. 
RowNo is a unique auto-generated row number in the Sheet. It is essential for performing UPSERT/UPDATE operations, as it determines whether the required row for updating exists or not, ensuring the update operation is carried out correctly. To map the RowNo column, please use [Target Lookup Mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html#terms) . You can find an example below: In this case, it is assumed that the Lookup Key Columns Name and Sku contain unique data. Therefore, in the target, we look for a record (RowNo from Sheet1 ) where the source Sku matches the target Name . If a record is found: With the UPDATE operation, we will update the found record using its RowNo value. If not found, an error will occur stating that the record was not found. With the UPSERT operation, we will update the found record using its RowNo value. If it is not found, the option \u201c Set null when no match found \u201d will be triggered (if selected), and a new record will be created in the target with an automatically generated RowNo. NOTE: Please ensure that the fields used as Lookup Key Columns ( Sku and Name ) are also mapped to each other through [Column Mapping](https://docs.skyvia.com/data-integration/common-package-features/mapping/column-mapping.html) . 
For more information on the topic, check the following articles: [Performing UPSERT Operation](https://docs.skyvia.com/data-integration/import/how-to-guides/performing-upsert-operation.html) [Lookup Mapping: Target Lookup and Source Lookup](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html)" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/hubspot-400-request-header-or-cookie-too-large", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: HubSpot: 414 Request-URI Too Large & 400 Request Header or Cookie Too Large\n\nAnswer: This error occurs when the object you are querying contains too many custom fields. To resolve this issue for ContactListContacts , Contacts , and Deals objects, enable the [Column-wise chunking option](https://docs.skyvia.com/connectors/cloud-sources/hubspot_connections.html#column-wise-chunking) in the [HubSpot](https://docs.skyvia.com/connectors/cloud-sources/hubspot_connections.html#establishing-connection) connection editor (located under Advanced Settings ). Alternatively, you can exclude some fields from your query, replication, backup, or export\u2014depending on where the error occurs. If this error occurs with other HubSpot objects, please contact our support team . We can extend the Column-wise chunking option to support them as well." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/hubspot-hapikey-permission-error", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: HubSpot: hapikey Permission Error\n\nAnswer: This issue is common when working with HubSpot. It typically occurs for the following reasons: The customer does not have access to the Marketing or CRM portal due to HubSpot subscription limitations and is limited to the Developer portal. The customer does not have access to specific areas within the Marketing Hub. 
To resolve this issue, please remove the restricted objects from your backup. For more details, please refer to the following HubSpot Community threads: [This hapikey does not have proper permissions](https://community.hubspot.com/t5/APIs-Integrations/This-hapikey-xxxx-does-not-have-proper-permissions/m-p/240509) [hapikey requires all required OAuth scopes](https://community.hubspot.com/t5/APIs-Integrations/This-hapikey-XXXX-does-not-have-proper-permissions-requires-all/m-p/248898)" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/integration", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Integration Failure Types in Skyvia Data Integration\n\nAnswer: When using Skyvia Data Integration product, you may encounter two distinct types of integration failures: 1. Record-Level Errors These occur when one or more individual records fail to process during an integration task. For example, in operations like Synchronization and Replication , which rely on the LastSyncTime parameter, this can have specific implications. The LastSyncTime parameter tracks changes since the last successful replication. During the initial execution, Skyvia sets this parameter to the current time, and subsequent runs only process changes made after that point. If some records fail, the LastSyncTime advances despite the failure. As a result, failed records are skipped in future runs until they are updated. Once updated, these records will be processed again in the next execution. 2. Integration-Level Errors These occur when an error impacts the entire integration process, preventing it from completing successfully. In this case, no records are processed, and the LastSyncTime parameter remains unchanged. On the next successful run, no records will be skipped, and the integration will attempt to process all relevant data again. 
To better understand integration failures and handle them, we recommend reviewing the [Integration Run History documentation](https://docs.skyvia.com/data-integration/package-run-history.html) for detailed logs and checking your [Email Notifications](https://docs.skyvia.com/account-management/email-notifications.html) settings for alerts. \u200b Feel free to contact our support team directly for assistance. Also, if an error is unclear, we suggest searching for solutions in Skyvia\u2019s [support articles](https://support.skyvia.com/portal/en/kb/skyvia) or [community forums](https://support.skyvia.com/portal/en/community/skyvia) ." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/mysql", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: MySQL: Resolving \"Lost connection to MySQL server during query\" Error\n\nAnswer: To resolve the \"Lost connection to MySQL server during query\" error, please follow the steps below:\n\nIncrease the \"Command Timeout\" value for your MySQL connection:\n\nGo to the Connections page and select your MySQL connection.\nClick Edit.\nIn the connection editor, go to Advanced, then increase the value for the Command Timeout property.\n\n\n\nAlso, increase the following parameters on your MySQL server:\n\nmax_allowed_packet\nnet_read_timeout\nnet_write_timeout\nFor detailed information, please refer to the MySQL documentation: MySQL Server System Variables." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/need-to-check-data-in-your-data-source-connection-use-skyvia-query", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Skyvia Query: Easily Verify Data in Your Connection!\n\nAnswer: Please follow the steps below to query your data in [Skyvia Query](https://docs.skyvia.com/query/index.html) : Log in to Skyvia and navigate to + Create New \u2192 Query. Select an existing connection or establish a new one for your data source. 
Choose the desired table or object from the Connection Object List. Use the Builder view to visually select columns, apply filters, sort, and aggregate data, or switch to SQL view to write custom SQL statements. Run your query and immediately review the results. With just a few clicks, you can efficiently access and analyze your data! The Free Skyvia Plan allows up to 5 queries per day.\u00a0Check out the full pricing details [here](https://skyvia.com/pricing) ." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/netsuite-two-factor-authentication-required", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: NetSuite: 2FA required\n\nAnswer: Skyvia connects to NetSuite via the NetSuite API. Basic Authentication does not support accounts configured with 2FA (two-factor authentication). If your NetSuite user account has 2FA enabled, please use Token-Based Authentication to connect to Skyvia. For accounts with 2FA or highly privileged roles, NetSuite recommends using Token-Based Authentication. This ensures secure access and compliance with NetSuite's security policies." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/query", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Skyvia Query: You have reached the limit of queries\n\nAnswer: If you encounter the message \" You have reached the limit of queries ,\" it indicates that you have exceeded the maximum number of queries allowed for your Free plan on [Skyvia Query](https://docs.skyvia.com/query/) . The Free Query plan is designed for basic testing purposes and allows running only 5 queries per day. Once you hit this limit, you'll be unable to execute any additional queries for that day. If you need to run more queries without limitations, consider [upgrading to the Standard plan](https://skyvia.com/pricing) , which offers unlimited queries each day. 
This is ideal for users with higher query demands or those using Skyvia Query in business environments where frequent queries are required. Additionally, we offer a [14-day trial](https://docs.skyvia.com/account-management/subscriptions-payments-and-trials.html) of the Standard plan to check if it fits your needs. For more detailed information about the query limits and plans available, you can refer to the official [Skyvia subscription limits and plans page](https://docs.skyvia.com/account-management/subscription-limits-and-plans-in-more-details.html#:~:text=you%20free%20space.-,Query,-The%20free%20Query) ." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/queued", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Tips for Managing Integration Queue Times\n\nAnswer: Generally, when integrations start running, they are placed in a queue for execution before processing records begins. Integrations can be queued during server peak loads (such as at the beginning of every hour, e.g., 11:00, 12:00, etc.) when there are many integrations scheduled to run. Integrations on a paid plan automatically have a higher priority in the queue, depending on the plan. By scheduling your integration slightly later, such as at 12:15 instead of 12:00, you help distribute the load more evenly across the server\u2019s processing capacity.\u00a0This reduces the chances of your integration being queued at a peak time and helps to minimize its wait time before processing records begins. Also, we are actively working on a global solution to further reduce queue times. This involves optimizing our systems and processes to handle more integrations concurrently and efficiently, thereby decreasing the overall wait time for all users." 
}, { "url": "https://support.skyvia.com/portal/en/kb/articles/replication-extra-tables-in-the-database", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Replication creates extra tables in the database\n\nAnswer: In some cases, as a result of Replication, empty additional tables appear in the database alongside the main tables (for example, dbo.Account ), with names such as: dbo.Account_2ade4447_5e11_4308_9e81_5eab62d9ed52 dbo.Account_7c473919_0b04_4c82_a993_e8445843bac4 dbo.Account_8220931f_c341_45aa_a4ec_ddd3fce83936 dbo.Account_96dc6902_1dc6_413d_a206_f116f076130b etc. These are temporary tables that Skyvia creates during Replication: data is loaded into them and then merged into the main table, after which the temporary tables are deleted. Normally, you should not see these tables; if you do, there are two possible explanations: 1) Replication has not finished yet, so the tables have not been deleted; once it completes, they will be cleaned up. 2) The database user in your connection lacks the DROP TABLE permission, so Replication cannot delete them. In this case, grant this permission to the user and delete the leftover tables manually. It is also recommended to update the Skyvia Agent. More details can be found via [this link.](https://docs.skyvia.com/agents.html)" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/salesforce-audit-fields-insert", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Salesforce Audit Fields: Quick Insertion Guide\n\nAnswer: To use audit fields for the INSERT\u00a0operation in Skyvia, please enable the 'Create Audit Fields' permission in your Salesforce instance. More details can be found [here](https://help.salesforce.com/s/articleView?id=000386875&type=1) .\u00a0Below are the steps to enable the permission: Go to Setup \u2192 User Interface. 
Enable the permission: Navigate to Administration Setup \u2192 Permission Sets. Look for Audit Fields. Open the field editor: Click Manage Assignments \u2192 Add Assignments and add the user who should be granted this permission. After completing these steps, the audit fields will become available for mapping. Important Notes: This feature will be available only for connections created using the login of the user who has been granted this permission. The audit fields can only be mapped when performing an INSERT\u00a0operation. To ensure the audit fields are available for mapping, please clear the metadata cache in your connection." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/salesforce-backup-errors", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Skyvia Backup Errors Caused by Salesforce API Requirements\n\nAnswer: Skyvia interacts with Salesforce via its API. During a backup operation, you may encounter errors due to certain limitations or behaviors specific to the Salesforce API. For example, when Skyvia performs a backup or replication, it queries all the data from Salesforce using a SELECT query. However, there are objects in Salesforce that require the WHERE condition in the query (simple SELECT * FROM Table is not valid for such objects). These objects require different filter conditions (generally, the error message includes details regarding these filters): FlowVariableView: a filter on a reified column is required [FlowVersionViewId,DurableId] Implementation restriction: ContentFolderItem requires a filter by Id or ParentContentFolderId using the equals or 'IN' operator Can select only RecordId, a Has*Access field, and MaxAccessLevel FlexQueueItem | The WHERE clause must contain a JobType field expression. , etc. 
Please refer to this link for more details: [How to query FieldDefinition records?](https://salesforce.stackexchange.com/questions/73572/how-to-query-fielddefinition-records) Generally, objects with such restrictions are system tables that contain not user data but metadata about other objects, columns, or service information. That's why these tables require specific filters in order to retrieve the necessary information. Please note that these tables are usually read-only and thus cannot be used in a restore operation. Also, Skyvia cannot add filters automatically because each object has its own peculiarities, and most filters are based on specific Id values. It is recommended to exclude these objects from the backup. If these objects are required for your scenario, you can manually [add filters](https://docs.skyvia.com/data-integration/common-package-features/filter-settings.html#configuring-filter) to the tasks." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/salesforce-connect-can-t-sync-the-schema-metadata-for-your-external-system-check-that-the-schema-metadata-is-valid-error-code-null", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Salesforce Connect: Can't Sync the Schema Metadata for Your External System \u2013 Check That the Schema Metadata Is Valid. Error Code: Null\n\nAnswer: This error occurs because the object used in your endpoint does not have a primary key column. Salesforce requires a primary key for external objects, which is why you encounter an error when validating and syncing your External Data Source in Salesforce. At this stage, External Objects are being created. If the object is a table (rather than a database view), add a primary key column to the table in the database and then re-add it to the endpoint model. 
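As a minimal sketch, on SQL Server a primary key column can be added like this (the table and column names are illustrative, not taken from your database):

```sql
-- Add a NOT NULL identity column and promote it to the primary key (T-SQL)
ALTER TABLE dbo.MyTable
ADD Id INT IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY (Id);
```

Once the key exists, re-add the table to the endpoint model so the External Data Source can validate and sync.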
If the object is a database view , or if adding a primary key column is not possible, you can manually designate any column as the [Entity Key](https://docs.skyvia.com/connect/odata-endpoints/adjusting-entities.html#entity-key) directly in the endpoint model." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/salesforce", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Salesforce Report exports only 2000 records\n\nAnswer: Skyvia integrates with Salesforce via its API, inheriting both its features and limitations. One key limitation is the Run Report restriction, which limits data extraction to the first 2,000 rows of a report. For more details on this limitation, we recommend [reaching out to Salesforce support](https://help.salesforce.com/s/) . Additionally, you can explore the following resources for more information: [Increase number of records can be exported from Salesforce Reports API](https://developer.salesforce.com/forums/?id=9062I000000UeegQAC) [How to return full report (>2000 rows) using rest api?](https://salesforce.stackexchange.com/questions/187396/how-to-return-full-report-2000-rows-using-rest-api) [Salesforce Report API \u2013 How to Fetch More Than 2000 Records](https://stackoverflow.com/questions/71065008/salesforce-report-api-how-to-fetch-more-than-2000-records-from-salesforce)" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/skyvia-ips-for-whitelist", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: \ud83d\udd12Skyvia IPs for Whitelist\n\nAnswer: Skyvia will access your server from one of the following IP addresses: 40.118.246.204 13.86.253.112 52.190.252.0 To ensure proper functionality of your connections, all of these IP addresses must be whitelisted." 
}, { "url": "https://support.skyvia.com/portal/en/kb/articles/the-specified-lookup-is-ambiguous-lookup-must-return-at-most-one-row-for-the-specified-lookup-condition", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: The specified lookup is ambiguous. Lookup must return at most one row for the specified lookup condition\n\nAnswer: This error occurs when a Lookup operation returns multiple matching records for the specified condition. Lookups are designed to find and act on only one record. When that\u2019s not possible, this error is triggered. Possible solutions are blow: 1. Use \"First Match\" Option Enable the \"Use first match when multiple results\" setting to proceed with the first matching record found. Use case: You're importing customer data and matching records based on the email address. But some customers share the same email (e.g., shared support inboxes). If uniqueness isn\u2019t critical, you can enable this option to proceed with the first match, for example, to just retrieve a contact ID or status. 2. Use Different Lookup Key Columns Review your Lookup condition. Instead of relying on a single column, try using a more specific one that guarantees a unique match. Use case: You're using Product Name to look up product data, but multiple products might share the same name (e.g., \"T-shirt\"). Try using Product SKU or Product ID instead, these are more likely to uniquely identify the record. 3. Use Composite Lookup Keys Add one or more set of columns to your existing lookup keys to form a composite key that narrows down to a single row. Use case: In\u00a0a multi-branch company, you\u2019re looking up employees by Employee ID, but IDs are reused across branches. Combine Employee ID and Branch Code in your Lookup, this makes the match unique within the context of each branch. Best Practices: \u200b\u200b\u200b\u200b\u200b Always aim to construct Lookup conditions that return exactly one matching record. 
Composite keys are particularly useful when dealing with hierarchical data (e.g., same names or IDs across different regions, teams, or business units). Avoid using non-unique columns like "Name" or "Status" alone for Lookups. Please refer to the [Lookup](https://docs.skyvia.com/data-integration/common-package-features/mapping/lookup-mapping-target-lookup-and-source-lookup.html) article\u00a0for more details." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/totalrequests-limit-exceeded", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Salesforce API errors: TotalRequests Limit Exceeded/ApiBatchItems Limit Exceeded\n\nAnswer: The TotalRequests Limit Exceeded and ApiBatchItems Limit Exceeded errors are Salesforce-specific errors indicating that you have reached specific API limits on the Salesforce platform. Please find explanations of each error and relevant documentation below. TotalRequests Limit Exceeded error means that you have reached the Total API Request Limit on the Salesforce side. It is not related to Skyvia limits. Salesforce imposes a 24-hour rolling limit on API calls, which depends on your edition and licenses. For more information, please refer to [this article](https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_api.htm) . ApiBatchItems Limit Exceeded error is returned by Salesforce when its API batch limit is reached for asynchronous operations, such as Bulk API jobs. You can run your integration again the next day when the limit resets. You can check the number of batch jobs running within the 24-hour period in Salesforce by navigating to: Setup \u2192 Jobs \u2192 Bulk Data Load Jobs. For more information on Salesforce API limits, please refer to [the following link](https://developer.salesforce.com/docs/atlas.en-us.206.0.api_asynch.meta/api_asynch/asynch_api_concepts_limits.htm) . 
For further details, see here: [ApiBatchItems Limit exceeded](https://trailhead.salesforce.com/trailblazer-community/feed/0D54V00007T4TlISAV) Additionally, we recommend reviewing the following link: [Salesforce API and API Calls Documentation](https://docs.skyvia.com/connectors/cloud-sources/salesforce_connections/salesforce_api_and_api_calls.html)" }, { "url": "https://support.skyvia.com/portal/en/kb/articles/understanding-records-in-skyvia-data-integration", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Understanding Records in Skyvia Data Integration\n\nAnswer: 1.1 Understanding Records in Skyvia Data Integration A record in Data Integration represents a single row in a database table. Every integration process (Import, Replication, Export, Synchronization, etc.) handles individual records. Records also serve as a measurement unit. You can track processed records in the run history, which shows how many were successfully processed versus how many encountered errors. For more details, refer to the [Data Integration](https://docs.skyvia.com/data-integration/) documentation and the [Integration Run History](https://docs.skyvia.com/data-integration/package-run-history.html) section. 1.2 How Skyvia Tracks Records Skyvia pricing plans are based on the number of records processed per month. This includes: Import & Replication : Records created, updated, or deleted in the target system. (Even if an update doesn\u2019t change the data, it still counts.) Export : Rows included in CSV files generated during export operations. Synchronization : Records successfully created, updated, or deleted in both the source and target systems. Data and Control Flows : All successfully processed rows in Target components, except those using cache or log connections. For a complete breakdown, check the links below. 
[Subscription Management, Payments, and Trials](https://docs.skyvia.com/account-management/subscriptions-payments-and-trials.html) [Subscription Limits and Plans in More Details](https://docs.skyvia.com/account-management/subscription-limits-and-plans-in-more-details.html) 1.3 Checking Your Record Usage You can monitor record consumption using the Usage Summary feature. This tool lets you: Track used and remaining resources View usage statistics for a specific period Group data by workspace or integration Break down record usage by time frame for better tracking For more details, visit our [Usage Summary documentation](https://docs.skyvia.com/account-management/usage-summary.html) ." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/why-don-t-skyvia-data-flow-logs-show-errors-or-successful-records-by-default", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: \ud83d\udcca Skyvia Data Flow Logs Explained: The Reason Errors and Successes Aren't Shown by Default\n\nAnswer: In [Data Flow](https://docs.skyvia.com/data-integration/data-flow/index.html) , tracking execution success or failure can be challenging due to the nature of row processing. Rows that fail in one component may succeed in another, complicating error detection. By default, Skyvia logs do not provide detailed error or success information, which can lead to confusion. To enhance visibility, please implement the following components in your Data Flow: Row Count Component: It counts the records passing through specific points in your Data Flow. It helps track the number of rows processed successfully or failed at each step. Please refer to [this article](https://docs.skyvia.com/data-integration/data-flow/components/row-count.html) for more details on Row Count. Log Component: It is designed to record specific information during the execution. 
You can configure it to log essential details like errors, names, row statuses, or even specific data values, which is crucial for deeper analysis. How to Add Components: Row Count: Drag and drop the Row Count component at key points of your Data Flow to count processed records. Log: Insert a Target Log Component to capture additional details like errors or IDs. By setting up these components, you can effectively monitor each step, simplifying troubleshooting and tracking. Feel free to explore this in more detail in our documentation. Here is [the article](https://docs.skyvia.com/data-integration/data-flow/results.html) that dives deeper into the topic." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/workspace-roles-in-skyvia", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Account and Workspace Roles in Skyvia\n\nAnswer: In Skyvia, we distinguish between account and workspace roles for seamless teamwork: 1. Account Roles The account admin, also known as the account owner, has full control over the account. They can manage subscriptions to Skyvia products, modify auto-renewal settings, enable auto-cleaning for backups, update payment details, and create the initial account structure. This includes adding workspaces, creating objects, inviting users, assigning roles, granting administrative privileges, and managing users (including revoking access or deleting them). If the admin represents a company, their primary responsibility is to ensure smooth collaboration among teammates, manage subscriptions and payments, and maintain overall account security. Upon registration for Skyvia, users are automatically granted account admin status. Important: If your account administrator stops working with the account, they must transfer their admin role to another member. Failing to do so may result in difficulties managing the account subscription, inviting users, and configuring objects in the future. 
Account members have limited access, primarily to view subscriptions and account resources but cannot manage them. They do not have access to payment information, invoices, or other users unless granted access to a specific workspace. Members cannot modify account settings but may be assigned workspace administrator rights. This grants them full control within specific workspaces, including the ability to modify settings or delete the workspace and its users. When a user joins an account through an invitation, they automatically receive account member status. To request additional permissions, they must ask the account admin. 2. Workspace Roles A workspace role defines a user\u2019s permissions within a workspace. Users can be assigned standard roles with predefined permissions or custom roles tailored to specific needs. Roles help manage user permissions, ensuring effective collaboration within the workspace. Workspace administrators can remove users, manage and delete workspace objects, or even delete the workspace entirely. Note: When inviting a user to the account, ensure you carefully select both the Account Role and Workspace Role . This will help prevent permission issues related to creating or running objects. Skyvia offers 4 predefined workspace roles: Administrator \u2013 Full control over workspace settings, objects, and user management. Developer \u2013 Can manage objects but cannot modify workspace settings or manage users. Member \u2013 Can execute existing integrations, view logs, and access backed-up data. Supporter \u2013 Limited to viewing and downloading integration logs for troubleshooting. Standard roles cannot be deleted, but they can be used as a base for creating custom roles. Additionally, admins can create custom roles with specific permissions. These roles can be edited or deleted\u00a0at any time. If a custom role is deleted, the user remains in the workspace with their previous permissions. 
You can find more details on the topic here: [Account Management](https://docs.skyvia.com/account-management/) . This article [Collaboration](https://docs.skyvia.com/collaboration.html#introduction-to-users-roles-and-privileges) may also be useful." }, { "url": "https://support.skyvia.com/portal/en/kb/articles/you-have-reached-the-limit-of-tasks", "product_name": "Unknown", "content_type": "FAQ", "content": "Question: Automation: Task Limit Reached\n\nAnswer: The "You have reached the limit of tasks" error in Automation indicates that you have exceeded the number of billed tasks allowed in your pricing plan for the current billing period. A billed task is any successful execution of an Action component within an automation run. The key factors affecting task consumption include: The complexity of the automation. The number of action components executed. The presence of loops, such as Foreach components. Note: Skipped or failed actions do not count as billed tasks. For example: If an automation executes 3 action components successfully while skipping 3 others, it will count as 3 billed tasks. If an automation runs 4 action components but 2\u00a0of them fail, it will count as 2 billed tasks. You can monitor the number of billed tasks used in your automation logs under the Monitor and Log tabs. This error typically occurs due to one of the following reasons: Exceeding the Plan Limit: If your automation has executed more successful action components within a month than are available in your plan. Complex Automations with Loops: Automations that involve loops (Foreach components) may execute action components multiple times, consuming more tasks than expected. High-Frequency Execution: If your automations run frequently, they can quickly reach the monthly limit. If you encounter this error, consider upgrading the plan or optimizing your automation workflows: Minimize Foreach component usage where possible. 
Review logs to identify high-consumption automations and optimize them. If upgrading is not an option, you will need to wait until your task limit resets at the beginning of the next billing period." } ]